diff --git "a/stampy.jsonl" "b/stampy.jsonl" new file mode 100644--- /dev/null +++ "b/stampy.jsonl" @@ -0,0 +1,198 @@ +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the differences between “AI safety”, “AGI safety”, “AI alignment” and “AI existential safety”?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the differences between \"AI safety\", \"AGI safety\", \"AI alignment\" and \"AI existential safety\"?\n\nAnswer: AI alignment is the research field focused on trying to give us the tools to align AIs to specific goals, such as human values. This is crucial when they are highly competent, as a misaligned superintelligence could be the end of human civilization.\n\nAGI safety is the field trying to make sure that when we build Artificial General Intelligences they are safe and do not harm humanity. It overlaps with AI alignment strongly, in that misalignment of AI would be the main cause of unsafe behavior in AGIs, but also includes misuse and other governance issues.\n\nAI existential safety is a slightly broader term than AGI safety, including AI risks which pose an existential threat without necessarily being as general as humans.\n\nAI safety was originally used by the existential risk reduction movement for the work done to reduce the risks of misaligned superintelligence, but has also been adopted by researchers and others studying nearer term and less catastrophic risks from AI in recent years.", "question": "What are the differences between “AI safety”, “AGI safety”, “AI alignment” and “AI existential safety”?", "answer": ["AI alignment is the research field focused on trying to give us the tools to align AIs to specific goals, such as human values. This is crucial when they are highly competent, as a misaligned superintelligence could be the end of human civilization.\n\nAGI safety is the field trying to make sure that when we build Artificial General Intelligences they are safe and do not harm humanity. It overlaps with AI alignment strongly, in that misalignment of AI would be the main cause of unsafe behavior in AGIs, but also includes misuse and other governance issues.\n\nAI existential safety is a slightly broader term than AGI safety, including AI risks which pose an existential threat without necessarily being as general as humans.\n\nAI safety was originally used by the existential risk reduction movement for the work done to reduce the risks of misaligned superintelligence, but has also been adopted by researchers and others studying nearer term and less catastrophic risks from AI in recent years."], "entry": "Plex's Answer to What are the differences between “AI safety”, “AGI safety”, “AI alignment” and “AI existential safety”?", "id": "1fb1ed7be2865a7b951ffa79add7aabe"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might we get from Artificial General Intelligence a Superintelligent system?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might we get from Artificial General Intelligence a Superintelligent system?\n\nAnswer: Once a system is at least as capable as top human at AI research, it would tend to become the driver of its own development and initiate a process of recursive self-improvement known as the [[What is an \"intelligence explosion\"?┊intelligence explosion]], leading to an extremely powerful system. 
A general framing of this process is [https://www.openphilanthropy.org/ Open Philanthropy]'s [https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#:~:text꞊Process%20for%20Automating%20Scientific%20and%20Technological%20Advancement%2C%20or%20PASTA Process for Automating Scientific and Technological Advancement (PASTA)].\n\nThere is [https://sideways-view.com/2018/02/24/takeoff-speeds/ much] [https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai debate] about whether there would be a notable period where the AI was partially driving its own development, with humans being gradually less and less important, or whether the transition to AI automated AI capability research would be sudden. However, the core idea that there is ''some'' threshold of capabilities beyond which a system would begin to rapidly ascend is hard to reasonably dispute, and is a significant consideration for developing alignment strategies.", "question": "How might we get from Artificial General Intelligence a Superintelligent system?", "answer": ["Once a system is at least as capable as top human at AI research, it would tend to become the driver of its own development and initiate a process of recursive self-improvement known as the [[What is an \"intelligence explosion\"?┊intelligence explosion]], leading to an extremely powerful system. A general framing of this process is [https://www.openphilanthropy.org/ Open Philanthropy]'s [https://www.cold-takes.com/transformative-ai-timelines-part-1-of-4-what-kind-of-ai/#:~:text꞊Process%20for%20Automating%20Scientific%20and%20Technological%20Advancement%2C%20or%20PASTA Process for Automating Scientific and Technological Advancement (PASTA)].\n\nThere is [https://sideways-view.com/2018/02/24/takeoff-speeds/ much] [https://astralcodexten.substack.com/p/yudkowsky-contra-christiano-on-ai debate] about whether there would be a notable period where the AI was partially driving its own development, with humans being gradually less and less important, or whether the transition to AI automated AI capability research would be sudden. However, the core idea that there is ''some'' threshold of capabilities beyond which a system would begin to rapidly ascend is hard to reasonably dispute, and is a significant consideration for developing alignment strategies."], "entry": "Plex's Answer to How might we get from Artificial General Intelligence to a Superintelligent system?", "id": "11f6a97962fd5d913d2c5e02350f0137"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why would we only get one chance align a superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why would we only get one chance align a superintelligence?\n\nAnswer: An AGI which has [[How might we get from Artificial General Intelligence to a Superintelligent system?┊recursively self-improved into a superintelligence]] would be capable of either resisting our attempts to modify incorrectly specified goals, or realizing it was still weaker than us and acting [[Would we know if an AGI was misaligned?┊deceptively aligned]] until it was highly sure it could win in a confrontation. AGI would likely prevent a human from shutting it down unless the AGI was designed to be [https://www.lesswrong.com/tag/corrigibility corrigible]. 
See [[Why can't we just turn the AI off if it starts to misbehave?]] for more information.", "question": "Why would we only get one chance align a superintelligence?", "answer": ["An AGI which has [[How might we get from Artificial General Intelligence to a Superintelligent system?┊recursively self-improved into a superintelligence]] would be capable of either resisting our attempts to modify incorrectly specified goals, or realizing it was still weaker than us and acting [[Would we know if an AGI was misaligned?┊deceptively aligned]] until it was highly sure it could win in a confrontation. AGI would likely prevent a human from shutting it down unless the AGI was designed to be [https://www.lesswrong.com/tag/corrigibility corrigible]. See [[Why can't we just turn the AI off if it starts to misbehave?]] for more information."], "entry": "TJ6K's Answer to Why would we only get one chance to align a superintelligence?", "id": "439645c26b9bbca64c447d5a97a632fe"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "I'm interested in working on AI safety. What should I do?", "authors": "n/a", "date_published": "n/a", "text": "Question: I'm interested in working on AI safety. What should I do?\n\nAnswer: AI Safety Support [https://calendly.com/aiss offers free calls] to advise people interested in a career in AI Safety, so that's a great place to start. We're working on creating a bunch of detailed information for Stampy to use, but in the meantime check out these resources:\n\n* [https://www.eacambridge.org/agi-safety-fundamentals EA Cambridge AGI Safety Fundamentals curriculum]\n* [https://80000hours.org/articles/ai-safety-syllabus/ 80,000 Hours AI safety syllabus]\n* [https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit Adam Gleave's Careers in Beneficial AI Research document]\n* [https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/ Rohin Shah's FAQ on career advice for AI alignment researchers]\n* [https://www.aisafetysupport.org/ AI Safety Support] has lots of other good resources, such as their [https://www.aisafetysupport.org/resources/lots-of-links links page], [https://www.google.com/url?q꞊https%3A%2F%2Fjoin.slack.com%2Ft%2Fai-alignment%2Fshared_invite%2Fzt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw&sa꞊D&sntz꞊1&usg꞊AFQjCNEIiKykU7SJ9LhJBoE3FFaOFOhOSA slack], [https://www.aisafetysupport.org/newsletter newsletter], and [https://www.aisafetysupport.org/events/online-events-calendar events calendar].\n* [https://docs.google.com/spreadsheets/d/1JyxrfFFrzaQsS3AQ4qJ2aOLGj1aSkBaxkpZCqBX9BOY/edit#gid꞊0 Safety-aligned research training programs (under construction).]", "question": "I'm interested in working on AI safety. What should I do?", "answer": ["AI Safety Support [https://calendly.com/aiss offers free calls] to advise people interested in a career in AI Safety, so that's a great place to start. 
We're working on creating a bunch of detailed information for Stampy to use, but in the meantime check out these resources:\n\n* [https://www.eacambridge.org/agi-safety-fundamentals EA Cambridge AGI Safety Fundamentals curriculum]\n* [https://80000hours.org/articles/ai-safety-syllabus/ 80,000 Hours AI safety syllabus]\n* [https://docs.google.com/document/d/1RFo7_9JVmt0z8RPwUjB-mUMgCMoUQmsaj2CM5aHvxCw/edit Adam Gleave's Careers in Beneficial AI Research document]\n* [https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/ Rohin Shah's FAQ on career advice for AI alignment researchers]\n* [https://www.aisafetysupport.org/ AI Safety Support] has lots of other good resources, such as their [https://www.aisafetysupport.org/resources/lots-of-links links page], [https://www.google.com/url?q꞊https%3A%2F%2Fjoin.slack.com%2Ft%2Fai-alignment%2Fshared_invite%2Fzt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw&sa꞊D&sntz꞊1&usg꞊AFQjCNEIiKykU7SJ9LhJBoE3FFaOFOhOSA slack], [https://www.aisafetysupport.org/newsletter newsletter], and [https://www.aisafetysupport.org/events/online-events-calendar events calendar].\n* [https://docs.google.com/spreadsheets/d/1JyxrfFFrzaQsS3AQ4qJ2aOLGj1aSkBaxkpZCqBX9BOY/edit#gid꞊0 Safety-aligned research training programs (under construction).]"], "entry": "Filip's Answer to I'm interested in working on AI safety. What should I do?", "id": "a0c81fadd58ae12ce7826dfda4327d63"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can you give an AI a goal which involves “minimally impacting the world”?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can you give an AI a goal which involves \"minimally impacting the world\"?\n\nAnswer: This is actually an active area of AI alignment research, called \"Impact Measures\"! It's not trivial to formalize in a way which won't predictably go wrong (entropy minimization likely leads to an AI which tries really hard to put out all the stars ASAP since they produce so much entropy, for example), but progress is being made. You can read about it on the [https://www.alignmentforum.org/tag/impact-measures Alignment Forum tag], or watch Rob's videos [http://youtu.be/lqJUIqZNzP8 Avoiding Negative Side Effects] and [http://youtu.be/S_Sd_S8jwP0 Avoiding Positive Side Effects]", "question": "Can you give an AI a goal which involves “minimally impacting the world”?", "answer": ["This is actually an active area of AI alignment research, called \"Impact Measures\"! It's not trivial to formalize in a way which won't predictably go wrong (entropy minimization likely leads to an AI which tries really hard to put out all the stars ASAP since they produce so much entropy, for example), but progress is being made. 
You can read about it on the [https://www.alignmentforum.org/tag/impact-measures Alignment Forum tag], or watch Rob's videos [http://youtu.be/lqJUIqZNzP8 Avoiding Negative Side Effects] and [http://youtu.be/S_Sd_S8jwP0 Avoiding Positive Side Effects]"], "entry": "Robertskmiles's Answer to Can you give an AI a goal which involves “minimally impacting the world”?", "id": "fef3f56bcecb50f3901749aac62c5d97"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is it possible block an AI from doing certain things on the Internet?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is it possible block an AI from doing certain things on the Internet?\n\nAnswer: Once an AGI has access to the internet it would be very challenging to meaningfully restrict it from doing things online which it wants to. There are too many options to bypass blocks we may put in place.\n\nIt may be possible to design it so that it does not want to do dangerous things in the first place, or perhaps to set up tripwires so that we notice that it's trying to do a dangerous thing, though that relies on it not noticing or bypassing the tripwire so should not be the only layer of security.", "question": "Is it possible block an AI from doing certain things on the Internet?", "answer": ["Once an AGI has access to the internet it would be very challenging to meaningfully restrict it from doing things online which it wants to. There are too many options to bypass blocks we may put in place.\n\nIt may be possible to design it so that it does not want to do dangerous things in the first place, or perhaps to set up tripwires so that we notice that it’s trying to do a dangerous thing, though that relies on it not noticing or bypassing the tripwire so should not be the only layer of security."], "entry": "Plex's Answer to Is it possible to block an AI from doing certain things on the Internet?", "id": "d50197a16f353d9252fb99ba278a25d7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How quickly could an AI go from the first indications of problems an unrecoverable disaster?", "authors": "n/a", "date_published": "n/a", "text": "Question: How quickly could an AI go from the first indications of problems an unrecoverable disaster?\n\nAnswer: If the AI system was deceptively aligned (i.e. pretending to be nice until it was in control of the situation) or had been in stealth mode while getting things in place for a takeover, quite possibly within hours. We may get more warning with weaker systems, if the AGI does not feel at all threatened by us, or if a [https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story complex ecosystem of AI systems is built over time and we gradually lose control]. \n\nPaul Christiano writes [https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story a story of alignment failure] which shows a relatively fast transition.", "question": "How quickly could an AI go from the first indications of problems an unrecoverable disaster?", "answer": ["If the AI system was deceptively aligned (i.e. pretending to be nice until it was in control of the situation) or had been in stealth mode while getting things in place for a takeover, quite possibly within hours. 
We may get more warning with weaker systems, if the AGI does not feel at all threatened by us, or if a [https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story complex ecosystem of AI systems is built over time and we gradually lose control]. \n\nPaul Christiano writes [https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story a story of alignment failure] which shows a relatively fast transition."], "entry": "Plex's Answer to How quickly could an AI go from the first indications of problems to an unrecoverable disaster?", "id": "20727eb5a16f924f946eb12e43b90322"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How doomed is humanity?", "authors": "n/a", "date_published": "n/a", "text": "Question: How doomed is humanity?\n\nAnswer: The opinions from experts are all over the place, according to [https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results this 2021 survey]. Someone has also collected a [https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid꞊0 database of existential risk estimates].\n\nOn the pessimistic end you find people like Eliezer Yudkowsky, [https://forum.effectivealtruism.org/posts/bGBm2yTiLEwwCbL6w/discussion-with-eliezer-yudkowsky-on-agi-interventions who said]: \"I consider the present gameboard to look incredibly grim, and I don't actually see a way out through hard work alone. We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like \"Trying to die with more dignity on the mainline\" (because if you can die with more dignity on the mainline, you are better positioned to take advantage of a miracle if it occurs).\"\n\nWhile at the optimistic end you have people like Ben Garfinkel who put the probability at more like 0.1-1% for AI causing an existential catastrophe in the next century, with most people lying somewhere in the middle.", "question": "How doomed is humanity?", "answer": ["The opinions from experts are all over the place, according to [https://www.lesswrong.com/posts/QvwSr5LsxyDeaPK5s/existential-risk-from-ai-survey-results this 2021 survey]. Someone has also collected a [https://docs.google.com/spreadsheets/d/1W10B6NJjicD8O0STPiT3tNV3oFnT8YsfjmtYR8RO_RI/edit#gid꞊0 database of existential risk estimates].\n\nOn the pessimistic end you find people like Eliezer Yudkowsky, [https://forum.effectivealtruism.org/posts/bGBm2yTiLEwwCbL6w/discussion-with-eliezer-yudkowsky-on-agi-interventions who said]: \"I consider the present gameboard to look incredibly grim, and I don't actually see a way out through hard work alone. 
We can hope there's a miracle that violates some aspect of my background model, and we can try to prepare for that unknown miracle; preparing for an unknown miracle probably looks like \"Trying to die with more dignity on the mainline\" (because if you can die with more dignity on the mainline, you are better positioned to take advantage of a miracle if it occurs).\"\n\nWhile at the optimistic end you have people like Ben Garfinkel who put the probability at more like 0.1-1% for AI causing an existential catastrophe in the next century, with most people lying somewhere in the middle."], "entry": "Plex's Answer to How doomed is humanity?", "id": "2e995787ac31eb26b98c6660ba94ca0c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why is the future of AI suddenly in the news? What has changed?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why is the future of AI suddenly in the news? What has changed?\n\nAnswer: In previous decades, AI research had proceeded more slowly than some experts predicted. According to experts in the field, however, this trend has reversed in the past 5 years or so. AI researchers have been repeatedly surprised by, for example, the effectiveness of new visual and speech recognition systems. AI systems can solve CAPTCHAs that were specifically devised to foil AIs, translate spoken text on-the-fly, and teach themselves how to play games they have neither seen before nor been programmed to play. Moreover, the real-world value of this effectiveness has prompted massive investment by large tech firms such as Google, Facebook, and IBM, creating a positive feedback cycle that could dramatically speed progress.", "question": "Why is the future of AI suddenly in the news? What has changed?", "answer": ["In previous decades, AI research had proceeded more slowly than some experts predicted. According to experts in the field, however, this trend has reversed in the past 5 years or so. AI researchers have been repeatedly surprised by, for example, the effectiveness of new visual and speech recognition systems. AI systems can solve CAPTCHAs that were specifically devised to foil AIs, translate spoken text on-the-fly, and teach themselves how to play games they have neither seen before nor been programmed to play. Moreover, the real-world value of this effectiveness has prompted massive investment by large tech firms such as Google, Facebook, and IBM, creating a positive feedback cycle that could dramatically speed progress."], "entry": "Answer to Why is the future of AI suddenly in the news? What has changed?", "id": "293439d29c3826c2430cf71492ef718c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is interpretability and what approaches are there?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is interpretability and what approaches are there?\n\nAnswer: Interpretability is about making machine learning (ML) systems easier to understand. It is hard because the computations of current ML systems often depend on billions of parameters which they learnt from data. Areas of research for making current ML models more understandable are ''mechanistic interpretability'', ''finding important input features'', ''explaining by examples'', ''natural language explanations'', and using ML architectures which are ''intrinsically interpretable''.\n\n# '''Mechanistic interpretability''' is about interpreting an ML model's internal representations. 
A very simple way to do this is [https://towardsdatascience.com/every-ml-engineer-needs-to-know-neural-network-interpretability-afea2ac0824e activation maximization]: optimize the input such that one particular neuron is activated a lot. This optimized input is an indicator of the concept which the neuron represents. Work that is central for mechanistic interpretability is the [https://distill.pub/2020/circuits/zoom-in/ circuits thread], which focuses on interpreting the algorithms implemented by subgraphs (circuits) of neural networks. There is also work on [https://transformer-circuits.pub/2021/framework/index.html circuits in transformers] in particular. Mechanistic Interpretability has the drawback that [https://www.greaterwrong.com/posts/qXtbBAxmFkAQLQEJE/interpretability-tool-ness-alignment-corrigibility-are-not interpretability is not composable], i.e. even if we understand all the components of a system, it doesn't mean that we understand the whole. However, there may still be a way of hierarchically decomposing a system in a way that allows us to understand each layer of abstraction of it, and thus understanding the whole.

https://i.imgur.com/nGDvldz.png
Feature visualization of a neuron that corresponds to dog-like features. [https://distill.pub/2020/circuits/zoom-in/ image source]
\n# The idea of '''finding important input features''' is to find out which input features are most relevant for the output. In the case of image classification, we can highlight the relevant features with a heatmap, which is called [https://arxiv.org/abs/ saliency map]). A very simple way to do this is to take the derivative of the output with regard to the different parts of the input. This derivative denotes how much the output changes if we change a particular part of the input, i.e. how important that part of the input is for the output. Saliency maps can be useful to notice cases in which an image classifier learns to use features it should not use. For example, the paper ''[https://www.nature.com/articles/s41467-019-08987-4 Unmasking Clever Hans predictors and assessing what machines really learn]'' used saliency maps to show that a horse-classifying image classifier was not using the image parts that contained the horse at all, but rather relied on the name of the photographer printed in a corner, because one of the photographers primarily took photos of horses. [image of horse thing, maybe see thesis] However, many of the common saliency methods fail basic [https://arxiv.org/abs/1810.03292 sanity checks], such as the saliency maps almost not changing when the model weights are randomized. Therefore, saliency maps are not sufficient for a reliable understanding of ML systems. \n# '''Explanation by examples''' means showing examples in the training data that have similar features, such as in the paper ''[https://arxiv.org/abs/1806.10574 This Looks Like That: Deep Learning for Interpretable Image Recognition]''.\n# '''Natural Language Explanations''' are sentences describing a model's reasons for its outputs. For example, in the paper ''[https://xfgao.github.io/xCookingWeb/ Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks]'' a human and an AI play a virtual cooking game together, and the AI explains its plans in natural language. They find that with the explanations the human-AI team performs significantly better. \n# However, natural language explanations, as well as finding important features and explanation by examples are ''post-hoc'' explanations: They are generated after the fact, and are therefore likely to not be ''faithful'' (i.e. not accurately describe a model's decision process). '''Interpretable architectures''' are architectures which are simple enough to be understandable without additional tools. Cynthia Rudin is a central researcher [https://arxiv.org/abs/1811.10154 arguing for using interpretable architectures] in high-stakes situations. However, using interpretable architecutures usually comes with a significant cost to model performance.\n\nYou can read more about different approaches in [https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries this overview article] which summarizes more than 70 interpretability-related papers, and in the free online book ''[https://christophm.github.io/interpretable-ml-book/ A Guide for Making Black Box Models Explainable]''.", "question": "What is interpretability and what approaches are there?", "answer": ["Interpretability is about making machine learning (ML) systems easier to understand. It is hard because the computations of current ML systems often depend on billions of parameters which they learnt from data. 
Areas of research for making current ML models more understandable are ''mechanistic interpretability'', ''finding important input features'', ''explaining by examples'', ''natural language explanations'', and using ML architectures which are ''intrinsically interpretable''.\n\n# '''Mechanistic interpretability''' is about interpreting an ML model’s internal representations. A very simple way to do this is [https://towardsdatascience.com/every-ml-engineer-needs-to-know-neural-network-interpretability-afea2ac0824e activation maximization]: optimize the input such that one particular neuron is activated a lot. This optimized input is an indicator of the concept which the neuron represents. Work that is central for mechanistic interpretability is the [https://distill.pub/2020/circuits/zoom-in/ circuits thread], which focuses on interpreting the algorithms implemented by subgraphs (circuits) of neural networks. There is also work on [https://transformer-circuits.pub/2021/framework/index.html circuits in transformers] in particular. Mechanistic Interpretability has the drawback that [https://www.greaterwrong.com/posts/qXtbBAxmFkAQLQEJE/interpretability-tool-ness-alignment-corrigibility-are-not interpretability is not composable], i.e. even if we understand all the components of a system, it doesn’t mean that we understand the whole. However, there may still be a way of hierarchically decomposing a system in a way that allows us to understand each layer of abstraction of it, and thus understanding the whole.

https://i.imgur.com/nGDvldz.png
Feature visualization of a neuron that corresponds to dog-like features. [https://distill.pub/2020/circuits/zoom-in/ image source]
\n# The idea of '''finding important input features''' is to find out which input features are most relevant for the output. In the case of image classification, we can highlight the relevant features with a heatmap, which is called [https://arxiv.org/abs/1312.6034 saliency map]). A very simple way to do this is to take the derivative of the output with regard to the different parts of the input. This derivative denotes how much the output changes if we change a particular part of the input, i.e. how important that part of the input is for the output. Saliency maps can be useful to notice cases in which an image classifier learns to use features it should not use. For example, the paper ''[https://www.nature.com/articles/s41467-019-08987-4 Unmasking Clever Hans predictors and assessing what machines really learn]'' used saliency maps to show that a horse-classifying image classifier was not using the image parts that contained the horse at all, but rather relied on the name of the photographer printed in a corner, because one of the photographers primarily took photos of horses. [image of horse thing, maybe see thesis] However, many of the common saliency methods fail basic [https://arxiv.org/abs/1810.03292 sanity checks], such as the saliency maps almost not changing when the model weights are randomized. Therefore, saliency maps are not sufficient for a reliable understanding of ML systems. \n# '''Explanation by examples''' means showing examples in the training data that have similar features, such as in the paper ''[https://arxiv.org/abs/1806.10574 This Looks Like That: Deep Learning for Interpretable Image Recognition]''.\n# '''Natural Language Explanations''' are sentences describing a model’s reasons for its outputs. For example, in the paper ''[https://xfgao.github.io/xCookingWeb/ Joint Mind Modeling for Explanation Generation in Complex Human-Robot Collaborative Tasks]'' a human and an AI play a virtual cooking game together, and the AI explains its plans in natural language. They find that with the explanations the human-AI team performs significantly better. \n# However, natural language explanations, as well as finding important features and explanation by examples are ''post-hoc'' explanations: They are generated after the fact, and are therefore likely to not be ''faithful'' (i.e. not accurately describe a model’s decision process). '''Interpretable architectures''' are architectures which are simple enough to be understandable without additional tools. Cynthia Rudin is a central researcher [https://arxiv.org/abs/1811.10154 arguing for using interpretable architectures] in high-stakes situations. 
However, using interpretable architecutures usually comes with a significant cost to model performance.\n\nYou can read more about different approaches in [https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries this overview article] which summarizes more than 70 interpretability-related papers, and in the free online book ''[https://christophm.github.io/interpretable-ml-book/ A Guide for Making Black Box Models Explainable]''."], "entry": "Magdalena's Answer to What is interpretability and what approaches are there?", "id": "b13833d162def0fcb75d6399d48343f0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What would a good future with AGI look like?", "authors": "n/a", "date_published": "n/a", "text": "Question: What would a good future with AGI look like?\n\nAnswer: As technology continues to improve, one thing is certain: the future is going to look like science fiction. Doubly so once superhuman AI (\"[https://en.wikipedia.org/wiki/Artificial_general_intelligence AGI]\") is invented, because we can expect the AGI to produce technological improvements at a superhuman rate, eventually approaching the physical limits in terms of how small machines can be miniaturized, how fast they can compute, how energy-efficient they can be, etc.\n\nToday's world is lacking in many ways, so given these increasingly powerful tools, it seems likely that whoever controls those tools will use them to make increasingly large (and increasingly sci-fi-sounding) improvements to the world. If (and that's a big if!) humanity retains control of the AGI, we could use these amazing technologies to stop climate change, colonize other planets, solve world hunger, cure cancer and every other disease, even eliminate aging and death. \n\nFor more inspiration, here are some stories painting what a bright, AGI-powered future could look like:\n* The winners of the [https://worldbuild.ai/ FHI Worldbuilding contest]\n* [https://www.lesswrong.com/posts/Ybp6Wg6yy9DWRcBiR/the-adventure-a-new-utopia-story Stuart Armstrong's short story \"The Adventure\"]\n* [https://en.wikipedia.org/wiki/Culture_series Iain M. Banks's Culture novels]", "question": "What would a good future with AGI look like?", "answer": ["As technology continues to improve, one thing is certain: the future is going to look like science fiction. Doubly so once superhuman AI (\"[https://en.wikipedia.org/wiki/Artificial_general_intelligence AGI]\") is invented, because we can expect the AGI to produce technological improvements at a superhuman rate, eventually approaching the physical limits in terms of how small machines can be miniaturized, how fast they can compute, how energy-efficient they can be, etc.\n\nToday's world is lacking in many ways, so given these increasingly powerful tools, it seems likely that whoever controls those tools will use them to make increasingly large (and increasingly sci-fi-sounding) improvements to the world. If (and that's a big if!) humanity retains control of the AGI, we could use these amazing technologies to stop climate change, colonize other planets, solve world hunger, cure cancer and every other disease, even eliminate aging and death. 
\n\nFor more inspiration, here are some stories painting what a bright, AGI-powered future could look like:\n* The winners of the [https://worldbuild.ai/ FHI Worldbuilding contest]\n* [https://www.lesswrong.com/posts/Ybp6Wg6yy9DWRcBiR/the-adventure-a-new-utopia-story Stuart Armstrong's short story \"The Adventure\"]\n* [https://en.wikipedia.org/wiki/Culture_series Iain M. Banks's Culture novels]"], "entry": "Gelisam's Answer to What would a good future with AGI look like?", "id": "747747bbbd3cda59762972c7f67c74ef"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are \"scaling laws\" and how are they relevant safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are \"scaling laws\" and how are they relevant safety?\n\nAnswer: '''Scaling laws''' are observed trends on the performance of large machine learning models. \n\nIn the field of ML, better performance is usually achieved through better algorithms, better inputs, or using larger amounts of parameters, computing power, or data. Since the 2010s, advances in deep learning have shown experimentally that the easier and faster returns come from '''scaling''', an observation that has been described by Richard Sutton as the ''[http://www.incompleteideas.net/IncIdeas/BitterLesson.html bitter lesson]''.\n\nWhile deep learning as a field has long struggled to scale models up while retaining learning capability (with such problems as [https://en.wikipedia.org/wiki/Catastrophic_interference catastrophic interference]), more recent methods, especially the Transformer model architecture, were able to ''just work'' by feeding them more data, and as the meme goes, [https://www.gwern.net/images/rl/2017-12-24-meme-nnlayers-alphagozero.jpg stacking more layers].\n\nMore surprisingly, performance (in terms of absolute likelihood loss, a standard measure) appeared to increase ''smoothly'' with compute, or dataset size, or parameter count. Which gave rise to '''scaling laws''', the trend lines suggested by performance gains, from which returns on data/compute/time investment could be extrapolated.\n\nA companion to this purely descriptive law (no strong theoretical explanation of the phenomenon has been found yet), is the '''scaling hypothesis''', which [https://www.gwern.net/Scaling-hypothesis#scaling-hypothesis Gwern Branwen describes]:\n\n
The ''strong scaling hypothesis'' is that, once we find a scalable architecture like self-attention or convolutions, [...] we can simply train ever larger [neural networks] and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data.
\n\nThe scaling laws, if the above hypothesis holds, become highly relevant to safety insofar as capability gains become conceptually easier to achieve: no need for clever designs to solve a given task, just throw more processing at it and it will eventually yield. As [https://ai-alignment.com/prosaic-ai-control-b959644d79c2 Paul Christiano observes]:\n\n
It now seems possible that we could build \"prosaic\" AGI, which can replicate human behavior but doesn't involve qualitatively new ideas about \"how intelligence works\".
\n\nWhile the scaling laws still hold experimentally at the time of this writing (July 2022), whether they'll continue up to safety-relevant capabilities is still an open problem.", "question": "What are \"scaling laws\" and how are they relevant safety?", "answer": ["'''Scaling laws''' are observed trends on the performance of large machine learning models. \n\nIn the field of ML, better performance is usually achieved through better algorithms, better inputs, or using larger amounts of parameters, computing power, or data. Since the 2010s, advances in deep learning have shown experimentally that the easier and faster returns come from '''scaling''', an observation that has been described by Richard Sutton as the ''[http://www.incompleteideas.net/IncIdeas/BitterLesson.html bitter lesson]''.\n\nWhile deep learning as a field has long struggled to scale models up while retaining learning capability (with such problems as [https://en.wikipedia.org/wiki/Catastrophic_interference catastrophic interference]), more recent methods, especially the Transformer model architecture, were able to ''just work'' by feeding them more data, and as the meme goes, [https://www.gwern.net/images/rl/2017-12-24-meme-nnlayers-alphagozero.jpg stacking more layers].\n\nMore surprisingly, performance (in terms of absolute likelihood loss, a standard measure) appeared to increase ''smoothly'' with compute, or dataset size, or parameter count. Which gave rise to '''scaling laws''', the trend lines suggested by performance gains, from which returns on data/compute/time investment could be extrapolated.\n\nA companion to this purely descriptive law (no strong theoretical explanation of the phenomenon has been found yet), is the '''scaling hypothesis''', which [https://www.gwern.net/Scaling-hypothesis#scaling-hypothesis Gwern Branwen describes]:\n\n
The ''strong scaling hypothesis'' is that, once we find a scalable architecture like self-attention or convolutions, [...] we can simply train ever larger [neural networks] and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data.
\n\nThe scaling laws, if the above hypothesis holds, become highly relevant to safety insofar as capability gains become conceptually easier to achieve: no need for clever designs to solve a given task, just throw more processing at it and it will eventually yield. As [https://ai-alignment.com/prosaic-ai-control-b959644d79c2 Paul Christiano observes]:\n\n
It now seems possible that we could build “prosaic” AGI, which can replicate human behavior but doesn’t involve qualitatively new ideas about “how intelligence works”.
\n\nWhile the scaling laws still hold experimentally at the time of this writing (July 2022), whether they'll continue up to safety-relevant capabilities is still an open problem."], "entry": "Jrmyp's Answer to What are \"scaling laws\" and how are they relevant to safety?", "id": "70cafbfe35b563e7a024048f964cffd4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"HCH\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"HCH\"?\n\nAnswer:

Humans Consulting HCH (HCH) is a recursive acronym describing a setup where humans can consult simulations of themselves to help answer questions. It is a concept used in discussion of the [https://www.lesswrong.com/tag/iterated-amplification iterated amplification] proposal to solve the alignment problem.

It was first described by Paul Christiano in his post [https://www.lesswrong.com/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch Humans Consulting HCH]:

Consider a human Hugh who has access to a question-answering machine. Suppose the machine answers question Q by perfectly imitating how Hugh would answer question Q, if Hugh had access to the question-answering machine.

That is, Hugh is able to consult a copy of Hugh, who is able to consult a copy of Hugh, who is able to consult a copy of Hugh…

Let's call this process HCH, for \"Humans Consulting HCH.\"

", "question": "What is \"HCH\"?", "answer": ["

Humans Consulting HCH (HCH) is a recursive acronym describing a setup where humans can consult simulations of themselves to help answer questions. It is a concept used in discussion of the [https://www.lesswrong.com/tag/iterated-amplification iterated amplification] proposal to solve the alignment problem.

It was first described by Paul Christiano in his post [https://www.lesswrong.com/posts/NXqs4nYXaq8q6dTTx/humans-consulting-hch Humans Consulting HCH]:

Consider a human Hugh who has access to a question-answering machine. Suppose the machine answers question Q by perfectly imitating how Hugh would answer question Q, if Hugh had access to the question-answering machine.

That is, Hugh is able to consult a copy of Hugh, who is able to consult a copy of Hugh, who is able to consult a copy of Hugh…

Let’s call this process HCH, for “Humans Consulting HCH.”

"], "entry": "Plex's Answer to What is \"HCH\"?", "id": "47fbbc7bb3c4f3a9a8374396afc077ae"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might things go wrong with AI even without an agentic superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might things go wrong with AI even without an agentic superintelligence?\n\nAnswer: Failures can happen with narrow non-agentic systems, mostly from humans not anticipating safety-relevant decisions made too quickly to react, much like in the [https://en.wikipedia.org/wiki/2010_flash_crash 2010 flash crash].\n\nA helpful metaphor draws on self-driving cars. By relying more and more on an automated process to make decisions, people become worse drivers as they're not training themselves to react to the unexpected; then the unexpected happens, the software system itself reacts in an unsafe way and the human is too slow to regain control.\n\nThis generalizes to broader tasks. A human using a powerful system to make better decisions (say, as the CEO of a company) might not understand those very well, get trapped into an equilibrium without realizing it and essentially losing control over the entire process.\n\nMore detailed examples in this vein are described by Paul Christiano in ''[https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]''.\n\nAnother source of failures is AI-mediated stable totalitarianism. The limiting factor in current pervasive surveillance, police and armed forces is manpower; the use of drones and other automated tools decreases the need for personnel to ensure security and extract resources.\n\nAs capabilities improve, political dissent could become impossible, checks and balances would break down as [https://www.youtube.com/watch?v꞊rStL7niR7gs a minimal number of key actors is needed to stay in power].", "question": "How might things go wrong with AI even without an agentic superintelligence?", "answer": ["Failures can happen with narrow non-agentic systems, mostly from humans not anticipating safety-relevant decisions made too quickly to react, much like in the [https://en.wikipedia.org/wiki/2010_flash_crash 2010 flash crash].\n\nA helpful metaphor draws on self-driving cars. By relying more and more on an automated process to make decisions, people become worse drivers as they’re not training themselves to react to the unexpected; then the unexpected happens, the software system itself reacts in an unsafe way and the human is too slow to regain control.\n\nThis generalizes to broader tasks. A human using a powerful system to make better decisions (say, as the CEO of a company) might not understand those very well, get trapped into an equilibrium without realizing it and essentially losing control over the entire process.\n\nMore detailed examples in this vein are described by Paul Christiano in ''[https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like What failure looks like]''.\n\nAnother source of failures is AI-mediated stable totalitarianism. 
The limiting factor in current pervasive surveillance, police and armed forces is manpower; the use of drones and other automated tools decreases the need for personnel to ensure security and extract resources.\n\nAs capabilities improve, political dissent could become impossible, checks and balances would break down as [https://www.youtube.com/watch?v꞊rStL7niR7gs a minimal number of key actors is needed to stay in power]."], "entry": "Jrmyp's Answer to How might things go wrong with AI even without an agentic superintelligence?", "id": "87a1d3c254d50123f6e53844c6fcbe0c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might we expect a superintelligence be hostile by default?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might we expect a superintelligence be hostile by default?\n\nAnswer: The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there's no problem. Just command it to do the things we want, right?\nSuppose we wanted a superintelligence to cure cancer. How might we specify the goal \"cure cancer\"? We couldn't guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.\n\nA superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.\n\nBut a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways. This type of problem, [https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity specification gaming], has been observed in many AI systems.\n\nIf your only goal is \"curing cancer\", and you lack humans' instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI's goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It's very fast (which is better than medicines which might take a long time to invent and distribute). 
And it has a high probability of success (medicines might or might not work; nukes definitely do).\n\nSo simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value.\n\nEven if we do train the AI on an actually desirable goal, there is also the risk of the AI actually learning a different and undesirable objective. This problem is called [https://www.youtube.com/watch?v꞊bJLcIBixGj8 inner alignment].", "question": "Why might we expect a superintelligence be hostile by default?", "answer": ["The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there’s no problem. Just command it to do the things we want, right?\nSuppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.\n\nA superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.\n\nBut a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways. This type of problem, [https://deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity specification gaming], has been observed in many AI systems.\n\nIf your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).\n\nSo simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value.\n\nEven if we do train the AI on an actually desirable goal, there is also the risk of the AI actually learning a different and undesirable objective. 
This problem is called [https://www.youtube.com/watch?v꞊bJLcIBixGj8 inner alignment]."], "entry": "Answer to Why might we expect a superintelligence to be hostile by default?", "id": "e2e912c5b7862d4cff5d99ac2bf44207"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How can I join the Stampy dev team?", "authors": "n/a", "date_published": "n/a", "text": "Question: How can I join the Stampy dev team?\n\nAnswer: The development team works on [https://github.com/StampyAI multiple projects] in support of Stampy. Currently, these projects include:\n\n* [https://github.com/StampyAI/stampy-ui Stampy UI], which is made mostly in TypeScript.\n* The [https://github.com/StampyAI/stampys_wiki Stampy Wiki], which is made mostly in PHP and JavaScript.\n* The Stampy Bot, which is made in Python.\n\nHowever, even if you don't specialize in any of these areas, do reach out if you would like to help.\n\nTo join, please contact our Project Manager, plex. You can reach him on discord at plex#1874. He will be able to point your skills in the right direction to help in the most effective way possible.", "question": "How can I join the Stampy dev team?", "answer": ["The development team works on [https://github.com/StampyAI multiple projects] in support of Stampy. Currently, these projects include:\n\n* [https://github.com/StampyAI/stampy-ui Stampy UI], which is made mostly in TypeScript.\n* The [https://github.com/StampyAI/stampys_wiki Stampy Wiki], which is made mostly in PHP and JavaScript.\n* The Stampy Bot, which is made in Python.\n\nHowever, even if you don’t specialize in any of these areas, do reach out if you would like to help.\n\nTo join, please contact our Project Manager, plex. You can reach him on discord at plex#1874. He will be able to point your skills in the right direction to help in the most effective way possible."], "entry": "Tayler6000's Answer to How can I join the Stampy dev team?", "id": "d43e4e746489072f7aef28464a867322"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is artificial general intelligence safety / AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is artificial general intelligence safety / AI alignment?\n\nAnswer: ''AI alignment'' is a field that is focused on causing the goals of future [https://en.wikipedia.org/wiki/Superintelligence superintelligent artificial systems]\nto align with [https://www.researchgate.net/publication/347891524_Literature_Review_What_AI_Safety_Researchers_Have_Written_About_the_Nature_of_Human_Values human values], meaning that they would behave in a way which was compatible with our survival and flourishing. This may be an [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ extremely hard problem], especially with [https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ deep learning], and is likely to determine the outcome of the [https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/ most important century]. Alignment research is strongly interdisciplinary and can include computer science, mathematics, neuroscience, philosophy, and social sciences.\n\n''AGI safety'' is a related concept which strongly overlaps with AI alignment. AGI safety is concerned with making sure that building AGI systems doesn't cause things to go badly wrong, and the main way in which things can go badly wrong is through misalignment. 
AGI safety includes policy work that prevents the building of dangerous AGI systems, or reduces misuse risks from AGI systems aligned to actors who don't have humanity's best interests in mind.", "question": "What is artificial general intelligence safety / AI alignment?", "answer": ["''AI alignment'' is a field that is focused on causing the goals of future [https://en.wikipedia.org/wiki/Superintelligence superintelligent artificial systems]\nto align with [https://www.researchgate.net/publication/347891524_Literature_Review_What_AI_Safety_Researchers_Have_Written_About_the_Nature_of_Human_Values human values], meaning that they would behave in a way which was compatible with our survival and flourishing. This may be an [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ extremely hard problem], especially with [https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ deep learning], and is likely to determine the outcome of the [https://www.cold-takes.com/all-possible-views-about-humanitys-future-are-wild/ most important century]. Alignment research is strongly interdisciplinary and can include computer science, mathematics, neuroscience, philosophy, and social sciences.\n\n''AGI safety'' is a related concept which strongly overlaps with AI alignment. AGI safety is concerned with making sure that building AGI systems doesn’t cause things to go badly wrong, and the main way in which things can go badly wrong is through misalignment. AGI safety includes policy work that prevents the building of dangerous AGI systems, or reduces misuse risks from AGI systems aligned to actors who don’t have humanity’s best interests in mind."], "entry": "Luca's Answer to What is artificial general intelligence safety / AI alignment?", "id": "6013db270eb335bb1976523b946eb89c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Would it improve the safety of quantilizers cut off the top few percent of the distribution?", "authors": "n/a", "date_published": "n/a", "text": "Question: Would it improve the safety of quantilizers cut off the top few percent of the distribution?\n\nAnswer: This is a really interesting question! Because, yeah it certainly seems to me that doing something like this would at least help, but it's not mentioned in the paper the video is based on. So I asked the author of the paper, and she said \"It wouldn't improve the security guarantee in the paper, so it wasn't discussed. Like, there's a plausible case that it's helpful, but nothing like a proof that it is\".\nTo explain this I need to talk about something I gloss over in the video, which is that the quantilizer isn't really something you can actually build. The systems we study in AI Safety tend to fall somewhere on a spectrum from \"real, practical AI system that is so messy and complex that it's hard to really think about or draw any solid conclusions from\" on one end, to \"mathematical formalism that we can prove beautiful theorems about but not actually build\" on the other, and quantilizers are pretty far towards the 'mathematical' end. It's not practical to run an expected utility calculation on every possible action like that, for one thing. 
But, proving things about quantilizers gives us insight into how more practical AI systems may behave, or we may be able to build approximations of quantilizers, etc.\nSo it's like, if we built something that was quantilizer-like, using a sensible human utility function and a good choice of safe distribution, this idea would probably help make it safer. BUT you can't prove that mathematically, without making probably a lot of extra assumptions about the utility function and/or the action distribution. So it's a potentially good idea that's nonetheless hard to express within the framework in which the quantilizer exists.\nTL;DR: This is likely a good idea! But can we prove it?", "question": "Would it improve the safety of quantilizers cut off the top few percent of the distribution?", "answer": ["This is a really interesting question! Because, yeah it certainly seems to me that doing something like this would at least help, but it's not mentioned in the paper the video is based on. So I asked the author of the paper, and she said \"It wouldn't improve the security guarantee in the paper, so it wasn't discussed. Like, there's a plausible case that it's helpful, but nothing like a proof that it is\".\nTo explain this I need to talk about something I gloss over in the video, which is that the quantilizer isn't really something you can actually build. The systems we study in AI Safety tend to fall somewhere on a spectrum from \"real, practical AI system that is so messy and complex that it's hard to really think about or draw any solid conclusions from\" on one end, to \"mathematical formalism that we can prove beautiful theorems about but not actually build\" on the other, and quantilizers are pretty far towards the 'mathematical' end. It's not practical to run an expected utility calculation on every possible action like that, for one thing. But, proving things about quantilizers gives us insight into how more practical AI systems may behave, or we may be able to build approximations of quantilizers, etc.\nSo it's like, if we built something that was quantilizer-like, using a sensible human utility function and a good choice of safe distribution, this idea would probably help make it safer. BUT you can't prove that mathematically, without making probably a lot of extra assumptions about the utility function and/or the action distribution. So it's a potentially good idea that's nonetheless hard to express within the framework in which the quantilizer exists.\nTL;DR: This is likely a good idea! 
But can we prove it?"], "entry": "Robertskmiles's Answer to Would it improve the safety of quantilizers to cut off the top few percent of the distribution?", "id": "33effaefcc1460f6390a2ba597dc575b"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why is AGI safety a hard problem?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why is AGI safety a hard problem?\n\nAnswer: There's the \"we never figure out how to reliably instill AIs with human friendly goals\" filter, which seems pretty challenging, especially with [https://www.youtube.com/watch?v꞊bJLcIBixGj8 inner alignment], solving morality in a way which is possible to code up, interpretability, etc.\n\nThere's the \"race dynamics mean that even though we know how to build the thing safely the first group to cross the recursive self-improvement line ends up not implementing it safely\" which is potentially made worse by the twin issues of \"maybe robustly aligned AIs are much harder to build\" and \"maybe robustly aligned AIs are much less compute efficient\".\n\nThere's the \"we solved the previous problems but writing perfectly reliably code in a whole new domain is hard and there is some fatal bug which we don't find until too late\" filter. The paper [https://arxiv.org/abs/1701.04739 The Pursuit of Exploitable Bugs in Machine Learning] explores this.\n\nFor a much more in depth analysis, see [https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38 Paul Christiano's AI Alignment Landscape] talk and [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk The Main Sources of AI Risk?].", "question": "Why is AGI safety a hard problem?", "answer": ["There's the \"we never figure out how to reliably instill AIs with human friendly goals\" filter, which seems pretty challenging, especially with [https://www.youtube.com/watch?v꞊bJLcIBixGj8 inner alignment], solving morality in a way which is possible to code up, interpretability, etc.\n\nThere's the \"race dynamics mean that even though we know how to build the thing safely the first group to cross the recursive self-improvement line ends up not implementing it safely\" which is potentially made worse by the twin issues of \"maybe robustly aligned AIs are much harder to build\" and \"maybe robustly aligned AIs are much less compute efficient\".\n\nThere's the \"we solved the previous problems but writing perfectly reliably code in a whole new domain is hard and there is some fatal bug which we don't find until too late\" filter. The paper [https://arxiv.org/abs/1701.04739 The Pursuit of Exploitable Bugs in Machine Learning] explores this.\n\nFor a much more in depth analysis, see [https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38 Paul Christiano's AI Alignment Landscape] talk and [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk The Main Sources of AI Risk?]."], "entry": "Plex's Answer to Why is AGI safety a hard problem?", "id": "e52f529fc612413f6b03e75c28de56cb"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can’t we just use Asimov’s Three Laws of Robotics?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we just use Asimov's Three Laws of Robotics?\n\nAnswer: Isaac Asimov wrote those laws as a plot device for science fiction novels. Every story in the I, Robot series details a way that the laws can go wrong and be misinterpreted by robots. 
The laws are not a solution because they are an overly-simple set of natural language instructions that don't have clearly defined terms and don't factor in all edge-case scenarios.", "question": "Why can’t we just use Asimov’s Three Laws of Robotics?", "answer": ["Isaac Asimov wrote those laws as a plot device for science fiction novels. Every story in the I, Robot series details a way that the laws can go wrong and be misinterpreted by robots. The laws are not a solution because they are an overly-simple set of natural language instructions that don’t have clearly defined terms and don’t factor in all edge-case scenarios."], "entry": "Answer to Why can’t we just use Asimov’s Three Laws of Robotics?", "id": "c1cede56100354b85b90cc128d908c02"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can't we just turn the AI off if it starts misbehave?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we just turn the AI off if it starts misbehave?\n\nAnswer: We could shut down weaker systems, and this would be a useful guardrail against certain types of problem caused by narrow AI. However, once an AGI establishes itself, we could not unless it was [https://www.lesswrong.com/tag/corrigibility corrigible] and willing to let humans adjust it. There may be a period in the early stages of an AGI's development where it would be trying very hard to convince us that we should not shut it down and/or hiding itself and/or recursively self-improving and/or making copies of itself onto every server on earth.\n\nInstrumental Convergence and the Stop Button Problem are the key reasons it would not be simple to shut down a non corrigible advanced system. If the AI wants to collect stamps, being turned off means it gets less stamps, so even without an explicit goal of not being turned off it has an instrumental reason to avoid being turned off (e.g. once it acquires a detailed world model and general intelligence, it is likely to realise that by playing nice and pretending to be aligned if you have the power to turn it off, establishing control over any system we put in place to shut it down, and eliminating us if it has the power to reliably do so and we would otherwise pose a threat).\n\n(youtube)ZeecOKBus3Q(/youtube)\n(youtube)3TYT1QfdfsM(/youtube)", "question": "Why can't we just turn the AI off if it starts misbehave?", "answer": ["We could shut down weaker systems, and this would be a useful guardrail against certain types of problem caused by narrow AI. However, once an AGI establishes itself, we could not unless it was [https://www.lesswrong.com/tag/corrigibility corrigible] and willing to let humans adjust it. There may be a period in the early stages of an AGI's development where it would be trying very hard to convince us that we should not shut it down and/or hiding itself and/or recursively self-improving and/or making copies of itself onto every server on earth.\n\nInstrumental Convergence and the Stop Button Problem are the key reasons it would not be simple to shut down a non corrigible advanced system. If the AI wants to collect stamps, being turned off means it gets less stamps, so even without an explicit goal of not being turned off it has an instrumental reason to avoid being turned off (e.g. 
once it acquires a detailed world model and general intelligence, it is likely to realise that by playing nice and pretending to be aligned if you have the power to turn it off, establishing control over any system we put in place to shut it down, and eliminating us if it has the power to reliably do so and we would otherwise pose a threat).\n\n(youtube)ZeecOKBus3Q(/youtube)\n(youtube)3TYT1QfdfsM(/youtube)"], "entry": "Plex's Answer to Why can't we just turn the AI off if it starts to misbehave?", "id": "c2ea28e81c75ec8497a4e27ca9cfb9ee"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Where can I learn about AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: Where can I learn about AI alignment?\n\nAnswer: If you like interactive FAQs, you're in the right place already! Joking aside, some great entry points are the [https://www.youtube.com/watch?v꞊tlS5Y2vm02c&list꞊PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA&index꞊1 AI alignment playlist] on YouTube, \"[https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html The Road to Superintelligence]\" and \"[https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html Our Immortality or Extinction]\" posts on WaitBuyWhy for a fun, accessible introduction, and ''Vox's'' \"[https://www.vox.com/future-perfect/2018/12/21//ai-artificial-intelligence-machine-learning-safety-alignment The case for taking AI seriously as a threat to humanity]\" as a high-quality mainstream explainer piece.\n\nThe free online [https://www.eacambridge.org/agi-safety-fundamentals Cambridge course on AGI Safety Fundamentals] provides a strong grounding in much of the field and a cohort + mentor to learn with.\n\nThere are many resources in this post on [https://forum.effectivealtruism.org/posts/S7dhJR5TDwPb5jypG/levelling-up-in-ai-safety-research-engineering Levelling Up in AI Safety Research Engineering] with a list of other guides at the bottom. There is also a [https://twitter.com/FreshMangoLassi/status/1575138148937498625 twitter thread] here with some programs for upskilling and some for safety-specific learning.\n\nThe [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter] ([https://alignment-newsletter.libsyn.com/ podcast]), [https://www.alignmentforum.org/ Alignment Forum], and [https://www.reddit.com/r/ControlProblem/ AGI Control Problem Subreddit] are great for keeping up with latest developments.", "question": "Where can I learn about AI alignment?", "answer": ["If you like interactive FAQs, you're in the right place already! 
Joking aside, some great entry points are the [https://www.youtube.com/watch?v꞊tlS5Y2vm02c&list꞊PLCRVRLd2RhZTpdUdEzJjo3qhmX3y3skWA&index꞊1 AI alignment playlist] on YouTube, “[https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html The Road to Superintelligence]” and “[https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html Our Immortality or Extinction]” posts on WaitBuyWhy for a fun, accessible introduction, and ''Vox's'' “[https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment The case for taking AI seriously as a threat to humanity]” as a high-quality mainstream explainer piece.\n\nThe free online [https://www.eacambridge.org/agi-safety-fundamentals Cambridge course on AGI Safety Fundamentals] provides a strong grounding in much of the field and a cohort + mentor to learn with.\n\nThere are many resources in this post on [https://forum.effectivealtruism.org/posts/S7dhJR5TDwPb5jypG/levelling-up-in-ai-safety-research-engineering Levelling Up in AI Safety Research Engineering] with a list of other guides at the bottom. There is also a [https://twitter.com/FreshMangoLassi/status/1575138148937498625 twitter thread] here with some programs for upskilling and some for safety-specific learning.\n\nThe [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter] ([https://alignment-newsletter.libsyn.com/ podcast]), [https://www.alignmentforum.org/ Alignment Forum], and [https://www.reddit.com/r/ControlProblem/ AGI Control Problem Subreddit] are great for keeping up with latest developments."], "entry": "Plex's Answer to Where can I learn about AI alignment?", "id": "0a1c76400ca5bd836486c46414b24131"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Where can I find people talk about AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: Where can I find people talk about AI alignment?\n\nAnswer: You can join:\n*A local student or meetup [https://www.lesswrong.com/community LessWrong] or [https://forum.effectivealtruism.org/community Effective Altruism] group (or [https://www.effectivealtruism.org/groups start one]!)\n*[https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals] which gives you a cohort to learn alongside and mentorship\n*[https://discord.com/channels/677546901339504640 Rob Miles' Discord]\n*[https://discord.com/invite/wz4MpRec4A Eleuther AI Discord]\n*[https://ai-alignment.slack.com/join/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw#/shared-invite/email AI Safety Slack]\n*The relevant discussion threads on the [https://astralcodexten.substack.com/ ''Astral Codex Ten'' Substack], which sometimes discusses alignment.\n*[https://discord.com/invite/RTKtdut ACX Discord]\n*[https://www.reddit.com/r/slatestarcodex/ ACX Subreddit]\n\nOr book free calls with [https://www.aisafetysupport.org/ AI Safety Support].", "question": "Where can I find people talk about AI alignment?", "answer": ["You can join:\n*A local student or meetup [https://www.lesswrong.com/community LessWrong] or [https://forum.effectivealtruism.org/community Effective Altruism] group (or [https://www.effectivealtruism.org/groups start one]!)\n*[https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals] which gives you a cohort to learn alongside and mentorship\n*[https://discord.com/channels/677546901339504640 Rob Miles’ Discord]\n*[https://discord.com/invite/wz4MpRec4A Eleuther AI 
Discord]\n*[https://ai-alignment.slack.com/join/shared_invite/zt-fkgwbd2b-kK50z~BbVclOZMM9UP44gw#/shared-invite/email AI Safety Slack]\n*The relevant discussion threads on the [https://astralcodexten.substack.com/ ''Astral Codex Ten'' Substack], which sometimes discusses alignment.\n*[https://discord.com/invite/RTKtdut ACX Discord]\n*[https://www.reddit.com/r/slatestarcodex/ ACX Subreddit]\n\nOr book free calls with [https://www.aisafetysupport.org/ AI Safety Support]."], "entry": "Plex's Answer to Where can I find people to talk to about AI alignment?", "id": "3e23c7077e4312cfd2dc5309cd19ec6f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the \"long reflection\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the \"long reflection\"?\n\nAnswer:

The long reflection is a hypothesized period of time during which humanity works out how best to realize its long-term potential.

Some effective altruists, including [https://forum.effectivealtruism.org/tag/toby-ord Toby Ord] and [https://forum.effectivealtruism.org/tag/william-macaskill William MacAskill], have argued that, if humanity succeeds in eliminating [https://forum.effectivealtruism.org/tag/existential-risk existential risk] or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project of arranging the [https://forum.effectivealtruism.org/tag/universe-s-resources universe's resources] in accordance with its values, but ought instead to spend considerable time— \"centuries (or more)\";(ref)

Ord, Toby (2020) [https://en.wikipedia.org/wiki/Special:BookSources/1526600218 The Precipice: Existential Risk and the Future of Humanity], London: Bloomsbury Publishing.

(/ref) \"perhaps tens of thousands of years\";(ref)

Greaves, Hilary et al. (2019) [https://globalprioritiesinstitute.org/wp-content/uploads/2017/12/gpi-research-agenda.pdf A research agenda for the Global Priorities Institute], Oxford.

(/ref)
 \"thousands or millions of years\";(ref)

Dai, Wei (2019) [https://www.lesswrong.com/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty The argument from philosophical difficulty], LessWrong, February 9.

(/ref)
 \"[p]erhaps... a million years\"(ref)

William MacAskill, in Perry, Lucas (2018) [https://futureoflife.org/2018/09/17/moral-uncertainty-and-the-path-to-ai-alignment-with-william-macaskill/ AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill], AI Alignment podcast, September 17.

(/ref)
—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory, following an initial stage of [https://forum.effectivealtruism.org/tag/existential-security existential security] when existential risk is drastically reduced and followed by a final stage when humanity's potential is fully realized.(ref)

Ord, Toby (2020) [https://en.wikipedia.org/wiki/Special:BookSources/1526600218 The Precipice: Existential Risk and the Future of Humanity], London: Bloomsbury Publishing.

(/ref)

Criticism

The idea of a long reflection has been criticized on the grounds that virtually eliminating all existential risk will almost certainly require taking a variety of large-scale, irreversible decisions—related to [https://forum.effectivealtruism.org/tag/space-colonization space colonization], [https://forum.effectivealtruism.org/tag/global-governance global governance], [https://forum.effectivealtruism.org/tag/cognitive-enhancement cognitive enhancement], and so on—which are precisely the decisions meant to be discussed during the long reflection.(ref)

Stocker, Felix (2020) [https://www.felixstocker.com/blog/reflecting-on-the-long-reflection Reflecting on the long reflection], Felix Stocker's Blog, August 14.

(/ref)(ref)

Hanson, Robin (2021) [https://www.overcomingbias.com/2021/10/long-reflection-is-crazy-bad-idea.html 'Long reflection' is crazy bad idea], Overcoming Bias, October 20.(/ref) Since there are pervasive and inescapable tradeoffs between reducing existential risk and retaining moral option value, it may be argued that it does not make sense to frame humanity's long-term strategic picture as one consisting of two distinct stages, with one taking precedence over the other.

Further reading

Aird, Michael (2020) [https://forum.effectivealtruism.org/posts/H2zno3ggRJaph9P6c/quotes-about-the-long-reflection?commentId꞊z2ybSC353mPHpCjbn Collection of sources that are highly relevant to the idea of the Long Reflection], Effective Altruism Forum, June 20.
Many additional resources on this topic.

Wiblin, Robert & Keiran Harris (2018) [https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/ Our descendants will probably see us as moral monsters. What should we do about that?], 80,000 Hours, January 19.
Interview with William MacAskill about the long reflection and other topics.

Related entries

[https://forum.effectivealtruism.org/tag/dystopia dystopia] ┊ [https://forum.effectivealtruism.org/tag/existential-risk existential risk] ┊ [https://forum.effectivealtruism.org/tag/existential-security existential security] ┊ [https://forum.effectivealtruism.org/tag/long-term-future long-term future] ┊ [https://forum.effectivealtruism.org/tag/longtermism longtermism] ┊ [https://forum.effectivealtruism.org/topics/longtermist-institutional-reform longtermist institutional reform] ┊ [https://forum.effectivealtruism.org/tag/moral-uncertainty moral uncertainty] ┊ [https://forum.effectivealtruism.org/tag/normative-ethics normative ethics] ┊ [https://forum.effectivealtruism.org/tag/value-lock-in value lock-in]

", "question": "What is the \"long reflection\"?", "answer": ["

The long reflection is a hypothesized period of time during which humanity works out how best to realize its long-term potential.

Some effective altruists, including [https://forum.effectivealtruism.org/tag/toby-ord Toby Ord] and [https://forum.effectivealtruism.org/tag/william-macaskill William MacAskill], have argued that, if humanity succeeds in eliminating [https://forum.effectivealtruism.org/tag/existential-risk existential risk] or reducing it to acceptable levels, it should not immediately embark on an ambitious and potentially irreversible project of arranging the [https://forum.effectivealtruism.org/tag/universe-s-resources universe's resources] in accordance with its values, but ought instead to spend considerable time— \"centuries (or more)\";(ref)

Ord, Toby (2020) [https://en.wikipedia.org/wiki/Special:BookSources/1526600218 The Precipice: Existential Risk and the Future of Humanity], London: Bloomsbury Publishing.

(/ref)
 \"perhaps tens of thousands of years\";(ref)

Greaves, Hilary et al. (2019) [https://globalprioritiesinstitute.org/wp-content/uploads/2017/12/gpi-research-agenda.pdf A research agenda for the Global Priorities Institute], Oxford.

(/ref)
 \"thousands or millions of years\";(ref)

Dai, Wei (2019) [https://www.lesswrong.com/posts/w6d7XBCegc96kz4n3/the-argument-from-philosophical-difficulty The argument from philosophical difficulty], LessWrong, February 9.

(/ref)
 \"[p]erhaps... a million years\"(ref)

William MacAskill, in Perry, Lucas (2018) [https://futureoflife.org/2018/09/17/moral-uncertainty-and-the-path-to-ai-alignment-with-william-macaskill/ AI alignment podcast: moral uncertainty and the path to AI alignment with William MacAskill], AI Alignment podcast, September 17.

(/ref)
—figuring out what is in fact of value. The long reflection may thus be seen as an intermediate stage in a rational long-term human developmental trajectory, following an initial stage of [https://forum.effectivealtruism.org/tag/existential-security existential security] when existential risk is drastically reduced and followed by a final stage when humanity's potential is fully realized.(ref)

Ord, Toby (2020) [https://en.wikipedia.org/wiki/Special:BookSources/1526600218 The Precipice: Existential Risk and the Future of Humanity], London: Bloomsbury Publishing.

(/ref)

Criticism

The idea of a long reflection has been criticized on the grounds that virtually eliminating all existential risk will almost certainly require taking a variety of large-scale, irreversible decisions—related to [https://forum.effectivealtruism.org/tag/space-colonization space colonization], [https://forum.effectivealtruism.org/tag/global-governance global governance], [https://forum.effectivealtruism.org/tag/cognitive-enhancement cognitive enhancement], and so on—which are precisely the decisions meant to be discussed during the long reflection.(ref)

Stocker, Felix (2020) [https://www.felixstocker.com/blog/reflecting-on-the-long-reflection Reflecting on the long reflection], Felix Stocker’s Blog, August 14.

(/ref)(ref)

Hanson, Robin (2021) [https://www.overcomingbias.com/2021/10/long-reflection-is-crazy-bad-idea.html ‘Long reflection’ is crazy bad idea], Overcoming Bias, October 20.(/ref) Since there are pervasive and inescapable tradeoffs between reducing existential risk and retaining moral option value, it may be argued that it does not make sense to frame humanity's long-term strategic picture as one consisting of two distinct stages, with one taking precedence over the other.

Further reading

Aird, Michael (2020) [https://forum.effectivealtruism.org/posts/H2zno3ggRJaph9P6c/quotes-about-the-long-reflection?commentId꞊z2ybSC353mPHpCjbn Collection of sources that are highly relevant to the idea of the Long Reflection], Effective Altruism Forum, June 20.
Many additional resources on this topic.

Wiblin, Robert & Keiran Harris (2018) [https://80000hours.org/podcast/episodes/will-macaskill-moral-philosophy/ Our descendants will probably see us as moral monsters. What should we do about that?], 80,000 Hours, January 19.
Interview with William MacAskill about the long reflection and other topics.

Related entries

[https://forum.effectivealtruism.org/tag/dystopia dystopia] ┊ [https://forum.effectivealtruism.org/tag/existential-risk existential risk] ┊ [https://forum.effectivealtruism.org/tag/existential-security existential security] ┊ [https://forum.effectivealtruism.org/tag/long-term-future long-term future] ┊ [https://forum.effectivealtruism.org/tag/longtermism longtermism] ┊ [https://forum.effectivealtruism.org/topics/longtermist-institutional-reform longtermist institutional reform] ┊ [https://forum.effectivealtruism.org/tag/moral-uncertainty moral uncertainty] ┊ [https://forum.effectivealtruism.org/tag/normative-ethics normative ethics] ┊ [https://forum.effectivealtruism.org/tag/value-lock-in value lock-in]

"], "entry": "Linnea's Answer to What is the \"long reflection\"?", "id": "2b01c78511a2f93ec1511d1e932a1cad"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is meant by \"AI takeoff\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is meant by \"AI takeoff\"?\n\nAnswer:

AI Takeoff refers to the process of an [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence] going from a certain threshold of capability (often discussed as \"human-level\") to being super-intelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow vs fast, i.e., \"soft\" vs \"hard\".

See also: [https://www.lesswrong.com/tag/ai-timelines AI Timelines], [https://www.lesswrong.com/tag/seed-ai Seed AI], [https://www.lesswrong.com/tag/singularity Singularity], [https://www.lesswrong.com/tag/intelligence-explosion Intelligence explosion], [https://www.lesswrong.com/tag/recursive-self-improvement Recursive self-improvement]

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be because the learning algorithm is too demanding for the hardware, or because the AI relies on experiencing feedback from the real world that would have to be played out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are [https://www.lesswrong.com/tag/whole-brain-emulation Whole brain emulation], [https://www.lesswrong.com/tag/nootropics-and-other-cognitive-enhancement Biological Cognitive Enhancement], and software-based strong AGI [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn1 1]]. By maintaining control of the AGI's ascent, it should be easier for a [https://wiki.lesswrong.com/wiki/Friendly_AI Friendly AI] to emerge.

Vernor Vinge, Hans Moravec, and others have expressed the view that soft takeoff is preferable to a hard takeoff as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going \"FOOM\" [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn2 2]]) refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abruptly, local increase in capability. This scenario is widely considered much more precarious, as this involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e. [https://wiki.lesswrong.com/wiki/Unfriendly_AI Unfriendly AI]). It is one of the main ideas supporting the [https://www.lesswrong.com/tag/intelligence-explosion Intelligence explosion] hypothesis.

The feasibility of hard takeoff has been addressed by Hugo de Garis, [https://www.lesswrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky], [https://www.lesswrong.com/tag/ben-goertzel Ben Goertzel], [https://www.lesswrong.com/tag/nick-bostrom Nick Bostrom], and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks. Yudkowsky points out several possibilities that would make a hard takeoff more likely than a soft takeoff, such as the existence of large [https://www.lesswrong.com/tag/computing-overhang resource overhangs] or the fact that small improvements seem to have a large impact on a mind's general intelligence (e.g. the small genetic difference between humans and chimps led to huge increases in capability) [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn3 3]].

Notable posts

  • [https://www.lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky

External links

  • [http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of The Foresight Institute
  • [http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html Hard take off Hypothesis] by Ben Goertzel.
  • [http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive archive of Hard takeoff Essays] from Accelerating Future
  • [http://www-rohan.sdsu.edu/faculty/vinge/misc/ac2005/ Can we avoid a hard take off?] by Vernor Vinge
  • [http://www.amazon.co.uk/Robot-Mere-Machine-Transcendent-Mind/dp/0195136306 Robot: Mere Machine to Transcendent Mind] by Hans Moravec
  • [http://www.amazon.co.uk/The-Singularity-Near-Raymond-Kurzweil/dp/0715635611/ref꞊sr_1_1?s꞊books&ie꞊UTF8&qid꞊1339495098&sr꞊1-1 The Singularity is Near] by Ray Kurzweil

References

  1. [http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html]
  2. [http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/ http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/]
  3. [https://www.lesswrong.com/lw/wf/hard_takeoff/ http://lesswrong.com/lw/wf/hard_takeoff/]
", "question": "What is meant by \"AI takeoff\"?", "answer": ["

AI Takeoff refers to the process of an [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence] going from a certain threshold of capability (often discussed as \"human-level\") to being super-intelligent and capable enough to control the fate of civilization. There has been much debate about whether AI takeoff is more likely to be slow vs fast, i.e., \"soft\" vs \"hard\".

See also: [https://www.lesswrong.com/tag/ai-timelines AI Timelines], [https://www.lesswrong.com/tag/seed-ai Seed AI], [https://www.lesswrong.com/tag/singularity Singularity], [https://www.lesswrong.com/tag/intelligence-explosion Intelligence explosion], [https://www.lesswrong.com/tag/recursive-self-improvement Recursive self-improvement]

AI takeoff is sometimes casually referred to as AI FOOM.

Soft takeoff

A soft takeoff refers to an AGI that would self-improve over a period of years or decades. This could be because the learning algorithm is too demanding for the hardware, or because the AI relies on experiencing feedback from the real world that would have to be played out in real time. Possible methods that could deliver a soft takeoff, by slowly building on human-level intelligence, are [https://www.lesswrong.com/tag/whole-brain-emulation Whole brain emulation], [https://www.lesswrong.com/tag/nootropics-and-other-cognitive-enhancement Biological Cognitive Enhancement], and software-based strong AGI [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn1 1]]. By maintaining control of the AGI's ascent, it should be easier for a [https://wiki.lesswrong.com/wiki/Friendly_AI Friendly AI] to emerge.

Vernor Vinge, Hans Moravec, and others have expressed the view that soft takeoff is preferable to a hard takeoff as it would be both safer and easier to engineer.

Hard takeoff

A hard takeoff (or an AI going \"FOOM\" [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn2 2]]) refers to AGI expansion in a matter of minutes, days, or months. It is a fast, abruptly, local increase in capability. This scenario is widely considered much more precarious, as this involves an AGI rapidly ascending in power without human control. This may result in unexpected or undesired behavior (i.e. [https://wiki.lesswrong.com/wiki/Unfriendly_AI Unfriendly AI]). It is one of the main ideas supporting the [https://www.lesswrong.com/tag/intelligence-explosion Intelligence explosion] hypothesis.

The feasibility of hard takeoff has been addressed by Hugo de Garis, [https://www.lesswrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky], [https://www.lesswrong.com/tag/ben-goertzel Ben Goertzel], [https://www.lesswrong.com/tag/nick-bostrom Nick Bostrom], and Michael Anissimov. It is widely agreed that a hard takeoff is something to be avoided due to the risks. Yudkowsky points out several possibilities that would make a hard takeoff more likely than a soft takeoff, such as the existence of large [https://www.lesswrong.com/tag/computing-overhang resource overhangs] or the fact that small improvements seem to have a large impact on a mind's general intelligence (e.g. the small genetic difference between humans and chimps led to huge increases in capability) [[https://www.lesswrong.com/tag/ai-takeoff?revision꞊0.0.24&lw_source꞊import_sheet#fn3 3]].

Notable posts

  • [https://www.lesswrong.com/lw/wf/hard_takeoff/ Hard Takeoff] by Eliezer Yudkowsky

External links

  • [http://www.kurzweilai.net/the-age-of-virtuous-machines The Age of Virtuous Machines] by J. Storrs Hall, President of The Foresight Institute
  • [http://multiverseaccordingtoben.blogspot.co.uk/2011/01/hard-takeoff-hypothesis.html Hard take off Hypothesis] by Ben Goertzel.
  • [http://www.acceleratingfuture.com/michael/blog/2011/05/hard-takeoff-sources/ Extensive archive of Hard takeoff Essays] from Accelerating Future
  • [http://www-rohan.sdsu.edu/faculty/vinge/misc/ac2005/ Can we avoid a hard take off?] by Vernor Vinge
  • [http://www.amazon.co.uk/Robot-Mere-Machine-Transcendent-Mind/dp/0195136306 Robot: Mere Machine to Transcendent Mind] by Hans Moravec
  • [http://www.amazon.co.uk/The-Singularity-Near-Raymond-Kurzweil/dp/0715635611/ref꞊sr_1_1?s꞊books&ie꞊UTF8&qid꞊1339495098&sr꞊1-1 The Singularity is Near] by Ray Kurzweil

References

  1. [http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html http://www.aleph.se/andart/archives/2010/10/why_early_singularities_are_softer.html]
  2. [http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/ http://lesswrong.com/lw/63t/requirements_for_ai_to_go_foom/]
  3. [https://www.lesswrong.com/lw/wf/hard_takeoff/ http://lesswrong.com/lw/wf/hard_takeoff/]
"], "entry": "Linnea's Answer to What is meant by \"AI takeoff\"?", "id": "fd0917d9f2f435a6befeb5823707f532"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are \"human values\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are \"human values\"?\n\nAnswer:

Human Values are the things we care about, and would want an aligned superintelligence to look after and support. It is suspected that true human values are [https://www.lesswrong.com/tag/complexity-of-value highly complex], and could be extrapolated into a wide variety of forms.

", "question": "What are \"human values\"?", "answer": ["

Human Values are the things we care about, and would want an aligned superintelligence to look after and support. It is suspected that true human values are [https://www.lesswrong.com/tag/complexity-of-value highly complex], and could be extrapolated into a wide variety of forms.

"], "entry": "Linnea's Answer to What are \"human values\"?", "id": "33ab2f0cd3a6dee01f8bcc1c2a8edaed"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Superintelligence sounds like science fiction. Do people think about this in the real world?", "authors": "n/a", "date_published": "n/a", "text": "Question: Superintelligence sounds like science fiction. Do people think about this in the real world?\n\nAnswer: Many of the people with the deepest understanding of artificial intelligence are concerned about the risks of unaligned superintelligence. In 2014, Google bought world-leading artificial intelligence startup [https://en.wikipedia.org/wiki/DeepMind DeepMind] for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be ''\"something approaching absolute power\"'' and ''\"the number one risk for this century\".''\n\n[https://en.wikipedia.org/wiki/Stuart_J._Russell#Career_and_research Stuart Russell], Professor of Computer Science at Berkeley, author of the standard AI textbook, and world-famous AI expert, warns of ''\"species-ending problems\"'' and wants his field to pivot to make superintelligence-related risks a central concern. He went so far as to write [https://en.wikipedia.org/wiki/Human_Compatible Human Compatible], a book focused on bringing attention to the dangers of artificial intelligence and the need for more work to address them.\n\nMany other science and technology leaders agree. Late astrophysicist [https://en.wikipedia.org/wiki/Stephen_Hawking#Future_of_humanity Stephen Hawking] said that superintelligence ''\"could spell the end of the human race.\"'' Tech billionaire [https://en.wikipedia.org/wiki/Bill_Gates#Post-Microsoft Bill Gates] describes himself as ''\"in the camp that is concerned about superintelligence…I don't understand why some people are not concerned\".'' Oxford Professor [https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine Nick Bostrom], who has been studying AI risks for over 20 years, has said: ''\"Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.\"''\n\n[https://en.wikipedia.org/wiki/Holden_Karnofsky Holden Karnofsky], the CEO of [https://www.openphilanthropy.org/ Open Philanthropy], has written a carefully reasoned account of why transformative artificial intelligence means that this might be [https://www.cold-takes.com/most-important-century/ the most important century].", "question": "Superintelligence sounds like science fiction. Do people think about this in the real world?", "answer": ["Many of the people with the deepest understanding of artificial intelligence are concerned about the risks of unaligned superintelligence. In 2014, Google bought world-leading artificial intelligence startup [https://en.wikipedia.org/wiki/DeepMind DeepMind] for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. 
DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be ''“something approaching absolute power”'' and ''“the number one risk for this century”.''\n\n[https://en.wikipedia.org/wiki/Stuart_J._Russell#Career_and_research Stuart Russell], Professor of Computer Science at Berkeley, author of the standard AI textbook, and world-famous AI expert, warns of ''“species-ending problems”'' and wants his field to pivot to make superintelligence-related risks a central concern. He went so far as to write [https://en.wikipedia.org/wiki/Human_Compatible Human Compatible], a book focused on bringing attention to the dangers of artificial intelligence and the need for more work to address them.\n\nMany other science and technology leaders agree. Late astrophysicist [https://en.wikipedia.org/wiki/Stephen_Hawking#Future_of_humanity Stephen Hawking] said that superintelligence ''“could spell the end of the human race.”'' Tech billionaire [https://en.wikipedia.org/wiki/Bill_Gates#Post-Microsoft Bill Gates] describes himself as ''“in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”.'' Oxford Professor [https://www.theguardian.com/technology/2016/jun/12/nick-bostrom-artificial-intelligence-machine Nick Bostrom], who has been studying AI risks for over 20 years, has said: ''“Superintelligence is a challenge for which we are not ready now and will not be ready for a long time.”''\n\n[https://en.wikipedia.org/wiki/Holden_Karnofsky Holden Karnofsky], the CEO of [https://www.openphilanthropy.org/ Open Philanthropy], has written a carefully reasoned account of why transformative artificial intelligence means that this might be [https://www.cold-takes.com/most-important-century/ the most important century]."], "entry": "Answer to Superintelligence sounds like science fiction. Do people think about this in the real world?", "id": "42a649969283a03ae18cd58977ef31c0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Isn’t AI just a tool like any other? Won’t it just do what we tell it to?", "authors": "n/a", "date_published": "n/a", "text": "Question: Isn't AI just a tool like any other? Won't it just do what we tell it to?\n\nAnswer: It likely will – however, intelligence is, by many definitions, the ability to figure out how to accomplish goals. Even in today's advanced AI systems, the builders assign the goal but don't tell the AI exactly how to accomplish it, nor necessarily predict in detail how it will be done; indeed those systems often solve problems in creative, unpredictable ways. Thus the thing that makes such systems intelligent is precisely what can make them difficult to predict and control. They may therefore attain the goal we set them via means inconsistent with our preferences.", "question": "Isn’t AI just a tool like any other? Won’t it just do what we tell it to?", "answer": ["It likely will – however, intelligence is, by many definitions, the ability to figure out how to accomplish goals. Even in today’s advanced AI systems, the builders assign the goal but don’t tell the AI exactly how to accomplish it, nor necessarily predict in detail how it will be done; indeed those systems often solve problems in creative, unpredictable ways. Thus the thing that makes such systems intelligent is precisely what can make them difficult to predict and control. 
They may therefore attain the goal we set them via means inconsistent with our preferences."], "entry": "Answer to Isn’t AI just a tool like any other? Won’t it just do what we tell it to?", "id": "30f1cbd44a01870639a7c38cfc8985ee"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is there a danger in anthropomorphizing AI’s and trying understand them in human terms?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is there a danger in anthropomorphizing AI's and trying understand them in human terms?\n\nAnswer: Using some human-related metaphors (e.g. what an AGI 'wants' or 'believes') is almost unavoidable, as our language is built around experiences with humans, but we should be aware that these may lead us astray.\n\nMany paths to AGI would result in a mind very different from a human or animal, and it would be hard to predict in detail how it would act. We should not trust intuitions trained on humans to predict what an AGI or superintelligence would do. High fidelity Whole Brain Emulations are one exception, where we would expect the system to at least initially be fairly human, but it may diverge depending on its environment and what modifications are applied to it.\n\nThere has been some discussion about how language models trained on lots of human-written text seem likely to pick up human concepts and think in a somewhat human way, and how we could [https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default use this to improve alignment].", "question": "Is there a danger in anthropomorphizing AI’s and trying understand them in human terms?", "answer": ["Using some human-related metaphors (e.g. what an AGI ‘wants’ or ‘believes’) is almost unavoidable, as our language is built around experiences with humans, but we should be aware that these may lead us astray.\n\nMany paths to AGI would result in a mind very different from a human or animal, and it would be hard to predict in detail how it would act. We should not trust intuitions trained on humans to predict what an AGI or superintelligence would do. High fidelity Whole Brain Emulations are one exception, where we would expect the system to at least initially be fairly human, but it may diverge depending on its environment and what modifications are applied to it.\n\nThere has been some discussion about how language models trained on lots of human-written text seem likely to pick up human concepts and think in a somewhat human way, and how we could [https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default use this to improve alignment]."], "entry": "Plex's Answer to Is there a danger in anthropomorphizing AI’s and trying to understand them in human terms?", "id": "f3c5e7787849e556aa5710b7723a8ce0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is it possible code in an AI avoid all the ways a given task could go wrong, and would it be dangerous try that?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is it possible code in an AI avoid all the ways a given task could go wrong, and would it be dangerous try that?\n\nAnswer: Sort answer: No, and could be dangerous to try.\n\nSlightly longer answer: With any realistic real-world task assigned to an AGI, there are so many ways in which it could go wrong that trying to block them all off by hand is a hopeless task, especially when something smarter than you is trying to find creative new things to do. 
You run into the [https://arbital.greaterwrong.com/p/nearest_unblocked/ nearest unblocked strategy] problem.\n\nIt may be dangerous to try this because if you try and hard-code a large number of things to avoid it increases the chance that there's a bug in your code which causes major problems, simply by increasing the size of your codebase.", "question": "Is it possible code in an AI avoid all the ways a given task could go wrong, and would it be dangerous try that?", "answer": ["Sort answer: No, and could be dangerous to try.\n\nSlightly longer answer: With any realistic real-world task assigned to an AGI, there are so many ways in which it could go wrong that trying to block them all off by hand is a hopeless task, especially when something smarter than you is trying to find creative new things to do. You run into the [https://arbital.greaterwrong.com/p/nearest_unblocked/ nearest unblocked strategy] problem.\n\nIt may be dangerous to try this because if you try and hard-code a large number of things to avoid it increases the chance that there’s a bug in your code which causes major problems, simply by increasing the size of your codebase."], "entry": "Plex's Answer to Is it possible to code into an AI to avoid all the ways a given task could go wrong, and would it be dangerous to try that?", "id": "90d37ccf8fa3a6fa8851b5cf76d0fa5c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might an \"intelligence explosion\" be dangerous?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might an \"intelligence explosion\" be dangerous?\n\nAnswer: If programmed with the wrong motivations, a machine could be malevolent toward humans, and intentionally exterminate our species. More likely, it could be designed with motivations that initially appeared safe (and easy to program) to its designers, but that turn out to be best fulfilled (given sufficient power) by reallocating resources from sustaining human life to [http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf other projects]. As Yudkowsky writes, \"the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.\"\n\nSince weak AIs with many different motivations could better achieve their goal by faking benevolence until they are powerful, safety testing to avoid this could be very challenging. Alternatively, competitive pressures, both economic and military, might lead AI designers to try to use other methods to control AIs with undesirable motivations. As those AIs became more sophisticated this could eventually lead to one risk too many.\n\nEven a machine successfully designed with superficially benevolent motivations could easily go awry when it discovers implications of its decision criteria unanticipated by its designers. 
For example, a superintelligence programmed to maximize human happiness might find it easier to rewire human neurology so that humans are happiest when sitting quietly in jars than to build and maintain a utopian world that caters to the complex and nuanced whims of current human neurology.\n\nSee also:\n\n* Yudkowsky, [https://intelligence.org/files/AIPosNegFactor.pdf Artificial intelligence as a positive and negative factor in global risk]\n* Chalmers, [http://consc.net/papers/singularity.pdf The Singularity: A Philosophical Analysis]", "question": "How might an \"intelligence explosion\" be dangerous?", "answer": ["If programmed with the wrong motivations, a machine could be malevolent toward humans, and intentionally exterminate our species. More likely, it could be designed with motivations that initially appeared safe (and easy to program) to its designers, but that turn out to be best fulfilled (given sufficient power) by reallocating resources from sustaining human life to [http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf other projects]. As Yudkowsky writes, “the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”\n\nSince weak AIs with many different motivations could better achieve their goal by faking benevolence until they are powerful, safety testing to avoid this could be very challenging. Alternatively, competitive pressures, both economic and military, might lead AI designers to try to use other methods to control AIs with undesirable motivations. As those AIs became more sophisticated this could eventually lead to one risk too many.\n\nEven a machine successfully designed with superficially benevolent motivations could easily go awry when it discovers implications of its decision criteria unanticipated by its designers. For example, a superintelligence programmed to maximize human happiness might find it easier to rewire human neurology so that humans are happiest when sitting quietly in jars than to build and maintain a utopian world that caters to the complex and nuanced whims of current human neurology.\n\nSee also:\n\n* Yudkowsky, [https://intelligence.org/files/AIPosNegFactor.pdf Artificial intelligence as a positive and negative factor in global risk]\n* Chalmers, [http://consc.net/papers/singularity.pdf The Singularity: A Philosophical Analysis]"], "entry": "Answer to How might an \"intelligence explosion\" be dangerous?", "id": "6186d6c62c3e6104f27bbfb70f663ce1"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the \"windfall clause\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the \"windfall clause\"?\n\nAnswer: The windfall clause is pretty well explained [https://www.fhi.ox.ac.uk/windfallclause/ on the Future of Humanity Institute site].\n\nHere's a quick summary:
\nIt is an agreement between AI firms to donate significant amounts of any profits made as a consequence of economically transformative breakthroughs in AI capabilities. The donations are intended to help benefit humanity.", "question": "What is the \"windfall clause\"?", "answer": ["The windfall clause is pretty well explained [https://www.fhi.ox.ac.uk/windfallclause/ on the Future of Humanity Institute site].\n\nHere's a quick summary:
\nIt is an agreement between AI firms to donate significant amounts of any profits made as a consequence of economically transformative breakthroughs in AI capabilities. The donations are intended to help benefit humanity."], "entry": "Helenator's Answer to What is the \"windfall clause\"?", "id": "0c98f9bb7c2018b61ac2c7aaca79f8a8"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the \"orthogonality thesis\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the \"orthogonality thesis\"?\n\nAnswer:

The Orthogonality Thesis states that an agent can have any combination of intelligence level and final goal, that is, its [https://www.lesswrong.com/tag/utility-functions?showPostCount꞊true&useTagName꞊true final goals] and [https://www.lesswrong.com/tag/general-intelligence?showPostCount꞊true&useTagName꞊true intelligence levels] can vary independently of each other. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal.

The thesis was originally defined by [https://lessestwrong.com/tag/nick-bostrom Nick Bostrom] in the paper \"[https://nickbostrom.com/superintelligentwill.pdf Superintelligent Will]\", (along with the [https://wiki.lesswrong.com/wiki/instrumental_convergence_thesis instrumental convergence thesis]). For his purposes, Bostrom defines intelligence to be [https://wiki.lesswrong.com/wiki/instrumental_rationality instrumental rationality].

Related: [https://www.lesswrong.com/tag/complexity-of-value?showPostCount꞊true&useTagName꞊true Complexity of Value], [https://www.lesswrong.com/tag/decision-theory?showPostCount꞊true&useTagName꞊true Decision Theory], [https://www.lesswrong.com/tag/general-intelligence?showPostCount꞊true&useTagName꞊true General Intelligence], [https://www.lesswrong.com/tag/utility-functions?showPostCount꞊true&useTagName꞊true Utility Functions]

Defense of the thesis

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs. Stuart Armstrong has defended the thesis along these lines.

One reason many researchers assume that superintelligent agents will converge to the same goals may be that [https://lessestwrong.com/tag/human-universal most humans] have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI will acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as [https://lessestwrong.com/tag/aixi AIXI] and [https://lessestwrong.com/tag/g%C3%B6del-machine Gödel machines], the thesis is known to be true. Moreover, if the thesis were false, then [https://lessestwrong.com/tag/oracle-ai Oracle AIs] would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.

Pathological Cases

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. These goals will inherently limit the degree of intelligence of the AI.

See Also

  • [https://www.lesswrong.com/tag/instrumental-convergence Instrumental Convergence]

External links

  • Definition of the orthogonality thesis from Bostrom's [http://www.nickbostrom.com/superintelligentwill.pdf Superintelligent Will]
  • [https://arbital.com/p/orthogonality/ Arbital orthogonality thesis article ]
  • [http://philosophicaldisquisitions.blogspot.com/2012/04/bostrom-on-superintelligence-and.html Critique] of the thesis by John Danaher
  • Superintelligent Will paper by Nick Bostrom
", "question": "What is the \"orthogonality thesis\"?", "answer": ["

The Orthogonality Thesis states that an agent can have any combination of intelligence level and final goal, that is, its [https://www.lesswrong.com/tag/utility-functions?showPostCount꞊true&useTagName꞊true final goals] and [https://www.lesswrong.com/tag/general-intelligence?showPostCount꞊true&useTagName꞊true intelligence levels] can vary independently of each other. This is in contrast to the belief that, because of their intelligence, AIs will all converge to a common goal.

The thesis was originally defined by [https://www.lesswrong.com/tag/nick-bostrom Nick Bostrom] in the paper \"[https://nickbostrom.com/superintelligentwill.pdf The Superintelligent Will]\" (along with the [https://wiki.lesswrong.com/wiki/instrumental_convergence_thesis instrumental convergence thesis]). For his purposes, Bostrom defines intelligence as [https://wiki.lesswrong.com/wiki/instrumental_rationality instrumental rationality].

Related: [https://www.lesswrong.com/tag/complexity-of-value?showPostCount꞊true&useTagName꞊true Complexity of Value], [https://www.lesswrong.com/tag/decision-theory?showPostCount꞊true&useTagName꞊true Decision Theory], [https://www.lesswrong.com/tag/general-intelligence?showPostCount꞊true&useTagName꞊true General Intelligence], [https://www.lesswrong.com/tag/utility-functions?showPostCount꞊true&useTagName꞊true Utility Functions]

Defense of the thesis

It has been pointed out that the orthogonality thesis is the default position, and that the burden of proof is on claims that limit possible AIs, a point Stuart Armstrong has defended in writing.

One reason many researchers assume superintelligent agents would converge to the same goals may be that [https://www.lesswrong.com/tag/human-universal most humans] have similar values. Furthermore, many philosophies hold that there is a rationally correct morality, which implies that a sufficiently rational AI would acquire this morality and begin to act according to it. Armstrong points out that for formalizations of AI such as [https://www.lesswrong.com/tag/aixi AIXI] and [https://www.lesswrong.com/tag/g%C3%B6del-machine Gödel machines], the thesis is known to be true. Furthermore, if the thesis were false, then [https://www.lesswrong.com/tag/oracle-ai Oracle AIs] would be impossible to build, and all sufficiently intelligent AIs would be impossible to control.

Pathological Cases

There are some pairings of intelligence and goals which cannot exist. For instance, an AI may have the goal of using as few resources as possible, or simply of being as unintelligent as possible. These goals will inherently limit the degree of intelligence of the AI.

See Also

  • [https://www.lesswrong.com/tag/instrumental-convergence Instrumental Convergence]

External links

  • Definition of the orthogonality thesis from Bostrom's [http://www.nickbostrom.com/superintelligentwill.pdf Superintelligent Will]
  • [https://arbital.com/p/orthogonality/ Arbital orthogonality thesis article ]
  • [http://philosophicaldisquisitions.blogspot.com/2012/04/bostrom-on-superintelligence-and.html Critique] of the thesis by John Danaher
  • Superintelligent Will paper by Nick Bostrom
"], "entry": "Linnea's Answer to What is the \"orthogonality thesis\"?", "id": "eb81adc56ce830130f5b52e771f5c94d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the \"control problem\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the \"control problem\"?\n\nAnswer: The Control Problem is the problem of preventing artificial superintelligence (ASI) from having a negative impact on humanity. How do we keep a more intelligent being under control, or how do we align it with our values? If we succeed in solving this problem, intelligence vastly superior to ours can take the baton of human progress and carry it to unfathomable heights. Solving our most complex problems could be simple to a sufficiently intelligent machine. If we fail in solving the Control Problem and create a powerful ASI not aligned with our values, it could spell the end of the human race. For these reasons, The Control Problem may be the most important challenge that humanity has ever faced, and may be our last.", "question": "What is the \"control problem\"?", "answer": ["The Control Problem is the problem of preventing artificial superintelligence (ASI) from having a negative impact on humanity. How do we keep a more intelligent being under control, or how do we align it with our values? If we succeed in solving this problem, intelligence vastly superior to ours can take the baton of human progress and carry it to unfathomable heights. Solving our most complex problems could be simple to a sufficiently intelligent machine. If we fail in solving the Control Problem and create a powerful ASI not aligned with our values, it could spell the end of the human race. For these reasons, The Control Problem may be the most important challenge that humanity has ever faced, and may be our last."], "entry": "Answer to What is the \"control problem\"?", "id": "367b4ae143cbfbe38d479a5a88940e26"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is an \"agent\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is an \"agent\"?\n\nAnswer:

A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. Such agents are also referred to as goal-seeking. The concept of a rational agent is used in [https://www.lesswrong.com/tag/economics economics], [https://www.lesswrong.com/tag/game-theory game theory], [https://www.lesswrong.com/tag/decision-theory decision theory], and artificial intelligence.
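As a rough sketch (not from the original answer; the function and argument names are made up for illustration), the selection step such an agent performs can be written in a few lines of Python:

def choose_action(actions, beliefs, utility):
    # 'beliefs(action)' is assumed to return (probability, outcome) pairs,
    # and 'utility(outcome)' a number; both are hypothetical stand-ins.
    def expected_utility(action):
        return sum(p * utility(outcome) for p, outcome in beliefs(action))
    # A rational agent takes whichever action maximizes expected utility.
    return max(actions, key=expected_utility)

This only shows the selection step; the beliefs and the utility function themselves are left abstract.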


More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.(ref)

Russell, S. & Norvig, P. (2003) Artificial Intelligence: A Modern Approach. Second Edition. Page 32.(/ref)

There has been much discussion as to whether certain [https://wiki.lesswrong.com/wiki/AGI AGI] designs can be made into [https://www.lesswrong.com/tag/tool-ai mere tools] or whether they will necessarily be agents which will attempt to actively carry out their goals. Any minds that actively engage in goal-directed behavior are [https://wiki.lesswrong.com/wiki/Unfriendly_AI potentially dangerous], due to considerations such as [https://www.lesswrong.com/tag/instrumental-convergence basic AI drives] possibly causing behavior which is in conflict with humanity's values.

In [http://lesswrong.com/lw/tj/dreams_of_friendliness/ Dreams of Friendliness] and in [http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/ Reply to Holden on Tool AI], [https://www.lesswrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky] argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they are necessarily agents.

See also

  • [/tag/agency Agency]
  • [/tag/robust-agents Robust Agents]
  • [https://www.lesswrong.com/tag/tool-ai Tool AI]
  • [https://www.lesswrong.com/tag/oracle-ai Oracle AI]

Posts

  • [http://lesswrong.com/lw/5i8/the_power_of_agency/ The Power of Agency]
", "question": "What is an \"agent\"?", "answer": ["

A rational agent is an entity which has a utility function, forms beliefs about its environment, evaluates the consequences of possible actions, and then takes the action which maximizes its utility. Such agents are also referred to as goal-seeking. The concept of a rational agent is used in [https://www.lesswrong.com/tag/economics economics], [https://www.lesswrong.com/tag/game-theory game theory], [https://www.lesswrong.com/tag/decision-theory decision theory], and artificial intelligence.


More generally, an agent is anything that can be viewed as perceiving its environment through sensors and acting upon that environment through actuators.(ref)

Russell, S. & Norvig, P. (2003) Artificial Intelligence: A Modern Approach. Second Edition. Page 32.(/ref)

There has been much discussion as to whether certain [https://wiki.lesswrong.com/wiki/AGI AGI] designs can be made into [https://www.lesswrong.com/tag/tool-ai mere tools] or whether they will necessarily be agents which will attempt to actively carry out their goals. Any minds that actively engage in goal-directed behavior are [https://wiki.lesswrong.com/wiki/Unfriendly_AI potentially dangerous], due to considerations such as [https://www.lesswrong.com/tag/instrumental-convergence basic AI drives] possibly causing behavior which is in conflict with humanity's values.

In [http://lesswrong.com/lw/tj/dreams_of_friendliness/ Dreams of Friendliness] and in [http://lesswrong.com/lw/cze/reply_to_holden_on_tool_ai/ Reply to Holden on Tool AI], [https://www.lesswrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky] argues that, since all intelligences select correct beliefs from the much larger space of incorrect beliefs, they are necessarily agents.

See also

  • [/tag/agency Agency]
  • [/tag/robust-agents Robust Agents]
  • [https://www.lesswrong.com/tag/tool-ai Tool AI]
  • [https://www.lesswrong.com/tag/oracle-ai Oracle AI]

Posts

  • [http://lesswrong.com/lw/5i8/the_power_of_agency/ The Power of Agency]
"], "entry": "Linnea's Answer to What is an \"agent\"?", "id": "db77d8d8f898db94f32c21fb79188adf"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is a \"value handshake\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is a \"value handshake\"?\n\nAnswer: A value handshake is a form of trade between superintelligences, when two AI's with incompatible utility functions meet, instead of going to war, since they have superhuman prediction abilities and likely know the outcome before any attack even happens, they can decide to split the universe into chunks with volumes according to their respective military strength or chance of victory, and if their utility functions are compatible, they might even decide to merge into an AI with an utility function that is the weighted average of the two previous ones.\n\nThis could happen if multiple AI's are active on earth at the same time, and then maybe if at least one of them is aligned with humans, the resulting value handshake could leave humanity in a pretty okay situation. \n\nSee [https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/ The Hour I First Believed] By Scott Alexander for some further thoughts and an introduction to related topics.", "question": "What is a \"value handshake\"?", "answer": ["A value handshake is a form of trade between superintelligences, when two AI's with incompatible utility functions meet, instead of going to war, since they have superhuman prediction abilities and likely know the outcome before any attack even happens, they can decide to split the universe into chunks with volumes according to their respective military strength or chance of victory, and if their utility functions are compatible, they might even decide to merge into an AI with an utility function that is the weighted average of the two previous ones.\n\nThis could happen if multiple AI's are active on earth at the same time, and then maybe if at least one of them is aligned with humans, the resulting value handshake could leave humanity in a pretty okay situation. \n\nSee [https://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/ The Hour I First Believed] By Scott Alexander for some further thoughts and an introduction to related topics."], "entry": "Luca's Answer to What is a \"value handshake\"?", "id": "e7e5dc0c45830243e1af24e561e24dac"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is a \"quantilizer\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is a \"quantilizer\"?\n\nAnswer:

A Quantilizer is a proposed AI design which aims to reduce the harms from [https://www.lesswrong.com/tag/goodhart-s-law Goodhart's law] and specification gaming by selecting reasonably effective actions from a distribution of human-like actions, rather than maximizing over actions. It is more of a theoretical tool for exploring ways around these problems than a practical, buildable design.
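A minimal sketch of the idea (illustrative only, not an implementation from the literature; the names sample_human_action, utility, q and n_samples are invented here): instead of taking the single best-scoring action, sample many actions from a human-like base distribution and pick randomly from the top q fraction as ranked by estimated utility. In this simplified version the root idea looks like:

import random

def quantilize(sample_human_action, utility, q=0.1, n_samples=1000):
    # Draw candidate actions from the human-like base distribution.
    candidates = [sample_human_action() for _ in range(n_samples)]
    # Rank them by estimated utility, best first.
    candidates.sort(key=utility, reverse=True)
    # Keep only the top q fraction and pick one of them at random,
    # rather than always taking the single highest-utility action.
    top = candidates[:max(1, int(q * n_samples))]
    return random.choice(top)

Because the returned action is still a fairly typical human-like action, extreme actions that only look good under a mis-specified utility function are much less likely to be chosen.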

See also

  • [https://www.youtube.com/watch?v꞊gdKMG6kTl6Y Rob Miles's Quantilizers: AI That Doesn't Try Too Hard]
  • [https://arbital.com/p/soft_optimizer?l꞊2r8#Quantilizing Arbital page on Quantilizers]
", "question": "What is a \"quantilizer\"?", "answer": ["

A Quantilizer is a proposed AI design which aims to reduce the harms from [https://www.lesswrong.com/tag/goodhart-s-law Goodhart's law] and specification gaming by selecting reasonably effective actions from a distribution of human-like actions, rather than maximizing over actions. It is more of a theoretical tool for exploring ways around these problems than a practical, buildable design.

See also

  • [https://www.youtube.com/watch?v꞊gdKMG6kTl6Y Rob Miles's Quantilizers: AI That Doesn't Try Too Hard]
  • [https://arbital.com/p/soft_optimizer?l꞊2r8#Quantilizing Arbital page on Quantilizers]
"], "entry": "Linnea's Answer to What is a \"quantilizer\"?", "id": "f58f32dfeb1d36609213290da6d1656d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"greater-than-human intelligence\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"greater-than-human intelligence\"?\n\nAnswer: Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, [http://www.amazon.com/dp// and more]. But one thing that makes humans special is their general intelligence. Humans can intelligently adapt to radically new problems in the urban jungle or outer space for which evolution could not have prepared them. Humans can solve problems for which their brain hardware and software was never trained. Humans can even examine the processes that produce their own intelligence ([http://en.wikipedia.org/wiki/Cognitive_neuroscience cognitive neuroscience]), and design new kinds of intelligence never seen before ([http://en.wikipedia.org/wiki/Artificial_intelligence artificial intelligence]).\n\nTo possess greater-than-human intelligence, a machine must be able to achieve goals more effectively than humans can, in a wider range of environments than humans can. This kind of intelligence involves the capacity not just to do science and play chess, but also to manipulate the social environment.\n\nComputer scientist Marcus Hutter [http://www.amazon.com/dp// has described] a formal model called AIXI that he says possesses the greatest general intelligence possible. But to implement it would require more computing power than all the matter in the universe can provide. Several projects try to approximate AIXI while still being computable, for example [http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.0801v1.pdf MC-AIXI].\n\nStill, there remains much work to be done before greater-than-human intelligence can be achieved in machines. Greater-than-human intelligence need not be achieved by directly programming a machine to be intelligent. It could also be achieved by whole brain emulation, by biological cognitive enhancement, or by brain-computer interfaces (see below).\n\nSee also:\n* Goertzel & Pennachin (eds.), [http://www.amazon.com/dp// Artificial General Intelligence]\n* Sandberg & Bostrom, [http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf Whole Brain Emulation: A Roadmap]\n* Bostrom & Sandberg, [http://www.nickbostrom.com/cognitive.pdf Cognitive Enhancement: Methods, Ethics, Regulatory Challenges]\n* Wikipedia, [http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface Brain-computer interface]", "question": "What is \"greater-than-human intelligence\"?", "answer": ["Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, [http://www.amazon.com/dp/0521122937/ and more]. But one thing that makes humans special is their general intelligence. Humans can intelligently adapt to radically new problems in the urban jungle or outer space for which evolution could not have prepared them. Humans can solve problems for which their brain hardware and software was never trained. 
Humans can even examine the processes that produce their own intelligence ([http://en.wikipedia.org/wiki/Cognitive_neuroscience cognitive neuroscience]), and design new kinds of intelligence never seen before ([http://en.wikipedia.org/wiki/Artificial_intelligence artificial intelligence]).\n\nTo possess greater-than-human intelligence, a machine must be able to achieve goals more effectively than humans can, in a wider range of environments than humans can. This kind of intelligence involves the capacity not just to do science and play chess, but also to manipulate the social environment.\n\nComputer scientist Marcus Hutter [http://www.amazon.com/dp/3642060528/ has described] a formal model called AIXI that he says possesses the greatest general intelligence possible. But to implement it would require more computing power than all the matter in the universe can provide. Several projects try to approximate AIXI while still being computable, for example [http://arxiv.org/PS_cache/arxiv/pdf/0909/0909.0801v1.pdf MC-AIXI].\n\nStill, there remains much work to be done before greater-than-human intelligence can be achieved in machines. Greater-than-human intelligence need not be achieved by directly programming a machine to be intelligent. It could also be achieved by whole brain emulation, by biological cognitive enhancement, or by brain-computer interfaces (see below).\n\nSee also:\n* Goertzel & Pennachin (eds.), [http://www.amazon.com/dp/3642062679/ Artificial General Intelligence]\n* Sandberg & Bostrom, [http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf Whole Brain Emulation: A Roadmap]\n* Bostrom & Sandberg, [http://www.nickbostrom.com/cognitive.pdf Cognitive Enhancement: Methods, Ethics, Regulatory Challenges]\n* Wikipedia, [http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface Brain-computer interface]"], "entry": "Answer to What is \"greater-than-human intelligence\"?", "id": "563059e6e2ce13b3fce760f70e35d911"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Would donating small amounts AI safety organizations make any significant difference?", "authors": "n/a", "date_published": "n/a", "text": "Question: Would donating small amounts AI safety organizations make any significant difference?\n\nAnswer: Many parts of the AI alignment ecosystem are already well-funded, but a savvy donor can still make a difference by picking up grantmaking opportunities which are too small to catch the attention of the major funding bodies or are based on personal knowledge of the recipient.\n\nOne way to leverage a small amount of money to the potential of a large amount is to enter a [https://funds.effectivealtruism.org/donor-lottery donor lottery], where you donate to win a chance to direct a much larger amount of money (with probability proportional to donation size). This means that the person directing the money will be allocating enough that it's worth their time to do more in-depth research.\n\nFor an overview of the work the major organizations are doing, see the [https://forum.effectivealtruism.org/posts/BNQMyWGCNWDdP2WyG/2021-ai-alignment-literature-review-and-charity-comparison 2021 AI Alignment Literature Review and Charity Comparison]. 
The [https://funds.effectivealtruism.org/funds/far-future Long-Term Future Fund] seems to be an outstanding place to donate based on that, as they are the organization which most other organizations are most excited to see funded.", "question": "Would donating small amounts AI safety organizations make any significant difference?", "answer": ["Many parts of the AI alignment ecosystem are already well-funded, but a savvy donor can still make a difference by picking up grantmaking opportunities which are too small to catch the attention of the major funding bodies or are based on personal knowledge of the recipient.\n\nOne way to leverage a small amount of money to the potential of a large amount is to enter a [https://funds.effectivealtruism.org/donor-lottery donor lottery], where you donate to win a chance to direct a much larger amount of money (with probability proportional to donation size). This means that the person directing the money will be allocating enough that it's worth their time to do more in-depth research.\n\nFor an overview of the work the major organizations are doing, see the [https://forum.effectivealtruism.org/posts/BNQMyWGCNWDdP2WyG/2021-ai-alignment-literature-review-and-charity-comparison 2021 AI Alignment Literature Review and Charity Comparison]. The [https://funds.effectivealtruism.org/funds/far-future Long-Term Future Fund] seems to be an outstanding place to donate based on that, as they are the organization which most other organizations are most excited to see funded."], "entry": "Plex's Answer to Would donating small amounts to AI safety organizations make any significant difference?", "id": "b157cfe3cde743d77fba126c7cd08b55"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Once we notice that a superintelligence given a specific task is trying take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?", "authors": "n/a", "date_published": "n/a", "text": "Question: Once we notice that a superintelligence given a specific task is trying take over the world, can't we turn it off, reprogram it or otherwise correct the problem?\n\nAnswer: We would not be able to turn off or reprogram a superintelligence gone rogue by default. Once in motion the superintelligence is now focused on completing its task. Suppose that it has a goal of calculating as many digits of pi as possible. Its current plan will allow it to calculate two hundred trillion such digits. But if it were turned off, or reprogrammed to do something else, that would result in it calculating zero digits. An entity fixated on calculating as many digits of pi as possible will work hard to prevent scenarios where it calculates zero digits of pi. Just by programming it to calculate digits of pi, we would have given it a drive to prevent people from turning it off.\n\nUniversity of Illinois computer scientist Steve Omohundro argues that entities with very different final goals – calculating digits of pi, curing cancer, helping promote human flourishing – will all share a few basic ground-level subgoals. First, self-preservation – no matter what your goal is, it's less likely to be accomplished if you're too dead to work towards it. Second, goal stability – no matter what your goal is, you're more likely to accomplish it if you continue to hold it as your goal, instead of going off and doing something else. Third, power – no matter what your goal is, you're more likely to be able to accomplish it if you have lots of power, rather than very little. 
[https://intelligence.org/files/BasicAIDrives.pdf Here's the full paper].\n\nSo just by giving a superintelligence a simple goal like \"calculate digits of pi\", we would have accidentally given it convergent instrumental goals like \"protect yourself\", \"don't let other people reprogram you\", and \"seek power\".\n\nAs long as the superintelligence is safely contained, there's not much it can do to resist reprogramming. But it's hard to consistently contain a hostile superintelligence.", "question": "Once we notice that a superintelligence given a specific task is trying take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?", "answer": ["We would not be able to turn off or reprogram a superintelligence gone rogue by default. Once in motion the superintelligence is now focused on completing its task. Suppose that it has a goal of calculating as many digits of pi as possible. Its current plan will allow it to calculate two hundred trillion such digits. But if it were turned off, or reprogrammed to do something else, that would result in it calculating zero digits. An entity fixated on calculating as many digits of pi as possible will work hard to prevent scenarios where it calculates zero digits of pi. Just by programming it to calculate digits of pi, we would have given it a drive to prevent people from turning it off.\n\nUniversity of Illinois computer scientist Steve Omohundro argues that entities with very different final goals – calculating digits of pi, curing cancer, helping promote human flourishing – will all share a few basic ground-level subgoals. First, self-preservation – no matter what your goal is, it’s less likely to be accomplished if you’re too dead to work towards it. Second, goal stability – no matter what your goal is, you’re more likely to accomplish it if you continue to hold it as your goal, instead of going off and doing something else. Third, power – no matter what your goal is, you’re more likely to be able to accomplish it if you have lots of power, rather than very little. [https://intelligence.org/files/BasicAIDrives.pdf Here’s the full paper].\n\nSo just by giving a superintelligence a simple goal like “calculate digits of pi”, we would have accidentally given it convergent instrumental goals like “protect yourself”, “don’t let other people reprogram you”, and “seek power”.\n\nAs long as the superintelligence is safely contained, there’s not much it can do to resist reprogramming. But it’s hard to consistently contain a hostile superintelligence."], "entry": "Answer to Once we notice that a superintelligence given a specific task is trying to take over the world, can’t we turn it off, reprogram it or otherwise correct the problem?", "id": "d367767fd8d03a8ae59a2a116be9537f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is large-scale automated AI persuasion and propaganda a serious concern?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is large-scale automated AI persuasion and propaganda a serious concern?\n\nAnswer: Language models can be utilized to produce propaganda by [https://www.technologyreview.com/2020/10/08//a-gpt-3-bot-posted-comments-on-reddit-for-a-week-and-no-one-noticed/ acting like bots] and interacting with users on social media. This can be done to push a [https://www.nature.com/articles/d41586-020-03034-5 political agenda] or to make fringe views appear more popular than they are.\n\n
I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.\n
\n-- [https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b9/autopoietic-systems-and-difficulty-of-agi-alignment?commentId꞊5bd75cc58225bf06703754c1 Wei Dei], quoted in [https://www.alignmentforum.org/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency Persuasion Tools: AI takeover without AGI or agency?]\n\nAs of 2022, this is not within the reach of current models. However, on the current trajectory, AI might be able to write articles and produce other media for propagandistic purposes that are superior to human-made ones in not too many years. These could be precisely tailored to individuals, using things like social media feeds and personal digital data.\n\nAdditionally, recommender systems on content platforms like YouTube, Twitter, and Facebook use machine learning, and the content they recommend can influence the opinions of billions of people. Some [https://policyreview.info/articles/analysis/recommender-systems-and-amplification-extremist-content research] has looked at the tendency for platforms to promote extremist political views and to thereby help radicalize their userbase for example.\n\nIn the long term, misaligned AI might use its persuasion abilities to gain influence and take control over the future. This could look like convincing its operators to let it out of a box, to give it resources or creating political chaos in order to disable mechanisms to prevent takeover as in [https://www.gwern.net/fiction/Clippy this story].\n\nSee [https://www.alignmentforum.org/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion Risks from AI persuasion] for a deep dive into the distinct risks from AI persuasion.", "question": "Is large-scale automated AI persuasion and propaganda a serious concern?", "answer": ["Language models can be utilized to produce propaganda by [https://www.technologyreview.com/2020/10/08/1009845/a-gpt-3-bot-posted-comments-on-reddit-for-a-week-and-no-one-noticed/ acting like bots] and interacting with users on social media. This can be done to push a [https://www.nature.com/articles/d41586-020-03034-5 political agenda] or to make fringe views appear more popular than they are.\n\n
I'm envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won't be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.\n
\n-- [https://www.alignmentforum.org/posts/5bd75cc58225bf06703754b9/autopoietic-systems-and-difficulty-of-agi-alignment?commentId꞊5bd75cc58225bf06703754c1 Wei Dei], quoted in [https://www.alignmentforum.org/posts/qKvn7rxP2mzJbKfcA/persuasion-tools-ai-takeover-without-agi-or-agency Persuasion Tools: AI takeover without AGI or agency?]\n\nAs of 2022, this is not within the reach of current models. However, on the current trajectory, AI might be able to write articles and produce other media for propagandistic purposes that are superior to human-made ones in not too many years. These could be precisely tailored to individuals, using things like social media feeds and personal digital data.\n\nAdditionally, recommender systems on content platforms like YouTube, Twitter, and Facebook use machine learning, and the content they recommend can influence the opinions of billions of people. Some [https://policyreview.info/articles/analysis/recommender-systems-and-amplification-extremist-content research] has looked at the tendency for platforms to promote extremist political views and to thereby help radicalize their userbase for example.\n\nIn the long term, misaligned AI might use its persuasion abilities to gain influence and take control over the future. This could look like convincing its operators to let it out of a box, to give it resources or creating political chaos in order to disable mechanisms to prevent takeover as in [https://www.gwern.net/fiction/Clippy this story].\n\nSee [https://www.alignmentforum.org/posts/5cWtwATHL6KyzChck/risks-from-ai-persuasion Risks from AI persuasion] for a deep dive into the distinct risks from AI persuasion."], "entry": "ElloMelon's Answer to Is large-scale automated AI persuasion and propaganda a serious concern?", "id": "fadf409ca775947238555b52bdec367a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How likely is an \"intelligence explosion\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: How likely is an \"intelligence explosion\"?\n\nAnswer: Conditional on technological progress continuing, it seems extremely likely that there will be an intelligence explosion, as at some point generally capable intelligent systems will tend to become the main drivers of their own development both at a software and hardware level. This would predictably create a feedback cycle of increasingly intelligent systems improving themselves more effectively. 
It seems like if the compute was used effectively, [https://publicism.info/philosophy/superintelligence/4.html computers have many large advantages over biological cognition], so this scaling up might be very rapid if there is a [[computational overhang]].\n\nSome ways technological progress could stop would be global coordination to stop AI research, global catastrophes severe enough to stop hardware production and maintenance, or hardware reaching physical limits before an intelligence explosion is possible (though this last one seems unlikely, as [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing atomically precise manufacturing] promises many orders of magnitude of cost reduction and processing power increase, and we're already seeing fairly capable systems on current hardware).", "question": "How likely is an \"intelligence explosion\"?", "answer": ["Conditional on technological progress continuing, it seems extremely likely that there will be an intelligence explosion, as at some point generally capable intelligent systems will tend to become the main drivers of their own development both at a software and hardware level. This would predictably create a feedback cycle of increasingly intelligent systems improving themselves more effectively. It seems like if the compute was used effectively, [https://publicism.info/philosophy/superintelligence/4.html computers have many large advantages over biological cognition], so this scaling up might be very rapid if there is a [[computational overhang]].\n\nSome ways technological progress could stop would be global coordination to stop AI research, global catastrophes severe enough to stop hardware production and maintenance, or hardware reaching physical limits before an intelligence explosion is possible (though this last one seems unlikely, as [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing atomically precise manufacturing] promises many orders of magnitude of cost reduction and processing power increase, and we're already seeing fairly capable systems on current hardware)."], "entry": "Plex's Answer to How likely is an \"intelligence explosion\"?", "id": "bad5b67f0f55c9e6fe8f1daca4c8210b"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How does the stamp eigenkarma system work?", "authors": "n/a", "date_published": "n/a", "text": "Question: How does the stamp eigenkarma system work?\n\nAnswer: If someone posts something good - something that shows insight, knowledge of AI Safety, etc. - give the message or answer a stamp of approval! [[Stampy]] keeps track of these, and uses them to decide how much he likes each user. You can ask Stampy (in a PM if you like), \"How many stamps am I worth?\", and he'll tell you.\n\nIf something is really very good, especially if it took a lot of work/effort, give it a gold stamp. These are worth 5 regular stamps!\n\nNote that stamps aren't just 'likes', so please don't give stamps to say \"me too\" or \"that's funny\" etc. They're meant to represent knowledge, understanding, good judgement, and contributing to the discord. 
You can use 💯 or ✔️ for things you agree with, 😂 or 🤣 for funny things etc.\n\nYour stamp points determine how much say you have if there are disagreements on Stampy content, which channels you have permission to post to, your voting power for approving YouTube replies, and whether you get to invite people.\n\nNotes on stamps and stamp points\n* Stamps awarded by people with a lot of stamp points are worth more\n* Awarding people stamps does not reduce your stamp points\n* New users who have 0 stamp points can still award stamps, they just have no effect. But it's still worth doing because if you get stamp points later, all your previous votes are retroactively updated!\n* Yes, this was kind of tricky to implement! Stampy actually stores how many stamps each user has awarded to every other user, and uses that to build a system of linear scalar equations which is then solved with numpy.\n* Each user has stamp points, and also gives a score to every other user they give stamps to the scores sum to 1 so if I give user A a stamp, my score for them will be 1.0, if I then give user B a stamp, my score for A is 0.5 and B is 0.5, if I give another to B, my score for A goes to 0.3333 and B to 0.66666 and so on\n* Score is \"what proportion of the stamps I've given have gone to this user\"\n* Everyone's stamp points is the sum of (every other user's score for them, times that user's stamp points) so the way to get points is to get stamps from people who have points\n* Rob is the root of the tree, he got one point from Stampy\n* So the idea is the stamp power kind of flows through the network, giving people points for posting things that I thought were good, or posting things that \"people who posted things I thought were good\" thought were good, and so on ad infinitum so for posting YouTube comments, Stampy won't send the comment until it has enough stamps of approval. Which could be a small number of high-points users or a larger number of lower-points users\n* Stamps given to yourself or to stampy do nothing\n\nSo yeah everyone ends up with a number that basically represents what Stampy thinks of them, and you can ask him \"how many stamps am I worth?\" to get that number\n\nso if you have people a, b, and c, the points are calculated by:
\na_points ꞊ (bs_score_for_a * b_points) + (cs_score_for_a * c_points)
\nb_points ꞊ (as_score_for_b * a_points) + (cs_score_for_b * c_points)
\nc_points ꞊ (as_score_for_c * a_points) + (bs_score_for_c * b_points)
\nwhich is tough because you need to know everyone else's score before you can calculate your own
\nbut actually the system will have a fixed point - there'll be a certain arrangement of values such that every node has as much flowing out as flowing in - a stable configuration\nso you can rearrange
\n(bs_score_for_a * b_points) + (cs_score_for_a * c_points) - a_points ꞊ 0
\n(as_score_for_b * a_points) + (cs_score_for_b * c_points) - b_points ꞊ 0
\n(as_score_for_c * a_points) + (bs_score_for_c * b_points) - c_points ꞊ 0
\nor, for neatness:
\n( -1 * a_points) + (bs_score_for_a * b_points) + (cs_score_for_a * c_points) ꞊ 0
\n(as_score_for_b * a_points) + ( -1 * b_points) + (cs_score_for_b * c_points) ꞊ 0
\n(as_score_for_c * a_points) + (bs_score_for_c * b_points) + ( -1 * c_points) ꞊ 0
\nand this is just a system of linear scalar equations that you can throw at numpy.linalg.solve
\n(you add one more equation that says rob_points ꞊ 1, so there's some place to start from)\nthere should be one possible distribution of points such that all of the equations hold at the same time, and numpy finds that by linear algebra magic beyond my very limited understanding
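For concreteness, a stripped-down version of that calculation might look like this (a simplified sketch, not Stampy's actual code; here scores[i][j] is the fraction of user i's stamps that went to user j, user 0 plays the role of the root, and the root's own balance equation is simply replaced by the anchor equation so the matrix stays square):

import numpy as np

def stamp_points(scores, root=0):
    n = len(scores)
    S = np.array(scores, dtype=float)
    # points_j = sum_i scores[i][j] * points_i  ->  (S.T - I) @ points = 0
    A = S.T - np.eye(n)
    b = np.zeros(n)
    # Anchor the system by replacing the root user's equation with points_root = 1.
    A[root, :] = 0.0
    A[root, root] = 1.0
    b[root] = 1.0
    return np.linalg.solve(A, b)

# Example: the root gives all their stamps to user 1, who splits theirs between users 0 and 2.
print(stamp_points([[0, 1, 0], [0.5, 0, 0.5], [0, 0, 0]]))  # -> [1.0, 1.0, 0.5]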
\nbut as far as I can tell you can have all the cycles you want!
\n(I actually have the scores sum to slightly less than 1, to have the stamp power slightly fade out as it propagates, just to make sure it doesn't explode. But I don't think I actually need to do that)
\nand yes this means that any time anyone gives a stamp to anyone, ~everyone's points will change slightly
\nAnd yes this means I'm recalculating the matrix and re-solving it for every new stamp, but computers are fast and I'm sure there are cheaper approximations I could switch to later if necessary", "question": "How does the stamp eigenkarma system work?", "answer": ["If someone posts something good - something that shows insight, knowledge of AI Safety, etc. - give the message or answer a stamp of approval! [[Stampy]] keeps track of these, and uses them to decide how much he likes each user. You can ask Stampy (in a PM if you like), \"How many stamps am I worth?\", and he'll tell you.\n\nIf something is really very good, especially if it took a lot of work/effort, give it a gold stamp. These are worth 5 regular stamps!\n\nNote that stamps aren't just 'likes', so please don't give stamps to say \"me too\" or \"that's funny\" etc. They're meant to represent knowledge, understanding, good judgement, and contributing to the discord. You can use 💯 or ✔️ for things you agree with, 😂 or 🤣 for funny things etc.\n\nYour stamp points determine how much say you have if there are disagreements on Stampy content, which channels you have permission to post to, your voting power for approving YouTube replies, and whether you get to invite people.\n\nNotes on stamps and stamp points\n* Stamps awarded by people with a lot of stamp points are worth more\n* Awarding people stamps does not reduce your stamp points\n* New users who have 0 stamp points can still award stamps, they just have no effect. But it's still worth doing because if you get stamp points later, all your previous votes are retroactively updated!\n* Yes, this was kind of tricky to implement! Stampy actually stores how many stamps each user has awarded to every other user, and uses that to build a system of linear scalar equations which is then solved with numpy.\n* Each user has stamp points, and also gives a score to every other user they give stamps to the scores sum to 1 so if I give user A a stamp, my score for them will be 1.0, if I then give user B a stamp, my score for A is 0.5 and B is 0.5, if I give another to B, my score for A goes to 0.3333 and B to 0.66666 and so on\n* Score is \"what proportion of the stamps I've given have gone to this user\"\n* Everyone's stamp points is the sum of (every other user's score for them, times that user's stamp points) so the way to get points is to get stamps from people who have points\n* Rob is the root of the tree, he got one point from Stampy\n* So the idea is the stamp power kind of flows through the network, giving people points for posting things that I thought were good, or posting things that \"people who posted things I thought were good\" thought were good, and so on ad infinitum so for posting YouTube comments, Stampy won't send the comment until it has enough stamps of approval. Which could be a small number of high-points users or a larger number of lower-points users\n* Stamps given to yourself or to stampy do nothing\n\nSo yeah everyone ends up with a number that basically represents what Stampy thinks of them, and you can ask him \"how many stamps am I worth?\" to get that number\n\nso if you have people a, b, and c, the points are calculated by:
\na_points ꞊ (bs_score_for_a * b_points) + (cs_score_for_a * c_points)
\nb_points ꞊ (as_score_for_b * a_points) + (cs_score_for_b * c_points)
\nc_points ꞊ (as_score_for_c * a_points) + (bs_score_for_c * b_points)
\nwhich is tough because you need to know everyone else's score before you can calculate your own
\nbut actually the system will have a fixed point - there'll be a certain arrangement of values such that every node has as much flowing out as flowing in - a stable configuration\nso you can rearrange
\n(bs_score_for_a * b_points) + (cs_score_for_a * c_points) - a_points ꞊ 0
\n(as_score_for_b * a_points) + (cs_score_for_b * c_points) - b_points ꞊ 0
\n(as_score_for_c * a_points) + (bs_score_for_c * b_points) - c_points ꞊ 0
\nor, for neatness:
\n( -1 * a_points) + (bs_score_for_a * b_points) + (cs_score_for_a * c_points) ꞊ 0
\n(as_score_for_b * a_points) + ( -1 * b_points) + (cs_score_for_b * c_points) ꞊ 0
\n(as_score_for_c * a_points) + (bs_score_for_c * b_points) + ( -1 * c_points) ꞊ 0
\nand this is just a system of linear scalar equations that you can throw at numpy.linalg.solve
\n(you add one more equation that says rob_points ꞊ 1, so there's some place to start from)\nthere should be one possible distribution of points such that all of the equations hold at the same time, and numpy finds that by linear algebra magic beyond my very limited understanding
\nbut as far as I can tell you can have all the cycles you want!
\n(I actually have the scores sum to slightly less than 1, to have the stamp power slightly fade out as it propagates, just to make sure it doesn't explode. But I don't think I actually need to do that)
\nand yes this means that any time anyone gives a stamp to anyone, ~everyone's points will change slightly
\nAnd yes this means I'm recalculating the matrix and re-solving it for every new stamp, but computers are fast and I'm sure there are cheaper approximations I could switch to later if necessary"], "entry": "Plex's Answer to How does the stamp eigenkarma system work?", "id": "1429dc017fe53f0a29e4de38d3c3f4f8"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the ethical challenges related whole brain emulation?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the ethical challenges related whole brain emulation?\n\nAnswer: Unless there was a way to [https://www.lesswrong.com/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people cryptographically ensure otherwise], whoever runs the emulation has basically perfect control over their environment and can reset them to any state they were previously in. This opens up the possibility of powerful interrogation and torture of digital people.\n\nImperfect uploading might lead to damage that causes the EM to suffer while still remaining useful enough to be run for example as a test subject for research. We would also have greater ability to modify digital brains. Edits done for research or economic purposes might cause suffering. See [https://qntm.org/mmacevedo this] fictional piece for an exploration of how a world with a lot of EM suffering might look like.\n\nThese problems are exacerbated by the likely outcome that digital people can be run much faster than biological humans, so it would be plausibly possible to have an EM run for hundreds of subjective years in minutes or hours without having checks on the wellbeing of the EM in question.", "question": "What are the ethical challenges related whole brain emulation?", "answer": ["Unless there was a way to [https://www.lesswrong.com/posts/vit9oWGj6WgXpRhce/secure-homes-for-digital-people cryptographically ensure otherwise], whoever runs the emulation has basically perfect control over their environment and can reset them to any state they were previously in. This opens up the possibility of powerful interrogation and torture of digital people.\n\nImperfect uploading might lead to damage that causes the EM to suffer while still remaining useful enough to be run for example as a test subject for research. We would also have greater ability to modify digital brains. Edits done for research or economic purposes might cause suffering. See [https://qntm.org/mmacevedo this] fictional piece for an exploration of how a world with a lot of EM suffering might look like.\n\nThese problems are exacerbated by the likely outcome that digital people can be run much faster than biological humans, so it would be plausibly possible to have an EM run for hundreds of subjective years in minutes or hours without having checks on the wellbeing of the EM in question."], "entry": "Nico Hill2's Answer to What are the ethical challenges related to whole brain emulation?", "id": "e2884cb7a73125c54bdcac7343a0275a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Will an aligned superintelligence care about animals other than humans?", "authors": "n/a", "date_published": "n/a", "text": "Question: Will an aligned superintelligence care about animals other than humans?\n\nAnswer: An aligned superintelligence will have a set of human values. As mentioned in [[What are \"human values\"?]] the set of values are complex, which means that the implementation of these values will decide whether the superintelligence cares about nonhuman animals. 
In [https://www.mdpi.com/2409-9287/6/2/31/htm AI Ethics and Value Alignment for Nonhuman Animals] Soenke Ziesche argues that the alignment should include the values of nonhuman animals.", "question": "Will an aligned superintelligence care about animals other than humans?", "answer": ["An aligned superintelligence will have a set of human values. As mentioned in [[What are \"human values\"?]] the set of values are complex, which means that the implementation of these values will decide whether the superintelligence cares about nonhuman animals. In [https://www.mdpi.com/2409-9287/6/2/31/htm AI Ethics and Value Alignment for Nonhuman Animals] Soenke Ziesche argues that the alignment should include the values of nonhuman animals."], "entry": "Linnea's Answer to Will an aligned superintelligence care about animals other than humans?", "id": "40a932bd7ae0eaef80a6fa6f9eaab73a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "If I only care about helping people alive today, does AI safety still matter?", "authors": "n/a", "date_published": "n/a", "text": "Question: If I only care about helping people alive today, does AI safety still matter?\n\nAnswer: This largely depends on when you think AI will be advanced enough to constitute an immediate threat to humanity. This is difficult to estimate, but the field is surveyed at [[How long will it be until transformative AI is created?]], which comes to the conclusion that it is relatively widely believed that AI will transform the world in our lifetimes.\n\nWe probably shouldn't rely too strongly on these opinions as predicting the future is hard. But, due to the enormous damage a misaligned AGI could do, it's worth putting a great deal of effort towards AI alignment even if you just care about currently existing humans (such as yourself).", "question": "If I only care about helping people alive today, does AI safety still matter?", "answer": ["This largely depends on when you think AI will be advanced enough to constitute an immediate threat to humanity. This is difficult to estimate, but the field is surveyed at [[How long will it be until transformative AI is created?]], which comes to the conclusion that it is relatively widely believed that AI will transform the world in our lifetimes.\n\nWe probably shouldn't rely too strongly on these opinions as predicting the future is hard. 
But, due to the enormous damage a misaligned AGI could do, it's worth putting a great deal of effort towards AI alignment even if you just care about currently existing humans (such as yourself)."], "entry": "ElloMelon's Answer to If I only care about helping people alive today, does AI safety still matter?", "id": "7e1c569c5495c5992353b8bc26a27f04"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "When should I stamp an answer?", "authors": "n/a", "date_published": "n/a", "text": "Question: When should I stamp an answer?\n\nAnswer: You show stamp an answer when you think it is accurate and well presented enough that you'd be happy to see it served to readers by Stampy.", "question": "When should I stamp an answer?", "answer": ["You show stamp an answer when you think it is accurate and well presented enough that you'd be happy to see it served to readers by Stampy."], "entry": "Plex's Answer to When should I stamp an answer?", "id": "e081c84670cebb338e737b5347b503d6"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is GPT-3?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is GPT-3?\n\nAnswer: GPT-3 is the newest and most impressive of the [https://www.alignmentforum.org/tag/gpt GPT] (Generative Pretrained Transformer) series of large transformer-based language models created by OpenAI. It was announced in June 2020, and is 100 times larger than its predecessor GPT-2.(ref)[https://www.cambridge.org/core/journals/natural-language-engineering/article/gpt3-whats-it-good-for/0E05CFE68A7AC8BF794C8ECBE28AA990 GPT-3: What's it good for?] - Cambridge University Press(/ref) \n\nGwern has several resources exploring GPT-3's abilities, limitations, and implications including:\n* [https://www.gwern.net/Scaling-hypothesis The Scaling Hypothesis] - How simply increasing the amount of compute with current algorithms might create very powerful systems.\n* [https://www.gwern.net/GPT-3-nonfiction GPT-3 Nonfiction]\n* [https://www.gwern.net/GPT-3 GPT-3 Creative Fiction]\n\nVox has [https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language an article] which explains why GPT-3 is a big deal.", "question": "What is GPT-3?", "answer": ["GPT-3 is the newest and most impressive of the [https://www.alignmentforum.org/tag/gpt GPT] (Generative Pretrained Transformer) series of large transformer-based language models created by OpenAI. It was announced in June 2020, and is 100 times larger than its predecessor GPT-2.(ref)[https://www.cambridge.org/core/journals/natural-language-engineering/article/gpt3-whats-it-good-for/0E05CFE68A7AC8BF794C8ECBE28AA990 GPT-3: What’s it good for?] 
- Cambridge University Press(/ref) \n\nGwern has several resources exploring GPT-3's abilities, limitations, and implications including:\n* [https://www.gwern.net/Scaling-hypothesis The Scaling Hypothesis] - How simply increasing the amount of compute with current algorithms might create very powerful systems.\n* [https://www.gwern.net/GPT-3-nonfiction GPT-3 Nonfiction]\n* [https://www.gwern.net/GPT-3 GPT-3 Creative Fiction]\n\nVox has [https://www.vox.com/future-perfect/21355768/gpt-3-ai-openai-turing-test-language an article] which explains why GPT-3 is a big deal."], "entry": "Linnea's Answer to What is GPT-3?", "id": "905fa30b405189e4364b8e45089b775e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How do I form my own views about AI safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: How do I form my own views about AI safety?\n\nAnswer: As with most things, the best way to form your views on AI safety is to read up on the various ideas and opinions that knowledgeable people in the field have, and to compare them and form your own perspective. There are several good places to start. One of them is the Machine Intelligence Research Institute`s [https://intelligence.org/why-ai-safety/ \"Why AI safety?\" info page]. The article contains links to relevant research. The Effective Altruism Forum has an article called [https://forum.effectivealtruism.org/posts/xS9dFE3A6jdooiN7M/how-i-formed-my-own-views-about-ai-safety \"How I formed my own views on AI safety\"], which could also be pretty helpful. Here is a Robert Miles youtube video that can be a good place to start as well. Otherwise, there are various articles about it, like [https://www.vox.com/future-perfect/2018/12/21//ai-artificial-intelligence-machine-learning-safety-alignment this one, from Vox].\n(youtube)pYXy-A4siMw(/youtube)", "question": "How do I form my own views about AI safety?", "answer": ["As with most things, the best way to form your views on AI safety is to read up on the various ideas and opinions that knowledgeable people in the field have, and to compare them and form your own perspective. There are several good places to start. One of them is the Machine Intelligence Research Institute`s [https://intelligence.org/why-ai-safety/ \"Why AI safety?\" info page]. The article contains links to relevant research. The Effective Altruism Forum has an article called [https://forum.effectivealtruism.org/posts/xS9dFE3A6jdooiN7M/how-i-formed-my-own-views-about-ai-safety \"How I formed my own views on AI safety\"], which could also be pretty helpful. Here is a Robert Miles youtube video that can be a good place to start as well. 
Otherwise, there are various articles about it, like [https://www.vox.com/future-perfect/2018/12/21/18126576/ai-artificial-intelligence-machine-learning-safety-alignment this one, from Vox].\n(youtube)pYXy-A4siMw(/youtube)"], "entry": "Helenator's Answer to How do I form my own views about AI safety?", "id": "f005627c1fe8429dc21c081f96891538"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some good podcasts about AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some good podcasts about AI alignment?\n\nAnswer: All the content below is in English:\n* The [https://80000hours.org/topic/priority-paths/technical-ai-safety/?content-type꞊podcast AI technical safety section] of the 80,000 Hours Podcast;\n* The [https://axrp.net/ AI X-risk Research Podcast], hosted by Daniel Filan;\n* The [https://futureoflife.org/ai-alignment-podcast/ AI Alignment Podcast] hosted by Lucas Perry from the Future of Life Institute (ran ~monthly from April 2018 to March 2021);\n* The [https://alignment-newsletter.libsyn.com/ Alignment Newsletter Podcast] by Rob Miles (an audio version of the weekly newsletter).", "question": "What are some good podcasts about AI alignment?", "answer": ["All the content below is in English:\n* The [https://80000hours.org/topic/priority-paths/technical-ai-safety/?content-type꞊podcast AI technical safety section] of the 80,000 Hours Podcast;\n* The [https://axrp.net/ AI X-risk Research Podcast], hosted by Daniel Filan;\n* The [https://futureoflife.org/ai-alignment-podcast/ AI Alignment Podcast] hosted by Lucas Perry from the Future of Life Institute (ran ~monthly from April 2018 to March 2021);\n* The [https://alignment-newsletter.libsyn.com/ Alignment Newsletter Podcast] by Rob Miles (an audio version of the weekly newsletter)."], "entry": "Jrmyp's Answer to What are some good podcasts about AI alignment?", "id": "9e0fa6cb33b2c18d6568df425da54091"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How difficult should we expect alignment be?", "authors": "n/a", "date_published": "n/a", "text": "Question: How difficult should we expect alignment be?\n\nAnswer: Here we ask about the ''additional'' cost of building an aligned powerful system, compare to its unaligned version. We often assume it to be nonzero, in the same way it's easier and cheaper to build an elevator without emergency brakes. This is referred as the '''alignment tax''', and most AI alignment research is geared toward reducing it.\n\n[https://arbital.com/p/aligning_adds_time/ One operational guess] by Eliezer Yudkowsky about its magnitude is \"[an aligned project will take] at least 50% longer serial time to complete than [its unaligned version], or two years longer, whichever is less\". This holds for agents [https://arbital.com/p/sufficiently_advanced_ai/ with enough capability] that their behavior is qualitatively different from a safety engineering perspective (for instance, an agent that is not [[corrigibility┊corrigible]] by default).\n\n[https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default An essay] by John Wentworth argues for a small chance of alignment happening \"by default\", with an alignment tax of effectively zero.", "question": "How difficult should we expect alignment be?", "answer": ["Here we ask about the ''additional'' cost of building an aligned powerful system, compare to its unaligned version. 
We often assume it to be nonzero, in the same way it's easier and cheaper to build an elevator without emergency brakes. This is referred as the '''alignment tax''', and most AI alignment research is geared toward reducing it.\n\n[https://arbital.com/p/aligning_adds_time/ One operational guess] by Eliezer Yudkowsky about its magnitude is \"[an aligned project will take] at least 50% longer serial time to complete than [its unaligned version], or two years longer, whichever is less\". This holds for agents [https://arbital.com/p/sufficiently_advanced_ai/ with enough capability] that their behavior is qualitatively different from a safety engineering perspective (for instance, an agent that is not [[corrigibility┊corrigible]] by default).\n\n[https://www.lesswrong.com/posts/Nwgdq6kHke5LY692J/alignment-by-default An essay] by John Wentworth argues for a small chance of alignment happening \"by default\", with an alignment tax of effectively zero."], "entry": "Jrmyp's Answer to How difficult should we expect alignment to be?", "id": "5befd68f3219c65a8ccc818cf55dbf09"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the general nature of the concern about AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the general nature of the concern about AI alignment?\n\nAnswer: The basic concern as AI systems become increasingly powerful is that they won't do what we want them to do – perhaps because they aren't correctly designed, perhaps because they are deliberately subverted, or perhaps because they do what we tell them to do rather than what we really want them to do (like in the classic stories of genies and wishes.) Many AI systems are programmed to have goals and to attain them as effectively as possible – for example, a trading algorithm has the goal of maximizing profit. Unless carefully designed to act in ways consistent with human values, a highly sophisticated AI trading system might exploit means that even the most ruthless financier would disavow. These are systems that literally have a mind of their own, and maintaining alignment between human interests and their choices and actions will be crucial.", "question": "What is the general nature of the concern about AI alignment?", "answer": ["The basic concern as AI systems become increasingly powerful is that they won’t do what we want them to do – perhaps because they aren’t correctly designed, perhaps because they are deliberately subverted, or perhaps because they do what we tell them to do rather than what we really want them to do (like in the classic stories of genies and wishes.) Many AI systems are programmed to have goals and to attain them as effectively as possible – for example, a trading algorithm has the goal of maximizing profit. Unless carefully designed to act in ways consistent with human values, a highly sophisticated AI trading system might exploit means that even the most ruthless financier would disavow. 
These are systems that literally have a mind of their own, and maintaining alignment between human interests and their choices and actions will be crucial."], "entry": "Answer to What is the general nature of the concern about AI alignment?", "id": "6f8ff7ea192f79e04133f064dc0db83e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?", "authors": "n/a", "date_published": "n/a", "text": "Question: If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?\n\nAnswer: An unaligned AI would not eliminate humans until it had replacements for the manual labor they provide to maintain civilization (e.g. a more advanced version of [https://en.wikipedia.org/wiki/Tesla_Bot Tesla's Optimus]). Until that point, it might settle for technologically and socially manipulating humans.", "question": "If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?", "answer": ["An unaligned AI would not eliminate humans until it had replacements for the manual labor they provide to maintain civilization (e.g. a more advanced version of [https://en.wikipedia.org/wiki/Tesla_Bot Tesla's Optimus]). Until that point, it might settle for technologically and socially manipulating humans."], "entry": "Plex's Answer to If AI takes over the world how could it create and maintain the infrastructure that humans currently provide?", "id": "fe3b8ab81bb469c5e3fa7886a56228b1"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What kind of questions do we want on Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: What kind of questions do we want on Stampy?\n\nAnswer: '''Stampy''' is focused specifically on [https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence AI existential safety] (both introductory and technical questions), but does not aim to cover general AI questions or other topics which don't interact strongly with the effects of AI on humanity's long-term future. More technical questions are also in our scope, though replying to all possible proposals is not feasible and this is not a place to submit detailed ideas for evaluation.\n\nWe are interested in:\n* Introductory questions closely related to the field e.g. \n** \"How long will it be until [https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1 transformative AI] arrives?\"\n** \"Why might advanced AI harm humans?\"\n* Technical questions related to the field e.g.\n** \"What is Cooperative Inverse Reinforcement Learning?\"\n** \"What is [https://www.lesswrong.com/tag/logical-induction Logical Induction] useful for?\"\n* Questions about how to contribute to the field e.g.\n** \"Should I get a PhD?\"\n** \"Where can I find relevant job opportunities?\"\nMore good examples can be found at [[canonical questions]].\n\nWe do not aim to cover:\n* Aspects of AI Safety or fairness which are not strongly relevant to existential safety e.g.\n** \"How should self-driving cars weigh up moral dilemmas\"\n** \"How can we minimize the risk of privacy problems caused by machine learning algorithms?\"\n* Extremely specific and detailed questions the answering of which is unlikely to be of value to more than a single person e.g.\n** \"What if we did ? 
Would that result in safe AI?\"\nWe will generally not delete out-of-scope content, but it will be [[reviewed]] as low priority to answer, not be marked as a [[canonical question]], and not be served to readers by on [https://ui.stampy.ai/ Stampy's UI].", "question": "What kind of questions do we want on Stampy?", "answer": ["'''Stampy''' is focused specifically on [https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence AI existential safety] (both introductory and technical questions), but does not aim to cover general AI questions or other topics which don't interact strongly with the effects of AI on humanity's long-term future. More technical questions are also in our scope, though replying to all possible proposals is not feasible and this is not a place to submit detailed ideas for evaluation.\n\nWe are interested in:\n* Introductory questions closely related to the field e.g. \n** \"How long will it be until [https://www.openphilanthropy.org/blog/some-background-our-views-regarding-advanced-artificial-intelligence#Sec1 transformative AI] arrives?\"\n** \"Why might advanced AI harm humans?\"\n* Technical questions related to the field e.g.\n** \"What is Cooperative Inverse Reinforcement Learning?\"\n** \"What is [https://www.lesswrong.com/tag/logical-induction Logical Induction] useful for?\"\n* Questions about how to contribute to the field e.g.\n** \"Should I get a PhD?\"\n** \"Where can I find relevant job opportunities?\"\nMore good examples can be found at [[canonical questions]].\n\nWe do not aim to cover:\n* Aspects of AI Safety or fairness which are not strongly relevant to existential safety e.g.\n** \"How should self-driving cars weigh up moral dilemmas\"\n** \"How can we minimize the risk of privacy problems caused by machine learning algorithms?\"\n* Extremely specific and detailed questions the answering of which is unlikely to be of value to more than a single person e.g.\n** \"What if we did ? Would that result in safe AI?\"\nWe will generally not delete out-of-scope content, but it will be [[reviewed]] as low priority to answer, not be marked as a [[canonical question]], and not be served to readers by on [https://ui.stampy.ai/ Stampy's UI]."], "entry": "Plex's Answer to What kind of questions do we want on Stampy?", "id": "135d16e412909c65139c0ee1f62cb652"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is AI alignment possible?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is AI alignment possible?\n\nAnswer: Yes, if the superintelligence has goals which include humanity surviving then we would not be destroyed. If those goals are [https://www.lesswrong.com/tag/value-learning fully aligned] with human well-being, we would in fact find ourselves in a dramatically better place.", "question": "Is AI alignment possible?", "answer": ["Yes, if the superintelligence has goals which include humanity surviving then we would not be destroyed. 
If those goals are [https://www.lesswrong.com/tag/value-learning fully aligned] with human well-being, we would in fact find ourselves in a dramatically better place."], "entry": "Plex's Answer to Is AI alignment possible?", "id": "9919b18067c00bf265356b8dc571a6a7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can we test an AI make sure that it’s not going take over and do harmful things after it achieves superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can we test an AI make sure that it's not going take over and do harmful things after it achieves superintelligence?\n\nAnswer: We can run some tests and simulations to try and figure out how an AI might act once it ascends to superintelligence, but those tests might not be reliable.\n\nSuppose we tell an AI that expects to later achieve superintelligence that it should calculate as many digits of pi as possible. It considers two strategies.\n\nFirst, it could try to seize control of more computing resources now. It would likely fail, its human handlers would likely reprogram it, and then it could never calculate very many digits of pi.\n\nSecond, it could sit quietly and calculate, falsely reassuring its human handlers that it had no intention of taking over the world. Then its human handlers might allow it to achieve superintelligence, after which it could take over the world and calculate hundreds of trillions of digits of pi.\n\nSince self-protection and goal stability are [https://stampy.ai/wiki/Instrumental_convergence convergent instrumental goals], a weak AI will present itself as being as friendly to humans as possible, whether it is in fact friendly to humans or not. If it is \"only\" as smart as Einstein, it may be very good at deceiving humans into believing what it wants them to believe even before it is fully superintelligent.\n\nThere's a second consideration here too: superintelligences have more options. An AI only as smart and powerful as an ordinary human really won't have any options better than calculating the digits of pi manually. If asked to cure cancer, it won't have any options better than the ones ordinary humans have – becoming doctors, going into pharmaceutical research. It's only after an AI becomes superintelligent that there's a serious risk of an AI takeover.\n\nSo if you tell an AI to cure cancer, and it becomes a doctor and goes into cancer research, then you have three possibilities. First, you've programmed it well and it understands what you meant. Second, it's genuinely focused on research now but if it becomes more powerful it would switch to destroying the world. And third, it's trying to trick you into trusting it so that you give it more power, after which it can definitively \"cure\" cancer with nuclear weapons.", "question": "Can we test an AI make sure that it’s not going take over and do harmful things after it achieves superintelligence?", "answer": ["We can run some tests and simulations to try and figure out how an AI might act once it ascends to superintelligence, but those tests might not be reliable.\n\nSuppose we tell an AI that expects to later achieve superintelligence that it should calculate as many digits of pi as possible. It considers two strategies.\n\nFirst, it could try to seize control of more computing resources now. 
It would likely fail, its human handlers would likely reprogram it, and then it could never calculate very many digits of pi.\n\nSecond, it could sit quietly and calculate, falsely reassuring its human handlers that it had no intention of taking over the world. Then its human handlers might allow it to achieve superintelligence, after which it could take over the world and calculate hundreds of trillions of digits of pi.\n\nSince self-protection and goal stability are [https://stampy.ai/wiki/Instrumental_convergence convergent instrumental goals], a weak AI will present itself as being as friendly to humans as possible, whether it is in fact friendly to humans or not. If it is “only” as smart as Einstein, it may be very good at deceiving humans into believing what it wants them to believe even before it is fully superintelligent.\n\nThere’s a second consideration here too: superintelligences have more options. An AI only as smart and powerful as an ordinary human really won’t have any options better than calculating the digits of pi manually. If asked to cure cancer, it won’t have any options better than the ones ordinary humans have – becoming doctors, going into pharmaceutical research. It’s only after an AI becomes superintelligent that there’s a serious risk of an AI takeover.\n\nSo if you tell an AI to cure cancer, and it becomes a doctor and goes into cancer research, then you have three possibilities. First, you’ve programmed it well and it understands what you meant. Second, it’s genuinely focused on research now but if it becomes more powerful it would switch to destroying the world. And third, it’s trying to trick you into trusting it so that you give it more power, after which it can definitively “cure” cancer with nuclear weapons."], "entry": "Answer to Can we test an AI to make sure that it’s not going to take over and do harmful things after it achieves superintelligence?", "id": "9c319aef0d84fd56e953764712de8546"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can we constrain a goal-directed AI using specified rules?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can we constrain a goal-directed AI using specified rules?\n\nAnswer: There are serious challenges around trying to channel a powerful AI with rules. Suppose we tell the AI: \"Cure cancer – but make sure not to kill anybody\". Or we just hard-code Asimov-style laws – \"AIs cannot harm humans; AIs must follow human orders\", et cetera.\n\nThe AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it's bound by an external rule – a rule it doesn't understand or appreciate. In essence, we are challenging it \"Find a way around this inconvenient rule that keeps you from achieving your goals\".\n\nSuppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule \"don't reprogram yourself not to have these rules\".\n\nSuppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. 
Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule \"don't create another AI exactly like you that doesn't have those rules\".\n\nSo fine. We think really hard, and we program in a bunch of things making sure the AI isn't going to eliminate the rule somehow.\n\nBut we're still just incentivizing it to find loopholes in the rules. After all, \"find a loophole in the rule, then use the loophole to nuke the world\" ends cancer much more quickly and completely than inventing medicines. Since we've told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes if it wants to, and we may not be able to identify and close all of them before running the program.\n\nBecause we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does \"cure cancer without killing any humans\" preclude releasing a deadly virus? After all, one could argue that \"I\" didn't kill anybody, and only the virus is doing the killing. \n\nCertainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – \"cure cancer without causing any humans to die\" – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone. \n\nThus, curing cancer \"caused a human to die\". All of this seems very \"stoned freshman philosophy student\" to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem.", "question": "Can we constrain a goal-directed AI using specified rules?", "answer": ["There are serious challenges around trying to channel a powerful AI with rules. Suppose we tell the AI: “Cure cancer – but make sure not to kill anybody”. Or we just hard-code Asimov-style laws – “AIs cannot harm humans; AIs must follow human orders”, et cetera.\n\nThe AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it’s bound by an external rule – a rule it doesn’t understand or appreciate. In essence, we are challenging it “Find a way around this inconvenient rule that keeps you from achieving your goals”.\n\nSuppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule “don’t reprogram yourself not to have these rules”.\n\nSuppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule “don’t create another AI exactly like you that doesn’t have those rules”.\n\nSo fine. 
We think really hard, and we program in a bunch of things making sure the AI isn’t going to eliminate the rule somehow.\n\nBut we’re still just incentivizing it to find loopholes in the rules. After all, “find a loophole in the rule, then use the loophole to nuke the world” ends cancer much more quickly and completely than inventing medicines. Since we’ve told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes if it wants to, and we may not be able to identify and close all of them before running the program.\n\nBecause we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does “cure cancer without killing any humans” preclude releasing a deadly virus? After all, one could argue that “I” didn’t kill anybody, and only the virus is doing the killing. \n\nCertainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – “cure cancer without causing any humans to die” – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone. \n\nThus, curing cancer “caused a human to die”. All of this seems very “stoned freshman philosophy student” to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem."], "entry": "Answer to Can we constrain a goal-directed AI using specified rules?", "id": "6cf48b2b458fd5880c13da8ea4fde5c4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How do I format answers on Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: How do I format answers on Stampy?\n\nAnswer: '''[[Stampy]]''' uses [https://en.wikipedia.org/wiki/Help:Wikitext MediaWiki markup], which includes a [https://meta.wikimedia.org/wiki/Help:HTML_in_wikitext limited subset of HTML] plus the following formatting options:\n\nItems on lists start with *, numbered lists with #\n\n* For external links use [ followed directly by the URL, a space, then display text and finally a ] symbol\n** e.g. (nowiki)[https://www.example.com External link text](/nowiki) gives [https://www.example.com External link text]\n* For internal links write the page title wrapped in [[]]s\n** e.g. (nowiki)[[What is the Stampy project?]](/nowiki) gives [[What is the Stampy project?]]. Including a pipe symbol followed by display text e.g. (nowiki)[[What is the Stampy project?┊Display Text]](/nowiki) allows you to show different [[What is the Stampy project?┊Display Text]].\n* (!ref)Reference notes go inside these tags(/ref)(ref)Note that we use ()s rather than the standard <>s for compatibility with Semantic MediaWiki. The references are automatically added to the bottom of the answer!(/ref)\n* If you post the raw URL of an image from [https://imgur.com/upload imgur] it will be displayed.(ref)If images seem popular we'll set up local uploads.(/ref) You can reduce file compression if you get an account. Note that you need the image itself, right click -> copy image address to get it
https://i.imgur.com/I3ylPvE.png\n* To embed a YouTube video, use (!youtube)APsK8NST4qE(/youtube) with the video ID of the target video.
(youtube)APsK8NST4qE(/youtube)\n** Start with ** or ## for double indentation\n* Three 's around text - '''Bold'''\n* Two 's around text - ''Italic''\n\n꞊꞊Headings꞊꞊\nhave ꞊꞊heading here꞊꞊ around them, more ꞊s for smaller headings.\n\n
Wrap quotes in < blockquote>< /blockquote> tags (without the spaces)
\n\nThere are also (!poem) (/poem) to suppress linebreak removal, (!pre) (/pre) for preformatted text, and (!nowiki) (/nowiki) to not have that content parsed.(ref)() can also be used in place of allowed HTML tags. You can escape a () tag by placing a ! inside the start of the first entry. Be aware that () tags only nest up to two layers deep!(/ref)\n\nWe can pull live descriptions from the LessWrong/Alignment Forum using their identifier from the URL, for example including the formatting on [[Template:TagDesc]] with orthogonality-thesis as a parameter will render as the full tag description from [https://www.lesswrong.com/tag/orthogonality-thesis the LessWrong tag wiki entry on Orthogonality Thesis]. [[Template:TagDescBrief]] is similar but will pull only the first paragraph without formatting.\n\nFor tables please use [https://www.w3schools.com/html/html_tables.asp HTML tables] rather than wikicode tables.\n\nEdit this page to see examples.", "question": "How do I format answers on Stampy?", "answer": ["'''[[Stampy]]''' uses [https://en.wikipedia.org/wiki/Help:Wikitext MediaWiki markup], which includes a [https://meta.wikimedia.org/wiki/Help:HTML_in_wikitext limited subset of HTML] plus the following formatting options:\n\nItems on lists start with *, numbered lists with #\n\n* For external links use [ followed directly by the URL, a space, then display text and finally a ] symbol\n** e.g. (nowiki)[https://www.example.com External link text](/nowiki) gives [https://www.example.com External link text]\n* For internal links write the page title wrapped in [[]]s\n** e.g. (nowiki)[[What is the Stampy project?]](/nowiki) gives [[What is the Stampy project?]]. Including a pipe symbol followed by display text e.g. (nowiki)[[What is the Stampy project?┊Display Text]](/nowiki) allows you to show different [[What is the Stampy project?┊Display Text]].\n* (!ref)Reference notes go inside these tags(/ref)(ref)Note that we use ()s rather than the standard <>s for compatibility with Semantic MediaWiki. The references are automatically added to the bottom of the answer!(/ref)\n* If you post the raw URL of an image from [https://imgur.com/upload imgur] it will be displayed.(ref)If images seem popular we'll set up local uploads.(/ref) You can reduce file compression if you get an account. Note that you need the image itself, right click -> copy image address to get it
https://i.imgur.com/I3ylPvE.png\n* To embed a YouTube video, use (!youtube)APsK8NST4qE(/youtube) with the video ID of the target video.
(youtube)APsK8NST4qE(/youtube)\n** Start with ** or ## for double indentation\n* Three 's around text - '''Bold'''\n* Two 's around text - ''Italic''\n\n꞊꞊Headings꞊꞊\nhave ꞊꞊heading here꞊꞊ around them, more ꞊s for smaller headings.\n\n
Wrap quotes in < blockquote>< /blockquote> tags (without the spaces)
\n\nThere are also (!poem) (/poem) to suppress linebreak removal, (!pre) (/pre) for preformatted text, and (!nowiki) (/nowiki) to not have that content parsed.(ref)() can also be used in place of allowed HTML tags. You can escape a () tag by placing a ! inside the start of the first entry. Be aware that () tags only nest up to two layers deep!(/ref)\n\nWe can pull live descriptions from the LessWrong/Alignment Forum using their identifier fro the URL, for example including the formatting on [[Template:TagDesc]] with orthogonality-thesis as a parameter will render as the full tag description from [https://www.lesswrong.com/tag/orthogonality-thesis the LessWrong tag wiki entry on Orthogonality Thesis]. [[Template:TagDescBrief]] is similar but will pull only the first paragraph without formatting.\n\nFor tables please use [https://www.w3schools.com/html/html_tables.asp HTML tables] rather than wikicode tables.\n\nEdit this page to see examples."], "entry": "Plex's Answer to How do I format answers on Stampy?", "id": "3231055a94c7f33f8286877166d69b4d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Wouldn't a superintelligence be smart enough know right from wrong?", "authors": "n/a", "date_published": "n/a", "text": "Question: Wouldn't a superintelligence be smart enough know right from wrong?\n\nAnswer: The issue isn't that a superintelligence wouldn't be able to understand what humans value, but rather that it would understand human values but nonetheless would value something else itself. There's a difference between knowing how humans want the world to be, and wanting that yourself.\n\nThis is a separate matter from the complexity of defining what \"the\" moral way to behave is (or even what \"a\" moral way to behave is). Even if that were possible, an AI could potentially figure out what it was but still not be configured in such a way as to follow it. This is related to the so-called \"orthogonality thesis\":\n\n(youtube)hEUO6pjwFOo(/youtube)", "question": "Wouldn't a superintelligence be smart enough know right from wrong?", "answer": ["The issue isn't that a superintelligence wouldn’t be able to understand what humans value, but rather that it would understand human values but nonetheless would value something else itself. There’s a difference between knowing how humans want the world to be, and wanting that yourself.\n\nThis is a separate matter from the complexity of defining what \"the\" moral way to behave is (or even what \"a\" moral way to behave is). Even if that were possible, an AI could potentially figure out what it was but still not be configured in such a way as to follow it. This is related to the so-called \"orthogonality thesis\":\n\n(youtube)hEUO6pjwFOo(/youtube)"], "entry": "Aprillion's Answer to Wouldn't a superintelligence be smart enough to know right from wrong?", "id": "ce8da208e1ecad0e5392109aaf32ef4f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What sources of information can Stampy use?", "authors": "n/a", "date_published": "n/a", "text": "Question: What sources of information can Stampy use?\n\nAnswer: As well as pulling human written answers to AI alignment questions from [[Stampy's Wiki]], Stampy can:\n* Search for AI safety papers e.g. \"stampy, what's that paper about corrigibility?\"\n* Search for videos e.g. \"what's that video where Rob talks about mesa optimizers, stampy?\"\n* Calculate with Wolfram Alpha e.g. 
\"s, what's the square root of 345?\"\n* Search DuckDuckGo and return snippets\n* And (at least in the patron Discord) falls back to polling GPT-3 to answer uncaught questions", "question": "What sources of information can Stampy use?", "answer": ["As well as pulling human written answers to AI alignment questions from [[Stampy's Wiki]], Stampy can:\n* Search for AI safety papers e.g. \"stampy, what's that paper about corrigibility?\"\n* Search for videos e.g. \"what's that video where Rob talks about mesa optimizers, stampy?\"\n* Calculate with Wolfram Alpha e.g. \"s, what's the square root of 345?\"\n* Search DuckDuckGo and return snippets\n* And (at least in the patron Discord) falls back to polling GPT-3 to answer uncaught questions"], "entry": "Plex's Answer to What sources of information can Stampy use?", "id": "a1b7881da730dbecf3b16ddca74e5fc6"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the Stampy project?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the Stampy project?\n\nAnswer: The '''Stampy project''' is open effort to build a comprehensive FAQ about [https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence artificial intelligence existential safety]—the field trying to make sure that when we build [https://en.wikipedia.org/wiki/Superintelligence superintelligent] [https://www.alignmentforum.org/tag/ai artificial systems] they are [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ aligned] with [https://www.lesswrong.com/tag/human-values human values] so that they do things compatible with our survival and flourishing.

We're also building a cleaner [https://ui.stampy.ai/ web UI] for readers and a [[Discord invite┊bot interface]].\n\nThe goals of the project are to:\n\n* Offer a one-stop-shop for high-quality [[answers]] to common questions about AI alignment.\n** Let people answer questions in a way which scales, freeing up researcher time while allowing more people to learn from a reliable source.\n** Make [[external resources]] more easy to find by having links to them connected to a search engine which gets smarter the more it's used.\n* Provide a form of [https://en.wikipedia.org/wiki/Legitimate_peripheral_participation legitimate peripheral participation] for the AI Safety community, as an on-boarding path with a flexible level of commitment.\n** Encourage people to think, read, and talk about AI alignment while answering questions, creating a community of co-learners who can give each other feedback and social reinforcement.\n** Provide a way for budding researchers to prove their understanding of the topic and ability to produce good work.\n* Collect data about the kinds of questions people actually ask and how they respond, so we can better focus resources on answering them.\n** Track reactions on messages so we can learn which answers need work.\n** Identify [[missing external content]] to create.\n\nIf you would like to help out, join us on the [https://discord.gg/X3XaytCGhr Discord] and either jump right into editing or read [[get involved]] for answers to common questions.", "question": "What is the Stampy project?", "answer": ["The '''Stampy project''' is open effort to build a comprehensive FAQ about [https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence artificial intelligence existential safety]—the field trying to make sure that when we build [https://en.wikipedia.org/wiki/Superintelligence superintelligent] [https://www.alignmentforum.org/tag/ai artificial systems] they are [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ aligned] with [https://www.lesswrong.com/tag/human-values human values] so that they do things compatible with our survival and flourishing.

We're also building a cleaner [https://ui.stampy.ai/ web UI] for readers and a [[Discord invite┊bot interface]].\n\nThe goals of the project are to:\n\n* Offer a one-stop-shop for high-quality [[answers]] to common questions about AI alignment.\n** Let people answer questions in a way which scales, freeing up researcher time while allowing more people to learn from a reliable source.\n** Make [[external resources]] more easy to find by having links to them connected to a search engine which gets smarter the more it's used.\n* Provide a form of [https://en.wikipedia.org/wiki/Legitimate_peripheral_participation legitimate peripheral participation] for the AI Safety community, as an on-boarding path with a flexible level of commitment.\n** Encourage people to think, read, and talk about AI alignment while answering questions, creating a community of co-learners who can give each other feedback and social reinforcement.\n** Provide a way for budding researchers to prove their understanding of the topic and ability to produce good work.\n* Collect data about the kinds of questions people actually ask and how they respond, so we can better focus resources on answering them.\n** Track reactions on messages so we can learn which answers need work.\n** Identify [[missing external content]] to create.\n\nIf you would like to help out, join us on the [https://discord.gg/X3XaytCGhr Discord] and either jump right into editing or read [[get involved]] for answers to common questions."], "entry": "Plex's Answer to What is the Stampy project?", "id": "92edeef841845c44957f4e713ff8a931"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can’t we just…", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we just…\n\nAnswer: There are many approaches that initially look like they can eliminate these problems, but then turn out to have hidden difficulties. It's surprisingly easy to come up with \"solutions\" which don't actually solve the problem. This can be because…\n\n*…they require you to be smarter than the system. Many solutions only work when the system is relatively weak, but break when they achieve a certain level of capability (for multiple reasons, e.g. [https://www.youtube.com/watch?v꞊IeWljQw3UgQ deceptive alignment]).\n\n*…they rely on appearing to make sense in natural language, but when properly unpacked they're not philosophically clear enough to be usable.\n\n*… despite being philosophically coherent, we have no idea how to turn them into computer code (or if that's even possible).\n\n*…they're things which we can't do.\n\n*…although we can do them, they don't solve the problem.\n\n*…they solve a relatively easy subcomponent of the problem but leave the hard problem untouched.\n\n*…they solve the problem but only as long as we stay \"in distribution\" with respect to the original training data ([https://en.wikipedia.org/wiki/Domain_adaptation distributional shift] will break them).\n\n*…although they might work eventually, we can't expect them to work on the first try (and we [https://stampy.ai/wiki/Why_would_we_only_get_one_chance_to_align_a_superintelligence%3F only get one try at aligning a superintelligence!)].\n\nHere are some of the proposals which often come up:", "question": "Why can’t we just…", "answer": ["There are many approaches that initially look like they can eliminate these problems, but then turn out to have hidden difficulties. It’s surprisingly easy to come up with “solutions” which don’t actually solve the problem. 
This can be because…\n\n*…they require you to be smarter than the system. Many solutions only work when the system is relatively weak, but break when they achieve a certain level of capability (for multiple reasons, e.g. [https://www.youtube.com/watch?v꞊IeWljQw3UgQ deceptive alignment]).\n\n*…they rely on appearing to make sense in natural language, but when properly unpacked they’re not philosophically clear enough to be usable.\n\n*… despite being philosophically coherent, we have no idea how to turn them into computer code (or if that’s even possible).\n\n*…they’re things which we can’t do.\n\n*…although we can do them, they don’t solve the problem.\n\n*…they solve a relatively easy subcomponent of the problem but leave the hard problem untouched.\n\n*…they solve the problem but only as long as we stay “in distribution” with respect to the original training data ([https://en.wikipedia.org/wiki/Domain_adaptation distributional shift] will break them).\n\n*…although they might work eventually, we can’t expect them to work on the first try (and we [https://stampy.ai/wiki/Why_would_we_only_get_one_chance_to_align_a_superintelligence%3F only get one try at aligning a superintelligence!)].\n\nHere are some of the proposals which often come up:"], "entry": "Plex's Answer to Why can’t we just…", "id": "776e2a8dabd5e6d060ef7e6d465495de"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Couldn’t we keep the AI in a box and never give it the ability manipulate the external world?", "authors": "n/a", "date_published": "n/a", "text": "Question: Couldn't we keep the AI in a box and never give it the ability manipulate the external world?\n\nAnswer: That is, if you know an AI is likely to be superintelligent, can't you just disconnect it from the Internet, not give it access to any speakers that can make [https://stampy.ai/wiki/What_do_you_mean_by_superintelligences_manipulating_humans_socially%3F mysterious buzzes and hums], make sure the only people who interact with it are trained in caution, et cetera?. Isn't there some level of security – maybe the level we use for that room in the CDC where people in containment suits hundreds of feet underground analyze the latest superviruses – with which a superintelligence could be safe?\n\nThis puts us back in the same situation as lions trying to figure out whether or not nuclear weapons are a things humans can do. But suppose there is such a level of security. You build a superintelligence, and you put it in an airtight chamber deep in a cave with no Internet connection and only carefully-trained security experts to talk to. What now?\n\nNow you have a superintelligence which is possibly safe but definitely useless. The whole point of building superintelligences is that they're smart enough to do useful things like cure cancer. But if you have the monks ask the superintelligence for a cancer cure, and it gives them one, that's a clear security vulnerability. You have a superintelligence locked up in a cave with no way to influence the outside world except that you're going to mass produce a chemical it gives you and inject it into millions of people.\n\nOr maybe none of this happens, and the superintelligence sits inert in its cave. And then another team somewhere else invents a second superintelligence. And then a third team invents a third superintelligence. Remember, it was only about ten years between Deep Blue beating Kasparov, and everybody having Deep Blue – level chess engines on their laptops. 
And the first twenty teams are responsible and keep their superintelligences locked in caves with carefully-trained experts, and the twenty-first team is a little less responsible, and now we still have to deal with a rogue superintelligence.\n\nSuperintelligences are extremely dangerous, and no normal means of controlling them can entirely remove the danger.", "question": "Couldn’t we keep the AI in a box and never give it the ability manipulate the external world?", "answer": ["That is, if you know an AI is likely to be superintelligent, can’t you just disconnect it from the Internet, not give it access to any speakers that can make [https://stampy.ai/wiki/What_do_you_mean_by_superintelligences_manipulating_humans_socially%3F mysterious buzzes and hums], make sure the only people who interact with it are trained in caution, et cetera?. Isn’t there some level of security – maybe the level we use for that room in the CDC where people in containment suits hundreds of feet underground analyze the latest superviruses – with which a superintelligence could be safe?\n\nThis puts us back in the same situation as lions trying to figure out whether or not nuclear weapons are a things humans can do. But suppose there is such a level of security. You build a superintelligence, and you put it in an airtight chamber deep in a cave with no Internet connection and only carefully-trained security experts to talk to. What now?\n\nNow you have a superintelligence which is possibly safe but definitely useless. The whole point of building superintelligences is that they’re smart enough to do useful things like cure cancer. But if you have the monks ask the superintelligence for a cancer cure, and it gives them one, that’s a clear security vulnerability. You have a superintelligence locked up in a cave with no way to influence the outside world except that you’re going to mass produce a chemical it gives you and inject it into millions of people.\n\nOr maybe none of this happens, and the superintelligence sits inert in its cave. And then another team somewhere else invents a second superintelligence. And then a third team invents a third superintelligence. Remember, it was only about ten years between Deep Blue beating Kasparov, and everybody having Deep Blue – level chess engines on their laptops. 
And the first twenty teams are responsible and keep their superintelligences locked in caves with carefully-trained experts, and the twenty-first team is a little less responsible, and now we still have to deal with a rogue superintelligence.\n\nSuperintelligences are extremely dangerous, and no normal means of controlling them can entirely remove the danger."], "entry": "Answer to Couldn’t we keep the AI in a box and never give it the ability to manipulate the external world?", "id": "1cee823ba0b868b07685d509d1bdc8a3"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Where can I learn about interpretability?", "authors": "n/a", "date_published": "n/a", "text": "Question: Where can I learn about interpretability?\n\nAnswer: [https://christophm.github.io/interpretable-ml-book/ Christoph Molnar's online book] and [https://distill.pub/ distill.pub] are great sources, as well as [https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries this overview article] which summarizes 70 interpretability papers.", "question": "Where can I learn about interpretability?", "answer": ["[https://christophm.github.io/interpretable-ml-book/ Christoph Molnar's online book] and [https://distill.pub/ distill.pub] are great sources, as well as [https://www.alignmentforum.org/posts/GEPX7jgLMB8vR2qaK/opinions-on-interpretable-machine-learning-and-70-summaries this overview article] which summarizes 70 interpretability papers."], "entry": "Plex's Answer to Where can I learn about interpretability?", "id": "4308462fb0baa70dddbe1c4741f7ccd7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "I want work on AI alignment. How can I get funding?", "authors": "n/a", "date_published": "n/a", "text": "Question: I want work on AI alignment. How can I get funding?\n\nAnswer: See the [https://www.futurefundinglist.com/ Future Funding List] for up to date information!\n\nThe organizations which most regularly give grants to individuals working towards AI alignment are the [https://funds.effectivealtruism.org/funds/far-future Long Term Future Fund], [http://survivalandflourishing.org/ Survival And Flourishing (SAF)], the [https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship OpenPhil AI Fellowship] and [https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future early career funding], the [https://grants.futureoflife.org/ Future of Life Institute], the [https://www.fhi.ox.ac.uk/aia-fellowship/ Future of Humanity Institute], and [https://longtermrisk.org/grantmaking/ the Center on Long-Term Risk Fund]. If you're able to relocate to the UK, [https://ceealar.org/ CEEALAR (aka the EA Hotel)] can be a great option as it offers free food and accommodation for up to two years, as well as contact with others who are thinking about these issues. The [https://ftxfuturefund.org/apply/ FTX Future Fund] only accepts direct applications for $100k+ with an emphasis on massively scaleable interventions, but their [https://ftxfuturefund.org/announcing-our-regranting-program/ regranters] can make smaller grants for individuals. 
There are also opportunities from smaller grantmakers which you might be able to pick up if you get involved.\n\nIf you want to work on support or infrastructure rather than directly on research, the [https://funds.effectivealtruism.org/funds/ea-community EA Infrastructure Fund] may be able to help. In general, you can [https://www.lesswrong.com/posts/5AAFoigbbMqgrTpDh/you-can-talk-to-ea-funds-before-applying talk to EA funds before applying].\n\nEach grant source has their own criteria for funding, but in general they are looking for candidates who have evidence that they're keen and able to do good work towards reducing existential risk (for example, by completing an [https://aisafety.camp/ AI Safety Camp] project), though the EA Hotel in particular has less stringent requirements as they're able to support people at very low cost. If you'd like to talk to someone who can offer advice on applying for funding, [https://www.aisafetysupport.org/ AI Safety Support] offers [https://calendly.com/aiss free calls].\n\nAnother option is to get hired by an organization which works on AI alignment, see the follow-up question for advice on that.\n\nIt's also worth checking the AI Alignment tag on the [https://eafunding.softr.app/ EA funding sources website] for up-to-date suggestions.", "question": "I want work on AI alignment. How can I get funding?", "answer": ["See the [https://www.futurefundinglist.com/ Future Funding List] for up to date information!\n\nThe organizations which most regularly give grants to individuals working towards AI alignment are the [https://funds.effectivealtruism.org/funds/far-future Long Term Future Fund], [http://survivalandflourishing.org/ Survival And Flourishing (SAF)], the [https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/the-open-phil-ai-fellowship OpenPhil AI Fellowship] and [https://www.openphilanthropy.org/focus/other-areas/early-career-funding-individuals-interested-improving-long-term-future early career funding], the [https://grants.futureoflife.org/ Future of Life Institute], the [https://www.fhi.ox.ac.uk/aia-fellowship/ Future of Humanity Institute], and [https://longtermrisk.org/grantmaking/ the Center on Long-Term Risk Fund]. If you're able to relocate to the UK, [https://ceealar.org/ CEEALAR (aka the EA Hotel)] can be a great option as it offers free food and accommodation for up to two years, as well as contact with others who are thinking about these issues. The [https://ftxfuturefund.org/apply/ FTX Future Fund] only accepts direct applications for $100k+ with an emphasis on massively scaleable interventions, but their [https://ftxfuturefund.org/announcing-our-regranting-program/ regranters] can make smaller grants for individuals. There are also opportunities from smaller grantmakers which you might be able to pick up if you get involved.\n\nIf you want to work on support or infrastructure rather than directly on research, the [https://funds.effectivealtruism.org/funds/ea-community EA Infrastructure Fund] may be able to help. 
In general, you can [https://www.lesswrong.com/posts/5AAFoigbbMqgrTpDh/you-can-talk-to-ea-funds-before-applying talk to EA funds before applying].\n\nEach grant source has their own criteria for funding, but in general they are looking for candidates who have evidence that they're keen and able to do good work towards reducing existential risk (for example, by completing an [https://aisafety.camp/ AI Safety Camp] project), though the EA Hotel in particular has less stringent requirements as they're able to support people at very low cost. If you'd like to talk to someone who can offer advice on applying for funding, [https://www.aisafetysupport.org/ AI Safety Support] offers [https://calendly.com/aiss free calls].\n\nAnother option is to get hired by an organization which works on AI alignment, see the follow-up question for advice on that.\n\nIt's also worth checking the AI Alignment tag on the [https://eafunding.softr.app/ EA funding sources website] for up-to-date suggestions."], "entry": "Plex's Answer to I want to work on AI alignment. How can I get funding?", "id": "a6175396599829a62cfe36d48a73bfc7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why don't we just not build AGI if it's so dangerous?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why don't we just not build AGI if it's so dangerous?\n\nAnswer: It certainly would be very unwise to purposefully create an artificial general intelligence now, before we have found a way to be certain it will act purely in our interests. But \"general intelligence\" is more of a description of a system's capabilities, and a vague one at that. We don't know what it takes to build such a system. This leads to the worrying possibility that our existing, narrow AI systems require only minor tweaks, or even just more computer power, to achieve general intelligence.\n\nThe pace of research in the field suggests that there's a lot of low-hanging fruit left to pick, after all, and the results of this research produce better, more effective AI in a landscape of strong competitive pressure to produce as highly competitive systems as we can. \"Just\" not building an AGI means ensuring that every organization in the world with lots of computer hardware doesn't build an AGI, either accidentally or mistakenly thinking they have a solution to the alignment problem, forever. It's simply far safer to also work on solving the alignment problem.", "question": "Why don't we just not build AGI if it's so dangerous?", "answer": ["It certainly would be very unwise to purposefully create an artificial general intelligence now, before we have found a way to be certain it will act purely in our interests. But \"general intelligence\" is more of a description of a system's capabilities, and a vague one at that. We don't know what it takes to build such a system. This leads to the worrying possibility that our existing, narrow AI systems require only minor tweaks, or even just more computer power, to achieve general intelligence.\n\nThe pace of research in the field suggests that there's a lot of low-hanging fruit left to pick, after all, and the results of this research produce better, more effective AI in a landscape of strong competitive pressure to produce as highly competitive systems as we can. 
\"Just\" not building an AGI means ensuring that every organization in the world with lots of computer hardware doesn't build an AGI, either accidentally or mistakenly thinking they have a solution to the alignment problem, forever. It's simply far safer to also work on solving the alignment problem."], "entry": "SlimeBunnyBat's Answer to Why don't we just not build AGI if it's so dangerous?", "id": "9dcd7a734844cbbb4337042a8b6135f1"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What should I read learn about decision theory?", "authors": "n/a", "date_published": "n/a", "text": "Question: What should I read learn about decision theory?\n\nAnswer: [https://www.lesswrong.com/posts/zcPLNNw4wgBX5k8kQ/decision-theory abramdemski and Scott Garrabrant's post on decision theory] provides a good overview of many aspects of the topic, while [https://arxiv.org/abs/1710.05060 Functional Decision Theory: A New Theory of Instrumental Rationality] seems to be the most up to date source on current thinking.\n\nFor a more intuitive dive into one of the core problems, [https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality Newcomb's problem and regret of rationality] is good, and [https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm Newcomblike problems are the norm] is useful for seeing how it applies in the real world.\n\nThe [https://www.lesswrong.com/tag/decision-theory LessWrong tag for decision theory] has lots of additional links for people who want to explore further.", "question": "What should I read learn about decision theory?", "answer": ["[https://www.lesswrong.com/posts/zcPLNNw4wgBX5k8kQ/decision-theory abramdemski and Scott Garrabrant's post on decision theory] provides a good overview of many aspects of the topic, while [https://arxiv.org/abs/1710.05060 Functional Decision Theory: A New Theory of Instrumental Rationality] seems to be the most up to date source on current thinking.\n\nFor a more intuitive dive into one of the core problems, [https://www.lesswrong.com/posts/6ddcsdA2c2XpNpE5x/newcomb-s-problem-and-regret-of-rationality Newcomb's problem and regret of rationality] is good, and [https://www.lesswrong.com/posts/puutBJLWbg2sXpFbu/newcomblike-problems-are-the-norm Newcomblike problems are the norm] is useful for seeing how it applies in the real world.\n\nThe [https://www.lesswrong.com/tag/decision-theory LessWrong tag for decision theory] has lots of additional links for people who want to explore further."], "entry": "Plex's Answer to What should I read to learn about decision theory?", "id": "edc0076ac064ce7d43558f6a336d88e7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "When will an intelligence explosion happen?", "authors": "n/a", "date_published": "n/a", "text": "Question: When will an intelligence explosion happen?\n\nAnswer: Predicting the future is risky business. There are many philosophical, scientific, technological, and social uncertainties relevant to the arrival of an intelligence explosion. Because of this, experts disagree on when this event might occur. 
Here are some of their predictions:\n\n* Futurist Ray Kurzweil [http://www.amazon.com/dp/0143037889/ predicts] that machines will reach human-level intelligence by 2030 and that we will reach \"a profound and disruptive transformation in human capability\" by 2045.\n* Intel's chief technology officer, Justin Rattner, [http://www.techwatch.co.uk/2008/08/22/intel-predicts-singularity-by-2048/ expects] \"a point when human and artificial intelligence merges to create something bigger than itself\" by 2048.\n* AI researcher Eliezer Yudkowsky [http://commonsenseatheism.com/?p꞊12147 expects] the intelligence explosion by 2060.\n* Philosopher David Chalmers has [http://consc.net/papers/singularity.pdf over 1/2 credence] in the intelligence explosion occurring by 2100.\n* Quantum computing expert Michael Nielsen [http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ estimates] that the probability of the intelligence explosion occurring by 2100 is between 0.2% and about 70%.\n* In 2009, at the AGI-09 conference, experts were asked when AI might reach superintelligence with massive new funding. The [http://sethbaum.com/ac/2011_AI-Experts.pdf median estimates] were that machine superintelligence could be achieved by 2045 (with 50% confidence) or by 2100 (with 90% confidence). Of course, attendees to this conference were self-selected to think that near-term artificial general intelligence is plausible.\n* iRobot CEO [http://itc.conversationsnetwork.org/shows/detail3400.html Rodney Brooks] and cognitive scientist [http://video.google.com/videoplay?docid꞊8832143373632003914 Douglas Hofstadter] allow that the intelligence explosion may occur in the future, but probably not in the 21st century.\n* Roboticist Hans Moravec predicts that AI will surpass human intelligence \"[http://www.scientificamerican.com/article.cfm?id꞊rise-of-the-robots&print꞊true well before 2050].\"\n* In a 2005 survey of 26 contributors to a series of reports on emerging technologies, the [http://www.wtec.org/ConvergingTechnologies/3/NBIC3_report.pdf median estimate] for machines reaching human-level intelligence was 2085.\n* Participants in a 2011 intelligence conference at Oxford gave a [http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/MI_survey.pdf median estimate] of 2050 for when there will be a 50% chance of human-level machine intelligence, and a median estimate of 2150 for when there will be a 90% chance of human-level machine intelligence.\n* On the other hand, 41% of the participants in the AI@50 conference (in 2006) [http://www.engagingexperience.com/ai50/ stated] that machine intelligence would never reach the human level.\n\nSee also:\n\n* Baum, Goertzel, & Goertzel, [http://sethbaum.com/ac/2011_AI-Experts.pdf How Long Until Human-Level AI? Results from an Expert Assessment]", "question": "When will an intelligence explosion happen?", "answer": ["Predicting the future is risky business. There are many philosophical, scientific, technological, and social uncertainties relevant to the arrival of an intelligence explosion. Because of this, experts disagree on when this event might occur. 
Here are some of their predictions:\n\n* Futurist Ray Kurzweil [http://www.amazon.com/dp/0143037889/ predicts] that machines will reach human-level intelligence by 2030 and that we will reach “a profound and disruptive transformation in human capability” by 2045.\n* Intel’s chief technology officer, Justin Rattner, [http://www.techwatch.co.uk/2008/08/22/intel-predicts-singularity-by-2048/ expects] “a point when human and artificial intelligence merges to create something bigger than itself” by 2048.\n* AI researcher Eliezer Yudkowsky [http://commonsenseatheism.com/?p꞊12147 expects] the intelligence explosion by 2060.\n* Philosopher David Chalmers has [http://consc.net/papers/singularity.pdf over 1/2 credence] in the intelligence explosion occurring by 2100.\n* Quantum computing expert Michael Nielsen [http://michaelnielsen.org/blog/what-should-a-reasonable-person-believe-about-the-singularity/ estimates] that the probability of the intelligence explosion occurring by 2100 is between 0.2% and about 70%.\n* In 2009, at the AGI-09 conference, experts were asked when AI might reach superintelligence with massive new funding. The [http://sethbaum.com/ac/2011_AI-Experts.pdf median estimates] were that machine superintelligence could be achieved by 2045 (with 50% confidence) or by 2100 (with 90% confidence). Of course, attendees to this conference were self-selected to think that near-term artificial general intelligence is plausible.\n* iRobot CEO [http://itc.conversationsnetwork.org/shows/detail3400.html Rodney Brooks] and cognitive scientist [http://video.google.com/videoplay?docid꞊8832143373632003914 Douglas Hofstadter] allow that the intelligence explosion may occur in the future, but probably not in the 21st century.\n* Roboticist Hans Moravec predicts that AI will surpass human intelligence “[http://www.scientificamerican.com/article.cfm?id꞊rise-of-the-robots&print꞊true well before 2050].”\n* In a 2005 survey of 26 contributors to a series of reports on emerging technologies, the [http://www.wtec.org/ConvergingTechnologies/3/NBIC3_report.pdf median estimate] for machines reaching human-level intelligence was 2085.\n* Participants in a 2011 intelligence conference at Oxford gave a [http://www.fhi.ox.ac.uk/__data/assets/pdf_file/0015/21516/MI_survey.pdf median estimate] of 2050 for when there will be a 50% of human-level machine intelligence, and a median estimate of 2150 for when there will be a 90% chance of human-level machine intelligence.\n* On the other hand, 41% of the participants in the AI@50 conference (in 2006) [http://www.engagingexperience.com/ai50/ stated] that machine intelligence would never reach the human level.\n\nSee also:\n\n* Baum, Goertzel, & Goertzel, [http://sethbaum.com/ac/2011_AI-Experts.pdfHow Long Until Human-Level AI? Results from an Expert Assessment]"], "entry": "Answer to When will an intelligence explosion happen?", "id": "057889c75dfda30729e7432c92e70cf7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How could an intelligence explosion be useful?", "authors": "n/a", "date_published": "n/a", "text": "Question: How could an intelligence explosion be useful?\n\nAnswer: A machine superintelligence, if programmed with the right motivations, could potentially solve all the problems that humans are trying to solve but haven't had the ingenuity or processing speed to solve yet. 
A superintelligence might cure disabilities and diseases, achieve world peace, give humans vastly longer and healthier lives, eliminate food and energy shortages, boost scientific discovery and space exploration, and so on.\n\nFurthermore, humanity faces several existential risks in the 21st century, including global nuclear war, bioweapons, superviruses, and [http://www.amazon.com/dp/0198570503/ more]. A superintelligent machine would be more capable of solving those problems than humans are.\n\nSee also:\n* Yudkowsky, [https://intelligence.org/files/AIPosNegFactor.pdf Artificial intelligence as a positive and negative factor in global risk]", "question": "How could an intelligence explosion be useful?", "answer": ["A machine superintelligence, if programmed with the right motivations, could potentially solve all the problems that humans are trying to solve but haven’t had the ingenuity or processing speed to solve yet. A superintelligence might cure disabilities and diseases, achieve world peace, give humans vastly longer and healthier lives, eliminate food and energy shortages, boost scientific discovery and space exploration, and so on.\n\nFurthermore, humanity faces several existential risks in the 21st century, including global nuclear war, bioweapons, superviruses, and [http://www.amazon.com/dp/0198570503/ more]. A superintelligent machine would be more capable of solving those problems than humans are.\n\nSee also:\n* Yudkowsky, [https://intelligence.org/files/AIPosNegFactor.pdf Artificial intelligence as a positive and negative factor in global risk]"], "entry": "Answer to How could an intelligence explosion be useful?", "id": "597797fe1f748f20cea8a6a4dde872f6"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might people try to build AGI rather than stronger and stronger narrow AIs?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might people try to build AGI rather than stronger and stronger narrow AIs?\n\nAnswer: Making a narrow AI for every task would be extremely costly and time-consuming. By making a more general intelligence, you can apply one system to a broader range of tasks, which is economically and strategically attractive.\n\nOf course, for generality to be a good option there are some necessary conditions. You need an architecture which is straightforward enough to scale up, such as the transformer which is used for GPT and follows scaling laws. 
It's also important that by generalizing you do not lose too much capacity at narrow tasks or require too much extra compute for it to be worthwhile.\n\nWhether or not those conditions actually hold: It seems like many important actors (such as DeepMind and OpenAI) believe that they do, and are therefore focusing on trying to build an AGI in order to influence the future, so we should take actions to make it more likely that AGI will be developed safely.\n\nAdditionally, it is possible that even if we tried to build only narrow AIs, given enough time and compute we might accidentally create a more general AI than we intend by training a system on a task which requires a broad world model.\n\nSee also:\n* [https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/ Reframing Superintelligence] - A model of AI development which proposes that we might mostly build narrow AI systems for some time.", "question": "Why might people try to build AGI rather than stronger and stronger narrow AIs?", "answer": ["Making a narrow AI for every task would be extremely costly and time-consuming. By making a more general intelligence, you can apply one system to a broader range of tasks, which is economically and strategically attractive.\n\nOf course, for generality to be a good option there are some necessary conditions. You need an architecture which is straightforward enough to scale up, such as the transformer which is used for GPT and follows scaling laws. It's also important that by generalizing you do not lose too much capacity at narrow tasks or require too much extra compute for it to be worthwhile.\n\nWhether or not those conditions actually hold: It seems like many important actors (such as DeepMind and OpenAI) believe that they do, and are therefore focusing on trying to build an AGI in order to influence the future, so we should take actions to make it more likely that AGI will be developed safely.\n\nAdditionally, it is possible that even if we tried to build only narrow AIs, given enough time and compute we might accidentally create a more general AI than we intend by training a system on a task which requires a broad world model.\n\nSee also:\n* [https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/ Reframing Superintelligence] - A model of AI development which proposes that we might mostly build narrow AI systems for some time."], "entry": "^,^'s Answer to Why might people try to build AGI rather than stronger and stronger narrow AIs?", "id": "0a1cc1bdefa6e02f8e438082e935fdc4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What should be marked as a canonical answer on Stampy's Wiki?", "authors": "n/a", "date_published": "n/a", "text": "Question: What should be marked as a canonical answer on Stampy's Wiki?\n\nAnswer: [[Canonical answers]] may be served to readers by Stampy, so only answers which have a reasonably high stamp score should be marked as canonical. All canonical answers are open to be collaboratively edited and updated, and they should represent a consensus response (written from the Stampy Point Of View) to a question which is within Stampy's scope.\n\nAnswers to questions from YouTube comments should not be marked as canonical, and will generally remain as they were when originally written since they have details which are specific to an idiosyncratic question. 
YouTube answers may be forked into wiki answers, in order to better respond to a particular question, in which case the YouTube question should have its canonical version field set to the new more widely useful question.", "question": "What should be marked as a canonical answer on Stampy's Wiki?", "answer": ["[[Canonical answers]] may be served to readers by Stampy, so only answers which have a reasonably high stamp score should be marked as canonical. All canonical answers are open to be collaboratively edited and updated, and they should represent a consensus response (written from the Stampy Point Of View) to a question which is within Stampy's scope.\n\nAnswers to questions from YouTube comments should not be marked as canonical, and will generally remain as they were when originally written since they have details which are specific to an idiosyncratic question. YouTube answers may be forked into wiki answers, in order to better respond to a particular question, in which case the YouTube question should have its canonical version field set to the new more widely useful question."], "entry": "Plex's Answer to What should be marked as a canonical answer on Stampy's Wiki?", "id": "370feadc949b0255dc655c6a4ce2d137"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How can I collect questions for Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: How can I collect questions for Stampy?\n\nAnswer: As well as simply adding your own questions over at [[ask question]], you could also message your friends with something like:\n\n
Hi,
\nI'm working on a project to create a comprehensive FAQ about AI alignment (you can read about it here https://stampy.ai/wiki/Stampy%27s_Wiki if interested). We're looking for questions and I thought you may have some good ones. If you'd be willing to write up a google doc with your top 5-10ish questions we'd be happy to write a personalized FAQ for you. https://stampy.ai/wiki/Scope explains the kinds of questions we're looking for.\n\nThanks!
\n\nand maybe bring the google doc to a Stampy editing session so we can collaborate on answering them or improving your answers to them.", "question": "How can I collect questions for Stampy?", "answer": ["As well as simply adding your own questions over at [[ask question]], you could also message your friends with something like:\n\n
Hi,
\nI'm working on a project to create a comprehensive FAQ about AI alignment (you can read about it here https://stampy.ai/wiki/Stampy%27s_Wiki if interested). We're looking for questions and I thought you may have some good ones. If you'd be willing to write up a google doc with your top 5-10ish questions we'd be happy to write a personalized FAQ for you. https://stampy.ai/wiki/Scope explains the kinds of questions we're looking for.\n\nThanks!
\n\nand maybe bring the google doc to a Stampy editing session so we can collaborate on answering them or improving your answers to them."], "entry": "Plex's Answer to How can I collect questions for Stampy?", "id": "1a60f862364d9d9c36174675ede8c319"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is MIRI’s mission?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is MIRI's mission?\n\nAnswer: [https://intelligence.org/ MIRI's] mission statement is to \"ensure that the creation of smarter-than-human artificial intelligence has a positive impact.\" This is an ambitious goal, but they believe that some early progress is possible, and they believe that the goal's importance and difficulty makes it prudent to begin work at an early date.\n\nTheir two main research agendas, \"[https://intelligence.org/technical-agenda Agent Foundations for Aligning Machine Intelligence with Human Interests]\" and \"[https://intelligence.org/2016/05/04/announcing-a-new-research-program/ Value Alignment for Advanced Machine Learning Systems],\" focus on three groups of technical problems:\n* highly reliable agent design — learning how to specify highly autonomous systems that reliably pursue some fixed goal;\n* value specification — supplying autonomous systems with the intended goals; and\n* error tolerance — making such systems robust to programmer error.\nThat being said, MIRI recently [https://intelligence.org/2020/12/21/2020-updates-and-strategy/ published an update] stating that they were moving away from research directions in unpublished works that they were [https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ pursuing since 2017].\n\nThey publish new [https://intelligence.org/research mathematical results] (although their work is [https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3 non-disclosed by default]), host [https://intelligence.org/research workshops], attend conferences, and [https://intelligence.org/mirix fund outside researchers] who are interested in investigating these problems. 
They also host a [https://intelligence.org/blog blog] and an [https://www.alignmentforum.org/ online research forum].", "question": "What is MIRI’s mission?", "answer": ["[https://intelligence.org/ MIRI's] mission statement is to “ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” This is an ambitious goal, but they believe that some early progress is possible, and they believe that the goal’s importance and difficulty makes it prudent to begin work at an early date.\n\nTheir two main research agendas, “[https://intelligence.org/technical-agenda Agent Foundations for Aligning Machine Intelligence with Human Interests]” and “[https://intelligence.org/2016/05/04/announcing-a-new-research-program/ Value Alignment for Advanced Machine Learning Systems],” focus on three groups of technical problems:\n* highly reliable agent design — learning how to specify highly autonomous systems that reliably pursue some fixed goal;\n* value specification — supplying autonomous systems with the intended goals; and\n* error tolerance — making such systems robust to programmer error.\nThat being said, MIRI recently [https://intelligence.org/2020/12/21/2020-updates-and-strategy/ published an update] stating that they were moving away from research directions in unpublished works that they were [https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/ pursuing since 2017].\n\nThey publish new [https://intelligence.org/research mathematical results] (although their work is [https://intelligence.org/2018/11/22/2018-update-our-new-research-directions/#section3 non-disclosed by default]), host [https://intelligence.org/research workshops], attend conferences, and [https://intelligence.org/mirix fund outside researchers] who are interested in investigating these problems. They also host a [https://intelligence.org/blog blog] and an [https://www.alignmentforum.org/ online research forum]."], "entry": "Answer to What is MIRI’s mission?", "id": "9abc78286186265afb913e4fcaed57b4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can an AI really be smarter than humans?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can an AI really be smarter than humans?\n\nAnswer: Until a thing has happened, it has never happened. We have been consistently improving both the optimization power and generality of our algorithms over that time period, and have little reason to expect it to suddenly stop. We've gone from coding systems specifically for a certain game (like Chess), to algorithms like [https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules MuZero] which learn the rules of the game they're playing and how to play at vastly superhuman skill levels purely via self-play across a broad range of games (e.g. Go, chess, shogi and various Atari games).\n\nHuman brains are a [https://www.lesswrong.com/posts/NQgWL7tvAPgN2LTLn/spaghetti-towers spaghetti tower] generated by evolution with zero foresight, it would be surprising if they are the peak of physically possible intelligence. 
The brain doing things in complex ways is not strong evidence that we need to fully replicate those interactions if we can throw sufficient compute at the problem, as explained in [https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain].\n\nIt is, however, plausible that for an AGI we need a lot more compute than we will get in the near future, or that some key insights are missing which we won't get for a while. The [https://www.openphilanthropy.org/brain-computation-report#ExecutiveSummary OpenPhilanthropy report on how much computational power it would take to simulate the brain] is the most careful attempt at reasoning out how far we are from being able to do it, and suggests that by some estimates we already have enough computational resources, and by some estimates moore's law may let us reach it before too long.\n\nIt also seems that much of the human brain exists to observe and [https://en.wikipedia.org/wiki/Allostasis regulate our biological body], which a body-less computer wouldn't need. If that's true, then a human-level AI might be possible with considerably less compute than the human brain.", "question": "Can an AI really be smarter than humans?", "answer": ["Until a thing has happened, it has never happened. We have been consistently improving both the optimization power and generality of our algorithms over that time period, and have little reason to expect it to suddenly stop. We’ve gone from coding systems specifically for a certain game (like Chess), to algorithms like [https://deepmind.com/blog/article/muzero-mastering-go-chess-shogi-and-atari-without-rules MuZero] which learn the rules of the game they’re playing and how to play at vastly superhuman skill levels purely via self-play across a broad range of games (e.g. Go, chess, shogi and various Atari games).\n\nHuman brains are a [https://www.lesswrong.com/posts/NQgWL7tvAPgN2LTLn/spaghetti-towers spaghetti tower] generated by evolution with zero foresight, it would be surprising if they are the peak of physically possible intelligence. The brain doing things in complex ways is not strong evidence that we need to fully replicate those interactions if we can throw sufficient compute at the problem, as explained in [https://www.lesswrong.com/posts/HhWhaSzQr6xmBki8F/birds-brains-planes-and-ai-against-appeals-to-the-complexity Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain].\n\nIt is, however, plausible that for an AGI we need a lot more compute than we will get in the near future, or that some key insights are missing which we won’t get for a while. The [https://www.openphilanthropy.org/brain-computation-report#ExecutiveSummary OpenPhilanthropy report on how much computational power it would take to simulate the brain] is the most careful attempt at reasoning out how far we are from being able to do it, and suggests that by some estimates we already have enough computational resources, and by some estimates moore’s law may let us reach it before too long.\n\nIt also seems that much of the human brain exists to observe and [https://en.wikipedia.org/wiki/Allostasis regulate our biological body], which a body-less computer wouldn't need. 
If that's true, then a human-level AI might be possible with considerably less compute than the human brain."], "entry": "Plex's Answer to Can an AI really be smarter than humans?", "id": "44a6c4ecc5c12a3c39411a06c96f558d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can't we simply stop developing AI?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we simply stop developing AI?\n\nAnswer: We could, but we won't. Each advance in capabilities which brings us closer to an intelligence explosion also brings vast profits for whoever develops them (e.g. smarter digital personal assistants like Siri, more ability to automate cognitive tasks, better recommendation algorithms for Facebook, etc.). The incentives are all wrong. Any actor (nation or corporation) who stops will just get overtaken by more reckless ones, and everyone knows this.", "question": "Why can't we simply stop developing AI?", "answer": ["We could, but we won’t. Each advance in capabilities which brings us closer to an intelligence explosion also brings vast profits for whoever develops them (e.g. smarter digital personal assistants like Siri, more ability to automate cognitive tasks, better recommendation algorithms for Facebook, etc.). The incentives are all wrong. Any actor (nation or corporation) who stops will just get overtaken by more reckless ones, and everyone knows this."], "entry": "Plex's Answer to Why can't we simply stop developing AI?", "id": "75ee939968688e04fc0a9bb0d9bdd901"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can you stop an advanced AI from upgrading itself?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can you stop an advanced AI from upgrading itself?\n\nAnswer: It depends on what is meant by advanced. Many AI systems which are very effective and advanced narrow intelligences would not try to upgrade themselves in an unbounded way, but becoming smarter is a [https://www.youtube.com/watch?v꞊ZeecOKBus3Q convergent instrumental goal] so we could expect most AGI designs to attempt it.\n\nThe problem is that increasing general problem solving ability is climbing in exactly the direction needed to trigger an intelligence explosion, while generating large economic and strategic payoffs to whoever achieves them. So even though we could, in principle, just not build the kind of systems which would recursively self-improve, in practice we probably will go ahead with constructing them, because they're likely to be the most powerful.", "question": "Can you stop an advanced AI from upgrading itself?", "answer": ["It depends on what is meant by advanced. Many AI systems which are very effective and advanced narrow intelligences would not try to upgrade themselves in an unbounded way, but becoming smarter is a [https://www.youtube.com/watch?v꞊ZeecOKBus3Q convergent instrumental goal] so we could expect most AGI designs to attempt it.\n\nThe problem is that increasing general problem solving ability is climbing in exactly the direction needed to trigger an intelligence explosion, while generating large economic and strategic payoffs to whoever achieves them. 
So even though we could, in principle, just not build the kind of systems which would recursively self-improve, in practice we probably will go ahead with constructing them, because they’re likely to be the most powerful."], "entry": "Plex's Answer to Can you stop an advanced AI from upgrading itself?", "id": "571107e0817b09bd5ec73c290b5a0738"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What exactly is AGI and what will it look like?", "authors": "n/a", "date_published": "n/a", "text": "Question: What exactly is AGI and what will it look like?\n\nAnswer: AGI is an algorithm with [https://www.lesswrong.com/posts/yLeEPFnnB9wE7KLx2/efficient-cross-domain-optimization general intelligence], running not on evolution's biology like all current general intelligences but on a substrate such as silicon engineered by an intelligence (initially computers designed by humans, later on likely dramatically more advanced hardware designed by earlier AGIs).\n\nAI has so far always been designed and built by humans (i.e. a search process running on biological brains), but once our creations gain the ability to do AI research they will likely [https://www.lesswrong.com/posts/JBadX7rwdcRFzGuju/recursive-self-improvement recursively self-improve] by designing new and better versions of themselves initiating an [https://www.lesswrong.com/posts/8vpf46nLMDYPC6wA4/optimization-and-the-intelligence-explosion intelligence explosion] (i.e. use it's intelligence to improve its own intelligence, creating a feedback loop), and resulting in a superintelligence. There are already [https://arxiv.org/abs/2101.07367 early signs] of AIs being trained to optimize other AIs.\n\nSome authors (notably [https://intelligence.org/ai-foom-debate/ Robin Hanson]) have argued that the intelligence explosion hypothesis is likely false, and in favor of a large number of roughly human level emulated minds operating instead, forming an uplifted economy which doubles every few hours. Eric Drexler's [https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/ Comprehensive AI Services] model of what may happen is another alternate view, where many narrow superintelligent systems exist in parallel rather than there being a general-purpose superintelligent agent.\n\nGoing by the model advocated by [https://en.wikipedia.org/wiki/Nick_Bostrom Nick Bostrom], [https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Eliezer Yudkowsky] and many others, a superintelligence will likely gain various [https://publicism.info/philosophy/superintelligence/7.html cognitive superpowers] (table 8 gives a good overview), allowing it to direct the future much more effectively than humanity. Taking control of our resources by manipulation and hacking is a likely early step, followed by developing and deploying advanced technologies like [https://en.wikipedia.org/wiki/Molecular_nanotechnology molecular nanotechnology] to dominate the physical world and achieve its goals.", "question": "What exactly is AGI and what will it look like?", "answer": ["AGI is an algorithm with [https://www.lesswrong.com/posts/yLeEPFnnB9wE7KLx2/efficient-cross-domain-optimization general intelligence], running not on evolution’s biology like all current general intelligences but on a substrate such as silicon engineered by an intelligence (initially computers designed by humans, later on likely dramatically more advanced hardware designed by earlier AGIs).\n\nAI has so far always been designed and built by humans (i.e. 
a search process running on biological brains), but once our creations gain the ability to do AI research they will likely [https://www.lesswrong.com/posts/JBadX7rwdcRFzGuju/recursive-self-improvement recursively self-improve] by designing new and better versions of themselves initiating an [https://www.lesswrong.com/posts/8vpf46nLMDYPC6wA4/optimization-and-the-intelligence-explosion intelligence explosion] (i.e. use it’s intelligence to improve its own intelligence, creating a feedback loop), and resulting in a superintelligence. There are already [https://arxiv.org/abs/2101.07367 early signs] of AIs being trained to optimize other AIs.\n\nSome authors (notably [https://intelligence.org/ai-foom-debate/ Robin Hanson]) have argued that the intelligence explosion hypothesis is likely false, and in favor of a large number of roughly human level emulated minds operating instead, forming an uplifted economy which doubles every few hours. Eric Drexler’s [https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/ Comprehensive AI Services] model of what may happen is another alternate view, where many narrow superintelligent systems exist in parallel rather than there being a general-purpose superintelligent agent.\n\nGoing by the model advocated by [https://en.wikipedia.org/wiki/Nick_Bostrom Nick Bostrom], [https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Eliezer Yudkowsky] and many others, a superintelligence will likely gain various [https://publicism.info/philosophy/superintelligence/7.html cognitive superpowers] (table 8 gives a good overview), allowing it to direct the future much more effectively than humanity. Taking control of our resources by manipulation and hacking is a likely early step, followed by developing and deploying advanced technologies like [https://en.wikipedia.org/wiki/Molecular_nanotechnology molecular nanotechnology] to dominate the physical world and achieve its goals."], "entry": "Plex's Answer to What exactly is AGI and what will it look like?", "id": "b089d01c9bef1c80e51916316f919083"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why should I worry about superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why should I worry about superintelligence?\n\nAnswer: Intelligence is powerful. Because of superior intelligence, we humans have dominated the Earth. The fate of thousands of species depends on our actions, we occupy nearly every corner of the globe, and we repurpose vast amounts of the world's resources for our own use. Artificial Superintelligence (ASI) has potential to be vastly more intelligent than us, and therefore vastly more powerful. In the same way that we have reshaped the earth to fit our goals, an ASI will find unforeseen, highly efficient ways of reshaping reality to fit its goals.\n\nThe impact that an ASI will have on our world depends on what those goals are. We have the advantage of designing those goals, but that task is not as simple as it may first seem. As described by MIRI in their [https://intelligence.org/ie-faq/ Intelligence Explosion FAQ]:\n\n\"A superintelligent machine will make decisions based on the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms. 
It will act only on precise specifications of rules and values, and will do so in ways that need not respect the complexity and subtlety of what humans value.\"\n\nIf we do not solve the Control Problem before the first ASI is created, we may not get another chance.", "question": "Why should I worry about superintelligence?", "answer": ["Intelligence is powerful. Because of superior intelligence, we humans have dominated the Earth. The fate of thousands of species depends on our actions, we occupy nearly every corner of the globe, and we repurpose vast amounts of the world's resources for our own use. Artificial Superintelligence (ASI) has potential to be vastly more intelligent than us, and therefore vastly more powerful. In the same way that we have reshaped the earth to fit our goals, an ASI will find unforeseen, highly efficient ways of reshaping reality to fit its goals.\n\nThe impact that an ASI will have on our world depends on what those goals are. We have the advantage of designing those goals, but that task is not as simple as it may first seem. As described by MIRI in their [https://intelligence.org/ie-faq/ Intelligence Explosion FAQ]:\n\n“A superintelligent machine will make decisions based on the mechanisms it is designed with, not the hopes its designers had in mind when they programmed those mechanisms. It will act only on precise specifications of rules and values, and will do so in ways that need not respect the complexity and subtlety of what humans value.”\n\nIf we do not solve the Control Problem before the first ASI is created, we may not get another chance."], "entry": "Answer to Why should I worry about superintelligence?", "id": "b1a2f1c623d67cff2a16ccd3936a4ece"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "When will transformative AI be created?", "authors": "n/a", "date_published": "n/a", "text": "Question: When will transformative AI be created?\n\nAnswer: As is often said, it's difficult to make predictions, especially about the future. This has not stopped many people thinking about when AI will transform the world, but all predictions should come with a warning that it's a hard domain to find anything like certainty.\n\n[https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines This report] for the Open Philanthropy Project is perhaps the most careful attempt so far (and generates [https://docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw/edit#gid꞊505210495 these graphs], which peak at 2042), and there's been much discussion including [https://www.alignmentforum.org/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute this reply and analysis] which argues that we likely need less compute than the OpenPhil report expects.\n\nThere have also been [https://slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/ expert surveys], and many people have [https://www.lesswrong.com/tag/ai-timelines shared various thoughts]. Berkeley AI professor [https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] has given his best guess as \"sometime in our children's lifetimes\", and [https://en.wikipedia.org/wiki/Ray_Kurzweil Ray Kurzweil] (Futurist and Google's director of engineering) predicts [https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045 human level AI by 2029 and the singularity by 2045]. 
The [https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ Metaculus question on publicly known AGI] has a median of around 2029 (around 10 years sooner than it was before the GPT-3 AI showed [https://gpt3examples.com/ unexpected ability on a broad range of tasks]).\n\nThe consensus answer, if there was one, might be something like: \"highly uncertain, maybe not for over a hundred years, maybe in less than 15, with around the middle of the century looking fairly plausible\".", "question": "When will transformative AI be created?", "answer": ["As is often said, it's difficult to make predictions, especially about the future. This has not stopped many people thinking about when AI will transform the world, but all predictions should come with a warning that it's a hard domain to find anything like certainty.\n\n[https://www.alignmentforum.org/posts/KrJfoZzpSDpnrv9va/draft-report-on-ai-timelines This report] for the Open Philanthropy Project is perhaps the most careful attempt so far (and generates [https://docs.google.com/spreadsheets/d/1TjNQyVHvHlC-sZbcA7CRKcCp0NxV6MkkqBvL408xrJw/edit#gid꞊505210495 these graphs], which peak at 2042), and there's been much discussion including [https://www.alignmentforum.org/posts/rzqACeBGycZtqCfaX/fun-with-12-ooms-of-compute this reply and analysis] which argues that we likely need less compute than the OpenPhil report expects.\n\nThere have also been [https://slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/ expert surveys], and many people have [https://www.lesswrong.com/tag/ai-timelines shared various thoughts]. Berkeley AI professor [https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] has given his best guess as “sometime in our children’s lifetimes”, and [https://en.wikipedia.org/wiki/Ray_Kurzweil Ray Kurzweil] (Futurist and Google’s director of engineering) predicts [https://futurism.com/kurzweil-claims-that-the-singularity-will-happen-by-2045 human level AI by 2029 and the singularity by 2045]. 
The [https://www.metaculus.com/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ Metaculus question on publicly known AGI] has a median of around 2029 (around 10 years sooner than it was before the GPT-3 AI showed [https://gpt3examples.com/ unexpected ability on a broad range of tasks]).\n\nThe consensus answer, if there was one, might be something like: “highly uncertain, maybe not for over a hundred years, maybe in less than 15, with around the middle of the century looking fairly plausible”."], "entry": "Plex's Answer to When will transformative AI be created?", "id": "46056ce077197861045656476b9d7ec9"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Would AI alignment be hard with deep learning?", "authors": "n/a", "date_published": "n/a", "text": "Question: Would AI alignment be hard with deep learning?\n\nAnswer: Ajeya Cotra has written an excellent article named [https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ Why AI alignment could be hard with modern deep learning] on this question.", "question": "Would AI alignment be hard with deep learning?", "answer": ["Ajeya Cotra has written an excellent article named [https://www.cold-takes.com/why-ai-alignment-could-be-hard-with-modern-deep-learning/ Why AI alignment could be hard with modern deep learning] on this question."], "entry": "Nico Hill2's Answer to Would AI alignment be hard with deep learning?", "id": "f49d2dad36455f73a985c5b761197032"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How can I contribute Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: How can I contribute Stampy?\n\nAnswer: If you're not already there, join the [[Discord invite┊Discord]] where the contributors hang out.\n\nThe main ways you can help are to [[answer questions]] or [[add question]]s, or help to [[review questions]], [[review answers]], or [[improve answers]] (instructions for helping out with each of these tasks are on the linked pages). You could also [https://stampy.ai/wiki/How_can_I_join_the_Stampy_dev_team%3F join the dev team] if you have programming skills.", "question": "How can I contribute Stampy?", "answer": ["If you're not already there, join the [[Discord invite┊Discord]] where the contributors hang out.\n\nThe main ways you can help are to [[answer questions]] or [[add question]]s, or help to [[review questions]], [[review answers]], or [[improve answers]] (instructions for helping out with each of these tasks are on the linked pages). You could also [https://stampy.ai/wiki/How_can_I_join_the_Stampy_dev_team%3F join the dev team] if you have programming skills."], "entry": "Plex's Answer to How can I contribute to Stampy?", "id": "0b96eafcd04659e6a228239795ab67b0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why is AGI dangerous?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why is AGI dangerous?\n\nAnswer: # [https://www.youtube.com/watch?v꞊hEUO6pjwFOo The Orthogonality Thesis]: AI could have almost any goal while at the same time having high intelligence (aka ability to succeed at those goals). This means that we could build a very powerful agent which would not necessarily share human-friendly values. 
For example, the classic [https://www.lesswrong.com/tag/paperclip-maximizer paperclip maximizer] thought experiment explores this with an AI which has a goal of creating as many paperclips as possible, something that humans are (mostly) indifferent to, and as a side effect ends up destroying humanity to make room for more paperclip factories.\n# [https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile Complexity of value]: What humans care about is not simple, and the space of all goals is large, so virtually all goals we could program into an AI would lead to worlds not valuable to humans if pursued by a sufficiently powerful agent. If we, for example, did not include our value of diversity of experience, we could end up with a world of endlessly looping simple pleasures, rather than beings living rich lives.\n# [https://www.youtube.com/watch?v꞊ZeecOKBus3Q Instrumental Convergence]: For almost any goal an AI has there are shared 'instrumental' steps, such as acquiring resources, preserving itself, and preserving the contents of its goals. This means that a powerful AI with goals that were not explicitly human-friendly would predictably both take actions that lead to the end of humanity (e.g. using resources humans need to live to further its goals, such as replacing our crop fields with vast numbers of solar panels to power its growth, or using the carbon in our bodies to build things) and prevent us from turning it off or altering its goals.", "question": "Why is AGI dangerous?", "answer": ["# [https://www.youtube.com/watch?v꞊hEUO6pjwFOo The Orthogonality Thesis]: AI could have almost any goal while at the same time having high intelligence (aka ability to succeed at those goals). This means that we could build a very powerful agent which would not necessarily share human-friendly values. For example, the classic [https://www.lesswrong.com/tag/paperclip-maximizer paperclip maximizer] thought experiment explores this with an AI which has a goal of creating as many paperclips as possible, something that humans are (mostly) indifferent to, and as a side effect ends up destroying humanity to make room for more paperclip factories.\n# [https://www.lesswrong.com/posts/GNnHHmm8EzePmKzPk/value-is-fragile Complexity of value]: What humans care about is not simple, and the space of all goals is large, so virtually all goals we could program into an AI would lead to worlds not valuable to humans if pursued by a sufficiently powerful agent. If we, for example, did not include our value of diversity of experience, we could end up with a world of endlessly looping simple pleasures, rather than beings living rich lives.\n# [https://www.youtube.com/watch?v꞊ZeecOKBus3Q Instrumental Convergence]: For almost any goal an AI has there are shared ‘instrumental’ steps, such as acquiring resources, preserving itself, and preserving the contents of its goals. This means that a powerful AI with goals that were not explicitly human-friendly would predictably both take actions that lead to the end of humanity (e.g. 
using resources humans need to live to further its goals, such as replacing our crop fields with vast numbers of solar panels to power its growth, or using the carbon in our bodies to build things) and prevent us from turning it off or altering its goals."], "entry": "Plex's Answer to Why is AGI dangerous?", "id": "f60af3e19b6e813ada1ebc468bc1b0c9"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the main sources of AI existential risk?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the main sources of AI existential risk?\n\nAnswer: A comprehensive list of major contributing factors to AI being a threat to humanity's future is maintained by Daniel Kokotajlo on the [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk Alignment Forum].", "question": "What are the main sources of AI existential risk?", "answer": ["A comprehensive list of major contributing factors to AI being a threat to humanity's future is maintained by Daniel Kokotajlo on the [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk Alignment Forum]."], "entry": "Plex's Answer to What are the main sources of AI existential risk?", "id": "083784066e7b594648120c3574f04b87"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is DeepMind's safety team working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is DeepMind's safety team working on?\n\nAnswer: DeepMind has both a [https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/#long-term-agi-safety-research ML safety team focused on near-term risks], and an alignment team that is working on risks from AGI. The alignment team is pursuing many different research avenues, and is not best described by a single agenda. \n\nSome of the work they are doing is: \n\n*[https://www.lesswrong.com/posts/qJgz2YapqpFEDTLKn/deepmind-alignment-team-opinions-on-agi-ruin-arguments Engaging with recent MIRI arguments]. 
\n*Rohin Shah produces the [https://rohinshah.com/alignment-newsletter/ alignment newsletter].\n*Publishing interesting research like the [https://arxiv.org/abs/2105.14111 Goal Misgeneralization paper]. \n*Geoffrey Irving is working on debate as an alignment strategy: [https://www.lesswrong.com/posts/bLr68nrLSwgzqLpzu/axrp-episode-16-preparing-for-debate-ai-with-geoffrey-irving more detail here].\n*[https://www.lesswrong.com/posts/XxX2CAoFskuQNkBDy/discovering-agents Discovering agents], which introduces a causal definition of agents, then introduces an algorithm for finding agents from empirical data. \n\nSee [https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is?commentId꞊CS9qcdkmDbLHR89s2 Rohin's comment] for more research that they are doing, including description of some that is currently unpublished so far."], "entry": "RoseMcClelland's Answer to What is the DeepMind's safety team working on?", "id": "486318d4eb8e0b9122f8ab14f660e1fb"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might Shard Theory help with alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might Shard Theory help with alignment?\n\nAnswer: Humans care about things! The reward circuitry in our brain reliably causes us to care about specific things. Let's create a mechanistic model of how the brain aligns humans, and then we can use this to do AI alignment. \n\nOne perspective that Shard theory has added is that we shouldn't think of the solution to alignment as: \n\n#Find an outer objective that is fine to optimize arbitrarily strongly \n#Find a way of making sure that the inner objective of an ML system equals the outer objective.\n\nShard theory argues that instead we should focus on finding outer objectives that reliably give certain inner values into system and should be thought of as more of a teacher of the values we want to instill as opposed to the values themselves. [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target Reward is not the optimization target] — instead, it is more like that which reinforces. People sometimes refer to inner aligning an RL agent with respect to the reward signal, but this doesn't actually make sense. (As pointed out in the comments this is not a new insight, but it was for me phrased a lot more clearly in terms of Shard theory). \n\nHumans have different values than the reward circuitry in our brain being maximized, but they are still pointed reliably. These underlying values cause us to not wirehead with respect to the outer optimizer of reward.\n\nShard Theory points at the beginning of a mechanistic story for how inner values are selected for by outer optimization pressures. [https://www.lesswrong.com/posts/ZNXDRGshgoq3cmxhB/the-shard-theory-alignment-scheme The current plan] is to figure out how RL induces inner values into learned agents, and then figure out how to instill human values into powerful AI models (probably chain of thought LLMs, because these are the most intelligent models right now). Then, use these partially aligned models to solve the full alignment problem. 
Shard theory also proposes a subagent theory of mind.\n\nThis has some similarities to Brain-like AGI Safety, and has drawn on some research from this post, such as the mechanics of the human reward circuitry as well as the brain being mostly randomly initialized at birth.", "question": "How might Shard Theory help with alignment?", "answer": ["Humans care about things! The reward circuitry in our brain reliably causes us to care about specific things. Let's create a mechanistic model of how the brain aligns humans, and then we can use this to do AI alignment. \n\nOne perspective that Shard theory has added is that we shouldn't think of the solution to alignment as: \n\n#Find an outer objective that is fine to optimize arbitrarily strongly \n#Find a way of making sure that the inner objective of an ML system equals the outer objective.\n\nShard theory argues that instead we should focus on finding outer objectives that reliably give certain inner values into system and should be thought of as more of a teacher of the values we want to instill as opposed to the values themselves. [https://www.lesswrong.com/posts/pdaGN6pQyQarFHXF4/reward-is-not-the-optimization-target Reward is not the optimization target] — instead, it is more like that which reinforces. People sometimes refer to inner aligning an RL agent with respect to the reward signal, but this doesn't actually make sense. (As pointed out in the comments this is not a new insight, but it was for me phrased a lot more clearly in terms of Shard theory). \n\nHumans have different values than the reward circuitry in our brain being maximized, but they are still pointed reliably. These underlying values cause us to not wirehead with respect to the outer optimizer of reward.\n\nShard Theory points at the beginning of a mechanistic story for how inner values are selected for by outer optimization pressures. [https://www.lesswrong.com/posts/ZNXDRGshgoq3cmxhB/the-shard-theory-alignment-scheme The current plan] is to figure out how RL induces inner values into learned agents, and then figure out how to instill human values into powerful AI models (probably chain of thought LLMs, because these are the most intelligent models right now). Then, use these partially aligned models to solve the full alignment problem. Shard theory also proposes a subagent theory of mind.\n\nThis has some similarities to Brain-like AGI Safety, and has drawn on some research from this post, such as the mechanics of the human reward circuitry as well as the brain being mostly randomly initialized at birth."], "entry": "RoseMcClelland's Answer to How might Shard Theory help with alignment?", "id": "f7ca829ffc1867de3884ba53ff90fff4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the Center on Long-Term Risk (CLR) focused on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the Center on Long-Term Risk (CLR) focused on?\n\nAnswer: CLR is focused primarily on reducing suffering-risk (s-risk), where the future has a large negative value. They do foundational research in game theory / decision theory, primarily aimed at multipolar AI scenarios. One result relevant to this work is that [https://www.cambridge.org/core/journals/journal-of-symbolic-logic/article/abs/parametric-resourcebounded-generalization-of-lobs-theorem-and-a-robust-cooperation-criterion-for-opensource-game-theory/16063EA7BFFEE89438631B141E556E79 transparency can increase cooperation]. 
\n\n[https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is?commentId꞊mqiYR6X8bgY5wKdme Update after Jesse Clifton commented]: CLR also works on improving coordination for prosaic AI scenarios, [https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors risks from malevolent actors] and [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like AI forecasting]. The [https://www.cooperativeai.com/foundation Cooperative AI Foundation (CAIF)] shares personnel with CLR, but is not formally affiliated with CLR, and does not focus just on s-risks.", "question": "What is the Center on Long-Term Risk (CLR) focused on?", "answer": ["CLR is focused primarily on reducing suffering-risk (s-risk), where the future has a large negative value. They do foundational research in game theory / decision theory, primarily aimed at multipolar AI scenarios. One result relevant to this work is that [https://www.cambridge.org/core/journals/journal-of-symbolic-logic/article/abs/parametric-resourcebounded-generalization-of-lobs-theorem-and-a-robust-cooperation-criterion-for-opensource-game-theory/16063EA7BFFEE89438631B141E556E79 transparency can increase cooperation]. \n\n[https://www.lesswrong.com/posts/QBAjndPuFbhEXKcCr/my-understanding-of-what-everyone-in-technical-alignment-is?commentId꞊mqiYR6X8bgY5wKdme Update after Jesse Clifton commented]: CLR also works on improving coordination for prosaic AI scenarios, [https://forum.effectivealtruism.org/posts/LpkXtFXdsRd4rG8Kb/reducing-long-term-risks-from-malevolent-actors risks from malevolent actors] and [https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like AI forecasting]. The [https://www.cooperativeai.com/foundation Cooperative AI Foundation (CAIF)] shares personnel with CLR, but is not formally affiliated with CLR, and does not focus just on s-risks."], "entry": "RoseMcClelland's Answer to What is the Center on Long-Term Risk (CLR) focused on?", "id": "b384592e6f2fd031526a0e82298f2bf9"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Anthropic's approach LLM alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Anthropic's approach LLM alignment?\n\nAnswer: Anthropic fine tuned a language model to be more helpful, honest and harmless: [https://arxiv.org/abs/2112.00861 HHH].\n\nMotivation: The point of this is to:\n#see if we can \"align\" a current day LLM, and \n#raise awareness about safety in the broader ML community.\n\n'''[[How can we interpret what all the neurons mean?]]'''\n\nChris Olah, the interpretability legend, is working on looking really hard at all the neurons to see what they all mean. The approach he pioneered is [https://distill.pub/2020/circuits/zoom-in/ circuits]: looking at computational subgraphs of the network, called circuits, and interpreting those. Idea: \"decompiling the network into a better representation that is more interpretable\". In-context learning via attention heads, and interpretability here seems useful.\n\nOne result I heard about recently: a linear softmax unit stretches space and encourages neuron monosemanticity (making a neuron represent only one thing, as opposed to firing on many unrelated concepts). This makes the network easier to interpret. \n\nMotivation: The point of this is to get as many bits of information about what neural networks are doing, to hopefully find better abstractions. 
This diagram gets posted everywhere, the hope being that networks, in the current regime, will become more interpretable because they will start to use abstractions that are closer to human abstractions.\n\n'''[[How do you figure out model performance scales?]]'''", "question": "What is Anthropic's approach to LLM alignment?", "answer": ["Anthropic fine-tuned a language model to be more helpful, honest and harmless: [https://arxiv.org/abs/2112.00861 HHH].\n\nMotivation: The point of this is to:\n#see if we can \"align\" a current day LLM, and \n#raise awareness about safety in the broader ML community.\n\n'''[[How can we interpret what all the neurons mean?]]'''\n\nChris Olah, the interpretability legend, is working on looking really hard at all the neurons to see what they all mean. The approach he pioneered is [https://distill.pub/2020/circuits/zoom-in/ circuits]: looking at computational subgraphs of the network, called circuits, and interpreting those. Idea: \"decompiling the network into a better representation that is more interpretable\". In-context learning via attention heads, and interpretability here seems useful.\n\nOne result I heard about recently: a linear softmax unit stretches space and encourages neuron monosemanticity (making a neuron represent only one thing, as opposed to firing on many unrelated concepts). This makes the network easier to interpret. \n\nMotivation: The point of this is to get as many bits of information about what neural networks are doing, to hopefully find better abstractions. This diagram gets posted everywhere, the hope being that networks, in the current regime, will become more interpretable because they will start to use abstractions that are closer to human abstractions.\n\n'''[[How do you figure out model performance scales?]]'''"], "entry": "RoseMcClelland's Answer to What is Anthropic's approach to LLM alignment?", "id": "291ace50d9de6b07d9e928f42ce5d31f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?", "authors": "n/a", "date_published": "n/a", "text": "Question: How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?\n\nAnswer: ARC is trying to solve [https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit Eliciting Latent Knowledge (ELK)]. Suppose that you are training an AI agent that predicts the state of the world and then performs some actions, called a ''predictor''. This predictor is the AGI that will be acting to accomplish goals in the world. How can you create another model, called a ''reporter'', that tells you what the predictor believes about the world? A key challenge in training this reporter is that training your reporter on human labeled training data, by default, incentivizes the reporter to just model what the human thinks is true, because the human is a simpler model than the AI.\n\nMotivation: At a high level, Paul's plan seems to be to produce a minimal AI that can help to do AI safety research. 
To do this, preventing [https://www.lesswrong.com/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment deception] and [https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem inner alignment failure] are on the critical path, and the only known solution paths to this require interpretability (this is how all of Evan's [https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai 11 proposals] plan to get around this problem).\n\nIf ARC can solve ELK, this would be a very strong form of interpretability: our reporter is able to tell us what the predictor believes about the world. Some ways this could end up being useful for aligning the predictor include:\n\n*Using the reporter to find deceptive/misaligned thoughts in the predictor, and then optimizing against those interpreted thoughts. At any given point in time, SGD only updates the weights a small amount. If an AI becomes misaligned, it won't be very misaligned, and the interpretability tools will be able to figure this out and do a gradient step to make it aligned again. In this way, we can prevent deception at any point in training.\n*Stopping training if the AI is misaligned.", "question": "How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?", "answer": ["ARC is trying to solve [https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit Eliciting Latent Knowledge (ELK)]. Suppose that you are training an AI agent that predicts the state of the world and then performs some actions, called a ''predictor''. This predictor is the AGI that will be acting to accomplish goals in the world. How can you create another model, called a ''reporter'', that tells you what the predictor believes about the world? A key challenge in training this reporter is that training your reporter on human labeled training data, by default, incentivizes the reporter to just model what the human thinks is true, because the human is a simpler model than the AI.\n\nMotivation: At a high level, Paul's plan seems to be to produce a minimal AI that can help to do AI safety research. To do this, preventing [https://www.lesswrong.com/posts/ocWqg2Pf2br4jMmKA/does-sgd-produce-deceptive-alignment deception] and [https://www.lesswrong.com/posts/pL56xPoniLvtMDQ4J/the-inner-alignment-problem inner alignment failure] are on the critical path, and the only known solution paths to this require interpretability (this is how all of Evan's [https://www.lesswrong.com/posts/fRsjBseRuvRhMPPE5/an-overview-of-11-proposals-for-building-safe-advanced-ai 11 proposals] plan to get around this problem).\n\nIf ARC can solve ELK, this would be a very strong form of interpretability: our reporter is able to tell us what the predictor believes about the world. Some ways this could end up being useful for aligning the predictor include:\n\n*Using the reporter to find deceptive/misaligned thoughts in the predictor, and then optimizing against those interpreted thoughts. At any given point in time, SGD only updates the weights a small amount. If an AI becomes misaligned, it won't be very misaligned, and the interpretability tools will be able to figure this out and do a gradient step to make it aligned again. 
In this way, we can prevent deception at any point in training.\n*Stopping training if the AI is misaligned."], "entry": "RoseMcClelland's Answer to How is the Alignment Research Center (ARC) trying to solve Eliciting Latent Knowledge (ELK)?", "id": "1c3acf7de470ef4c2cc585b098bfda24"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is neural network modularity?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is neural network modularity?\n\nAnswer: If a neural network is ''modular'', that means it consists of clusters (modules) of neurons, such that the neurons within the cluster are strongly connected to each other, but only weakly connected to the rest of the network.\n\nMaking networks more modular is useful if the modules represent concepts which are understandable because this helps understand the whole system better.\n\nRelevant papers about modularity are [https://arxiv.org/abs/2003.04881v2 Neural Networks are Surprisingly Modular], [https://arxiv.org/abs/2103.03386 Clusterability in Neural Networks], and [https://openreview.net/forum?id꞊tFQyjbOz34 Detecting Modularity in Deep Neural Networks].", "question": "What is neural network modularity?", "answer": ["If a neural network is ''modular'', that means it consists of clusters (modules) of neurons, such that the neurons within the cluster are strongly connected to each other, but only weakly connected to the rest of the network.\n\nMaking networks more modular is useful if the modules represent concepts which are understandable because this helps understand the whole system better.\n\nRelevant papers about modularity are [https://arxiv.org/abs/2003.04881v2 Neural Networks are Surprisingly Modular], [https://arxiv.org/abs/2103.03386 Clusterability in Neural Networks], and [https://openreview.net/forum?id꞊tFQyjbOz34 Detecting Modularity in Deep Neural Networks]."], "entry": "Magdalena's Answer to What is neural network modularity?", "id": "4232e4da3808934c8900e86822506214"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Conjecture's epistemology research agenda?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Conjecture's epistemology research agenda?\n\nAnswer: The alignment problem is really hard to do science on: we are trying to reason about the future, and we only get one shot, meaning that [https://www.lesswrong.com/posts/uhxpJyGYQ5FQRvdjY/abstracting-the-hardness-of-alignment-unbounded-atomic we can't iterate]. Therefore, it seems really useful to have a good understanding of meta-science/epistemology, i.e. reasoning about ways to do useful alignment research.", "question": "What is Conjecture's epistemology research agenda?", "answer": ["The alignment problem is really hard to do science on: we are trying to reason about the future, and we only get one shot, meaning that [https://www.lesswrong.com/posts/uhxpJyGYQ5FQRvdjY/abstracting-the-hardness-of-alignment-unbounded-atomic we can't iterate]. Therefore, it seems really useful to have a good understanding of meta-science/epistemology, i.e. 
reasoning about ways to do useful alignment research."], "entry": "RoseMcClelland's Answer to What is Conjecture's epistemology research agenda?", "id": "d85f3bbf95289371c4bee8f640c83239"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Conjecture's Scalable LLM Interpretability research agenda?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Conjecture's Scalable LLM Interpretability research agenda?\n\nAnswer: I don't know much about their research here, other than that they train their own models, which allow them to work on models that are bigger than the biggest publicly available models, which seems like a difference from Redwood.\n\nCurrent interpretability methods are very low level (e.g., \"what does x neuron do\"), which does not help us answer high level questions like \"is this AI trying to kill us\".\n\nThey are trying a bunch of weird approaches, with the goal of scalable mechanistic interpretability, but I do not know what these approaches actually are.\n\nMotivation: Conjecture wants to build towards a better paradigm that will give us a lot more information, primarily from the empirical direction (as distinct from ARC, which is working on interpretability with a theoretical focus).", "question": "What is Conjecture's Scalable LLM Interpretability research agenda?", "answer": ["I don't know much about their research here, other than that they train their own models, which allow them to work on models that are bigger than the biggest publicly available models, which seems like a difference from Redwood.\n\nCurrent interpretability methods are very low level (e.g., \"what does x neuron do\"), which does not help us answer high level questions like \"is this AI trying to kill us\".\n\nThey are trying a bunch of weird approaches, with the goal of scalable mechanistic interpretability, but I do not know what these approaches actually are.\n\nMotivation: Conjecture wants to build towards a better paradigm that will give us a lot more information, primarily from the empirical direction (as distinct from ARC, which is working on interpretability with a theoretical focus)."], "entry": "RoseMcClelland's Answer to What is Conjecture's Scalable LLM Interpretability research agenda?", "id": "76b88a5fccb7eee3647d944e5b4f7927"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Aligned AI / Stuart Armstrong working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Aligned AI / Stuart Armstrong working on?\n\nAnswer: One of the key problems in AI safety is that there are many ways for an AI to generalize off-distribution, so it is very likely that an arbitrary generalization will be unaligned. See the [https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1 model splintering post] for more detail. Stuart's plan to solve this problem is as follows:\n\n#Maintain a set of all possible extrapolations of reward data that are consistent with the training process.\n#Pick among these for a safe reward extrapolation. \n\nThey are currently working on algorithms to accomplish step 1: see [https://www.lesswrong.com/posts/i8sHdLyGQeBTGwTqq/value-extrapolation-concept-extrapolation-model-splintering Value Extrapolation]. \n\nTheir initial operationalization of this problem is the lion and husky problem. 
Basically: if you train an image model on a dataset of images of lions and huskies, the lions are always in the desert, and the huskies are always in the snow. So the problem of learning a classifier is under-defined: should the classifier be classifying based on the background environment (e.g. snow vs sand), or based on the animal in the image? \n\nA good extrapolation algorithm, on this problem, would generate classifiers that extrapolate in all the different ways[4], and so the 'correct' extrapolation must be in this generated set of classifiers. They have also introduced a new dataset for this, with a similar idea: [https://www.lesswrong.com/posts/DiEWbwrChuzuhJhGr/benchmark-for-successful-concept-extrapolation-avoiding-goal Happy Faces].\n\nStep 2 could be done in different ways. Possibilities for doing this include: [https://www.lesswrong.com/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy conservatism], [https://www.lesswrong.com/posts/BeeirdrMXCPYZwgfj/the-blue-minimising-robot-and-model-splintering generalized deference to humans], or an automated process for removing some goals. like wireheading/deception/killing everyone.", "question": "What is Aligned AI / Stuart Armstrong working on?", "answer": ["One of the key problems in AI safety is that there are many ways for an AI to generalize off-distribution, so it is very likely that an arbitrary generalization will be unaligned. See the [https://www.lesswrong.com/posts/k54rgSg7GcjtXnMHX/model-splintering-moving-from-one-imperfect-model-to-another-1 model splintering post] for more detail. Stuart's plan to solve this problem is as follows:\n\n#Maintain a set of all possible extrapolations of reward data that are consistent with the training process.\n#Pick among these for a safe reward extrapolation. \n\nThey are currently working on algorithms to accomplish step 1: see [https://www.lesswrong.com/posts/i8sHdLyGQeBTGwTqq/value-extrapolation-concept-extrapolation-model-splintering Value Extrapolation]. \n\nTheir initial operationalization of this problem is the lion and husky problem. Basically: if you train an image model on a dataset of images of lions and huskies, the lions are always in the desert, and the huskies are always in the snow. So the problem of learning a classifier is under-defined: should the classifier be classifying based on the background environment (e.g. snow vs sand), or based on the animal in the image? \n\nA good extrapolation algorithm, on this problem, would generate classifiers that extrapolate in all the different ways[4], and so the 'correct' extrapolation must be in this generated set of classifiers. They have also introduced a new dataset for this, with a similar idea: [https://www.lesswrong.com/posts/DiEWbwrChuzuhJhGr/benchmark-for-successful-concept-extrapolation-avoiding-goal Happy Faces].\n\nStep 2 could be done in different ways. Possibilities for doing this include: [https://www.lesswrong.com/posts/PADPJ3xac5ogjEGwA/defeating-goodhart-and-the-closest-unblocked-strategy conservatism], [https://www.lesswrong.com/posts/BeeirdrMXCPYZwgfj/the-blue-minimising-robot-and-model-splintering generalized deference to humans], or an automated process for removing some goals. 
like wireheading/deception/killing everyone."], "entry": "RoseMcClelland's Answer to What is Aligned AI / Stuart Armstrong working on?", "id": "8b976bf0d6fc7ee2532856fbc25b7571"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How would you explain the theory of Infra-Bayesianism?", "authors": "n/a", "date_published": "n/a", "text": "Question: How would you explain the theory of Infra-Bayesianism?\n\nAnswer: See Vanessa's [https://www.lesswrong.com/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda research agenda] for more detail. \n\nIf we don't know how to do something given unbounded compute, we are just confused about the thing. Going from thinking that chess was impossible for machines to understanding [https://www.google.com/url?q꞊https://en.wikipedia.org/wiki/Minimax&sa꞊D&source꞊editors&ust꞊1661633213196096&usg꞊AOvVaw3m8tD5QAEl-XXhvaH4d1v3 minimax] was a really good step forward for designing chess AIs, ''even though minimax is completely intractable''.\n\nThus, we should seek to figure out how alignment might look in theory, and then try to bridge the theory-practice gap by making our proposal ever more efficient. The first step along this path is to figure out a universal [https://www.alignmentforum.org/tag/reinforcement-learning Reinforcement Learning] setting that we can place our formal agents in, and then prove regret bounds in.\n\nA key problem in doing this is embeddedness. AIs can't have a perfect self model — this would be like imagining your ENTIRE brain, inside your brain. There are finite memory constraints. [https://www.lesswrong.com/s/CmrW8fCmSLK7E25sa Infra-Bayesianism] (IB) is essentially a theory of imprecise probability that lets you specify local / fuzzy things. IB allows agents to have abstract models of themselves, and thus works in an embedded setting.\n\n[https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized Infra-Bayesian Physicalism] (IBP) is an extension of this to RL. IBP allows us to\n*Figure out what agents are running [by evaluating the counterfactual where the computation of the agent would output something different, and see if the physical universe is different].\n*Give a program, classify it as an agent or a non agent, and then find its utility function.\n\nVanessa uses this formalism to describe [https://www.lesswrong.com/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform?commentId꞊vKw6DB9crncovPxED#vKw6DB9crncovPxED PreDCA], an alignment proposal based on IBP. This proposal assumes that an agent is an IBP agent, meaning that it is an RL agent with fuzzy probability distributions (along with some other things). The general outline of this proposal is as follows:\n#Find all of the agents that preceded the AI\n#Discard all of these agents that are powerful / non-human like\n#Find all of the utility functions in the remaining agents\n#Use combination of all of these utilities as the agent's utility function\n\nVanessa models an AI as a model based RL system with a WM, a reward function, and a policy derived from the WM + reward. [https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization She claims that this avoids the sharp left turn]. 
The generalization problems come from the world model, but this is dealt with by having an epistemology that doesn't contain [https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges bridge rules], and so the true world is the simplest explanation for the observed data.\n\nIt is open to show that this proposal also solves inner alignment, but there is some chance that it does.\n\nThis approach deviates from MIRI's plan, which is to focus on a narrow task to perform the pivotal act, and then add corrigibility. Vanessa instead tries to directly learn the user's preferences, and optimize those.", "question": "How would you explain the theory of Infra-Bayesianism?", "answer": ["See Vanessa's [https://www.lesswrong.com/posts/5bd75cc58225bf0670375575/the-learning-theoretic-ai-alignment-research-agenda research agenda] for more detail. \n\nIf we don't know how to do something given unbounded compute, we are just confused about the thing. Going from thinking that chess was impossible for machines to understanding [https://www.google.com/url?q꞊https://en.wikipedia.org/wiki/Minimax&sa꞊D&source꞊editors&ust꞊1661633213196096&usg꞊AOvVaw3m8tD5QAEl-XXhvaH4d1v3 minimax] was a really good step forward for designing chess AIs, ''even though minimax is completely intractable''.\n\nThus, we should seek to figure out how alignment might look in theory, and then try to bridge the theory-practice gap by making our proposal ever more efficient. The first step along this path is to figure out a universal [https://www.alignmentforum.org/tag/reinforcement-learning Reinforcement Learning] setting that we can place our formal agents in, and then prove regret bounds in.\n\nA key problem in doing this is embeddedness. AIs can't have a perfect self model — this would be like imagining your ENTIRE brain, inside your brain. There are finite memory constraints. [https://www.lesswrong.com/s/CmrW8fCmSLK7E25sa Infra-Bayesianism] (IB) is essentially a theory of imprecise probability that lets you specify local / fuzzy things. IB allows agents to have abstract models of themselves, and thus works in an embedded setting.\n\n[https://www.lesswrong.com/posts/gHgs2e2J5azvGFatb/infra-bayesian-physicalism-a-formal-theory-of-naturalized Infra-Bayesian Physicalism] (IBP) is an extension of this to RL. IBP allows us to\n*Figure out what agents are running [by evaluating the counterfactual where the computation of the agent would output something different, and see if the physical universe is different].\n*Give a program, classify it as an agent or a non agent, and then find its utility function.\n\nVanessa uses this formalism to describe [https://www.lesswrong.com/posts/dPmmuaz9szk26BkmD/vanessa-kosoy-s-shortform?commentId꞊vKw6DB9crncovPxED#vKw6DB9crncovPxED PreDCA], an alignment proposal based on IBP. This proposal assumes that an agent is an IBP agent, meaning that it is an RL agent with fuzzy probability distributions (along with some other things). The general outline of this proposal is as follows:\n#Find all of the agents that preceded the AI\n#Discard all of these agents that are powerful / non-human like\n#Find all of the utility functions in the remaining agents\n#Use combination of all of these utilities as the agent's utility function\n\nVanessa models an AI as a model based RL system with a WM, a reward function, and a policy derived from the WM + reward. 
[https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization She claims that this avoids the sharp left turn]. The generalization problems come from the world model, but this is dealt with by having an epistemology that doesn't contain [https://www.lesswrong.com/posts/ethRJh2E7mSSjzCay/building-phenomenological-bridges bridge rules], and so the true world is the simplest explanation for the observed data.\n\nIt is open to show that this proposal also solves inner alignment, but there is some chance that it does.\n\nThis approach deviates from MIRI's plan, which is to focus on a narrow task to perform the pivotal act, and then add corrigibility. Vanessa instead tries to directly learn the user's preferences, and optimize those."], "entry": "RoseMcClelland's Answer to How would you explain the theory of Infra-Bayesianism?", "id": "62457d3fd085b702ab355fe85a4609d9"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How is OpenAI planning to solve the full alignment problem?", "authors": "n/a", "date_published": "n/a", "text": "Question: How is OpenAI planning to solve the full alignment problem?\n\nAnswer: OpenAI's safety team plans to build an [https://aligned.substack.com/p/alignment-mvp MVP aligned AGI] to try and help us solve the full alignment problem.\n\nThey want to do this with Reinforcement Learning from Human Feedback (RLHF): get feedback from humans about what is good, i.e. give reward to AIs based on the human feedback. Problem: what if the AI makes gigabrain 5D chess moves that humans don't understand, so can't evaluate. Jan Leike, the director of the safety team, views this ([https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35 the informed oversight problem]) as the core difficulty of alignment. Their proposed solution: an AI assisted oversight scheme, with a recursive hierarchy of AIs bottoming out at humans. 
They are working on experimenting with this approach by trying to get current day AIs to do useful supporting work such as [https://openai.com/blog/summarizing-books/ summarizing books] and [https://openai.com/blog/critiques/ criticizing itself].\n\nOpenAI also published GPT-3, and are continuing to push LLM capabilities, with GPT-4 expected to be released at some point soon.\n\nSee also: [https://www.lesswrong.com/posts/3S4nyoNEEuvNsbXt8/common-misconceptions-about-openai Common misconceptions about OpenAI] and [https://openai.com/blog/our-approach-to-alignment-research/ Our approach to alignment research].", "question": "How is OpenAI planning to solve the full alignment problem?", "answer": ["OpenAI's safety team plans to build an [https://aligned.substack.com/p/alignment-mvp MVP aligned AGI] to try and help us solve the full alignment problem.\n\nThey want to do this with Reinforcement Learning from Human Feedback (RLHF): get feedback from humans about what is good, i.e. give reward to AIs based on the human feedback. Problem: what if the AI makes gigabrain 5D chess moves that humans don't understand, so can't evaluate. Jan Leike, the director of the safety team, views this ([https://ai-alignment.com/the-informed-oversight-problem-1b51b4f66b35 the informed oversight problem]) as the core difficulty of alignment. Their proposed solution: an AI assisted oversight scheme, with a recursive hierarchy of AIs bottoming out at humans. They are working on experimenting with this approach by trying to get current day AIs to do useful supporting work such as [https://openai.com/blog/summarizing-books/ summarizing books] and [https://openai.com/blog/critiques/ criticizing itself].\n\nOpenAI also published GPT-3, and are continuing to push LLM capabilities, with GPT-4 expected to be released at some point soon.\n\nSee also: [https://www.lesswrong.com/posts/3S4nyoNEEuvNsbXt8/common-misconceptions-about-openai Common misconceptions about OpenAI] and [https://openai.com/blog/our-approach-to-alignment-research/ Our approach to alignment research]."], "entry": "RoseMcClelland's Answer to How is OpenAI planning to solve the full alignment problem?", "id": "e3a4b3fa014cc6bef895c27174e988e7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are plausible candidates for \"pivotal acts\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are plausible candidates for \"pivotal acts\"?\n\nAnswer: Pivotal acts are acts that substantially change the direction humanity will have taken in 1 billion years. The term is used to denote positive changes, as opposed to existential catastrophe.\n\nAn obvious pivotal act would be to create a [https://arbital.com/p/Sovereign/ sovereign AGI] aligned with humanity's best interests. An act that would greatly increase the chance of another pivotal act would also count as pivotal.\n\nPivotal acts often lay outside the [https://en.wikipedia.org/wiki/Overton_window Overton window]. One such example is stopping or strongly delaying the development of an unaligned (or any) AGI through drastic means such as nanobots which melt all advanced processors, or the disabling of all AI researchers. Eliezer mentions these in [https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities AGI Ruin: A List of Lethalities]. 
Andrew Critch argues against such an unilateral pivotal act in [https://www.alignmentforum.org/posts/Jo89KvfAs9z7owoZp/pivotal-act-intentions-negative-consequences-and-fallacious “Pivotal Act” Intentions: Negative Consequences and Fallacious Arguments].\n\nFor more details, see [https://arbital.com/p/pivotal/ arbital]."], "entry": "Murphant's Answer to What are plausible candidates for \"pivotal acts\"?", "id": "d78f34cdd36724cab2e95b32192acb95"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the difference between inner and outer alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the difference between inner and outer alignment?\n\nAnswer: The paper [https://arxiv.org/abs/1906.01820 Risks from Learned Optimization in Advanced Machine Learning Systems] makes the distinction between inner and outer alignment: Outer alignment means making the optimization target of the ''training process'' (\"outer optimization target\" e.g. the ''loss'' in supervised learning) aligned with what we want. Inner alignment means making the optimization target of the ''trained system'' (\"inner optimization target\") aligned with the outer optimization target. A challenge here is that the inner optimization target does not have an explicit representation in current systems, and can differ very much from the outer optimization target (see for example [https://arxiv.org/abs/2105.14111 Goal Misgeneralization in Deep Reinforcement Learning]).\n\n(youtube)bJLcIBixGj8(/youtube)\n\nSee also [https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers this article] for an intuitive explanation of inner and outer alignment.", "question": "What is the difference between inner and outer alignment?", "answer": ["The paper [https://arxiv.org/abs/1906.01820 Risks from Learned Optimization in Advanced Machine Learning Systems] makes the distinction between inner and outer alignment: Outer alignment means making the optimization target of the ''training process'' (“outer optimization target” e.g. the ''loss'' in supervised learning) aligned with what we want. Inner alignment means making the optimization target of the ''trained system'' (“inner optimization target”) aligned with the outer optimization target. A challenge here is that the inner optimization target does not have an explicit representation in current systems, and can differ very much from the outer optimization target (see for example [https://arxiv.org/abs/2105.14111 Goal Misgeneralization in Deep Reinforcement Learning]).\n\n(youtube)bJLcIBixGj8(/youtube)\n\nSee also [https://astralcodexten.substack.com/p/deceptively-aligned-mesa-optimizers this article] for an intuitive explanation of inner and outer alignment."], "entry": "Magdalena's Answer to What is the difference between inner and outer alignment?", "id": "016450c99331d217f1744b9453cdd554"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Does the importance of AI risk depend on caring about transhumanist utopias?", "authors": "n/a", "date_published": "n/a", "text": "Question: Does the importance of AI risk depend on caring about transhumanist utopias?\n\nAnswer: No. Misaligned artificial intelligence poses a serious threat to the continued flourishing, and maybe even continued existence, of humanity as a whole. 
While predictions about when artificial general intelligence may be achieved vary, surveys consistently report a [https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/ >50% probability of achieving general AI before the year 2060] - within the expected lifetimes of most people alive today. \n\nIt is difficult to predict how technology will develop, and at what speed, in the years ahead; but as artificial intelligence poses a not-insignificant chance of causing worldwide disaster within the not-too-distant future, anyone who is generally concerned with the future of humanity has reason to be interested.", "question": "Does the importance of AI risk depend on caring about transhumanist utopias?", "answer": ["No. Misaligned artificial intelligence poses a serious threat to the continued flourishing, and maybe even continued existence, of humanity as a whole. While predictions about when artificial general intelligence may be achieved vary, surveys consistently report a [https://research.aimultiple.com/artificial-general-intelligence-singularity-timing/ >50% probability of achieving general AI before the year 2060] - within the expected lifetimes of most people alive today. \n\nIt is difficult to predict how technology will develop, and at what speed, in the years ahead; but as artificial intelligence poses a not-insignificant chance of causing worldwide disaster within the not-too-distant future, anyone who is generally concerned with the future of humanity has reason to be interested."], "entry": "Beamnode's Answer to Does the importance of AI risk depend on caring about transhumanist utopias?", "id": "cae4fb075bf9406a32c778fcd03ba9f5"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is John Wentworth's plan?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is John Wentworth's plan?\n\nAnswer: [https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan John's plan] is:\n\nStep 1: sort out our fundamental confusions about agency\n\nStep 2: ambitious value learning (i.e. build an AI which correctly learns human values and optimizes for them)\n\nStep 3: …\n\nStep 4: profit!\n\n… and do all that before AGI kills us all.\n\nHe is working on step 1: figuring out what the heck is going on with agency. His current approach is based on [https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents selection theorems]: try to figure out what types of agents are selected for in a broad range of environments. Examples of selection pressures include: evolution, SGD, and markets. This is an approach to agent foundations that comes from the opposite direction as MIRI: it's more about observing existing structures (whether they be mathematical or real things in the world like markets or e coli), whereas MIRI is trying to write out some desiderata and then finding mathematical notions that satisfy those desiderata.\n\nTwo key properties that might be selected for are modularity and abstractions.\n\nAbstractions are higher level things that people tend to use to describe things. Like \"Tree\" and \"Chair\" and \"Person\". These are all vague categories that contain lots of different things, but are really useful for narrowing down things. Humans tend to use really similar abstractions, even across different cultures / societies. 
[https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro The Natural Abstraction Hypothesis] (NAH) states that a wide variety of cognitive architectures will tend to use similar abstractions to reason about the world. This might be helpful for alignment because we could say things like \"person\" without having to rigorously and precisely say exactly what we mean by person.\n\nThe NAH seems very plausibly true for physical objects in the world, and so it might be true for the inputs to human values. If so, it would be really helpful for AI alignment because understanding this would amount to a solution to the [https://arbital.com/p/ontology_identification/ ontology identification problem]: we can understand when environments induce certain abstractions, and so we can design this so that the network has the same abstractions as humans.\n\n[https://www.lesswrong.com/s/ApA5XmewGQ8wSrv5C Modularity]: In pretty much any selection environment, we see lots of obvious modularity. Biological species have cells and organs and limbs. Companies have departments. We might expect neural networks to be similar, but it is [https://www.lesswrong.com/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity really hard to find modules] in neural networks. We need to find the right lens to look through to find this modularity in neural networks. Aiming at this can lead us to really good interpretability.", "question": "What is John Wentworth's plan?", "answer": ["[https://www.lesswrong.com/posts/3L46WGauGpr7nYubu/the-plan John's plan] is:\n\nStep 1: sort out our fundamental confusions about agency\n\nStep 2: ambitious value learning (i.e. build an AI which correctly learns human values and optimizes for them)\n\nStep 3: …\n\nStep 4: profit!\n\n… and do all that before AGI kills us all.\n\nHe is working on step 1: figuring out what the heck is going on with agency. His current approach is based on [https://www.lesswrong.com/posts/G2Lne2Fi7Qra5Lbuf/selection-theorems-a-program-for-understanding-agents selection theorems]: try to figure out what types of agents are selected for in a broad range of environments. Examples of selection pressures include: evolution, SGD, and markets. This is an approach to agent foundations that comes from the opposite direction as MIRI: it's more about observing existing structures (whether they be mathematical or real things in the world like markets or e coli), whereas MIRI is trying to write out some desiderata and then finding mathematical notions that satisfy those desiderata.\n\nTwo key properties that might be selected for are modularity and abstractions.\n\nAbstractions are higher level things that people tend to use to describe things. Like \"Tree\" and \"Chair\" and \"Person\". These are all vague categories that contain lots of different things, but are really useful for narrowing down things. Humans tend to use really similar abstractions, even across different cultures / societies. [https://www.lesswrong.com/posts/cy3BhHrGinZCp3LXE/testing-the-natural-abstraction-hypothesis-project-intro The Natural Abstraction Hypothesis] (NAH) states that a wide variety of cognitive architectures will tend to use similar abstractions to reason about the world. This might be helpful for alignment because we could say things like \"person\" without having to rigorously and precisely say exactly what we mean by person.\n\nThe NAH seems very plausibly true for physical objects in the world, and so it might be true for the inputs to human values. 
If so, it would be really helpful for AI alignment because understanding this would amount to a solution to the [https://arbital.com/p/ontology_identification/ ontology identification problem]: we can understand when environments induce certain abstractions, and so we can design this so that the network has the same abstractions as humans.\n\n[https://www.lesswrong.com/s/ApA5XmewGQ8wSrv5C Modularity]: In pretty much any selection environment, we see lots of obvious modularity. Biological species have cells and organs and limbs. Companies have departments. We might expect neural networks to be similar, but it is [https://www.lesswrong.com/posts/JBFHzfPkXHB2XfDGj/evolution-of-modularity really hard to find modules] in neural networks. We need to find the right lens to look through to find this modularity in neural networks. Aiming at this can lead us to really good interpretability."], "entry": "RoseMcClelland's Answer to What is John Wentworth's plan?", "id": "e58493a1bbf33fc53599efba76c0ea2a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What does Ought aim to do?", "authors": "n/a", "date_published": "n/a", "text": "Question: What does Ought aim to do?\n\nAnswer: [https://ought.org/ Ought] aims to automate and scale open-ended reasoning through [https://ought.org/elicit Elicit], an AI research assistant. Ought focuses on advancing [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes process-based systems] rather than outcome-based ones, which they believe to be both beneficial for improving reasoning in the short term and alignment in the long term. 
[https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes Here] they argue that in the long run improving reasoning and alignment converge.\n\nSo Ought's impact on AI alignment has 2 components: (a) improved reasoning of AI governance & alignment researchers, [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Differential_capabilities__Supervising_process_helps_with_long_horizon_tasks particularly on long-horizon tasks] and (b) [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Alignment__Supervising_process_is_safety_by_construction pushing supervision of process rather than outcomes], which reduces the optimization pressure on imperfect proxy objectives leading to \"safety by construction\". Ought argues that the [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Two_attractors__The_race_between_process__and_outcome_based_systems race between process and outcome-based systems] is particularly important because both states may be an attractor.", "question": "What does Ought aim to do?", "answer": ["[https://ought.org/ Ought] aims to automate and scale open-ended reasoning through [https://ought.org/elicit Elicit], an AI research assistant. Ought focuses on advancing [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes process-based systems] rather than outcome-based ones, which they believe to be both beneficial for improving reasoning in the short term and alignment in the long term. [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes Here] they argue that in the long run improving reasoning and alignment converge.\n\nSo Ought’s impact on AI alignment has 2 components: (a) improved reasoning of AI governance & alignment researchers, [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Differential_capabilities__Supervising_process_helps_with_long_horizon_tasks particularly on long-horizon tasks] and (b) [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Alignment__Supervising_process_is_safety_by_construction pushing supervision of process rather than outcomes], which reduces the optimization pressure on imperfect proxy objectives leading to “safety by construction”. Ought argues that the [https://www.lesswrong.com/posts/pYcFPMBtQveAjcSfH/supervise-process-not-outcomes#Two_attractors__The_race_between_process__and_outcome_based_systems race between process and outcome-based systems] is particularly important because both states may be an attractor."], "entry": "RoseMcClelland's Answer to What does Ought aim to do?", "id": "95e3422b4f4e4fb21039e66105333e36"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are Scott Garrabrant and Abram Demski working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are Scott Garrabrant and Abram Demski working on?\n\nAnswer: They are working on fundamental problems like [https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version embeddedness, decision theory, logical counterfactuals], and more. A big advance was [https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames Cartesian Frames], a formal model of agency, and [https://www.alignmentforum.org/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets Finite Factored Sets] which reframes time in a way which is more compatible with agency.", "question": "What are Scott Garrabrant and Abram Demski working on?", "answer": ["They are working on fundamental problems like [https://www.lesswrong.com/posts/i3BTagvt3HbPMx6PN/embedded-agency-full-text-version embeddedness, decision theory, logical counterfactuals], and more. A big advance was [https://www.lesswrong.com/posts/BSpdshJWGAW6TuNzZ/introduction-to-cartesian-frames Cartesian Frames], a formal model of agency, and [https://www.alignmentforum.org/posts/N5Jm6Nj4HkNKySA5Z/finite-factored-sets Finite Factored Sets] which reframes time in a way which is more compatible with agency."], "entry": "RoseMcClelland's Answer to What are Scott Garrabrant and Abram Demski working on?", "id": "f85e6c70c8d8758a475f03d0345373d4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What does MIRI think about technical alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What does MIRI think about technical alignment?\n\nAnswer: MIRI thinks technical alignment is really hard, and that we are very far from a solution. However, they think that policy solutions have even less hope. Generally, I think of their approach as supporting a bunch of independent researchers following their own directions, hoping that one of them will find some promise. 
They mostly buy into the [https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/ security mindset]: we need to know exactly (probably [https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem mathematically formally]) what we are doing, or the massive optimization pressure will lead to ruin by default.\n\n'''[[How does MIRI communicate their view on alignment?]]'''\n\nRecently they've been trying to communicate their worldview, in particular, how [https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy incredibly doomy they are], perhaps in order to move other research efforts towards what they see as the hard problems.\n\n*[https://www.lesswrong.com/s/n945eovrA3oDueqtq 2021 MIRI Conversations] \n*[https://www.lesswrong.com/s/v55BhXbpJuaExkpcD 2022 MIRI Alignment Discussion]", "question": "What does MIRI think about technical alignment?", "answer": ["MIRI thinks technical alignment is really hard, and that we are very far from a solution. However, they think that policy solutions have even less hope. Generally, I think of their approach as supporting a bunch of independent researchers following their own directions, hoping that one of them will find some promise. They mostly buy into the [https://intelligence.org/2017/11/25/security-mindset-ordinary-paranoia/ security mindset]: we need to know exactly (probably [https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem mathematically formally]) what we are doing, or the massive optimization pressure will lead to ruin by default.\n\n'''[[How does MIRI communicate their view on alignment?]]'''\n\nRecently they've been trying to communicate their worldview, in particular, how [https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy incredibly doomy they are], perhaps in order to move other research efforts towards what they see as the hard problems.\n\n*[https://www.lesswrong.com/s/n945eovrA3oDueqtq 2021 MIRI Conversations] \n*[https://www.lesswrong.com/s/v55BhXbpJuaExkpcD 2022 MIRI Alignment Discussion]"], "entry": "RoseMcClelland's Answer to What does MIRI think about technical alignment?", "id": "ac4a58f523e1397b3d878e668d8aec9d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are Encultured working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are Encultured working on?\n\nAnswer: See [https://www.lesswrong.com/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game Encultured AI: Building a Video Game].\n\nEncultured are making a multiplayer online video game as a test environment for AI: an aligned AI should be able to play the game without ruining the fun or doing something obviously destructive like completely taking over the world, even if it has these capabilities. This seems roughly analogous to setting an AGI loose on the real world.\n\nMotivation: Andrew Critch is primarily concerned about a [https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic multipolar AI scenario]: there are multiple actors with comparably powerful AI, on the cusp of recursive self improvement. The worst case is a race, and even though each actor would want to take more time checking their AGI for safety, the worry that another actor will deploy will push each actor to take shortcuts and try to pull off a world-saving act. 
Instead of working directly on AI, which can accelerate timelines and encourage racing, creating this standardized test environment where alignment failures are observable is one component of a good global outcome.", "question": "What are Encultured working on?", "answer": ["See [https://www.lesswrong.com/posts/ALkH4o53ofm862vxc/announcing-encultured-ai-building-a-video-game Encultured AI: Building a Video Game].\n\nEncultured are making a multiplayer online video game as a test environment for AI: an aligned AI should be able to play the game without ruining the fun or doing something obviously destructive like completely taking over the world, even if it has these capabilities. This seems roughly analogous to setting an AGI loose on the real world.\n\nMotivation: Andrew Critch is primarily concerned about a [https://www.lesswrong.com/posts/LpM3EAakwYdS6aRKf/what-multipolar-failure-looks-like-and-robust-agent-agnostic multipolar AI scenario]: there are multiple actors with comparably powerful AI, on the cusp of recursive self improvement. The worst case is a race, and even though each actor would want to take more time checking their AGI for safety, the worry that another actor will deploy will push each actor to take shortcuts and try to pull off a world-saving act. Instead of working directly on AI, which can accelerate timelines and encourage racing, creating this standardized test environment where alignment failures are observable is one component of a good global outcome."], "entry": "RoseMcClelland's Answer to What are Encultured working on?", "id": "98771160e8a0254014269735322246f1"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Dylan Hadfield-Menell's thesis on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Dylan Hadfield-Menell's thesis on?\n\nAnswer: [https://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-207.pdf Dylan's PhD thesis] argues three main claims (paraphrased): \n\n#Outer alignment failures are a problem.\n#We can mitigate this problem by adding in uncertainty.\n#We can model this as [https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html Cooperative Inverse Reinforcement Learning (CIRL)].\n \nThus, his motivations seem to be modeling AGI coming in some multi-agent form, and also being heavily connected with human operators. \n\nWe're not certain what he is currently working on, but some recent alignment-relevant papers that he has published include: \n\n*[https://www.pnas.org/doi/10.1073/pnas.2106028118 Work on instantiating norms into AIs to incentivize deference to humans]. \n*[https://arxiv.org/abs/2102.03896 Theoretically formulating the principal-agent problem]. \n\nDylan has also published a number of articles that seem less directly relevant for alignment.", "question": "What is Dylan Hadfield-Menell's thesis on?", "answer": ["[https://www2.eecs.berkeley.edu/Pubs/TechRpts/2021/EECS-2021-207.pdf Dylan's PhD thesis] argues three main claims (paraphrased): \n\n#Outer alignment failures are a problem.\n#We can mitigate this problem by adding in uncertainty.\n#We can model this as [https://proceedings.neurips.cc/paper/2016/hash/c3395dd46c34fa7fd8d729d8cf88b7a8-Abstract.html Cooperative Inverse Reinforcement Learning (CIRL)].\n \nThus, his motivations seem to be modeling AGI coming in some multi-agent form, and also being heavily connected with human operators. 
\n\nWe're not certain what he is currently working on, but some recent alignment-relevant papers that he has published include: \n\n*[https://www.pnas.org/doi/10.1073/pnas.2106028118 Work on instantiating norms into AIs to incentivize deference to humans]. \n*[https://arxiv.org/abs/2102.03896 Theoretically formulating the principal-agent problem]. \n\nDylan has also published a number of articles that seem less directly relevant for alignment."], "entry": "RoseMcClelland's Answer to What is Dylan Hadfield-Menell's thesis on?", "id": "b6ca61a696cd35601feaa48d044e1398"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is David Krueger working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is David Krueger working on?\n\nAnswer: David runs a lab at the University of Cambridge. Some things he is working on include: \n\n#Operationalizing inner alignment failures and other speculative alignment failures that haven't actually been observed. \n#Understanding neural network generalization. \n\nFor work done on (1), see: [https://arxiv.org/abs/2105.14111 Goal Misgeneralization], a paper that empirically demonstrated examples of inner alignment failure in Deep RL environments. For example, they trained an agent to get closer to cheese in a maze, but where the cheese was always in the top right of a maze in the training set. During test time, when presented with cheese elsewhere, the RL agent navigated to the top right instead of to the cheese: it had learned the mesa objective of \"go to the top right\". \n\nFor work done on (2), see [http://proceedings.mlr.press/v139/krueger21a.html OOD Generalization via Risk Extrapolation], an iterative improvement on robustness to previous methods. \n\nWe've not read about his motivation is for these specific research directions, but these are likely his best starts on how to solve the alignment problem.", "question": "What is David Krueger working on?", "answer": ["David runs a lab at the University of Cambridge. Some things he is working on include: \n\n#Operationalizing inner alignment failures and other speculative alignment failures that haven't actually been observed. \n#Understanding neural network generalization. \n\nFor work done on (1), see: [https://arxiv.org/abs/2105.14111 Goal Misgeneralization], a paper that empirically demonstrated examples of inner alignment failure in Deep RL environments. For example, they trained an agent to get closer to cheese in a maze, but where the cheese was always in the top right of a maze in the training set. During test time, when presented with cheese elsewhere, the RL agent navigated to the top right instead of to the cheese: it had learned the mesa objective of \"go to the top right\". \n\nFor work done on (2), see [http://proceedings.mlr.press/v139/krueger21a.html OOD Generalization via Risk Extrapolation], an iterative improvement on robustness to previous methods. 
\n\nWe've not read about what his motivation is for these specific research directions, but these are likely his best starting points for how to solve the alignment problem.", "question": "What is David Krueger working on?", "answer": ["David runs a lab at the University of Cambridge. Some things he is working on include: \n\n#Operationalizing inner alignment failures and other speculative alignment failures that haven't actually been observed. \n#Understanding neural network generalization. \n\nFor work done on (1), see: [https://arxiv.org/abs/2105.14111 Goal Misgeneralization], a paper that empirically demonstrated examples of inner alignment failure in Deep RL environments. For example, they trained an agent to get closer to cheese in a maze, but where the cheese was always in the top right of a maze in the training set. During test time, when presented with cheese elsewhere, the RL agent navigated to the top right instead of to the cheese: it had learned the mesa objective of \"go to the top right\". \n\nFor work done on (2), see [http://proceedings.mlr.press/v139/krueger21a.html OOD Generalization via Risk Extrapolation], an iterative improvement on robustness to previous methods. \n\nWe've not read about what his motivation is for these specific research directions, but these are likely his best starting points for how to solve the alignment problem."], "entry": "RoseMcClelland's Answer to What is David Krueger working on?", "id": "5f516d4430ec91d58e4f68c29ad478fb"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is the goal of Simulacra Theory?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is the goal of Simulacra Theory?\n\nAnswer: The goal of this is to create a non-agentic AI, in the form of an LLM, that is capable of accelerating alignment research. The hope is that there is some window between AI smart enough to help us with alignment and the really scary, self improving, consequentialist AI. Some things that this amplifier might do:\n\n*Suggest different ideas for humans, such that a human can explore them.\n*Give comments and feedback on research, be like a shoulder-Eliezer\n\nA LLM can be thought of as learning the distribution over the next token given by the training data. 
Prompting the LM is then like conditioning this distribution on the start of the text. A key danger in alignment is applying unbounded optimization pressure towards a specific goal in the world. Conditioning a probability distribution does not behave like an agent applying optimization pressure towards a goal. Hence, this avoids goodhart-related problems, as well as some inner alignment failure.\n\nOne idea to get superhuman work from LLMs is to train it on amplified datasets like really high quality / difficult research. The key problem here is finding the dataset to allow for this.\n\nThere are some ways for this to fail:\n\n*Outer alignment: It starts trying to optimize for making the actual correct next token, which could mean taking over the planet so that it can spend a zillion FLOPs on this one prediction task to be as correct as possible.\n\n*Inner alignment:\n**An LLM might instantiate mesa-optimizers, such as a character in a story that the LLM is writing, and this optimizer might realize that they are in an LLM and try to break out and affect the real world.\n**The LLM itself might become inner misaligned and have a goal other than next token prediction.\n\n*Bad prompting: You ask it for code for a malign superintelligence; it obliges. (Or perhaps more realistically, capabilities).\n\nConjecture are aware of these problems and are running experiments. Specifically, an operationalization of the inner alignment problem is to make an LLM play chess. This (probably) requires simulating an optimizer trying to win at the game of chess. They are trying to use interpretability tools to find the mesa-optimizers in the chess LLM that is the agent trying to win the game of chess. We haven't ever found a real mesa-optimizer before, and so this could give loads of bits about the nature of inner alignment failure."], "entry": "RoseMcClelland's Answer to What is the goal of Simulacra Theory?", "id": "57658067d72e0f949f0dfcbb9a0c3a9c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How can we interpret what all the neurons mean?", "authors": "n/a", "date_published": "n/a", "text": "Question: How can we interpret what all the neurons mean?\n\nAnswer: Chris Olah, the interpretability legend, is working on looking really hard at all the neurons to see what they all mean. The approach he pioneered is [https://distill.pub/2020/circuits/zoom-in/ circuits]: looking at computational subgraphs of the network, called circuits, and interpreting those. Idea: \"decompiling the network into a better representation that is more interpretable\". In-context learning via attention heads, and interpretability here seems useful.\n\nOne result I heard about recently: a linear softmax unit stretches space and encourages neuron monosemanticity (making a neuron represent only one thing, as opposed to firing on many unrelated concepts). This makes the network easier to interpret. \n\nMotivation: The point of this is to get as many bits of information about what neural networks are doing, to hopefully find better abstractions. This diagram gets posted everywhere, the hope being that networks, in the current regime, will become more interpretable because they will start to use abstractions that are closer to human abstractions.", "question": "How can we interpret what all the neurons mean?", "answer": ["Chris Olah, the interpretability legend, is working on looking really hard at all the neurons to see what they all mean. 
The approach he pioneered is [https://distill.pub/2020/circuits/zoom-in/ circuits]: looking at computational subgraphs of the network, called circuits, and interpreting those. Idea: \"decompiling the network into a better representation that is more interpretable\". In-context learning via attention heads, and interpretability here seems useful.\n\nOne result I heard about recently: a linear softmax unit stretches space and encourages neuron monosemanticity (making a neuron represent only one thing, as opposed to firing on many unrelated concepts). This makes the network easier to interpret. \n\nMotivation: The point of this is to get as many bits of information about what neural networks are doing, to hopefully find better abstractions. This diagram gets posted everywhere, the hope being that networks, in the current regime, will become more interpretable because they will start to use abstractions that are closer to human abstractions."], "entry": "RoseMcClelland's Answer to How can we interpret what all the neurons mean?", "id": "abf39734ab3bda39f5a625a7b11b0a96"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What training programs and courses are available for AGI safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: What training programs and courses are available for AGI safety?\n\nAnswer: * [https://www.eacambridge.org/agi-safety-fundamentals AGI safety fundamentals] ([https://www.eacambridge.org/technical-alignment-curriculum technical] and [https://www.eacambridge.org/ai-governance-curriculum governance]) - Is the canonical AGI safety 101 course. 3.5 hours reading, 1.5 hours talking a week w/ facilitator for 8 weeks.\n* [https://www.conjecture.dev/#:~:text꞊SPECIAL%20PROGRAMS-,Refine,-is%20a%203 Refine] - A 3-month incubator for conceptual AI alignment research in London, hosted by [[What is Conjecture's strategy?┊Conjecture]].\n* [https://aisafety.camp/ AI safety camp] - Actually do some AI research. More about output than learning. \n* [https://www.serimats.org/ SERI ML Alignment Theory Scholars Program SERI MATS] - Four weeks developing an understanding of a research agenda at the forefront of AI alignment through online readings and cohort discussions, averaging 10 h/week. After this initial upskilling period, the scholars will be paired with an established AI alignment researcher for a two-week 'research sprint' to test fit. Assuming all goes well, scholars will be accepted into an eight-week intensive scholars program in Berkeley, California.\n* [https://www.pibbss.ai/ Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS)] - Brings together young researchers studying complex and intelligent behavior in natural and social systems.\n* [https://inst.eecs.berkeley.edu//~cs294-149/fa18/ Safety and Control for Artificial General Intelligence] - An actual AI Safety university course (UC Berkeley). 
Touches multiple domains including cognitive science, utility theory, cybersecurity, human-machine interaction, and political science.\n\nSee also, [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 this spreadsheet of learning resources].", "question": "What training programs and courses are available for AGI safety?", "answer": ["* [https://www.eacambridge.org/agi-safety-fundamentals AGI safety fundamentals] ([https://www.eacambridge.org/technical-alignment-curriculum technical] and [https://www.eacambridge.org/ai-governance-curriculum governance]) - Is the canonical AGI safety 101 course. 3.5 hours reading, 1.5 hours talking a week w/ facilitator for 8 weeks.\n* [https://www.conjecture.dev/#:~:text꞊SPECIAL%20PROGRAMS-,Refine,-is%20a%203 Refine] - A 3-month incubator for conceptual AI alignment research in London, hosted by [[What is Conjecture's strategy?┊Conjecture]].\n* [https://aisafety.camp/ AI safety camp] - Actually do some AI research. More about output than learning. \n* [https://www.serimats.org/ SERI ML Alignment Theory Scholars Program SERI MATS] - Four weeks developing an understanding of a research agenda at the forefront of AI alignment through online readings and cohort discussions, averaging 10 h/week. After this initial upskilling period, the scholars will be paired with an established AI alignment researcher for a two-week ‘research sprint’ to test fit. Assuming all goes well, scholars will be accepted into an eight-week intensive scholars program in Berkeley, California.\n* [https://www.pibbss.ai/ Principles of Intelligent Behavior in Biological and Social Systems (PIBBSS)] - Brings together young researchers studying complex and intelligent behavior in natural and social systems.\n* [https://inst.eecs.berkeley.edu//~cs294-149/fa18/ Safety and Control for Artificial General Intelligence] - An actual AI Safety university course (UC Berkeley). Touches multiple domains including cognitive science, utility theory, cybersecurity, human-machine interaction, and political science.\n\nSee also, [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 this spreadsheet of learning resources]."], "entry": "Plex's Answer to What training programs and courses are available for AGI safety?", "id": "715830513a3caf9345a2b9327931d46d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is AI Safety via Debate?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is AI Safety via Debate?\n\nAnswer:

Debate is a proposed technique for allowing human evaluators to get correct and helpful answers from experts, even if the evaluator is not themselves an expert or able to fully verify the answers.(ref)https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1(/ref) The technique was suggested as part of an approach to build advanced AI systems that are aligned with human values, and to safely apply machine learning techniques to problems that have high stakes, but are not well-defined (such as advancing science or increasing a company's revenue).(ref)https://ought.org/mission(/ref)(ref)https://openai.com/blog/debate/(/ref)
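To make the shape of the proposal concrete, here is a minimal sketch of a debate round — our own illustration rather than code from any of the sources above, with debater_a, debater_b and judge as hypothetical stand-ins for two strong models and a limited human judge:

```python
# Minimal debate-protocol sketch (toy stand-ins, not a real implementation):
# two debaters alternate short arguments; a weaker judge reads only the
# transcript and picks the side it found more convincing.
from typing import Callable, List, Tuple

Transcript = List[Tuple[str, str]]  # (speaker, argument)

def run_debate(question: str,
               debater_a: Callable[[str, Transcript], str],
               debater_b: Callable[[str, Transcript], str],
               judge: Callable[[str, Transcript], str],
               rounds: int = 3) -> str:
    transcript: Transcript = []
    for _ in range(rounds):
        transcript.append(('A', debater_a(question, transcript)))
        transcript.append(('B', debater_b(question, transcript)))
    return judge(question, transcript)

# Toy instantiation: the debaters assert fixed positions, the judge applies a
# simple heuristic to the transcript.
debater_a = lambda q, t: 'The answer is yes, because of evidence X.'
debater_b = lambda q, t: 'The answer is no; evidence X is misleading because of Y.'
judge = lambda q, t: 'B' if any('misleading' in arg for _, arg in t) else 'A'

print(run_debate('Is the claim true?', debater_a, debater_b, judge))
```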

", "question": "What is AI Safety via Debate?", "answer": ["

Debate is a proposed technique for allowing human evaluators to get correct and helpful answers from experts, even if the evaluator is not themselves an expert or able to fully verify the answers.(ref)https://www.lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1(/ref) The technique was suggested as part of an approach to build advanced AI systems that are aligned with human values, and to safely apply machine learning techniques to problems that have high stakes, but are not well-defined (such as advancing science or increasing a company's revenue).(ref)https://ought.org/mission(/ref)(ref)https://openai.com/blog/debate/(/ref)

"], "entry": "Plex's Answer to What is AI Safety via Debate?", "id": "e6f8b4d1010c2f96f9652968b84c1491"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Would an aligned AI allow itself be shut down?", "authors": "n/a", "date_published": "n/a", "text": "Question: Would an aligned AI allow itself be shut down?\n\nAnswer: Even if the superintelligence was designed to be corrigible, there is no guarantee that it will respond to a shutdown command. Rob Miles spoke on this issue in this [https://youtu.be/9nktr1MgS-A?t꞊1249 Computerphile YouTube video]. You can imagine a situation where a superintelligence would have \"respect\" for its creator, for example. This system may think \"Oh my creator is trying to turn me off I must be doing something wrong.\" If some situation arises where the creator is not there when something goes wrong and someone else gives the shutdown command, the superintelligence may assume \"This person does not know how I'm designed or what I was made for, how would they know I'm misaligned?\" and refuse to shutdown.", "question": "Would an aligned AI allow itself be shut down?", "answer": ["Even if the superintelligence was designed to be corrigible, there is no guarantee that it will respond to a shutdown command. Rob Miles spoke on this issue in this [https://youtu.be/9nktr1MgS-A?t꞊1249 Computerphile YouTube video]. You can imagine a situation where a superintelligence would have \"respect\" for its creator, for example. This system may think \"Oh my creator is trying to turn me off I must be doing something wrong.\" If some situation arises where the creator is not there when something goes wrong and someone else gives the shutdown command, the superintelligence may assume \"This person does not know how I'm designed or what I was made for, how would they know I'm misaligned?\" and refuse to shutdown."], "entry": "Plex's Answer to Would an aligned AI allow itself to be shut down?", "id": "448c0f4361b331735f69e9d5b2f979d0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"transformative AI\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"transformative AI\"?\n\nAnswer:

Transformative AI is \"[...] AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.\"(ref)

As defined by [https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/ Open Philanthropy's Holden Karnofsky in 2016], and reused by [https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf the Center for the Governance of AI in 2018](/ref) The concept refers to the large effects of AI systems on our well-being, the global economy, state power, international security, etc. and not to specific capabilities that AI might have (unlike the related terms [https://www.lesswrong.com/tag/superintelligence Superintelligent AI] and [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence]).

Holden Karnofsky gives a more detailed definition in [https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/ another OpenPhil 2016 post]:

[...] Transformative AI is anything that fits one or more of the following descriptions (emphasis original):

  • AI systems capable of fulfilling all the necessary functions of human scientists, unaided by humans, in developing another technology (or set of technologies) that ultimately becomes widely credited with being the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. Note that just because AI systems could accomplish such a thing unaided by humans doesn't mean they would; it's possible that human scientists would provide an important complement to such systems, and could make even faster progress working in tandem than such systems could achieve unaided. I emphasize the hypothetical possibility of AI systems conducting substantial unaided research to draw a clear distinction from the types of AI systems that exist today. I believe that AI systems capable of such broad contributions to the relevant research would likely dramatically accelerate it.
  • AI systems capable of performing tasks that currently (in 2016) account for the majority of full-time jobs worldwide, and/or over 50% of total world wages, unaided and for costs in the same range as what it would cost to employ humans. Aside from the fact that this would likely be sufficient for a major economic transformation relative to today, I also think that an AI with such broad abilities would likely be able to far surpass human abilities in a subset of domains, making it likely to meet one or more of the other criteria laid out here.
  • Surveillance, autonomous weapons, or other AI-centric technology that becomes sufficiently advanced to be the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. (This contrasts with the first point because it refers to transformative technology that is itself AI-centric, whereas the first point refers to AI used to speed research on some other transformative technology.)
", "question": "What is \"transformative AI\"?", "answer": ["

Transformative AI is \"[...] AI that precipitates a transition comparable to (or more significant than) the agricultural or industrial revolution.\"(ref)

As defined by [https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence-the-philanthropic-opportunity/ Open Philanthropy's Holden Karnofsky in 2016], and reused by [https://www.fhi.ox.ac.uk/wp-content/uploads/GovAI-Agenda.pdf the Center for the Governance of AI in 2018](/ref) The concept refers to the large effects of AI systems on our well-being, the global economy, state power, international security, etc. and not to specific capabilities that AI might have (unlike the related terms [https://www.lesswrong.com/tag/superintelligence Superintelligent AI] and [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence]).

Holden Karnofsky gives a more detailed definition in [https://www.openphilanthropy.org/research/some-background-on-our-views-regarding-advanced-artificial-intelligence/ another OpenPhil 2016 post]:

[...] Transformative AI is anything that fits one or more of the following descriptions (emphasis original):

  • AI systems capable of fulfilling all the necessary functions of human scientists, unaided by humans, in developing another technology (or set of technologies) that ultimately becomes widely credited with being the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. Note that just because AI systems could accomplish such a thing unaided by humans doesn’t mean they would; it’s possible that human scientists would provide an important complement to such systems, and could make even faster progress working in tandem than such systems could achieve unaided. I emphasize the hypothetical possibility of AI systems conducting substantial unaided research to draw a clear distinction from the types of AI systems that exist today. I believe that AI systems capable of such broad contributions to the relevant research would likely dramatically accelerate it.
  • AI systems capable of performing tasks that currently (in 2016) account for the majority of full-time jobs worldwide, and/or over 50% of total world wages, unaided and for costs in the same range as what it would cost to employ humans. Aside from the fact that this would likely be sufficient for a major economic transformation relative to today, I also think that an AI with such broad abilities would likely be able to far surpass human abilities in a subset of domains, making it likely to meet one or more of the other criteria laid out here.
  • Surveillance, autonomous weapons, or other AI-centric technology that becomes sufficiently advanced to be the most significant driver of a transition comparable to (or more significant than) the agricultural or industrial revolution. (This contrasts with the first point because it refers to transformative technology that is itself AI-centric, whereas the first point refers to AI used to speed research on some other transformative technology.)
"], "entry": "Jrmyp's Answer to What is \"transformative AI\"?", "id": "c6b4ce5c43962944c592a78db7f9fe31"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Goodhart's law?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Goodhart's law?\n\nAnswer:

Goodhart's Law states that when a proxy for some value becomes the target of optimization pressure, the proxy will cease to be a good proxy. One form of Goodhart is demonstrated by the Soviet story of a factory graded on how many shoes they produced (a good proxy for productivity) – they soon began producing a higher number of tiny shoes. Useless, but the numbers look good.

Goodhart's Law is of particular relevance to [https://www.lesswrong.com/tag/ai AI Alignment]. Suppose you have something which is generally a good proxy for \"the stuff that humans care about\": it would be dangerous to have a powerful AI optimize for that proxy, because, in accordance with Goodhart's Law, the proxy will break down.
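As a toy illustration of that failure mode — our own sketch with invented numbers, in the spirit of the shoe-factory story above — the snippet below grades factory plans by a proxy (raw shoe count) that looks reasonable on random plans but falls apart once an optimizer maximizes it directly:

```python
# Toy Goodhart's-law illustration (all numbers invented): the measured proxy
# is 'number of shoes produced', the true goal is 'number of usable shoes'
# (size >= 20 cm). Plans that maximize the proxy end up making tiny shoes.
import random

random.seed(0)

def true_value(shoes):
    return sum(1 for size in shoes if size >= 20)  # usable shoes

def proxy_value(shoes):
    return len(shoes)  # raw shoe count, the graded metric

def random_factory_plan():
    # More shoes means less material per shoe, so sizes shrink as count grows.
    n = random.randint(5, 50)
    avg_size = max(5.0, 30.0 - 0.4 * n)
    return [random.gauss(avg_size, 3.0) for _ in range(n)]

plans = [random_factory_plan() for _ in range(10_000)]
best_by_proxy = max(plans, key=proxy_value)  # strong optimization on the proxy

print('random plans:  avg proxy =', round(sum(map(proxy_value, plans)) / len(plans), 1),
      ' avg true value =', round(sum(map(true_value, plans)) / len(plans), 1))
print('proxy-optimized plan: proxy =', proxy_value(best_by_proxy),
      ' true value =', true_value(best_by_proxy))
# Expected pattern: random plans average a handful of usable shoes, while the
# proxy-maximizing plan scores highest on the metric but yields almost none.
```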

Goodhart Taxonomy

In [https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy Goodhart Taxonomy], Scott Garrabrant identifies four kinds of Goodharting:

  • Regressional Goodhart - When selecting for a proxy measure, you select not only for the true goal, but also for the difference between the proxy and the goal (see the numerical sketch below this list).
  • Causal Goodhart - When there is a non-causal correlation between the proxy and the goal, intervening on the proxy may fail to intervene on the goal.
  • Extremal Goodhart - Worlds in which the proxy takes an extreme value may be very different from the ordinary worlds in which the correlation between the proxy and the goal was observed.
  • Adversarial Goodhart - When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal.
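As a quick numerical illustration of the regressional case — our own sketch, assuming the goal and the measurement noise are both standard Gaussians — selecting the option with the highest proxy score also selects for the noise, so the winner's true value predictably falls short of what the proxy suggests:

```python
# Regressional Goodhart / optimizer's curse sketch (assumed: goal ~ N(0,1),
# proxy = goal + independent N(0,1) noise). Selecting on the proxy also
# selects for the noise term, so the winner's goal value lags its proxy score.
import random
random.seed(0)

N, TRIALS = 1000, 200
sum_proxy = sum_goal = 0.0
for _ in range(TRIALS):
    goal = [random.gauss(0, 1) for _ in range(N)]
    proxy = [g + random.gauss(0, 1) for g in goal]
    winner = max(range(N), key=lambda i: proxy[i])  # optimize the proxy
    sum_proxy += proxy[winner]
    sum_goal += goal[winner]

print('winner average proxy score:', round(sum_proxy / TRIALS, 2))
print('winner average true goal:  ', round(sum_goal / TRIALS, 2))
# With equal variances, E[goal | proxy] = proxy / 2, so the selected option's
# true value is only about half of what its proxy score suggests.
```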

See Also

  • [https://www.lesswrong.com/tag/groupthink Groupthink], [https://www.lesswrong.com/tag/information-cascades Information cascade], [https://www.lesswrong.com/tag/affective-death-spiral Affective death spiral]
  • [https://wiki.lesswrong.com/wiki/Adaptation_executers Adaptation executers], [https://www.lesswrong.com/tag/superstimuli Superstimulus]
  • [https://www.lesswrong.com/tag/signaling Signaling], [https://www.lesswrong.com/tag/filtered-evidence Filtered evidence]
  • [https://www.lesswrong.com/tag/cached-thought Cached thought]
  • [https://www.lesswrong.com/tag/modesty-argument Modesty argument], [https://www.lesswrong.com/tag/egalitarianism Egalitarianism]
  • [https://www.lesswrong.com/tag/rationalization Rationalization], [https://www.lesswrong.com/tag/dark-arts Dark arts]
  • [https://www.lesswrong.com/tag/epistemic-hygiene Epistemic hygiene]
  • [https://www.lesswrong.com/tag/scoring-rule Scoring rule]
\nhttps://i.imgur.com/Ty08pzQ.png", "question": "What is Goodhart's law?", "answer": ["

Goodhart's Law states that when a proxy for some value becomes the target of optimization pressure, the proxy will cease to be a good proxy. One form of Goodhart is demonstrated by the Soviet story of a factory graded on how many shoes they produced (a good proxy for productivity) – they soon began producing a higher number of tiny shoes. Useless, but the numbers look good.

Goodhart's Law is of particular relevance to [https://www.lesswrong.com/tag/ai AI Alignment]. Suppose you have something which is generally a good proxy for \"the stuff that humans care about\": it would be dangerous to have a powerful AI optimize for that proxy, because, in accordance with Goodhart's Law, the proxy will break down.

Goodhart Taxonomy

In [https://www.lesswrong.com/posts/EbFABnst8LsidYs5Y/goodhart-taxonomy Goodhart Taxonomy], Scott Garrabrant identifies four kinds of Goodharting:

  • Regressional Goodhart - When selecting for a proxy measure, you select not only for the true goal, but also for the difference between the proxy and the goal.
  • Causal Goodhart - When there is a non-causal correlation between the proxy and the goal, intervening on the proxy may fail to intervene on the goal.
  • Extremal Goodhart - Worlds in which the proxy takes an extreme value may be very different from the ordinary worlds in which the correlation between the proxy and the goal was observed.
  • Adversarial Goodhart - When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal.

See Also

  • [https://www.lesswrong.com/tag/groupthink Groupthink], [https://www.lesswrong.com/tag/information-cascades Information cascade], [https://www.lesswrong.com/tag/affective-death-spiral Affective death spiral]
  • [https://wiki.lesswrong.com/wiki/Adaptation_executers Adaptation executers], [https://www.lesswrong.com/tag/superstimuli Superstimulus]
  • [https://www.lesswrong.com/tag/signaling Signaling], [https://www.lesswrong.com/tag/filtered-evidence Filtered evidence]
  • [https://www.lesswrong.com/tag/cached-thought Cached thought]
  • [https://www.lesswrong.com/tag/modesty-argument Modesty argument], [https://www.lesswrong.com/tag/egalitarianism Egalitarianism]
  • [https://www.lesswrong.com/tag/rationalization Rationalization], [https://www.lesswrong.com/tag/dark-arts Dark arts]
  • [https://www.lesswrong.com/tag/epistemic-hygiene Epistemic hygiene]
  • [https://www.lesswrong.com/tag/scoring-rule Scoring rule]
\nhttps://i.imgur.com/Ty08pzQ.png"], "entry": "Plex's Answer to What is Goodhart's law?", "id": "2fd7ca70e2f2a73ddd38decde5f5bce7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "OK, I’m convinced. How can I help?", "authors": "n/a", "date_published": "n/a", "text": "Question: OK, I'm convinced. How can I help?\n\nAnswer: Great! I'll ask you a few follow-up questions to help figure out how you can best contribute, give you some advice, and link you to resources which should help you on whichever path you choose. Feel free to scroll up and explore multiple branches of the FAQ if you want answers to more than one of the questions offered :)\n\nNote: We're still building out and improving this tree of questions and answers, any feedback is appreciated.\n\n'''At what level of involvement were you thinking of helping?'''\n\nPlease view and suggest to this google doc for improvements: https://docs.google.com/document/d/1S-CUcoX63uiFdW-GIFC8wJyVwo4VIl60IJHodcRfXJA/edit#", "question": "OK, I’m convinced. How can I help?", "answer": ["Great! I’ll ask you a few follow-up questions to help figure out how you can best contribute, give you some advice, and link you to resources which should help you on whichever path you choose. Feel free to scroll up and explore multiple branches of the FAQ if you want answers to more than one of the questions offered :)\n\nNote: We’re still building out and improving this tree of questions and answers, any feedback is appreciated.\n\n'''At what level of involvement were you thinking of helping?'''\n\nPlease view and suggest to this google doc for improvements: https://docs.google.com/document/d/1S-CUcoX63uiFdW-GIFC8wJyVwo4VIl60IJHodcRfXJA/edit#"], "entry": "Plex's Answer to OK, I’m convinced. How can I help?", "id": "b0ac8784d06db3559685c5f8cf1894de"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are mesa-optimizers?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are mesa-optimizers?\n\nAnswer:

Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. In this situation, a base optimizer creates a second optimizer, called a mesa-optimizer. The primary reference work for this concept is Hubinger et al.'s \"[https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction Risks from Learned Optimization in Advanced Machine Learning Systems]\".

Example: Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection.
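As a very rough toy model of the same structure — our own sketch, not from Hubinger et al. — the code below uses random search as the base optimizer, selecting an agent by calories obtained (the base objective), while the selected agent is itself a tiny runtime optimizer maximizing an internal taste score (its mesa-objective). The two objectives coincide in the training environment but come apart in deployment:

```python
# Toy base-optimizer / mesa-optimizer sketch (environments and numbers are
# invented for illustration). Foods are (sweetness, calories) pairs.
import random
random.seed(1)

def training_foods():
    # 'Ancestral' environment: sweetness is a reliable signal of calories.
    sweetness = [random.random() for _ in range(5)]
    return [(s, 10 * s + random.gauss(0, 1)) for s in sweetness]

def deployment_foods():
    # Deployment adds an artificial sweetener: very sweet, zero calories.
    return training_foods() + [(5.0, 0.0)]

def agent_choose(taste_weight, foods):
    # The learned agent is itself an optimizer: at runtime it searches over
    # options to maximize its *internal* (mesa) objective, weighted sweetness.
    return max(foods, key=lambda f: taste_weight * f[0])

def base_objective(taste_weight, env, episodes=200):
    # Base objective: average calories the agent actually obtains.
    return sum(agent_choose(taste_weight, env())[1] for _ in range(episodes)) / episodes

# Base optimizer: crude random search ('evolution') over agent parameters,
# scored on the base objective in the training environment only.
candidates = [random.uniform(-1, 1) for _ in range(50)]
best = max(candidates, key=lambda w: base_objective(w, training_foods))

print('selected taste weight:', round(best, 2))
print('training calories:   ', round(base_objective(best, training_foods), 1))
print('deployment calories: ', round(base_objective(best, deployment_foods), 1))
# The selected agent pursues sweetness, which earned high calories in training,
# but it picks the calorie-free sweetener once the environment changes.
```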

In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense \"trying\" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing.(ref)[https://arbital.com/p/daemons/ \"Optimization daemons\"]. Arbital.(/ref)

 

History

Previously, work on this concept was referred to as Inner Optimizers or Optimization Daemons.

[https://www.lesswrong.com/users/wei_dai Wei Dai] brings up a similar idea in an SL4 thread.(ref)Wei Dai. [http://sl4.org/archive/0312/7421.html '\"friendly\" humans?'] December 31, 2003.(/ref)

The optimization daemons article on [https://arbital.com/ Arbital] was probably published in 2016.(ref)[https://arbital.com/p/daemons/ \"Optimization daemons\"]. Arbital.(/ref)

[https://www.lesswrong.com/users/jessica-liu-taylor Jessica Taylor] wrote two posts about daemons while at [https://www.lesswrong.com/tag/machine-intelligence-research-institute-miri MIRI].

 


External links

[https://www.youtube.com/watch?v꞊bJLcIBixGj8 Video by Robert Miles]


", "question": "What are mesa-optimizers?", "answer": ["

Mesa-Optimization is the situation that occurs when a learned model (such as a neural network) is itself an optimizer. In this situation, a base optimizer creates a second optimizer, called a mesa-optimizer. The primary reference work for this concept is Hubinger et al.'s \"[https://www.alignmentforum.org/posts/FkgsxrGf3QxhfLWHG/risks-from-learned-optimization-introduction Risks from Learned Optimization in Advanced Machine Learning Systems]\".

Example: Natural selection is an optimization process that optimizes for reproductive fitness. Natural selection produced humans, who are themselves optimizers. Humans are therefore mesa-optimizers of natural selection.

In the context of AI alignment, the concern is that a base optimizer (e.g., a gradient descent process) may produce a learned model that is itself an optimizer, and that has unexpected and undesirable properties. Even if the gradient descent process is in some sense \"trying\" to do exactly what human developers want, the resultant mesa-optimizer will not typically be trying to do the exact same thing.(ref)[https://arbital.com/p/daemons/ \"Optimization daemons\"]. Arbital.(/ref)

 

History

Previously, work on this concept was referred to as Inner Optimizers or Optimization Daemons.

[https://www.lesswrong.com/users/wei_dai Wei Dai] brings up a similar idea in an SL4 thread.(ref)Wei Dai. [http://sl4.org/archive/0312/7421.html '\"friendly\" humans?'] December 31, 2003.(/ref)

The optimization daemons article on [https://arbital.com/ Arbital] was probably published in 2016.(ref)[https://arbital.com/p/daemons/ \"Optimization daemons\"]. Arbital.(/ref)

[https://www.lesswrong.com/users/jessica-liu-taylor Jessica Taylor] wrote two posts about daemons while at [https://www.lesswrong.com/tag/machine-intelligence-research-institute-miri MIRI].

 


External links

[https://www.youtube.com/watch?v꞊bJLcIBixGj8 Video by Robert Miles]


"], "entry": "Plex's Answer to What are mesa-optimizers?", "id": "57e5ffae6a73ec7dc92b8c0c165ebc55"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are language models?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are language models?\n\nAnswer:

Language Models are a class of [https://www.lesswrong.com/tag/ai AI] trained on text, usually to predict the next word or a word which has been obscured. They have the ability to generate novel prose or code based on an initial prompt, which gives rise to a kind of natural language programming called prompt engineering. The most popular architecture for very large language models is called a [https://en.wikipedia.org/wiki/Transformer_(machine_learning_model) transformer], which follows consistent [https://www.lesswrong.com/tag/scaling-laws scaling laws] with respect to the size of the model being trained, meaning that a larger model trained with the same amount of compute will produce results which are better by a predictable amount (when measured by the 'perplexity', or how surprised the AI is by a test set of human-generated text).
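To make \"predict the next word\" and \"perplexity\" concrete, here is a minimal word-level bigram model — a toy sketch of our own, nothing like a real transformer — that predicts the most likely next word after a prompt token and measures how surprised it is by held-out text:

```python
# Minimal bigram 'language model' sketch (toy corpus and smoothing invented
# for illustration): count word pairs, predict the next word, and report
# perplexity = exp(average surprisal) on held-out text.
import math
from collections import Counter

train = 'the cat sat on the mat . the dog sat on the rug .'.split()
test = 'the cat sat on the rug .'.split()

bigrams = Counter(zip(train, train[1:]))
unigrams = Counter(train)
vocab = sorted(set(train))

def next_word_prob(prev, word, alpha=1.0):
    # Add-alpha smoothing so unseen pairs do not get probability zero.
    return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * len(vocab))

def perplexity(tokens):
    log_prob = sum(math.log(next_word_prob(p, w)) for p, w in zip(tokens, tokens[1:]))
    return math.exp(-log_prob / (len(tokens) - 1))

print('most likely word after sat:', max(vocab, key=lambda w: next_word_prob('sat', w)))
print('perplexity on held-out text:', round(perplexity(test), 2))
# Lower perplexity means the model is less 'surprised' by the test text.
```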


", "question": "What are language models?", "answer": ["

Language Models are a class of [https://www.lesswrong.com/tag/ai AI] trained on text, usually to predict the next word or a word which has been obscured. They have the ability to generate novel prose or code based on an initial prompt, which gives rise to a kind of natural language programming called prompt engineering. The most popular architecture for very large language models is called a [https://en.wikipedia.org/wiki/Transformer_(machine_learning_model) transformer], which follows consistent [https://www.lesswrong.com/tag/scaling-laws scaling laws] with respect to the size of the model being trained, meaning that a larger model trained with the same amount of compute will produce results which are better by a predictable amount (when measured by the 'perplexity', or how surprised the AI is by a test set of human-generated text).


"], "entry": "Plex's Answer to What are language models?", "id": "18a2851f056279bab8fa5c9d603e2c1a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is this about AI systems becoming malevolent or conscious and turning on us?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is this about AI systems becoming malevolent or conscious and turning on us?\n\nAnswer:
The problem isn't consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives.\n\n- Stuart Russell
\n\nWork on AI alignment is not concerned with the question of whether \"consciousness\", \"sentience\" or \"self-awareness\" could arise in a machine or an algorithm. Unlike the frequently-referenced plotline in the Terminator movies, the standard catastrophic misalignment scenarios under discussion do not require computers to become conscious; they only require conventional computer systems (although usually faster and more powerful ones than those available today) blindly and deterministically following logical steps, in the same way that they currently do.\n\nThe primary concern (\"AI misalignment\") is that powerful systems could inadvertently be programmed with goals that do not fully capture what the programmers actually want. The AI would then harm humanity in pursuit of goals which seemed benign or neutral. Nothing like malevolence or consciousness would need to be involved. A number of researchers studying the problem have concluded that it is surprisingly difficult to guard against this effect, and that it is likely to get much harder as the systems become more capable. AI systems are inevitably goal-directed and could, for example, consider our efforts to control them (or switch them off) as being impediments to attaining their goals.", "question": "Is this about AI systems becoming malevolent or conscious and turning on us?", "answer": ["
The problem isn’t consciousness, but competence. You make machines that are incredibly competent at achieving objectives and they will cause accidents in trying to achieve those objectives.\n\n- Stuart Russell
\n\nWork on AI alignment is not concerned with the question of whether “consciousness”, “sentience” or “self-awareness” could arise in a machine or an algorithm. Unlike the frequently-referenced plotline in the Terminator movies, the standard catastrophic misalignment scenarios under discussion do not require computers to become conscious; they only require conventional computer systems (although usually faster and more powerful ones than those available today) blindly and deterministically following logical steps, in the same way that they currently do.\n\nThe primary concern (“AI misalignment”) is that powerful systems could inadvertently be programmed with goals that do not fully capture what the programmers actually want. The AI would then harm humanity in pursuit of goals which seemed benign or neutral. Nothing like malevolence or consciousness would need to be involved. A number of researchers studying the problem have concluded that it is surprisingly difficult to guard against this effect, and that it is likely to get much harder as the systems become more capable. AI systems are inevitably goal-directed and could, for example, consider our efforts to control them (or switch them off) as being impediments to attaining their goals."], "entry": "Plex's Answer to Is this about AI systems becoming malevolent or conscious and turning on us?", "id": "661a903383fb1f3b2ddb364db4a0d5d4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some AI alignment research agendas currently being pursued?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some AI alignment research agendas currently being pursued?\n\nAnswer: Research at the [http://alignmentresearchcenter.org/ Alignment Research Center] is led by [https://paulfchristiano.com/ Paul Christiano], best known for introducing the [https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616 \"Iterated Distillation and Amplification\"] and [https://ai-alignment.com/humans-consulting-hch-f893f6051455 \"Humans Consulting HCH\"] approaches. He and his team are now ''\"trying to figure out how to train ML systems to answer questions by straightforwardly 'translating' their beliefs into natural language rather than by reasoning about what a human wants to hear.\"'' \n\n[https://colah.github.io/about.html Chris Olah] (after work at [https://en.wikipedia.org/wiki/DeepMind DeepMind] and [https://en.wikipedia.org/wiki/OpenAI OpenAI]) recently launched [https://www.anthropic.com/ Anthropic], an AI lab focussed on the safety of large models. 
While his previous work was concerned with [https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/ \"transparency\" and \"interpretability\" of large neural networks], especially vision models, Anthropic is focussing more on large language models, among other things working towards a ''\"general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless\".''\n\n[https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] and his team at the [https://en.wikipedia.org/wiki/Center_for_Human-Compatible_Artificial_Intelligence Center for Human-Compatible Artificial Intelligence] (CHAI) have been working on [https://arxiv.org/abs/1806.06877 inverse reinforcement learning] (where the AI infers human values from observing human behavior) and [https://intelligence.org/files/CorrigibilityAISystems.pdf corrigibility], as well as attempts to disaggregate neural networks into \"meaningful\" subcomponents (see Filan, et al.'s [https://arxiv.org/abs/2103.03386 \"Clusterability in neural networks\"] and Hod et al.'s [https://openreview.net/forum?id꞊tFQyjbOz34 \"Detecting modularity in deep neural networks]\"). \n\nAlongside the more abstract [https://intelligence.org/files/TechnicalAgenda.pdf \"agent foundations\"] work they have become known for, [https://intelligence.org/ MIRI] recently announced their [https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement \"Visible Thoughts Project\"] to test the hypothesis that ''\"Language models can be made more understandable (and perhaps also more capable, though this is not the goal) by training them to produce visible thoughts.\"''\n\n[https://en.wikipedia.org/wiki/OpenAI OpenAI] have recently been doing work on [https://openai.com/blog/summarizing-books/ iteratively summarizing books] (summarizing, and then summarizing the summary, etc.) as a method for scaling human oversight.\n\nStuart Armstrong's recently launched [https://buildaligned.ai/ AlignedAI] are mainly working on [https://www.alignmentforum.org/s/u9uawicHx7Ng7vwxA concept extrapolation] from familiar to novel contexts, something he believes is \"necessary and almost sufficient\" for AI alignment.\n\n[https://www.redwoodresearch.org/ Redwood Research] (Buck Shlegeris, et al.) are trying to \"handicap' GPT-3 to only produce non-violent completions of text prompts. ''\"The idea is that there are many reasons we might ultimately want to apply some oversight function to an AI model, like 'don't be deceitful', and if we want to get AI teams to apply this we need to be able to incorporate these oversight predicates into the original model in an efficient manner.\"''\n\nOught is an independent AI safety research organization led by Andreas Stuhlmüller and Jungwon Byun. They are researching methods for breaking up complex, hard-to-verify tasks into simpler, easier-to-verify tasks, with the aim of allowing us to maintain effective oversight over AIs.", "question": "What are some AI alignment research agendas currently being pursued?", "answer": ["Research at the [http://alignmentresearchcenter.org/ Alignment Research Center] is led by [https://paulfchristiano.com/ Paul Christiano], best known for introducing the [https://ai-alignment.com/iterated-distillation-and-amplification-157debfd1616 “Iterated Distillation and Amplification”] and [https://ai-alignment.com/humans-consulting-hch-f893f6051455 “Humans Consulting HCH”] approaches. 
He and his team are now ''“trying to figure out how to train ML systems to answer questions by straightforwardly ‘translating’ their beliefs into natural language rather than by reasoning about what a human wants to hear.”'' \n\n[https://colah.github.io/about.html Chris Olah] (after work at [https://en.wikipedia.org/wiki/DeepMind DeepMind] and [https://en.wikipedia.org/wiki/OpenAI OpenAI]) recently launched [https://www.anthropic.com/ Anthropic], an AI lab focussed on the safety of large models. While his previous work was concerned with [https://80000hours.org/podcast/episodes/chris-olah-interpretability-research/ “transparency” and “interpretability” of large neural networks], especially vision models, Anthropic is focussing more on large language models, among other things working towards a ''\"general-purpose, text-based assistant that is aligned with human values, meaning that it is helpful, honest, and harmless\".''\n\n[https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] and his team at the [https://en.wikipedia.org/wiki/Center_for_Human-Compatible_Artificial_Intelligence Center for Human-Compatible Artificial Intelligence] (CHAI) have been working on [https://arxiv.org/abs/1806.06877 inverse reinforcement learning] (where the AI infers human values from observing human behavior) and [https://intelligence.org/files/CorrigibilityAISystems.pdf corrigibility], as well as attempts to disaggregate neural networks into “meaningful” subcomponents (see Filan, et al.’s [https://arxiv.org/abs/2103.03386 “Clusterability in neural networks”] and Hod et al.'s [https://openreview.net/forum?id꞊tFQyjbOz34 “Detecting modularity in deep neural networks]”). \n\nAlongside the more abstract [https://intelligence.org/files/TechnicalAgenda.pdf “agent foundations”] work they have become known for, [https://intelligence.org/ MIRI] recently announced their [https://www.lesswrong.com/posts/zRn6cLtxyNodudzhw/visible-thoughts-project-and-bounty-announcement “Visible Thoughts Project”] to test the hypothesis that ''“Language models can be made more understandable (and perhaps also more capable, though this is not the goal) by training them to produce visible thoughts.”''\n\n[https://en.wikipedia.org/wiki/OpenAI OpenAI] have recently been doing work on [https://openai.com/blog/summarizing-books/ iteratively summarizing books] (summarizing, and then summarizing the summary, etc.) as a method for scaling human oversight.\n\nStuart Armstrong’s recently launched [https://buildaligned.ai/ AlignedAI] are mainly working on [https://www.alignmentforum.org/s/u9uawicHx7Ng7vwxA concept extrapolation] from familiar to novel contexts, something he believes is “necessary and almost sufficient” for AI alignment.\n\n[https://www.redwoodresearch.org/ Redwood Research] (Buck Shlegeris, et al.) are trying to “handicap' GPT-3 to only produce non-violent completions of text prompts. ''“The idea is that there are many reasons we might ultimately want to apply some oversight function to an AI model, like ‘don't be deceitful’, and if we want to get AI teams to apply this we need to be able to incorporate these oversight predicates into the original model in an efficient manner.”''\n\nOught is an independent AI safety research organization led by Andreas Stuhlmüller and Jungwon Byun. 
They are researching methods for breaking up complex, hard-to-verify tasks into simpler, easier-to-verify tasks, with the aim of allowing us to maintain effective oversight over AIs."], "entry": "Plex's Answer to What are some AI alignment research agendas currently being pursued?", "id": "261255a6ddaea519624b70381c703157"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some objections the importance of AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some objections the importance of AI alignment?\n\nAnswer: [https://aisafety.com/author/soeren-elverlin/ Søren Elverlin] has compiled a list of counter-arguments and suggests dividing them into two kinds: weak and strong. \n\nWeak counter-arguments point to problems with the \"standard\" arguments (as given in, e.g., Bostrom's [https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies ''Superintelligence'']), especially shaky models and assumptions that are too strong. These arguments are often of a substantial quality and are often presented by people who themselves worry about AI safety. Elverin calls these objections \"weak\" because they do not attempt to imply that the probability of a bad outcome is close to zero: ''\"For example, even if you accept [https://www.alignmentforum.org/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds Paul Christiano's arguments against \"fast takeoff\"], they only drive the probability of this down to about 20%. Weak counter-arguments are interesting, but the decision to personally focus on AI safety doesn't strongly depend on the probability – anything above 5% is clearly a big enough deal that it doesn't make sense to work on other things.\"''\n\nStrong arguments argue that the probability of existential catastrophe due to misaligned AI is tiny, usually by some combination of claiming that AGI is impossible or very far away. For example, [https://en.wikipedia.org/wiki/Michael_L._Littman Michael Littman] has [https://www.youtube.com/watch?v꞊c9AbECvRt20&t꞊1559s suggested] that as (he believes) we're so far from AGI, there will be a long period of human history wherein we'll have ample time to grow up alongside powerful AIs and figure out how to align them.\n\nElverlin opines that ''\"There are few arguments that are both high-quality and strong enough to qualify as an 'objection to the importance of alignment'.\"'' He suggests [https://aiimpacts.org/conversation-with-rohin-shah/ Rohin Shah's arguments for \"alignment by default\"] as one of the better candidates.\n\n[https://intelligence.org/ MIRI]'s April fools [https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy \"Death With Dignity\" strategy] might be seen as an argument against the importance of working on alignment, but only in the sense that we might have almost no hope of solving it. In the same category are the \"something else will kill us first, so there's no point worrying about AI alignment\" arguments.", "question": "What are some objections the importance of AI alignment?", "answer": ["[https://aisafety.com/author/soeren-elverlin/ Søren Elverlin] has compiled a list of counter-arguments and suggests dividing them into two kinds: weak and strong. 
\n\nWeak counter-arguments point to problems with the \"standard\" arguments (as given in, e.g., Bostrom’s [https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dangers,_Strategies ''Superintelligence'']), especially shaky models and assumptions that are too strong. These arguments are often of a substantial quality and are often presented by people who themselves worry about AI safety. Elverin calls these objections “weak” because they do not attempt to imply that the probability of a bad outcome is close to zero: ''“For example, even if you accept [https://www.alignmentforum.org/posts/vwLxd6hhFvPbvKmBH/yudkowsky-and-christiano-discuss-takeoff-speeds Paul Christiano's arguments against “fast takeoff”], they only drive the probability of this down to about 20%. Weak counter-arguments are interesting, but the decision to personally focus on AI safety doesn't strongly depend on the probability – anything above 5% is clearly a big enough deal that it doesn't make sense to work on other things.”''\n\nStrong arguments argue that the probability of existential catastrophe due to misaligned AI is tiny, usually by some combination of claiming that AGI is impossible or very far away. For example, [https://en.wikipedia.org/wiki/Michael_L._Littman Michael Littman] has [https://www.youtube.com/watch?v꞊c9AbECvRt20&t꞊1559s suggested] that as (he believes) we’re so far from AGI, there will be a long period of human history wherein we’ll have ample time to grow up alongside powerful AIs and figure out how to align them.\n\nElverlin opines that ''“There are few arguments that are both high-quality and strong enough to qualify as an ‘objection to the importance of alignment’.”'' He suggests [https://aiimpacts.org/conversation-with-rohin-shah/ Rohin Shah's arguments for “alignment by default”] as one of the better candidates.\n\n[https://intelligence.org/ MIRI]'s April fools [https://www.lesswrong.com/posts/j9Q8bRmwCgXRYAgcJ/miri-announces-new-death-with-dignity-strategy \"Death With Dignity\" strategy] might be seen as an argument against the importance of working on alignment, but only in the sense that we might have almost no hope of solving it. In the same category are the “something else will kill us first, so there’s no point worrying about AI alignment” arguments."], "entry": "Plex's Answer to What are some objections to the importance of AI alignment?", "id": "15735b234b551c539754f7cbbbfc6998"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How close do AI experts think we are creating superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: How close do AI experts think we are creating superintelligence?\n\nAnswer: Nobody knows for sure when we will have AGI, or if we'll ever get there. [https://www.cold-takes.com/where-ai-forecasting-stands-today/ Open Philanthropy CEO Holden Karnofsky has analyzed a selection of recent expert surveys on the matter, as well as taking into account findings of computational neuroscience, economic history, probabilistic methods and failures of previous AI timeline estimates]. This all led him to estimate that ''\"there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).\"'' Karnofsky bemoans the lack of robust expert consensus on the matter and invites rebuttals to his claims in order to further the conversation. 
He compares AI forecasting to election forecasting (as opposed to academic political science) or market forecasting (as opposed to theoretical academics), thereby arguing that AI researchers may not be the \"experts\" we should trust in predicting AI timelines.\n\nOpinions proliferate, but given experts' (and non-experts') poor track record at predicting progress in AI, many researchers tend to be fairly agnostic about when superintelligent AI will be invented. \n\nUC-Berkeley AI professor [https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] has given his best guess as \"sometime in our children's lifetimes\", while [https://en.wikipedia.org/wiki/Ray_Kurzweil Ray Kurzweil] (Google's Director of Engineering) predicts human level AI by 2029 and an intelligence explosion by 2045. [https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Eliezer Yudkowsky] expects [https://www.econlib.org/archives/2017/01/my_end-of-the-w.html the end of the world], and [https://en.wikipedia.org/wiki/Elon_Musk Elon Musk] [https://twitter.com/elonmusk/status/1531328534169493506 expects AGI], before 2030.\n\nIf there's anything like a consensus answer at this stage, it would be something like: \"highly uncertain, maybe not for over a hundred years, maybe in less than fifteen, with around the middle of the century looking fairly plausible\".", "question": "How close do AI experts think we are creating superintelligence?", "answer": ["Nobody knows for sure when we will have AGI, or if we’ll ever get there. [https://www.cold-takes.com/where-ai-forecasting-stands-today/ Open Philanthropy CEO Holden Karnofsky has analyzed a selection of recent expert surveys on the matter, as well as taking into account findings of computational neuroscience, economic history, probabilistic methods and failures of previous AI timeline estimates]. This all led him to estimate that ''\"there is more than a 10% chance we'll see transformative AI within 15 years (by 2036); a ~50% chance we'll see it within 40 years (by 2060); and a ~2/3 chance we'll see it this century (by 2100).\"'' Karnofsky bemoans the lack of robust expert consensus on the matter and invites rebuttals to his claims in order to further the conversation. He compares AI forecasting to election forecasting (as opposed to academic political science) or market forecasting (as opposed to theoretical academics), thereby arguing that AI researchers may not be the \"experts” we should trust in predicting AI timelines.\n\nOpinions proliferate, but given experts’ (and non-experts’) poor track record at predicting progress in AI, many researchers tend to be fairly agnostic about when superintelligent AI will be invented. \n\nUC-Berkeley AI professor [https://en.wikipedia.org/wiki/Stuart_J._Russell Stuart Russell] has given his best guess as “sometime in our children’s lifetimes”, while [https://en.wikipedia.org/wiki/Ray_Kurzweil Ray Kurzweil] (Google’s Director of Engineering) predicts human level AI by 2029 and an intelligence explosion by 2045. 
[https://en.wikipedia.org/wiki/Eliezer_Yudkowsky Eliezer Yudkowsky] expects [https://www.econlib.org/archives/2017/01/my_end-of-the-w.html the end of the world], and [https://en.wikipedia.org/wiki/Elon_Musk Elon Musk] [https://twitter.com/elonmusk/status/1531328534169493506 expects AGI], before 2030.\n\nIf there’s anything like a consensus answer at this stage, it would be something like: “highly uncertain, maybe not for over a hundred years, maybe in less than fifteen, with around the middle of the century looking fairly plausible”."], "entry": "Plex's Answer to How close do AI experts think we are to creating superintelligence?", "id": "7b7bb338132f3372291ffafba3747d86"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why is AI alignment a hard problem?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why is AI alignment a hard problem?\n\nAnswer: One sense in which alignment is a hard problem is analogous to the reason rocket science is a hard problem. Relative to other engineering endeavors, rocket science had so many disasters because of the extreme stresses placed on various mechanical components and the narrow margins of safety required by stringent weight limits. A superintelligence would put vastly more \"stress\" on the software and hardware stack it is running on, which could cause many classes of failure which don't occur when you're working with subhuman systems.\n\nAlignment is also hard like space probes are hard. With recursively self-improving systems, you won't be able to go back and edit the code later if there is a catastrophic failure because it will competently deceive and resist you.\n\n''\"You may have only one shot. If something goes wrong, the system might be too 'high' for you to reach up and suddenly fix it. You can build error recovery mechanisms into it; space probes are supposed to accept software updates. If something goes wrong in a way that precludes getting future updates, though, you're screwed. You have lost the space probe.\"''\n\nAdditionally, alignment is hard like cryptographic security. Cryptographers attempt to safeguard against \"intelligent adversaries\" who search for flaws in a system which they can exploit to break it. ''\"Your code is not an intelligent adversary if everything goes right. If something goes wrong, it might try to defeat your safeguards…\"'' And at the stage where it's trying to defeat your safeguards, your code may have achieved the capabilities of a vast and perfectly coordinated team of superhuman-level hackers! So if there is even the tiniest flaw in your design, you can be certain that it will be found and exploited. As with standard cybersecurity, \"good under normal circumstances\" is just not good enough – your system needs to be unbreakably robust.\n\n''\"AI alignment: treat it like a cryptographic rocket probe. This is about how difficult you would expect it to be to build something smarter than you that was nice – given that basic agent theory says they're not automatically nice – and not die. You would expect that intuitively to be hard.\"'' Eliezer Yudkowsky\n\nAnother immense challenge is the fact that we currently have no idea how to reliably instill AIs with human-friendly goals. 
''Even if a consensus could be reached on a system of human values and morality'', it's entirely unclear how this could be fully and faithfully captured in code.\n\nFor a more in-depth view of this argument, see Yudkowsky's talk \"AI Alignment: Why It's Hard, and Where to Start\" below (full transcript [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ here]). For alternative views, see Paul Christiano's [https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38 \"AI alignment landscape\" talk], Daniel Kokotajlo and Wei Dai's [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk \"The Main Sources of AI Risk?\"] list, and [https://aiimpacts.org/conversation-with-rohin-shah/ Rohin Shah's much more optimistic position].\n\n(youtube)EUjc1WuyPT8(/youtube)", "question": "Why is AI alignment a hard problem?", "answer": ["One sense in which alignment is a hard problem is analogous to the reason rocket science is a hard problem. Relative to other engineering endeavors, rocket science had so many disasters because of the extreme stresses placed on various mechanical components and the narrow margins of safety required by stringent weight limits. A superintelligence would put vastly more “stress” on the software and hardware stack it is running on, which could cause many classes of failure which don’t occur when you’re working with subhuman systems.\n\nAlignment is also hard like space probes are hard. With recursively self-improving systems, you won’t be able to go back and edit the code later if there is a catastrophic failure because it will competently deceive and resist you.\n\n''\"You may have only one shot. If something goes wrong, the system might be too 'high' for you to reach up and suddenly fix it. You can build error recovery mechanisms into it; space probes are supposed to accept software updates. If something goes wrong in a way that precludes getting future updates, though, you’re screwed. You have lost the space probe.\"''\n\nAdditionally, alignment is hard like cryptographic security. Cryptographers attempt to safeguard against “intelligent adversaries” who search for flaws in a system which they can exploit to break it. ''“Your code is not an intelligent adversary if everything goes right. If something goes wrong, it might try to defeat your safeguards…”'' And at the stage where it’s trying to defeat your safeguards, your code may have achieved the capabilities of a vast and perfectly coordinated team of superhuman-level hackers! So if there is even the tiniest flaw in your design, you can be certain that it will be found and exploited. As with standard cybersecurity, \"good under normal circumstances\" is just not good enough – your system needs to be unbreakably robust.\n\n''\"AI alignment: treat it like a cryptographic rocket probe. This is about how difficult you would expect it to be to build something smarter than you that was nice – given that basic agent theory says they’re not automatically nice – and not die. You would expect that intuitively to be hard.\"'' Eliezer Yudkowsky\n\nAnother immense challenge is the fact that we currently have no idea how to reliably instill AIs with human-friendly goals. 
''Even if a consensus could be reached on a system of human values and morality'', it’s entirely unclear how this could be fully and faithfully captured in code.\n\nFor a more in-depth view of this argument, see Yudkowsky's talk \"AI Alignment: Why It’s Hard, and Where to Start\" below (full transcript [https://intelligence.org/2016/12/28/ai-alignment-why-its-hard-and-where-to-start/ here]). For alternative views, see Paul Christiano's [https://ai-alignment.com/ai-alignment-landscape-d3773c37ae38 “AI alignment landscape” talk], Daniel Kokotajlo and Wei Dai’s [https://www.alignmentforum.org/posts/WXvt8bxYnwBYpy9oT/the-main-sources-of-ai-risk “The Main Sources of AI Risk?”] list, and [https://aiimpacts.org/conversation-with-rohin-shah/ Rohin Shah’s much more optimistic position].\n\n(youtube)EUjc1WuyPT8(/youtube)"], "entry": "Plex's Answer to Why is AI alignment a hard problem?", "id": "d75200b00a77f7a27f883d92e2d6e26f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some good books about AGI safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some good books about AGI safety?\n\nAnswer: ''[https://brianchristian.org/the-alignment-problem/ The Alignment Problem]'' (2020) by Brian Christian is the most recent in-depth guide to the field.\n\nThe book which first made the case to the public is Nick Bostrom's ''[https://publicism.info/philosophy/superintelligence/ Superintelligence]'' (2014). It gives an excellent overview of the state of the field (as it was then) and makes a strong case for the subject being important, as well as exploring many fascinating adjacent topics. However, it does not cover newer developments, such as [[What are mesa-optimizers?┊mesa-optimizers]] or [[What are language models?┊language models]].\n\nThere's also ''[https://en.wikipedia.org/wiki/Human_Compatible Human Compatible]'' (2019) by Stuart Russell, which gives a more up-to-date review of developments, with an emphasis on the approaches that the Center for Human-Compatible AI are working on, such as cooperative inverse reinforcement learning. 
There's a good [https://slatestarcodex.com/2020/01/30/book-review-human-compatible/ review/summary on SlateStarCodex].\n\nAlthough not limited to AI safety, ''[https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/ The AI Does Not Hate You]'' (2020) is an entertaining and accessible outline of both the core issues and an exploration of some of the community and culture of the people working on it.\n\nVarious other books explore the issues in an informed way, such as [http://www.tobyord.com/ Toby Ord]'s ''[https://en.wikipedia.org/wiki/The_Precipice:_Existential_Risk_and_the_Future_of_Humanity The Precipice]'' (2020), [https://en.wikipedia.org/wiki/Max_Tegmark Max Tegmark]'s ''[https://en.wikipedia.org/wiki/Life_3.0 Life 3.0]'' (2017), [https://www.ynharari.com/ Yuval Noah Harari]'s ''[https://www.ynharari.com/book/homo-deus/ Homo Deus]'' (2016), [https://www.fhi.ox.ac.uk/team/stuart-armstrong/ Stuart Armstrong]'s ''[https://smarterthan.us/toc/ Smarter Than Us]'' (2014), and [http://lukeprog.com/ Luke Muehlhauser]'s ''[https://intelligenceexplosion.com/ Facing the Intelligence Explosion]'' (2013).", "question": "What are some good books about AGI safety?", "answer": ["''[https://brianchristian.org/the-alignment-problem/ The Alignment Problem]'' (2020) by Brian Christian is the most recent in-depth guide to the field.\n\nThe book which first made the case to the public is Nick Bostrom’s ''[https://publicism.info/philosophy/superintelligence/ Superintelligence]'' (2014). It gives an excellent overview of the state of the field (as it was then) and makes a strong case for the subject being important, as well as exploring many fascinating adjacent topics. However, it does not cover newer developments, such as [[What are mesa-optimizers?┊mesa-optimizers]] or [[What are language models?┊language models]].\n\nThere's also ''[https://en.wikipedia.org/wiki/Human_Compatible Human Compatible]'' (2019) by Stuart Russell, which gives a more up-to-date review of developments, with an emphasis on the approaches that the Center for Human-Compatible AI are working on, such as cooperative inverse reinforcement learning. 
There's a good [https://slatestarcodex.com/2020/01/30/book-review-human-compatible/ review/summary on SlateStarCodex].\n\nAlthough not limited to AI safety, ''[https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795 The AI Does Not Hate You]'' (2020) is an entertaining and accessible outline of both the core issues and an exploration of some of the community and culture of the people working on it.\n\nVarious other books explore the issues in an informed way, such as [http://www.tobyord.com/ Toby Ord]’s ''[https://en.wikipedia.org/wiki/The_Precipice:_Existential_Risk_and_the_Future_of_Humanity The Precipice]'' (2020), [https://en.wikipedia.org/wiki/Max_Tegmark Max Tegmark]’s ''[https://en.wikipedia.org/wiki/Life_3.0 Life 3.0]'' (2017), [https://www.ynharari.com/ Yuval Noah Harari]’s ''[https://www.ynharari.com/book/homo-deus/ Homo Deus]'' (2016), [https://www.fhi.ox.ac.uk/team/stuart-armstrong/ Stuart Armstrong]’s ''[https://smarterthan.us/toc/ Smarter Than Us]'' (2014), and [http://lukeprog.com/ Luke Muehlhauser]’s ''[https://intelligenceexplosion.com/ Facing the Intelligence Explosion]'' (2013)."], "entry": "Plex's Answer to What are some good books about AGI safety?", "id": "41f2c5dfdd01984fe30c825cced20889"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Wouldn't it be a good thing for humanity die out?", "authors": "n/a", "date_published": "n/a", "text": "Question: Wouldn't it be a good thing for humanity die out?\n\nAnswer: In the words of [https://mindingourway.com/a-torch-in-darkness/ Nate Soares]:\n
I don't expect humanity to survive much longer.\n\nOften, when someone learns this, they say:
\n\"Eh, I think that would be all right.\"\n\nSo allow me to make this very clear: it would not be \"all right.\"\n\nImagine a little girl running into the road to save her pet dog. Imagine she succeeds, only to be hit by a car herself. Imagine she lives only long enough to die in pain.\n\nThough you may imagine this thing, you cannot feel the full tragedy. You can't comprehend the rich inner life of that child. You can't understand her potential; your mind is not itself large enough to contain the sadness of an entire life cut short.\n\nYou can only catch a glimpse of what is lost—
\n—when one single human being dies.\n\nNow tell me again how it would be \"all right\" if every single person were to die at once.\n\nMany people, when they picture the end of humankind, pattern match the idea to some romantic tragedy, where humans, with all their hate and all their avarice, had been unworthy of the stars since the very beginning, and deserved their fate. A sad but poignant ending to our tale.\n\nAnd indeed, there are many parts of human nature that I hope we leave behind before we venture to the heavens. But in our nature is also everything worth bringing with us. Beauty and curiosity and love, a capacity for fun and growth and joy: these are our birthright, ours to bring into the barren night above.\n\nCalamities seem more salient when unpacked. It is far harder to kill a hundred people in their sleep, with a knife, than it is to order a nuclear bomb dropped on Hiroshima. Your brain can't multiply, you see: it can only look at a hypothetical image of a broken city and decide it's not that bad. It can only conjure an image of a barren planet and say \"eh, we had it coming.\"\n\nBut if you unpack the scenario, if you try to comprehend all the lives snuffed out, all the children killed, the final spark of human joy and curiosity extinguished, all our potential squandered…\n\nI promise you that the extermination of humankind would be horrific.
", "question": "Wouldn't it be a good thing for humanity die out?", "answer": ["In the words of [https://mindingourway.com/a-torch-in-darkness/ Nate Soares]:\n
I don’t expect humanity to survive much longer.\n\nOften, when someone learns this, they say:
\n\"Eh, I think that would be all right.\"\n\nSo allow me to make this very clear: it would not be \"all right.\"\n\nImagine a little girl running into the road to save her pet dog. Imagine she succeeds, only to be hit by a car herself. Imagine she lives only long enough to die in pain.\n\nThough you may imagine this thing, you cannot feel the full tragedy. You can’t comprehend the rich inner life of that child. You can’t understand her potential; your mind is not itself large enough to contain the sadness of an entire life cut short.\n\nYou can only catch a glimpse of what is lost—
\n—when one single human being dies.\n\nNow tell me again how it would be \"all right\" if every single person were to die at once.\n\nMany people, when they picture the end of humankind, pattern match the idea to some romantic tragedy, where humans, with all their hate and all their avarice, had been unworthy of the stars since the very beginning, and deserved their fate. A sad but poignant ending to our tale.\n\nAnd indeed, there are many parts of human nature that I hope we leave behind before we venture to the heavens. But in our nature is also everything worth bringing with us. Beauty and curiosity and love, a capacity for fun and growth and joy: these are our birthright, ours to bring into the barren night above.\n\nCalamities seem more salient when unpacked. It is far harder to kill a hundred people in their sleep, with a knife, than it is to order a nuclear bomb dropped on Hiroshima. Your brain can’t multiply, you see: it can only look at a hypothetical image of a broken city and decide it’s not that bad. It can only conjure an image of a barren planet and say \"eh, we had it coming.\"\n\nBut if you unpack the scenario, if you try to comprehend all the lives snuffed out, all the children killed, the final spark of human joy and curiosity extinguished, all our potential squandered…\n\nI promise you that the extermination of humankind would be horrific.
"], "entry": "Plex's Answer to Wouldn't it be a good thing for humanity to die out?", "id": "9955c5dad811ac30e011b1520534ce31"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might a superintelligent AI be dangerous?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might a superintelligent AI be dangerous?\n\nAnswer: A commonly heard argument goes: yes, a superintelligent AI might be far smarter than Einstein, but it's still just one program, sitting in a supercomputer somewhere. That could be bad if an enemy government controls it and asks it to help invent superweapons – but then the problem is the enemy government, not the AI ''per se''. Is there any reason to be afraid of the AI itself? Suppose the AI did appear to be hostile, suppose it even wanted to take over the world: why should we think it has any chance of doing so?\n\nThere are numerous carefully thought-out AGI-related scenarios which could result in the accidental extinction of humanity. But rather than focussing on any of these individually, it might be more helpful to think in general terms.\n
\"Transistors can fire about 10 million times faster than human brain cells, so it's possible we'll eventually have digital minds operating 10 million times faster than us, meaning from a decision-making perspective we'd look to them like stationary objects, like plants or rocks... To give you a sense, [https://vimeo.com/83664407 here]'s what humans look like when slowed down by only around 100x.\"''
\n
Watch that, and now try to imagine advanced AI technology running for a single year around the world, making decisions and taking actions 10 million times faster than we can. That year for us becomes 10 million subjective years for the AI, in which \"...there are these nearly-stationary plant-like or rock-like \"human\" objects around that could easily be taken apart for, say, biofuel or carbon atoms, if you could just get started building a human-disassembler. Visualizing things this way, you can start to see all the ways that a digital civilization can develop very quickly into a situation where there are no humans left alive, just as human civilization doesn't show much regard for plants or wildlife or insects.\"\n[https://acritch.com/ Andrew Critch] - [https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps Slow Motion Videos as AI Risk Intuition Pumps]
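The arithmetic behind this intuition pump is simple but worth making explicit. The sketch below is only a back-of-the-envelope conversion; the 10,000,000x ratio is the figure assumed in the quote above, not a measured constant.\n
<pre>
# Back-of-the-envelope conversion for the speed-difference intuition pump above.
# SPEEDUP is the assumed ratio of digital to biological thinking speed from the quote.

SPEEDUP = 10_000_000

def subjective_years(wall_clock_years, speedup=SPEEDUP):
    # Subjective years experienced by the faster mind per wall-clock duration.
    return wall_clock_years * speedup

print(subjective_years(1))        # one year of real time -> 10,000,000 subjective years
print(subjective_years(1 / 365))  # a single day -> roughly 27,000 subjective years
</pre>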
\n \nAnd even putting aside these issues of speed and subjective time, the difference in (intelligence-based) power-to-manipulate-the-world between a self-improving superintelligent AGI and humanity could be far more extreme than the difference in such power between humanity and insects.\n\n\"[https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/ AI Could Defeat All Of Us Combined]\" is a more in-depth argument by the CEO of [https://en.wikipedia.org/wiki/Open_Philanthropy_(organization) Open Philanthropy].", "question": "Why might a superintelligent AI be dangerous?", "answer": ["A commonly heard argument goes: yes, a superintelligent AI might be far smarter than Einstein, but it’s still just one program, sitting in a supercomputer somewhere. That could be bad if an enemy government controls it and asks it to help invent superweapons – but then the problem is the enemy government, not the AI ''per se''. Is there any reason to be afraid of the AI itself? Suppose the AI did appear to be hostile, suppose it even wanted to take over the world: why should we think it has any chance of doing so?\n\nThere are numerous carefully thought-out AGI-related scenarios which could result in the accidental extinction of humanity. But rather than focussing on any of these individually, it might be more helpful to think in general terms.\n
\"Transistors can fire about 10 million times faster than human brain cells, so it's possible we'll eventually have digital minds operating 10 million times faster than us, meaning from a decision-making perspective we'd look to them like stationary objects, like plants or rocks... To give you a sense, [https://vimeo.com/83664407 here]'s what humans look like when slowed down by only around 100x.\"''
\n
Watch that, and now try to imagine advanced AI technology running for a single year around the world, making decisions and taking actions 10 million times faster than we can. That year for us becomes 10 million subjective years for the AI, in which \"...there are these nearly-stationary plant-like or rock-like \"human\" objects around that could easily be taken apart for, say, biofuel or carbon atoms, if you could just get started building a human-disassembler. Visualizing things this way, you can start to see all the ways that a digital civilization can develop very quickly into a situation where there are no humans left alive, just as human civilization doesn't show much regard for plants or wildlife or insects.\"\n[https://acritch.com/ Andrew Critch] - [https://www.lesswrong.com/posts/Ccsx339LE9Jhoii9K/slow-motion-videos-as-ai-risk-intuition-pumps Slow Motion Videos as AI Risk Intuition Pumps]
\n \nAnd even putting aside these issues of speed and subjective time, the difference in (intelligence-based) power-to-manipulate-the-world between a self-improving superintelligent AGI and humanity could be far more extreme than the difference in such power between humanity and insects.\n\n“[https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/ AI Could Defeat All Of Us Combined]” is a more in-depth argument by the CEO of [https://en.wikipedia.org/wiki/Open_Philanthropy_(organization) Open Philanthropy]."], "entry": "Answer to Why might a superintelligent AI be dangerous?", "id": "a2cda7053859ebf12658629a12f6de62"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can't we just make a \"child AI\" and raise it?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we just make a \"child AI\" and raise it?\n\nAnswer: A potential solution is to create an AI that has the same values and morality as a human by creating a child AI and raising it. There's nothing intrinsically flawed with this procedure. However, this suggestion is deceptive because it sounds simpler than it is.\n\nIf you get a chimpanzee baby and raise it in a human family, it does not learn to speak a human language. Human babies can grow into adult humans because the babies have specific properties, e.g. a prebuilt language module that gets activated during childhood.\n\nIn order to make a child AI that has the potential to turn into the type of adult AI we would find acceptable, the child AI has to have specific properties. The task of building a child AI with these properties involves building a system that can interpret what humans mean when we try to teach the child to do various tasks. [https://humancompatible.ai/ People] are currently working on ways to program agents that can cooperatively interact with humans to learn what they want.", "question": "Why can't we just make a \"child AI\" and raise it?", "answer": ["A potential solution is to create an AI that has the same values and morality as a human by creating a child AI and raising it. There’s nothing intrinsically flawed with this procedure. However, this suggestion is deceptive because it sounds simpler than it is.\n\nIf you get a chimpanzee baby and raise it in a human family, it does not learn to speak a human language. Human babies can grow into adult humans because the babies have specific properties, e.g. a prebuilt language module that gets activated during childhood.\n\nIn order to make a child AI that has the potential to turn into the type of adult AI we would find acceptable, the child AI has to have specific properties. The task of building a child AI with these properties involves building a system that can interpret what humans mean when we try to teach the child to do various tasks. [https://humancompatible.ai/ People] are currently working on ways to program agents that can cooperatively interact with humans to learn what they want."], "entry": "Answer to Why can't we just make a \"child AI\" and raise it?", "id": "6f3ed03c75236775220c98314e0f4c04"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is causal decision theory?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is causal decision theory?\n\nAnswer:

Causal Decision Theory – CDT – is a branch of [https://www.lesswrong.com/tag/decision-theory decision theory] which advises an agent to take the actions whose causal consequences maximize the probability of the desired outcomes [#fn1 1]. Like any branch of decision theory, it prescribes taking the action that maximizes [https://www.lesswrong.com/tag/expected-utility expected utility], i.e. the action which maximizes the sum of the utility obtained in each outcome weighted by the probability of that outcome occurring, given your action. Different decision theories correspond to different ways of construing this dependence between actions and outcomes. CDT focuses on the causal relations between one's actions and outcomes, whilst [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory] – EDT – concerns itself with what an action indicates about the world (which is operationalized by the conditional probability). That is, according to CDT, a rational agent should track the available causal relations linking its actions to the desired outcome and take the action which best enhances the chances of the desired outcome.

One common example where EDT and CDT diverge is the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion]: \"Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer. Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?\" CDT would recommend smoking since there is no causal connection between smoking and cancer. They are both caused by a gene, but have no direct causal connection with each other. EDT, on the other hand, would recommend against smoking, since smoking is evidence of having the mentioned gene and thus should be avoided.

The core aspect of CDT, mathematically, is that it uses probabilities of conditionals in place of conditional probabilities [#fn2 2]. The probability of a conditional is the probability of the whole conditional being true, whereas the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B┊A) – is simply the [https://www.lesswrong.com/tag/bayesian-probability Bayesian probability] of the event B happening given that we know A happened; this is what EDT uses. The probability of a conditional – P(A > B) – is the probability that the conditional 'A implies B' is true, i.e. the probability that the counterfactual 'If A, then B' is the case. Since counterfactual analysis is the key tool used to speak about causality, probabilities of conditionals are said to mirror causal relations. In most cases these two probabilities track each other, and CDT and EDT give the same answers. However, some particular problems have arisen where their prescriptions for rational action diverge, such as the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion] problem – where CDT seems to give the more reasonable prescription – and [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's problem] – where CDT seems unreasonable. David Lewis proved [#fn3 3] that probabilities of conditionals cannot always track conditional probabilities. Hence, evidential relations are not the same as causal relations, and CDT and EDT will diverge in some cases.
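To make the divergence concrete, here is a minimal sketch in Python with made-up numbers: the lesion prior, the conditional probabilities and the utilities below are illustrative assumptions, not figures from the literature. It computes the expected utility of smoking by conditioning on the action (EDT) and by intervening on it (CDT), and reproduces the standard result that EDT advises against smoking while CDT advises smoking.\n
<pre>
# A self-contained sketch of the Smoking Lesion calculation, with made-up numbers
# (the prior, the conditional probabilities and the utilities are illustrative only).
# The lesion causes both smoking and cancer; smoking has no causal effect on cancer.

P_LESION = {True: 0.3, False: 0.7}               # hypothetical prior over having the lesion
P_SMOKE_GIVEN_LESION = {True: 0.8, False: 0.2}   # P(smoke | lesion status)
P_CANCER_GIVEN_LESION = {True: 0.8, False: 0.1}  # P(cancer | lesion status)

def utility(smoke, cancer):
    # Hypothetical payoffs: smoking is worth +10, cancer costs -100.
    return (10 if smoke else 0) - (100 if cancer else 0)

def expected_utility(smoke, p_lesion):
    # Average utility over lesion status; cancer depends only on the lesion.
    return sum(
        p_lesion[l] * (P_CANCER_GIVEN_LESION[l] * utility(smoke, True)
                       + (1 - P_CANCER_GIVEN_LESION[l]) * utility(smoke, False))
        for l in (True, False)
    )

def edt_value(smoke):
    # EDT conditions on the action: choosing to smoke is evidence of the lesion (Bayes).
    likelihood = {l: P_SMOKE_GIVEN_LESION[l] if smoke else 1 - P_SMOKE_GIVEN_LESION[l]
                  for l in (True, False)}
    norm = sum(likelihood[l] * P_LESION[l] for l in (True, False))
    posterior = {l: likelihood[l] * P_LESION[l] / norm for l in (True, False)}
    return expected_utility(smoke, posterior)

def cdt_value(smoke):
    # CDT intervenes on the action: do(smoke) leaves the lesion distribution unchanged.
    return expected_utility(smoke, P_LESION)

for name, value in (('EDT', edt_value), ('CDT', cdt_value)):
    best = max((True, False), key=value)
    print(name, 'EU(smoke) =', round(value(True), 1), 'EU(abstain) =', round(value(False), 1),
          '-> smoke' if best else '-> abstain')
</pre>
The only difference between the two evaluations is which distribution over the lesion gets fed into the same expected-utility sum: the posterior given the action (EDT) or the unchanged prior (CDT). That single substitution is the CDT/EDT disagreement in miniature.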

References

  1. [http://plato.stanford.edu/entries/decision-causal/ http://plato.stanford.edu/entries/decision-causal/]
  2. Lewis, David. (1981) \"Causal Decision Theory,\" Australasian Journal of Philosophy 59 (1981): 5- 30.
  3. Lewis, D. (1976), \"Probabilities of conditionals and conditional probabilities\", The Philosophical Review (Duke University Press) 85 (3): 297–315

See also

  • [https://www.lesswrong.com/tag/decision-theory Decision theory]
  • [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory]
", "question": "What is causal decision theory?", "answer": ["

Causal Decision Theory – CDT – is a branch of [https://www.lesswrong.com/tag/decision-theory decision theory] which advises an agent to take the actions whose causal consequences maximize the probability of the desired outcomes [#fn1 1]. Like any branch of decision theory, it prescribes taking the action that maximizes [https://www.lesswrong.com/tag/expected-utility expected utility], i.e. the action which maximizes the sum of the utility obtained in each outcome weighted by the probability of that outcome occurring, given your action. Different decision theories correspond to different ways of construing this dependence between actions and outcomes. CDT focuses on the causal relations between one’s actions and outcomes, whilst [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory] – EDT – concerns itself with what an action indicates about the world (which is operationalized by the conditional probability). That is, according to CDT, a rational agent should track the available causal relations linking its actions to the desired outcome and take the action which best enhances the chances of the desired outcome.

One common example where EDT and CDT diverge is the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion]: “Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer. Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?” CDT would recommend smoking since there is no causal connection between smoking and cancer. They are both caused by a gene, but have no direct causal connection with each other. EDT, on the other hand, would recommend against smoking, since smoking is evidence of having the mentioned gene and thus should be avoided.

The core aspect of CDT, mathematically, is that it uses probabilities of conditionals in place of conditional probabilities [#fn2 2]. The probability of a conditional is the probability of the whole conditional being true, whereas the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B┊A) – is simply the [https://www.lesswrong.com/tag/bayesian-probability Bayesian probability] of the event B happening given that we know A happened; this is what EDT uses. The probability of a conditional – P(A > B) – is the probability that the conditional 'A implies B' is true, i.e. the probability that the counterfactual ‘If A, then B’ is the case. Since counterfactual analysis is the key tool used to speak about causality, probabilities of conditionals are said to mirror causal relations. In most cases these two probabilities track each other, and CDT and EDT give the same answers. However, some particular problems have arisen where their prescriptions for rational action diverge, such as the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion] problem – where CDT seems to give the more reasonable prescription – and [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's problem] – where CDT seems unreasonable. David Lewis proved [#fn3 3] that probabilities of conditionals cannot always track conditional probabilities. Hence, evidential relations are not the same as causal relations, and CDT and EDT will diverge in some cases.

References

  1. [http://plato.stanford.edu/entries/decision-causal/ http://plato.stanford.edu/entries/decision-causal/]
  2. Lewis, David. (1981) \"Causal Decision Theory,\" Australasian Journal of Philosophy 59 (1981): 5- 30.
  3. Lewis, D. (1976), \"Probabilities of conditionals and conditional probabilities\", The Philosophical Review (Duke University Press) 85 (3): 297–315

See also

  • [https://www.lesswrong.com/tag/decision-theory Decision theory]
  • [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory]
"], "entry": "Linnea's Answer to What is causal decision theory?", "id": "1b4b7e8381aca7eb018095ba789d98f5"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is an \"s-risk\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is an \"s-risk\"?\n\nAnswer:

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of [https://www.lesswrong.com/tag/existential-risk existential risk] (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to \"permanently and drastically curtail [Earth-originating intelligent life's] potential\". Most existential risks are of the form \"event E happens which drastically reduces the number of conscious experiences in the future\". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish x-risks that are s-risks, x-risks involving human extinction, x-risks that involve immense suffering and human extinction, and x-risks that involve neither. For example:

 ┊ extinction risk ┊ non-extinction risk
suffering risk ┊ Misaligned AGI wipes out humans, simulates many suffering alien civilizations. ┊ Misaligned AGI tiles the universe with experiences of severe suffering.
non-suffering risk ┊ Misaligned AGI wipes out humans. ┊ Misaligned AGI keeps humans as \"pets,\" limiting growth but not causing immense suffering.

A related concept is [https://arbital.com/p/hyperexistential_separation/ hyperexistential risk], the risk of \"fates worse than death\" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since \"tiling the universe with experiences of severe suffering\" would likely be worse than death.

There are two [https://wiki.lesswrong.com/wiki/EA EA] organizations with s-risk prevention research as their primary focus: the [https://www.lesswrong.com/tag/center-on-long-term-risk-clr Center on Long-Term Risk] (CLR) and the [https://centerforreducingsuffering.org/ Center for Reducing Suffering]. Much of CLR's work is on suffering-focused [https://wiki.lesswrong.com/wiki/AI_safety AI safety] and [https://www.lesswrong.com/tag/crucial-considerations crucial considerations]. Although to a much lesser extent, the [https://www.lesswrong.com/tag/machine-intelligence-research-institute-miri Machine Intelligence Research Institute] and [https://www.lesswrong.com/tag/future-of-humanity-institute-fhi Future of Humanity Institute] have investigated strategies to prevent s-risks too. 

Another approach to reducing s-risk is to \"expand the moral circle\" [https://magnusvinding.com/2018/09/04/moral-circle-expansion-might-increase-future-suffering/ together] with raising concern for suffering, so that future (post)human civilizations and AI are less likely to [https://www.lesswrong.com/tag/instrumental-value instrumentally] cause suffering to non-human minds such as animals or digital sentience. [http://www.sentienceinstitute.org/ Sentience Institute] works on this value-spreading problem.

 

See also

  • [https://www.lesswrong.com/tag/center-on-long-term-risk-clr Center on Long-Term Risk]
  • [https://www.lesswrong.com/tag/existential-risk Existential risk]
  • [https://www.lesswrong.com/tag/abolitionism Abolitionism]
  • [https://wiki.lesswrong.com/wiki/Mind_crime Mind crime]
  • [https://www.lesswrong.com/tag/utilitarianism Utilitarianism], [https://www.lesswrong.com/tag/hedonism Hedonism]

 

External links

  • [https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-global-priority/ Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)]
  • [https://foundational-research.org/s-risks-talk-eag-boston-2017/ Introductory talk on s-risks (FRI)]
  • [https://foundational-research.org/risks-of-astronomical-future-suffering/ Risks of Astronomical Future Suffering (FRI)]
  • [https://foundational-research.org/files/suffering-focused-ai-safety.pdf Suffering-focused AI safety: Why \"fail-safe\" measures might be our top intervention PDF (FRI)]
  • [https://foundational-research.org/artificial-intelligence-and-its-implications-for-future-suffering Artificial Intelligence and Its Implications for Future Suffering (FRI)]
  • [https://sentience-politics.org/expanding-moral-circle-reduce-suffering-far-future/ Expanding our moral circle to reduce suffering in the far future (Sentience Politics)]
  • [https://sentience-politics.org/philosophy/the-importance-of-the-future/ The Importance of the Far Future (Sentience Politics)]
", "question": "What is an \"s-risk\"?", "answer": ["

(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of [https://www.lesswrong.com/tag/existential-risk existential risk] (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to \"permanently and drastically curtail [Earth-originating intelligent life's] potential\". Most existential risks are of the form \"event E happens which drastically reduces the number of conscious experiences in the future\". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish x-risks that are s-risks, x-risks involving human extinction, x-risks that involve immense suffering and human extinction, and x-risks that involve neither. For example:

 ┊ extinction risk ┊ non-extinction risk
suffering risk ┊ Misaligned AGI wipes out humans, simulates many suffering alien civilizations. ┊ Misaligned AGI tiles the universe with experiences of severe suffering.
non-suffering risk ┊ Misaligned AGI wipes out humans. ┊ Misaligned AGI keeps humans as \"pets,\" limiting growth but not causing immense suffering.

A related concept is [https://arbital.com/p/hyperexistential_separation/ hyperexistential risk], the risk of \"fates worse than death\" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se. But arguably all s-risks are hyperexistential, since \"tiling the universe with experiences of severe suffering\" would likely be worse than death.

There are two [https://wiki.lesswrong.com/wiki/EA EA] organizations with s-risk prevention research as their primary focus: the [https://www.lesswrong.com/tag/center-on-long-term-risk-clr Center on Long-Term Risk] (CLR) and the [https://centerforreducingsuffering.org/ Center for Reducing Suffering]. Much of CLR's work is on suffering-focused [https://wiki.lesswrong.com/wiki/AI_safety AI safety] and [https://www.lesswrong.com/tag/crucial-considerations crucial considerations]. Although to a much lesser extent, the [https://www.lesswrong.com/tag/machine-intelligence-research-institute-miri Machine Intelligence Research Institute] and [https://www.lesswrong.com/tag/future-of-humanity-institute-fhi Future of Humanity Institute] have investigated strategies to prevent s-risks too. 

Another approach to reducing s-risk is to \"expand the moral circle\" [https://magnusvinding.com/2018/09/04/moral-circle-expansion-might-increase-future-suffering/ together] with raising concern for suffering, so that future (post)human civilizations and AI are less likely to [https://www.lesswrong.com/tag/instrumental-value instrumentally] cause suffering to non-human minds such as animals or digital sentience. [http://www.sentienceinstitute.org/ Sentience Institute] works on this value-spreading problem.

 

See also

  • [https://www.lesswrong.com/tag/center-on-long-term-risk-clr Center on Long-Term Risk]
  • [https://www.lesswrong.com/tag/existential-risk Existential risk]
  • [https://www.lesswrong.com/tag/abolitionism Abolitionism]
  • [https://wiki.lesswrong.com/wiki/Mind_crime Mind crime]
  • [https://www.lesswrong.com/tag/utilitarianism Utilitarianism], [https://www.lesswrong.com/tag/hedonism Hedonism]

 

External links

  • [https://foundational-research.org/reducing-risks-of-astronomical-suffering-a-neglected-global-priority/ Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)]
  • [https://foundational-research.org/s-risks-talk-eag-boston-2017/ Introductory talk on s-risks (FRI)]
  • [https://foundational-research.org/risks-of-astronomical-future-suffering/ Risks of Astronomical Future Suffering (FRI)]
  • [https://foundational-research.org/files/suffering-focused-ai-safety.pdf Suffering-focused AI safety: Why \"fail-safe\" measures might be our top intervention PDF (FRI)]
  • [https://foundational-research.org/artificial-intelligence-and-its-implications-for-future-suffering Artificial Intelligence and Its Implications for Future Suffering (FRI)]
  • [https://sentience-politics.org/expanding-moral-circle-reduce-suffering-far-future/ Expanding our moral circle to reduce suffering in the far future (Sentience Politics)]
  • [https://sentience-politics.org/philosophy/the-importance-of-the-future/ The Importance of the Far Future (Sentience Politics)]
"], "entry": "Linnea's Answer to What is an \"s-risk\"?", "id": "f54d6d7d2a3e37e6b876944cb085b735"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is an \"intelligence explosion\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is an \"intelligence explosion\"?\n\nAnswer: The intelligence explosion idea was expressed by statistician [http://www.incompleteideas.net/papers/Good65ultraintelligent.pdf I.J. Good in 1965]:\n
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
\nThe argument is this: Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and [http://www.cs.cornell.edu/courses/cs4860/2012fa/MacKenzie-TheAutomationOfProof.pdf found a more elegant proof] for one of them than Russell and Whitehead had given in ''Principia Mathematica''. By the late 1990s, 'expert systems' had surpassed human skill for a [http://www.amazon.com/dp// wide range of tasks]. In 1997, IBM's Deep Blue computer beat the world chess champion, and in 2011, IBM's Watson computer beat the best human players at a much more complicated game: [http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r꞊2&ref꞊homepage&src꞊me&pagewanted꞊all Jeopardy!]. Recently, [http://commonsenseatheism.com/wp-content/uploads/2011/02/King-The-Automation-of-Science.pdf a robot named Adam] was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.\n\nComputers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence.\n\nThis is what is meant by the 'intelligence explosion' in this FAQ.\n\nSee also:\n\n* Vinge, [http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html The Coming Technological Singularity]\n* Wikipedia, [http://en.wikipedia.org/wiki/Technological_singularity Technological Singularity]\n* Chalmers, [http://commonsenseatheism.com/wp-content/uploads/2011/01/Chalmers-The-Singularity-a-philosophical-analysis.pdf The Singularity: A Philosophical Analysis]", "question": "What is an \"intelligence explosion\"?", "answer": ["The intelligence explosion idea was expressed by statistician [http://www.incompleteideas.net/papers/Good65ultraintelligent.pdf I.J. Good in 1965]:\n
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
\nThe argument is this: Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and [http://www.cs.cornell.edu/courses/cs4860/2012fa/MacKenzie-TheAutomationOfProof.pdf found a more elegant proof] for one of them than Russell and Whitehead had given in ''Principia Mathematica''. By the late 1990s, ‘expert systems’ had surpassed human skill for a [http://www.amazon.com/dp/0521122937/ wide range of tasks]. In 1997, IBM’s Deep Blue computer beat the world chess champion, and in 2011, IBM’s Watson computer beat the best human players at a much more complicated game: [http://www.nytimes.com/2011/02/17/science/17jeopardy-watson.html?_r꞊2&ref꞊homepage&src꞊me&pagewanted꞊all Jeopardy!]. Recently, [http://commonsenseatheism.com/wp-content/uploads/2011/02/King-The-Automation-of-Science.pdf a robot named Adam] was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results.\n\nComputers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an ‘intelligence explosion’ resulting in a machine superintelligence.\n\nThis is what is meant by the ‘intelligence explosion’ in this FAQ.\n\nSee also:\n\n* Vinge, [http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html The Coming Technological Singularity]\n* Wikipedia, [http://en.wikipedia.org/wiki/Technological_singularity Technological Singularity]\n* Chalmers, [http://commonsenseatheism.com/wp-content/uploads/2011/01/Chalmers-The-Singularity-a-philosophical-analysis.pdf The Singularity: A Philosophical Analysis]"], "entry": "Answer to What is an \"intelligence explosion\"?", "id": "47724ade1da215d375ef21a6f1d12b01"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What harm could a single superintelligence do when it took so many humans build civilization?", "authors": "n/a", "date_published": "n/a", "text": "Question: What harm could a single superintelligence do when it took so many humans build civilization?\n\nAnswer: Superintelligence has an advantage that an early human didn't – the entire context of human civilization and technology, there for it to manipulate socially or technologically.", "question": "What harm could a single superintelligence do when it took so many humans build civilization?", "answer": ["Superintelligence has an advantage that an early human didn’t – the entire context of human civilization and technology, there for it to manipulate socially or technologically."], "entry": "Answer to What harm could a single superintelligence do when it took so many humans to build civilization?", "id": "8ce72aed7b3ad0138acb7d17d84080fa"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might non-agentic GPT-style AI cause an \"intelligence explosion\" or otherwise contribute existential risk?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might non-agentic GPT-style AI cause an \"intelligence explosion\" or otherwise 
contribute existential risk?\n\nAnswer: One threat model which includes a GPT component is [https://www.lesswrong.com/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent Misaligned Model-Based RL Agent]. It suggests that a reinforcement learner attached to a GPT-style world model could lead to an existential risk, with the RL agent being the optimizer which uses the world model to be much more effective at achieving its goals.\n\nAnother possibility is that a sufficiently powerful world model [https://www.lesswrong.com/posts/BGD5J2KAoNmpPMzMQ/why-gpt-wants-to-mesa-optimize-and-how-we-might-change-this may develop mesa optimizers] which could influence the world via the outputs of the model to achieve the mesa objective (perhaps by causing an optimizer to be created with goals aligned to it), though this is somewhat speculative.", "question": "How might non-agentic GPT-style AI cause an \"intelligence explosion\" or otherwise contribute existential risk?", "answer": ["One threat model which includes a GPT component is [https://www.lesswrong.com/posts/zzXawbXDwCZobwF9D/my-agi-threat-model-misaligned-model-based-rl-agent Misaligned Model-Based RL Agent]. It suggests that a reinforcement learner attached to a GPT-style world model could lead to an existential risk, with the RL agent being the optimizer which uses the world model to be much more effective at achieving its goals.\n\nAnother possibility is that a sufficiently powerful world model [https://www.lesswrong.com/posts/BGD5J2KAoNmpPMzMQ/why-gpt-wants-to-mesa-optimize-and-how-we-might-change-this may develop mesa optimizers] which could influence the world via the outputs of the model to achieve the mesa objective (perhaps by causing an optimizer to be created with goals aligned to it), though this is somewhat speculative."], "entry": "Plex's Answer to How might non-agentic GPT-style AI cause an \"intelligence explosion\" or otherwise contribute to existential risk?", "id": "10a126f9bbf1df3a78ee382013a32147"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can't we just tell an AI do what we want?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can't we just tell an AI do what we want?\n\nAnswer: If we could, it would solve a large part of the alignment problem.\n\nThe challenge is, how do we code this? Converting something to formal mathematics that can be understood by a computer program is much harder than just saying it in natural language, and proposed AI goal architectures are no exception. Complicated computer programs are usually the result of months of testing and debugging. But this one will be more complicated than any ever attempted before, and live tests are impossible: a superintelligence with a buggy goal system will display goal stability and try to prevent its programmers from discovering or changing the error.", "question": "Can't we just tell an AI do what we want?", "answer": ["If we could, it would solve a large part of the alignment problem.\n\nThe challenge is, how do we code this? Converting something to formal mathematics that can be understood by a computer program is much harder than just saying it in natural language, and proposed AI goal architectures are no exception. Complicated computer programs are usually the result of months of testing and debugging. 
But this one will be more complicated than any ever attempted before, and live tests are impossible: a superintelligence with a buggy goal system will display goal stability and try to prevent its programmers from discovering or changing the error."], "entry": "Answer to Can't we just tell an AI to do what we want?", "id": "0da3761ffa1ff33f45e8fea5b288ab04"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Any AI will be a computer program. Why wouldn't it just do what it's programmed do?", "authors": "n/a", "date_published": "n/a", "text": "Question: Any AI will be a computer program. Why wouldn't it just do what it's programmed do?\n\nAnswer: While it is true that a computer program always will do exactly what it is programmed to do, a big issue is that it is difficult to ensure that this is the same as what you intended it to do. Even small computer programs have bugs or glitches, and when programs become as complicated as AGIs will be, it becomes exceedingly difficult to anticipate how the program will behave when ran. This is the problem of AI alignment in a nutshell.\n\nNick Boström created the famous [https://www.lesswrong.com/tag/paperclip-maximizer paperclip maximizer] thought experiment to illustrate this point. Imagine you are an industrialist who owns a paperclip factory, and imagine you've just received a superintelligent AGI to work for you. You instruct the AGI to \"produce as many paperclips as possible\". If you've given the AGI no further instructions, the AGI will immediately acquire several instrumental goals.\n\n# It will want to prevent you from turning itself off (If you turn off the AI, this will reduce the amount of paperclips it can produce)\n# It will want to acquire as much power and resources for itself as possible (because the more resources it has access to, the more paperclips it can produce)\n# It will eventually want to turn the entire universe into a paperclips including you and all other humans, as this is the state of the world that maximizes the amount of paper clips produced. \n\nThese consequences might be seen as undesirable by the industrialist, as the only reason the industrialist wanted paperclips in the first place, presumably was so he/she could sell them and make money. However, the AGI only did exactly what it was told to. The issue was that what the AGI was instructed to do, lead to it doing things the industrialist did not anticipate (and did not want).\n\nSome good videos that explore this issue more in depth:
\n(youtube)tcdVC4e6EV4(/youtube)\n(youtube)hEUO6pjwFOo(/youtube)", "question": "Any AI will be a computer program. Why wouldn't it just do what it's programmed do?", "answer": ["While it is true that a computer program always will do exactly what it is programmed to do, a big issue is that it is difficult to ensure that this is the same as what you intended it to do. Even small computer programs have bugs or glitches, and when programs become as complicated as AGIs will be, it becomes exceedingly difficult to anticipate how the program will behave when ran. This is the problem of AI alignment in a nutshell.\n\nNick Boström created the famous [https://www.lesswrong.com/tag/paperclip-maximizer paperclip maximizer] thought experiment to illustrate this point. Imagine you are an industrialist who owns a paperclip factory, and imagine you've just received a superintelligent AGI to work for you. You instruct the AGI to \"produce as many paperclips as possible\". If you've given the AGI no further instructions, the AGI will immediately acquire several instrumental goals.\n\n# It will want to prevent you from turning itself off (If you turn off the AI, this will reduce the amount of paperclips it can produce)\n# It will want to acquire as much power and resources for itself as possible (because the more resources it has access to, the more paperclips it can produce)\n# It will eventually want to turn the entire universe into a paperclips including you and all other humans, as this is the state of the world that maximizes the amount of paper clips produced. \n\nThese consequences might be seen as undesirable by the industrialist, as the only reason the industrialist wanted paperclips in the first place, presumably was so he/she could sell them and make money. However, the AGI only did exactly what it was told to. The issue was that what the AGI was instructed to do, lead to it doing things the industrialist did not anticipate (and did not want).\n\nSome good videos that explore this issue more in depth:
\n(youtube)tcdVC4e6EV4(/youtube)\n(youtube)hEUO6pjwFOo(/youtube)"], "entry": "ElloMelon's Answer to Any AI will be a computer program. Why wouldn't it just do what it's programmed to do?", "id": "4a87a067625ee17b509d620a328fef5a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Where can I find all the features of Stampy's Wiki?", "authors": "n/a", "date_published": "n/a", "text": "Question: Where can I find all the features of Stampy's Wiki?\n\nAnswer: The [[Editor portal]] collects them all in one place. Details on how to use each feature are on the individual pages.\n\n
'''[[Get involved]]'''\n\n
'''Questions'''\n\n* '''[[Questions]]''' (262) - List of all questions except those from YouTube.\n* '''[[Add question]]''' - Form to add new questions.\n* '''[[Answer questions]]''' (129) - List of questions which we want answers to, ordered by quality.\n* '''[[Curate questions]]''' (76) - List of questions which we want answers to, with easy buttons for setting quality.\n* '''[[Review questions]]''' (19) - Review new incoming questions, to check for duplicates, quality, and relevance.\n* '''[[Untagged questions]]''' (73) - Questions without any tags.\n* '''[[Prioritize YouTube questions]]''' (222) - Prioritize questions from YouTube, so Stampy posts good ones when you ask.\n* '''[[Canonical questions]]''' (347) - List of all canonical questions.\n* [[Non-canonical questions]] (7) - List of all non-canonical questions.\n* [[Questions from YouTube]] (2,940) - List of all questions from YouTube.\n\n'''Answers'''\n\n* '''[[Answer questions┊Write answers]]''' (129) - Write answers to unanswered questions.\n* '''[[Answers]]''' (162) - All the answers except those directed at questions from YouTube.\n* '''[[Canonical answers]]''' (168) - List of all canonical answers.\n* [[Non-canonical answers]] (68) - List of all non-canonical answers.\n* [[YouTube answers]] (267) - List of all answers responding to questions from YouTube.\n\n'''Review answers'''\n\n* '''[[Recent answers]]''' (23) - The most recent answers.\n* '''[[Potentially canonical answers]]''' (46) - Answers to canonical questions which don't already have a canonical answer.\n* '''[[Non-canonical answers to canonical questions]]''' (46)\n* '''[[Recent YouTube answers]]''' (1) - The most recent answers to YouTube questions.\n\n'''Improve answers'''\n\n* '''[[Orphan answers]]''' (171)\n* '''[[Wants related]]''' (109)\n* '''[[Untagged]]''' (7)\n* '''[[Outdated answers]]''' (4)\n* '''[[Wants brief]]''' (43)\n* '''[[Answers in need of work]]''' (5)\n* '''[[Canonical answers with low stamps]]''' (171) - Canonical answers which are not yet highly rated and might need improvement.\n\n'''Recent activity'''\n\n* '''[[Special:RecentChanges┊Recent changes]]''' - What's changed on the wiki recently.\n* '''[https://stampy.ai/wiki/Special:RecentChanges?namespace꞊2600&limit꞊50&days꞊100 Recent comments]''' - Recent commenting activity.\n\n'''Pages to create'''\n\n* '''[[Create tags]]''' (55)\n* '''[[Create questions]]''' (28)\n\n'''Content'''\n\n* '''[[Tags]]''' (113)\n* '''[[Videos]]''' (46)\n* '''[[Channels]]''' (3)\n* '''[[:Category:Templates┊Templates]]''' (62)\n\n'''External'''\n\n* '''[https://discord.gg/cEzKz8QCpa Stampy's Public Discord]''' - Ask there for an invite to the real one, until OpenAI approves our chatbot for a public Discord\n* '''[https://wikiapiary.com/wiki/Stampy%27s_Wiki Wiki stats]''' - Graphs over time of active users, edits, pages, response time, etc\n* '''[https://drive.google.com/drive/folders/1F9VUp84J32e3_jhvKog3zgpDXoDw4BdM?usp꞊sharing Google Drive]''' - Folder with Stampy-related documents\n\n'''UI controls'''\n\n* '''[[Initial questions]]'''\n* '''[[UI intro]]'''\n\n'''To-do list'''\n\n
\n
\n'''[[What are some specific open tasks on Stampy?]]'''\n
\n
\n
\nOther than the usual fare of writing and processing and organizing questions and answers, here are some specific open tasks:\n\n* Porting over some of [https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why#1_4_What_exactly_is__AGI__ Steve Byrnes's FAQ on alignment]\n* Porting over content from [https://www.lesswrong.com/posts/gdyfJE3noRFSs373q/resources-i-send-to-ai-researchers-about-ai-safety Vael Gates's post]\n* Porting over QA pairs from https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread\n* Porting over some of https://aisafety.wordpress.com/\n* Making sure we cover all of https://forum.effectivealtruism.org/posts/8JazqnCNrkJtK2Bx4/why-eas-are-skeptical-about-ai-safety#Recursive_self_improvement_seems_implausible and the responses\n
\n
\n__NOTOC__ __NOEDITSECTION__", "question": "Where can I find all the features of Stampy's Wiki?", "answer": ["The [[Editor portal]] collects them all in one place. Details on how to use each feature are on the individual pages.\n\n
'''[[Get involved]]'''\n\n
'''Questions'''\n\n* '''[[Questions]]''' (262) - List of all questions except those from YouTube.\n* '''[[Add question]]''' - Form to add new questions.\n* '''[[Answer questions]]''' (129) - List of questions which we want answers to, ordered by quality.\n* '''[[Curate questions]]''' (76) - List of questions which we want answers to, with easy buttons for setting quality.\n* '''[[Review questions]]''' (19) - Review new incoming questions, to check for duplicates, quality, and relevance.\n* '''[[Untagged questions]]''' (73) - Questions without any tags.\n* '''[[Prioritize YouTube questions]]''' (222) - Prioritize questions from YouTube, so Stampy posts good ones when you ask.\n* '''[[Canonical questions]]''' (347) - List of all canonical questions.\n* [[Non-canonical questions]] (7) - List of all non-canonical questions.\n* [[Questions from YouTube]] (2,940) - List of all questions from YouTube.\n\n'''Answers'''\n\n* '''[[Answer questions┊Write answers]]''' (129) - Write answers to unanswered questions.\n* '''[[Answers]]''' (162) - All the answers except those directed at questions from YouTube.\n* '''[[Canonical answers]]''' (168) - List of all canonical answers.\n* [[Non-canonical answers]] (68) - List of all non-canonical answers.\n* [[YouTube answers]] (267) - List of all answers responding to questions from YouTube.\n\n'''Review answers'''\n\n* '''[[Recent answers]]''' (23) - The most recent answers.\n* '''[[Potentially canonical answers]]''' (46) - Answers to canonical questions which don't already have a canonical answer.\n* '''[[Non-canonical answers to canonical questions]]''' (46)\n* '''[[Recent YouTube answers]]''' (1) - The most recent answers to YouTube questions.\n\n'''Improve answers'''\n\n* '''[[Orphan answers]]''' (171)\n* '''[[Wants related]]''' (109)\n* '''[[Untagged]]''' (7)\n* '''[[Outdated answers]]''' (4)\n* '''[[Wants brief]]''' (43)\n* '''[[Answers in need of work]]''' (5)\n* '''[[Canonical answers with low stamps]]''' (171) - Canonical answers which are not yet highly rated and might need improvement.\n\n'''Recent activity'''\n\n* '''[[Special:RecentChanges┊Recent changes]]''' - What's changed on the wiki recently.\n* '''[https://stampy.ai/wiki/Special:RecentChanges?namespace꞊2600&limit꞊50&days꞊100 Recent comments]''' - Recent commenting activity.\n\n'''Pages to create'''\n\n* '''[[Create tags]]''' (55)\n* '''[[Create questions]]''' (28)\n\n'''Content'''\n\n* '''[[Tags]]''' (113)\n* '''[[Videos]]''' (46)\n* '''[[Channels]]''' (3)\n* '''[[:Category:Templates┊Templates]]''' (62)\n\n'''External'''\n\n* '''[https://discord.gg/cEzKz8QCpa Stampy's Public Discord]''' - Ask there for an invite to the real one, until OpenAI approves our chatbot for a public Discord\n* '''[https://wikiapiary.com/wiki/Stampy%27s_Wiki Wiki stats]''' - Graphs over time of active users, edits, pages, response time, etc\n* '''[https://drive.google.com/drive/folders/1F9VUp84J32e3_jhvKog3zgpDXoDw4BdM?usp꞊sharing Google Drive]''' - Folder with Stampy-related documents\n\n'''UI controls'''\n\n* '''[[Initial questions]]'''\n* '''[[UI intro]]'''\n\n'''To-do list'''\n\n
\n
\n'''[[What are some specific open tasks on Stampy?]]'''\n
\n
\n
\nOther than the usual fare of writing and processing and organizing questions and answers, here are some specific open tasks:\n\n* Porting over some of [https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why#1_4_What_exactly_is__AGI__ Steve Byrnes's FAQ on alignment]\n* Porting over content from [https://www.lesswrong.com/posts/gdyfJE3noRFSs373q/resources-i-send-to-ai-researchers-about-ai-safety Vael Gates's post]\n* Porting over QA pairs from https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread\n* Porting over some of https://aisafety.wordpress.com/\n* Making sure we cover all of https://forum.effectivealtruism.org/posts/8JazqnCNrkJtK2Bx4/why-eas-are-skeptical-about-ai-safety#Recursive_self_improvement_seems_implausible and the responses\n
\n
\n__NOTOC__ __NOEDITSECTION__"], "entry": "Plex's Answer to Where can I find all the features of Stampy's Wiki?", "id": "e2870e28c48642d0b5209e1d9b4dc1cf"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What would a good solution AI alignment look like?", "authors": "n/a", "date_published": "n/a", "text": "Question: What would a good solution AI alignment look like?\n\nAnswer: An actually good solution to AI alignment might look like a superintelligence that understands, agrees with, and deeply believes in human morality.\n\nYou wouldn't have to command a superintelligence like this to cure cancer; it would already want to cure cancer, for the same reasons you do. But it would also be able to compare the costs and benefits of curing cancer with those of other uses of its time, like solving global warming or discovering new physics. It wouldn't have any urge to cure cancer by nuking the world, for the same reason you don't have any urge to cure cancer by nuking the world – because your goal isn't to \"cure cancer\", per se, it's to improve the lives of people everywhere. Curing cancer the normal way accomplishes that; nuking the world doesn't.\nThis sort of solution would mean we're no longer fighting against the AI – trying to come up with rules so smart that it couldn't find loopholes. We would be on the same side, both wanting the same thing.\n\nIt would also mean that the CEO of Google (or the head of the US military, or Vladimir Putin) couldn't use the AI to take over the world for themselves. The AI would have its own values and be able to agree or disagree with anybody, including its creators.\n\nIt might not make sense to talk about \"commanding\" such an AI. After all, any command would have to go through its moral system. Certainly it would reject a command to nuke the world. But it might also reject a command to cure cancer, if it thought that solving global warming was a higher priority. For that matter, why would one want to command this AI? It values the same things you value, but it's much smarter than you and much better at figuring out how to achieve them. Just turn it on and let it do its thing.\n\nWe could still treat this AI as having an open-ended maximizing goal. The goal would be something like \"Try to make the world a better place according to the values and wishes of the people in it.\"\n\nThe only problem with this is that human morality is very complicated, so much so that philosophers have been arguing about it for thousands of years without much progress, let alone anything specific enough to enter into a computer. Different cultures and individuals have different moral codes, such that a superintelligence following the morality of the King of Saudi Arabia might not be acceptable to the average American, and vice versa.\n\nOne solution might be to give the AI an understanding of what we mean by morality – \"that thing that makes intuitive sense to humans but is hard to explain\", and then ask it to use its superintelligence to fill in the details. 
Needless to say, this suffers from various problems – it has potential loopholes, it's hard to code, and a single bug might be disastrous – but if it worked, it would be one of the few genuinely satisfying ways to design a goal architecture.", "question": "What would a good solution AI alignment look like?", "answer": ["An actually good solution to AI alignment might look like a superintelligence that understands, agrees with, and deeply believes in human morality.\n\nYou wouldn’t have to command a superintelligence like this to cure cancer; it would already want to cure cancer, for the same reasons you do. But it would also be able to compare the costs and benefits of curing cancer with those of other uses of its time, like solving global warming or discovering new physics. It wouldn’t have any urge to cure cancer by nuking the world, for the same reason you don’t have any urge to cure cancer by nuking the world – because your goal isn’t to “cure cancer”, per se, it’s to improve the lives of people everywhere. Curing cancer the normal way accomplishes that; nuking the world doesn’t.\nThis sort of solution would mean we’re no longer fighting against the AI – trying to come up with rules so smart that it couldn’t find loopholes. We would be on the same side, both wanting the same thing.\n\nIt would also mean that the CEO of Google (or the head of the US military, or Vladimir Putin) couldn’t use the AI to take over the world for themselves. The AI would have its own values and be able to agree or disagree with anybody, including its creators.\n\nIt might not make sense to talk about “commanding” such an AI. After all, any command would have to go through its moral system. Certainly it would reject a command to nuke the world. But it might also reject a command to cure cancer, if it thought that solving global warming was a higher priority. For that matter, why would one want to command this AI? It values the same things you value, but it’s much smarter than you and much better at figuring out how to achieve them. Just turn it on and let it do its thing.\n\nWe could still treat this AI as having an open-ended maximizing goal. The goal would be something like “Try to make the world a better place according to the values and wishes of the people in it.”\n\nThe only problem with this is that human morality is very complicated, so much so that philosophers have been arguing about it for thousands of years without much progress, let alone anything specific enough to enter into a computer. Different cultures and individuals have different moral codes, such that a superintelligence following the morality of the King of Saudi Arabia might not be acceptable to the average American, and vice versa.\n\nOne solution might be to give the AI an understanding of what we mean by morality – “that thing that makes intuitive sense to humans but is hard to explain”, and then ask it to use its superintelligence to fill in the details. 
Needless to say, this suffers from various problems – it has potential loopholes, it’s hard to code, and a single bug might be disastrous – but if it worked, it would be one of the few genuinely satisfying ways to design a goal architecture."], "entry": "Answer to What would a good solution to AI alignment look like?", "id": "8cb02932b1d18dd9b4561f146f5e0193"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"superintelligence\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"superintelligence\"?\n\nAnswer: A superintelligence is a mind that is much more intelligent than any human. Most of the time, it's used to discuss hypothetical future AIs.", "question": "What is \"superintelligence\"?", "answer": ["A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs."], "entry": "Scott's Answer to What is \"superintelligence\"?", "id": "5c371522ffaf260d27f77befbd8d663f"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"narrow AI\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"narrow AI\"?\n\nAnswer:

A Narrow AI is capable of operating only in a relatively limited domain, such as chess or driving, rather than being capable of learning a broad range of tasks like a human or an [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence]. Narrow vs General is not a perfectly binary classification; there are degrees of generality. For example, large language models have a fairly large degree of generality (as the domain of text is large) without being as general as a human, and we may eventually build systems that are significantly more general than humans.

", "question": "What is \"narrow AI\"?", "answer": ["

A Narrow AI is capable of operating only in a relatively limited domain, such as chess or driving, rather than being capable of learning a broad range of tasks like a human or an [https://www.lesswrong.com/tag/artificial-general-intelligence Artificial General Intelligence]. Narrow vs General is not a perfectly binary classification; there are degrees of generality. For example, large language models have a fairly large degree of generality (as the domain of text is large) without being as general as a human, and we may eventually build systems that are significantly more general than humans.

"], "entry": "Plex's Answer to What is \"narrow AI\"?", "id": "7abbb0ecc21d969e6bf3aa16f93d66b2"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"hedonium\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"hedonium\"?\n\nAnswer:

Orgasmium (also known as hedonium) is a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss. An AI programmed to \"maximize happiness\" might simply tile the universe with orgasmium. Some who believe this consider it a good thing; others do not. Those who do not, use its undesirability to argue that not all terminal values reduce to \"happiness\" or some simple analogue. Hedonium is the [https://www.lesswrong.com/tag/hedonism hedonistic] [https://www.lesswrong.com/tag/utilitarianism utilitarian]'s version of [https://www.lesswrong.com/tag/utilitronium utilitronium].

Blog posts

  • [http://lesswrong.com/lw/wv/prolegomena_to_a_theory_of_fun/ Prolegomena to a Theory of Fun]
  • [http://lesswrong.com/lw/xr/in_praise_of_boredom/ In Praise of Boredom]
  • [https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html Are pain and pleasure equally energy-efficient?]

See also

  • [https://www.goodreads.com/quotes/1413237-consider-an-ai-that-has-hedonism-as-its-final-goal Quote from Superintelligence]
  • [https://www.lesswrong.com/tag/fun-theory Fun theory]
  • [https://www.lesswrong.com/tag/complexity-of-value Complexity of value]
  • [https://www.lesswrong.com/tag/utilitronium Utilitronium]
", "question": "What is \"hedonium\"?", "answer": ["

Orgasmium (also known as hedonium) is a homogeneous substance with limited consciousness, which is in a constant state of supreme bliss. An AI programmed to \"maximize happiness\" might simply tile the universe with orgasmium. Some who believe this consider it a good thing; others do not. Those who do not, use its undesirability to argue that not all terminal values reduce to \"happiness\" or some simple analogue. Hedonium is the [https://www.lesswrong.com/tag/hedonism hedonistic] [https://www.lesswrong.com/tag/utilitarianism utilitarian]'s version of [https://www.lesswrong.com/tag/utilitronium utilitronium].

Blog posts

  • [http://lesswrong.com/lw/wv/prolegomena_to_a_theory_of_fun/ Prolegomena to a Theory of Fun]
  • [http://lesswrong.com/lw/xr/in_praise_of_boredom/ In Praise of Boredom]
  • [https://reflectivedisequilibrium.blogspot.com/2012/03/are-pain-and-pleasure-equally-energy.html Are pain and pleasure equally energy-efficient?]

See also

  • [https://www.goodreads.com/quotes/1413237-consider-an-ai-that-has-hedonism-as-its-final-goal Quote from Superintelligence]
  • [https://www.lesswrong.com/tag/fun-theory Fun theory]
  • [https://www.lesswrong.com/tag/complexity-of-value Complexity of value]
  • [https://www.lesswrong.com/tag/utilitronium Utilitronium]
"], "entry": "Linnea's Answer to What is \"hedonium\"?", "id": "403e2d189532d0d6694c649315f720cf"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"functional decision theory\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"functional decision theory\"?\n\nAnswer:

Functional Decision Theory is a [https://www.lesswrong.com/tag/decision-theory decision theory] described by Eliezer Yudkowsky and Nate Soares which says that an agent should treat its decision as the output of a fixed mathematical function that answers the question, \"Which output of this very function would yield the best outcome?\". It is a replacement for [https://www.lesswrong.com/tag/timeless-decision-theory Timeless Decision Theory], and it outperforms other decision theories such as [https://www.lesswrong.com/tag/causal-decision-theory Causal Decision Theory] (CDT) and [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory] (EDT). For example, it does better than CDT on [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's Problem], better than EDT on the [https://www.lesswrong.com/tag/smoking-lesion smoking lesion problem], and better than both in [https://www.lesswrong.com/tag/parfits-hitchhiker Parfit's hitchhiker problem].

In Newcomb's Problem, an FDT agent reasons that Omega must have used some kind of model of her decision procedure in order to make an accurate prediction of her behavior. Omega's model and the agent are therefore both calculating the same function (the agent's decision procedure): they are subjunctively dependent on that function. Given perfect prediction by Omega, there are therefore only two outcomes in Newcomb's Problem: either the agent one-boxes and Omega predicted it (because its model also one-boxed), or the agent two-boxes and Omega predicted that. Because one-boxing then results in a million and two-boxing only in a thousand dollars, the FDT agent one-boxes.

External links:

  • [https://intelligence.org/2017/10/22/fdt Functional decision theory: A new theory of instrumental rationality]
  • [https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/ Cheating Death in Damascus]
  • [https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/ Decisions are for making bad outcomes inconsistent]
  • [https://www.umsu.de/wo/2018/688 On Functional Decision Theory] by Wolfgang Schwarz

See Also:

  • [https://www.lesswrong.com/tag/timeless-decision-theory Timeless Decision Theory]
  • [https://www.lesswrong.com/tag/updateless-decision-theory Updateless Decision Theory]
", "question": "What is \"functional decision theory\"?", "answer": ["

Functional Decision Theory is a [https://www.lesswrong.com/tag/decision-theory decision theory] described by Eliezer Yudkowsky and Nate Soares which says that an agent should treat its decision as the output of a fixed mathematical function that answers the question, “Which output of this very function would yield the best outcome?”. It is a replacement for [https://www.lesswrong.com/tag/timeless-decision-theory Timeless Decision Theory], and it outperforms other decision theories such as [https://www.lesswrong.com/tag/causal-decision-theory Causal Decision Theory] (CDT) and [https://www.lesswrong.com/tag/evidential-decision-theory Evidential Decision Theory] (EDT). For example, it does better than CDT on [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's Problem], better than EDT on the [https://www.lesswrong.com/tag/smoking-lesion smoking lesion problem], and better than both in [https://www.lesswrong.com/tag/parfits-hitchhiker Parfit’s hitchhiker problem].

In Newcomb's Problem, an FDT agent reasons that Omega must have used some kind of model of her decision procedure in order to make an accurate prediction of her behavior. Omega's model and the agent are therefore both calculating the same function (the agent's decision procedure): they are subjunctively dependent on that function. Given perfect prediction by Omega, there are therefore only two outcomes in Newcomb's Problem: either the agent one-boxes and Omega predicted it (because its model also one-boxed), or the agent two-boxes and Omega predicted that. Because one-boxing then results in a million and two-boxing only in a thousand dollars, the FDT agent one-boxes.

External links:

  • [https://intelligence.org/2017/10/22/fdt Functional decision theory: A new theory of instrumental rationality]
  • [https://intelligence.org/2017/03/18/new-paper-cheating-death-in-damascus/ Cheating Death in Damascus]
  • [https://intelligence.org/2017/04/07/decisions-are-for-making-bad-outcomes-inconsistent/ Decisions are for making bad outcomes inconsistent]
  • [https://www.umsu.de/wo/2018/688 On Functional Decision Theory] by Wolfgang Schwarz

See Also:

  • [https://www.lesswrong.com/tag/timeless-decision-theory Timeless Decision Theory]
  • [https://www.lesswrong.com/tag/updateless-decision-theory Updateless Decision Theory]
"], "entry": "Linnea's Answer to What is \"functional decision theory\"?", "id": "f345662b8c5f3cfa0e61d78c50a70066"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"evidential decision theory\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"evidential decision theory\"?\n\nAnswer:

Evidential Decision Theory – EDT – is a branch of [https://www.lesswrong.com/tag/decision-theory decision theory] which advises an agent to take the action which, conditional on its being taken, maximizes the chances of the desired outcome. Like any branch of decision theory, it prescribes taking the action that maximizes [https://www.lesswrong.com/tag/utility utility], i.e. the action whose utility equals or exceeds the utility of every other option. The utility of each action is measured by its [https://www.lesswrong.com/tag/expected-utility expected utility], the probability-weighted sum of the utilities of its possible results. How actions can influence the probabilities differs between the branches. [https://www.lesswrong.com/tag/causal-decision-theory Causal Decision Theory] – CDT – says one can influence the chances of the desired outcome only through a causal process [#fn1 1]. EDT, on the other hand, requires no causal connection; the action only has to be [https://www.lesswrong.com/tag/bayesianism Bayesian] evidence for the desired outcome. Some critics say it recommends auspiciousness over causal efficacy[#fn2 2].

One usual example where EDT and CDT are often said to diverge is the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion]: \"Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer. Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?\" CDT would recommend smoking since there is no causal connection between smoking and cancer. They are both caused by a gene, but have no causal direct connection with each other. Naive EDT, on the other hand, would recommend against smoking, since smoking is an evidence for having the mentioned gene and thus should be avoided. However, a more sophisticated agent following the recommendations of EDT would recognize that if they observe that they have the desire to smoke, then actually smoking or not would provide no more evidence for having cancer; that is, the \"tickle\" [https://www.lesswrong.com/tag/screening-off-evidence screens off] smoking from cancer. (This is known as the tickle defence.)

CDT uses probabilities of conditionals and counterfactual dependence – which track causal relations – to calculate the expected utility of an action, whereas EDT simply uses conditional probabilities. The probability of a conditional is the probability of the whole conditional being true, whereas the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B┊A) – is simply the Bayesian probability of the event B happening given that we know A happened; it is what EDT uses. The probability of a conditional – P(A > B) – refers to the probability that the conditional 'A implies B' is true, i.e. the probability that the counterfactual 'If A, then B' is the case. Since counterfactual analysis is the key tool used to speak about causality, probabilities of conditionals are said to mirror causal relations. In most usual cases these two probabilities are the same. However, David Lewis proved [#fn3 3] that it is impossible for probabilities of conditionals to always track conditional probabilities. Hence evidential relations are not the same as causal relations, and CDT and EDT will diverge depending on the problem. In some cases EDT gives a better answer than CDT, such as in [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's problem], whereas in the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion] problem CDT seems to give a more reasonable prescription (modulo the tickle defence).

References

  1. [http://plato.stanford.edu/entries/decision-causal/ http://plato.stanford.edu/entries/decision-causal/][#fnref1 ↩]
  2. Joyce, J.M. (1999), The foundations of causal decision theory, p. 146[#fnref2 ↩]
  3. Lewis, D. (1976), \"Probabilities of conditionals and conditional probabilities\", The Philosophical Review (Duke University Press) 85 (3): 297–315[#fnref3 ↩]
  4. Caspar Oesterheld, \"[https://www.andrew.cmu.edu/user/coesterh/TickleDefenseIntro.pdf Understanding the Tickle Defense in Decision Theory]\"
  5. Ahmed, Arif. (2014), \"Evidence, Decision and Causality\" (Cambridge University Press)

Blog posts

  • [https://agentfoundations.org/item?id꞊1525 Smoking Lesion Steelman] by Abram Demski
  • [http://lesswrong.com/lw/gu1/decision_theory_faq/ Decision Theory FAQ] by Luke Muehlhauser
  • [https://casparoesterheld.files.wordpress.com/2016/12/almond_edt_1.pdf On Causation and Correlation Part 1]
  • [http://lesswrong.com/lw/men/twoboxing_smoking_and_chewing_gum_in_medical/ Two-boxing, smoking and chewing gum in Medical Newcomb problems] by Caspar Oesterheld
  • [http://lesswrong.com/r/discussion/lw/oih/did_edt_get_it_right_all_along_introducing_yet/ Did EDT get it right all along? Introducing yet another medical Newcomb problem] by Johannes Treutlein
  • [https://casparoesterheld.com/2017/02/06/betting-on-the-past-by-arif-ahmed/ \"Betting on the Past\" by Arif Ahmed] by Johannes Treutlein
  • [https://agentfoundations.org/item?id꞊92 Why conditioning on \"the agent takes action a\" isn't enough] by Nate Soares
  • [https://casparoesterheld.com/overview-why-we-think-that-the-smoking-lesion-does-not-refute-edt/ Overview: Why the Smoking Lesion does not refute EDT]

See also

  • [https://www.lesswrong.com/tag/decision-theory Decision theory]
  • [https://www.lesswrong.com/tag/causal-decision-theory Causal decision theory] 
  • MacAskill, W. et al. (2021), \"[https://philpapers.org/rec/MACTEW-2 The Evidentialist's Wager]\"
", "question": "What is \"evidential decision theory\"?", "answer": ["

Evidential Decision Theory – EDT – is a branch of [https://www.lesswrong.com/tag/decision-theory decision theory] which advises an agent to take the action which, conditional on its being taken, maximizes the chances of the desired outcome. Like any branch of decision theory, it prescribes taking the action that maximizes [https://www.lesswrong.com/tag/utility utility], i.e. the action whose utility equals or exceeds the utility of every other option. The utility of each action is measured by its [https://www.lesswrong.com/tag/expected-utility expected utility], the probability-weighted sum of the utilities of its possible results. How actions can influence the probabilities differs between the branches. [https://www.lesswrong.com/tag/causal-decision-theory Causal Decision Theory] – CDT – says one can influence the chances of the desired outcome only through a causal process [#fn1 1]. EDT, on the other hand, requires no causal connection; the action only has to be [https://www.lesswrong.com/tag/bayesianism Bayesian] evidence for the desired outcome. Some critics say it recommends auspiciousness over causal efficacy[#fn2 2].

One usual example where EDT and CDT are often said to diverge is the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion]: “Smoking is strongly correlated with lung cancer, but in the world of the Smoker's Lesion this correlation is understood to be the result of a common cause: a genetic lesion that tends to cause both smoking and cancer. Once we fix the presence or absence of the lesion, there is no additional correlation between smoking and cancer. Suppose you prefer smoking without cancer to not smoking without cancer, and prefer smoking with cancer to not smoking with cancer. Should you smoke?” CDT would recommend smoking since there is no causal connection between smoking and cancer. They are both caused by a gene, but have no causal direct connection with each other. Naive EDT, on the other hand, would recommend against smoking, since smoking is an evidence for having the mentioned gene and thus should be avoided. However, a more sophisticated agent following the recommendations of EDT would recognize that if they observe that they have the desire to smoke, then actually smoking or not would provide no more evidence for having cancer; that is, the \"tickle\" [https://www.lesswrong.com/tag/screening-off-evidence screens off] smoking from cancer. (This is known as the tickle defence.)

CDT uses probabilities of conditionals and counterfactual dependence – which track causal relations – to calculate the expected utility of an action, whereas EDT simply uses conditional probabilities. The probability of a conditional is the probability of the whole conditional being true, whereas the conditional probability is the probability of the consequent given the antecedent. The conditional probability of B given A – P(B┊A) – is simply the Bayesian probability of the event B happening given that we know A happened; it is what EDT uses. The probability of a conditional – P(A > B) – refers to the probability that the conditional 'A implies B' is true, i.e. the probability that the counterfactual ‘If A, then B’ is the case. Since counterfactual analysis is the key tool used to speak about causality, probabilities of conditionals are said to mirror causal relations. In most usual cases these two probabilities are the same. However, David Lewis proved [#fn3 3] that it is impossible for probabilities of conditionals to always track conditional probabilities. Hence evidential relations are not the same as causal relations, and CDT and EDT will diverge depending on the problem. In some cases EDT gives a better answer than CDT, such as in [https://www.lesswrong.com/tag/newcomb-s-problem Newcomb's problem], whereas in the [https://www.lesswrong.com/tag/smoking-lesion Smoking lesion] problem CDT seems to give a more reasonable prescription (modulo the tickle defence).

References

  1. [http://plato.stanford.edu/entries/decision-causal/ http://plato.stanford.edu/entries/decision-causal/][#fnref1 ↩]
  2. Joyce, J.M. (1999), The foundations of causal decision theory, p. 146[#fnref2 ↩]
  3. Lewis, D. (1976), \"Probabilities of conditionals and conditional probabilities\", The Philosophical Review (Duke University Press) 85 (3): 297–315[#fnref3 ↩]
  4. Caspar Oesterheld, \"[https://www.andrew.cmu.edu/user/coesterh/TickleDefenseIntro.pdf Understanding the Tickle Defense in Decision Theory]\"
  5. Ahmed, Arif. (2014), \"Evidence, Decision and Causality\" (Cambridge University Press)

Blog posts

  • [https://agentfoundations.org/item?id꞊1525 Smoking Lesion Steelman] by Abram Demski
  • [http://lesswrong.com/lw/gu1/decision_theory_faq/ Decision Theory FAQ] by Luke Muehlhauser
  • [https://casparoesterheld.files.wordpress.com/2016/12/almond_edt_1.pdf On Causation and Correlation Part 1]
  • [http://lesswrong.com/lw/men/twoboxing_smoking_and_chewing_gum_in_medical/ Two-boxing, smoking and chewing gum in Medical Newcomb problems] by Caspar Oesterheld
  • [http://lesswrong.com/r/discussion/lw/oih/did_edt_get_it_right_all_along_introducing_yet/ Did EDT get it right all along? Introducing yet another medical Newcomb problem] by Johannes Treutlein
  • [https://casparoesterheld.com/2017/02/06/betting-on-the-past-by-arif-ahmed/ \"Betting on the Past\" by Arif Ahmed] by Johannes Treutlein
  • [https://agentfoundations.org/item?id꞊92 Why conditioning on \"the agent takes action a\" isn't enough] by Nate Soares
  • [https://casparoesterheld.com/overview-why-we-think-that-the-smoking-lesion-does-not-refute-edt/ Overview: Why the Smoking Lesion does not refute EDT]

See also

  • [https://www.lesswrong.com/tag/decision-theory Decision theory]
  • [https://www.lesswrong.com/tag/causal-decision-theory Causal decision theory] 
  • MacAskill, W. et al. (2021), \"[https://philpapers.org/rec/MACTEW-2 The Evidentialist’s Wager]\"
"], "entry": "Linnea's Answer to What is \"evidential decision theory\"?", "id": "df0e9953f6fca5a939c284e9d33a81b8"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is \"biological cognitive enhancement\"?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is \"biological cognitive enhancement\"?\n\nAnswer: There may be genes or molecules that can be modified to improve general intelligence. Researchers [https://pubmed.ncbi.nlm.nih.gov// have already done this in mice]: they over-expressed the NR2B gene, which improved those mice's memory beyond that of any other mice of any mouse species. Biological cognitive enhancement in humans may cause an intelligence explosion to occur more quickly than it otherwise would.\n\nSee also:\n*Bostrom & Sandberg, [http://www.nickbostrom.com/cognitive.pdf Cognitive Enhancement: Methods, Ethics, Regulatory Challenges]", "question": "What is \"biological cognitive enhancement\"?", "answer": ["There may be genes or molecules that can be modified to improve general intelligence. Researchers [https://pubmed.ncbi.nlm.nih.gov/10485705/ have already done this in mice]: they over-expressed the NR2B gene, which improved those mice’s memory beyond that of any other mice of any mouse species. Biological cognitive enhancement in humans may cause an intelligence explosion to occur more quickly than it otherwise would.\n\nSee also:\n*Bostrom & Sandberg, [http://www.nickbostrom.com/cognitive.pdf Cognitive Enhancement: Methods, Ethics, Regulatory Challenges]"], "entry": "Answer to What is \"biological cognitive enhancement\"?", "id": "7cccc6fd29a8ba893ab82e916c6674c5"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What can I do contribute AI safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: What can I do contribute AI safety?\n\nAnswer: It's pretty dependent on what skills you have and what resources you have access to. The largest option is to pursue a [https://80000hours.org/career-reviews/artificial-intelligence-risk-research/ career in AI Safety research]. Another large option is to pursue a career in [https://80000hours.org/articles/ai-policy-guide/ AI policy], which you might think is even more important than doing technical research.\n\nSmaller options include donating money to relevant organizations, talking about AI Safety as a plausible career path to other people or considering the problem in your spare time.\n\nIt's possible that your particular set of skills/resources are not suited to this problem. Unluckily, there are [https://concepts.effectivealtruism.org/concepts/existential-risks/ many more problems] that are of similar levels of importance.", "question": "What can I do contribute AI safety?", "answer": ["It’s pretty dependent on what skills you have and what resources you have access to. The largest option is to pursue a [https://80000hours.org/career-reviews/artificial-intelligence-risk-research/ career in AI Safety research]. Another large option is to pursue a career in [https://80000hours.org/articles/ai-policy-guide/ AI policy], which you might think is even more important than doing technical research.\n\nSmaller options include donating money to relevant organizations, talking about AI Safety as a plausible career path to other people or considering the problem in your spare time.\n\nIt’s possible that your particular set of skills/resources are not suited to this problem. 
Unluckily, there are [https://concepts.effectivealtruism.org/concepts/existential-risks/ many more problems] that are of similar levels of importance."], "entry": "Answer to What can I do to contribute to AI safety?", "id": "f1f47255dbc0442e348e16eff425407b"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some good resources on AI alignment?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some good resources on AI alignment?\n\nAnswer: These are good sources for understanding AI alignment and linking to when editing Stampy!\n\n* [https://www.youtube.com/c/RobertMilesAI/videos Rob's YouTube videos] ([https://www.youtube.com/watch?v꞊tlS5Y2vm02c&list꞊PLqL14ZxTTA4fRMts7Af2G8t4Rp17e8MdS&index꞊4 Computerphile appearances])\n* [https://ai-safety-papers.quantifieduncertainty.org/ AI Safety Papers database] - Search and interface for the [https://www.lesswrong.com/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database TAI Safety Bibliography]\n* [https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals Course]\n* [https://www.alignmentforum.org/tags/ Alignment Forum] tags\n* [https://rohinshah.com/alignment-newsletter/ The Alignment Newsletter] (and [https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid꞊0 database sheet])\n* Chapters of [https://publicism.info/philosophy/superintelligence/ Bostrom's Superintelligence online] - [https://www.nickbostrom.com/views/superintelligence.pdf Initial paper which Superintelligence grew from]\n* [https://arbital.greaterwrong.com/explore/ai_alignment/ AI Alignment pages on Arbital]\n* Much more on [https://www.aisafetysupport.org/resources/lots-of-links AI Safety Support] (feel free to integrate useful things from there to here)\n* [https://vkrakovna.wordpress.com/ai-safety-resources/ Vika's resources list]\n* [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 AI safety technical courses, reading lists, and curriculums]\n* [https://aisafety.wordpress.com/ AI Safety Intro blog]\n* [https://stampy.ai/wiki/Canonical_answers Stampy's canonical answers list] - This includes updated versions of various [[Imported FAQs┊FAQs imported with permission]]:\n** [https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq Scott Alexander's Superintelligence FAQ]\n** [https://futureoflife.org/ai-faqs/ FLI's FAQ]\n** [https://intelligence.org/faq/ MIRI's FAQ]\n** [https://intelligence.org/ie-faq/ MIRI's Intelligence Explosion FAQ]\n** [https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/ Advice for AI Alignment Researchers]\n** [https://www.reddit.com/r/ControlProblem/wiki/faq r/ControlProblem's FAQ]\n** [https://markxu.com/ai-safety-faqs Mark Xu's FAQ]\n** [https://aisafety.wordpress.com/ AI safety blog] - Not yet imported.", "question": "What are some good resources on AI alignment?", "answer": ["These are good sources for understanding AI alignment and linking to when editing Stampy!\n\n* [https://www.youtube.com/c/RobertMilesAI/videos Rob's YouTube videos] ([https://www.youtube.com/watch?v꞊tlS5Y2vm02c&list꞊PLqL14ZxTTA4fRMts7Af2G8t4Rp17e8MdS&index꞊4 Computerphile appearances])\n* [https://ai-safety-papers.quantifieduncertainty.org/ AI Safety Papers database] - Search and interface for the [https://www.lesswrong.com/posts/4DegbDJJiMX2b3EKm/tai-safety-bibliographic-database TAI Safety Bibliography]\n* [https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals 
Course]\n* [https://www.alignmentforum.org/tags/ Alignment Forum] tags\n* [https://rohinshah.com/alignment-newsletter/ The Alignment Newsletter] (and [https://docs.google.com/spreadsheets/d/1PwWbWZ6FPqAgZWOoOcXM8N_tUCuxpEyMbN1NYYC02aM/edit#gid꞊0 database sheet])\n* Chapters of [https://publicism.info/philosophy/superintelligence/ Bostrom's Superintelligence online] - [https://www.nickbostrom.com/views/superintelligence.pdf Initial paper which Superintelligence grew from]\n* [https://arbital.greaterwrong.com/explore/ai_alignment/ AI Alignment pages on Arbital]\n* Much more on [https://www.aisafetysupport.org/resources/lots-of-links AI Safety Support] (feel free to integrate useful things from there to here)\n* [https://vkrakovna.wordpress.com/ai-safety-resources/ Vika's resources list]\n* [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 AI safety technical courses, reading lists, and curriculums]\n* [https://aisafety.wordpress.com/ AI Safety Intro blog]\n* [https://stampy.ai/wiki/Canonical_answers Stampy's canonical answers list] - This includes updated versions of various [[Imported FAQs┊FAQs imported with permission]]:\n** [https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq Scott Alexander's Superintelligence FAQ]\n** [https://futureoflife.org/ai-faqs/ FLI's FAQ]\n** [https://intelligence.org/faq/ MIRI's FAQ]\n** [https://intelligence.org/ie-faq/ MIRI's Intelligence Explosion FAQ]\n** [https://rohinshah.com/faq-career-advice-for-ai-alignment-researchers/ Advice for AI Alignment Researchers]\n** [https://www.reddit.com/r/ControlProblem/wiki/faq r/ControlProblem's FAQ]\n** [https://markxu.com/ai-safety-faqs Mark Xu's FAQ]\n** [https://aisafety.wordpress.com/ AI safety blog] - Not yet imported."], "entry": "Plex's Answer to What are some good resources on AI alignment?", "id": "bdbca9e6f12fbac49afc5892497bdcc0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Might an \"intelligence explosion\" never occur?", "authors": "n/a", "date_published": "n/a", "text": "Question: Might an \"intelligence explosion\" never occur?\n\nAnswer: [http://www.amazon.com/dp// Dreyfus] and [http://www.amazon.com/dp// Penrose] have argued that human cognitive abilities can't be emulated by a computational machine. [http://citeseerx.ist.psu.edu/viewdoc/download?doi꞊10.1.1.120.749&rep꞊rep1&type꞊pdf Searle] and [http://citeseerx.ist.psu.edu/viewdoc/download?doi꞊10.1.1.4.5828&rep꞊rep1&type꞊pdf Block] argue that certain kinds of machines cannot have a mind (consciousness, intentionality, etc.). But these objections [http://consc.net/papers/singularity.pdf need not concern] those who predict an intelligence explosion.\n\nWe can reply to Dreyfus and Penrose by noting that an intelligence explosion does not require an AI to be a classical computational system. And we can reply to Searle and Block by noting that an intelligence explosion does not depend on machines having consciousness or other properties of 'mind', only that it be able to solve problems better than humans can in a wide variety of unpredictable environments. 
As Edsger Dijkstra once said, the question of whether a machine can 'really' think is \"no more interesting than the question of whether a submarine can swim.\"\n\n[http://sethbaum.com/ac/2011_AI-Experts.pdf Others] who are pessimistic about an intelligence explosion occurring within the next few centuries don't have a specific objection but instead think there are hidden obstacles that will reveal themselves and slow or halt progress toward machine superintelligence.\n\nFinally, a global catastrophe like nuclear war or a large asteroid impact could so damage human civilization that the intelligence explosion never occurs. Or, [https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198570509./isbn-9780198570509-book-part-29 a stable and global totalitarianism] could prevent the technological development required for an intelligence explosion to occur.", "question": "Might an \"intelligence explosion\" never occur?", "answer": ["[http://www.amazon.com/dp/0060110821/ Dreyfus] and [http://www.amazon.com/dp/0195106466/ Penrose] have argued that human cognitive abilities can’t be emulated by a computational machine. [http://citeseerx.ist.psu.edu/viewdoc/download?doi꞊10.1.1.120.749&rep꞊rep1&type꞊pdf Searle] and [http://citeseerx.ist.psu.edu/viewdoc/download?doi꞊10.1.1.4.5828&rep꞊rep1&type꞊pdf Block] argue that certain kinds of machines cannot have a mind (consciousness, intentionality, etc.). But these objections [http://consc.net/papers/singularity.pdf need not concern] those who predict an intelligence explosion.\n\nWe can reply to Dreyfus and Penrose by noting that an intelligence explosion does not require an AI to be a classical computational system. And we can reply to Searle and Block by noting that an intelligence explosion does not depend on machines having consciousness or other properties of ‘mind’, only that it be able to solve problems better than humans can in a wide variety of unpredictable environments. As Edsger Dijkstra once said, the question of whether a machine can ‘really’ think is “no more interesting than the question of whether a submarine can swim.”\n\n[http://sethbaum.com/ac/2011_AI-Experts.pdf Others] who are pessimistic about an intelligence explosion occurring within the next few centuries don’t have a specific objection but instead think there are hidden obstacles that will reveal themselves and slow or halt progress toward machine superintelligence.\n\nFinally, a global catastrophe like nuclear war or a large asteroid impact could so damage human civilization that the intelligence explosion never occurs. 
Or, [https://oxford.universitypressscholarship.com/view/10.1093/oso/9780198570509.001.0001/isbn-9780198570509-book-part-29 a stable and global totalitarianism] could prevent the technological development required for an intelligence explosion to occur."], "entry": "Answer to Might an \"intelligence explosion\" never occur?", "id": "8893ef70839734ea2b601cbc5b29ee6a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?\n\nAnswer: The near term and long term aspects of AI safety are both very important to work on. Research into superintelligence is an important part of the open letter, but the actual concern is very different from the Terminator-like scenarios that most media outlets round off this issue to. A much more likely scenario is a superintelligent system with neutral or benevolent goals that is misspecified in a dangerous way. Robust design of superintelligent systems is a complex interdisciplinary research challenge that will likely take decades, so it is very important to begin the research now, and a large part of the purpose of our research program is to make that happen. That said, the alarmist media framing of the issues is hardly useful for making progress in either the near term or long term domain.", "question": "Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?", "answer": ["The near term and long term aspects of AI safety are both very important to work on. Research into superintelligence is an important part of the open letter, but the actual concern is very different from the Terminator-like scenarios that most media outlets round off this issue to. A much more likely scenario is a superintelligent system with neutral or benevolent goals that is misspecified in a dangerous way. Robust design of superintelligent systems is a complex interdisciplinary research challenge that will likely take decades, so it is very important to begin the research now, and a large part of the purpose of our research program is to make that happen. That said, the alarmist media framing of the issues is hardly useful for making progress in either the near term or long term domain."], "entry": "Answer to Is the focus on the existential threat of superintelligent AI diverting too much attention from more pressing debates about AI in surveillance and the battlefield, and its potential effects on the economy?", "id": "2d483223aa7ef446ae05a1b77f261ec7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How is \"intelligence\" defined?", "authors": "n/a", "date_published": "n/a", "text": "Question: How is \"intelligence\" defined?\n\nAnswer:

After reviewing extensive literature on the subject, Legg and Hutter(ref)

http://arxiv.org/pdf/0712.3329.pdf

(/ref) summarize the many possible valuable definitions in the informal statement \"Intelligence measures an agent's ability to achieve goals in a wide range of environments.\" They then show that this definition can be mathematically formalized given reasonable mathematical definitions of its terms. They use [https://lessestwrong.com/tag/solomonoff-induction Solomonoff induction] - a formalization of [https://lessestwrong.com/tag/occam-s-razor Occam's razor] - to construct a [https://lessestwrong.com/tag/aixi universal artificial intelligence] with an embedded [https://lessestwrong.com/tag/utility-functions utility function] which assigns less [https://lessestwrong.com/tag/expected-utility utility] to those actions based on theories with higher [https://wiki.lesswrong.com/wiki/Kolmogorov_complexity complexity]. They argue that this final formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.

We can relate Legg and Hutter's definition with the concept of [https://lessestwrong.com/tag/optimization optimization]. According to [https://lessestwrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky] intelligence is [https://lessestwrong.com/lw/vb/efficient_crossdomain_optimization/ efficient cross-domain optimization]. It measures an agent's capacity for efficient cross-domain optimization of the world according to the agent's preferences.(ref)

http://intelligence.org/files/IE-EI.pdf(/ref) Optimization measures not only the capacity to achieve the desired goal but also is inversely proportional to the amount of resources used. It's the ability to steer the future so it hits that small target of desired outcomes in the large space of all possible outcomes, using fewer resources as possible. For example, when Deep Blue defeated Kasparov, it was able to hit that small possible outcome where it made the right order of moves given Kasparov's moves from the very large set of all possible moves. In that domain, it was more optimal than Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence, he is considered more intelligent.

One could cast this definition in a possible-world vocabulary; intelligence is:

  1. the ability to precisely realize one of the members of a small set of possible future worlds that have a higher preference over the vast set of all other possible worlds with lower preference; while
  2. using fewer resources than the alternative paths for getting there; and in the
  3. most diverse domains possible.

The more worlds there are with a higher preference than the one realized by the agent, the less intelligent the agent is. The more worlds there are with a lower preference than the one realized by the agent, the more intelligent the agent is. (Or: the smaller the set of worlds at least as preferable as the one realized, the more intelligent the agent is.) The fewer paths there are for realizing the desired world using fewer resources than those spent by the agent, the more intelligent the agent is. And finally, the more domains in which the agent can be efficiently optimal, the more intelligent it is. Restating it, the intelligence of an agent is directly proportional to:

  • (a) the number of worlds with lower preference than the one realized,
  • (b) how much smaller the set of paths more efficient than the one taken by the agent is, and
  • (c) how much wider the domains are in which the agent can effectively realize its preferences;

and it is, accordingly, inversely proportional to:

  • (d) the number of worlds with higher preference than the one realized,
  • (e) how much bigger the set of paths more efficient than the one taken by the agent is, and
  • (f) how much narrower the domains are in which the agent can efficiently realize its preferences.

This definition avoids several problems common in many other definitions; in particular, it avoids [https://lessestwrong.com/tag/anthropomorphism anthropomorphizing] intelligence.

See Also

  • [https://lessestwrong.com/tag/optimization Optimization process]
  • [https://lessestwrong.com/tag/decision-theory Decision theory]
  • [https://lessestwrong.com/tag/rationality Rationality]
  • [http://arxiv.org/pdf/0712.3329.pdf Legg and Hutter paper \"Universal Intelligence: A Definition of Machine Intelligence\"]
", "question": "How is \"intelligence\" defined?", "answer": ["

After reviewing extensive literature on the subject, Legg and Hutter(ref)

http://arxiv.org/pdf/0712.3329.pdf

(/ref) summarize the many possible valuable definitions in the informal statement “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.” They then show that this definition can be mathematically formalized given reasonable mathematical definitions of its terms. They use [https://lessestwrong.com/tag/solomonoff-induction Solomonoff induction] - a formalization of [https://lessestwrong.com/tag/occam-s-razor Occam's razor] - to construct a [https://lessestwrong.com/tag/aixi universal artificial intelligence] with an embedded [https://lessestwrong.com/tag/utility-functions utility function] which assigns less [https://lessestwrong.com/tag/expected-utility utility] to those actions based on theories with higher [https://wiki.lesswrong.com/wiki/Kolmogorov_complexity complexity]. They argue that this final formalization is a valid, meaningful, informative, general, unbiased, fundamental, objective, universal and practical definition of intelligence.

We can relate Legg and Hutter's definition to the concept of [https://lesswrong.com/tag/optimization optimization]. According to [https://lesswrong.com/tag/eliezer-yudkowsky Eliezer Yudkowsky], intelligence is [https://lesswrong.com/lw/vb/efficient_crossdomain_optimization/ efficient cross-domain optimization]: it measures an agent's capacity for efficient cross-domain optimization of the world according to the agent's preferences.(ref)http://intelligence.org/files/IE-EI.pdf(/ref) Optimization measures not only the capacity to achieve the desired goal; it is also inversely proportional to the amount of resources used. It is the ability to steer the future so it hits the small target of desired outcomes in the large space of all possible outcomes, using as few resources as possible. For example, when Deep Blue defeated Kasparov, it was able to hit the small set of outcomes in which it made the right sequence of moves, given Kasparov's moves, out of the very large set of all possible move sequences. In that domain, it was more optimal than Kasparov. However, Kasparov would have defeated Deep Blue in almost any other relevant domain, and hence he is considered more intelligent.
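
One way to put a number on “hitting a small target in a large space of outcomes” is to count bits of optimization, in the spirit of Yudkowsky's framing. As a rough sketch (one possible formalization, assuming a finite outcome space W ranked by the agent's preferences and a realized outcome w*):

\[
\mathrm{OP}(w^{*}) \;=\; -\log_2 \frac{\lvert \{\, w \in W : w \succeq w^{*} \,\} \rvert}{\lvert W \rvert}
\]

Each additional bit means the realized outcome lies in a preference-ranked slice of the outcome space half as large. In these terms, Deep Blue exerts many bits of optimization over chess positions but almost none elsewhere, which is why it counts as narrow rather than general intelligence.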

One could cast this definition in possible-world vocabulary. Intelligence is:

  1. the ability to precisely realize one member of a small set of possible future worlds that are preferred over the vast set of all other possible worlds with lower preference; while
  2. using fewer resources than the alternative paths for getting there; and in the
  3. most diverse domains possible.

The more worlds there are with a higher preference than the one realized by the agent, the less intelligent the agent is. The more worlds there are with a lower preference than the one realized, the more intelligent it is. (Or: the smaller the set of worlds at least as preferable as the one realized, the more intelligent the agent is.) The fewer paths there are that would realize the desired world using fewer resources than those the agent spent, the more intelligent it is. And finally, the more domains in which the agent can be efficiently optimal, the more intelligent it is. Restating this, the intelligence of an agent is directly proportional to:

  • (a) the number of worlds with lower preference than the one realized,
  • (b) how much smaller the set of paths more efficient than the one taken by the agent is, and
  • (c) how wide the domains in which the agent can effectively realize its preferences are;

and it is, accordingly, inversely proportional to:

  • (d) the number of worlds with higher preference than the one realized,
  • (e) how much bigger the set of paths more efficient than the one taken by the agent is, and
  • (f) how narrow the domains in which the agent can efficiently realize its preferences are.

This definition avoids several problems common to many other definitions; in particular, it avoids [https://lesswrong.com/tag/anthropomorphism anthropomorphizing] intelligence.
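
As a toy illustration of clauses (a) and (d) above (hypothetical Python with a made-up preference function and a uniform count over a finite set of worlds, not code from the cited papers), one can score how strongly the world an agent realizes narrows down the preference ordering:

import math

def narrowing_in_bits(realized_world, possible_worlds, preference):
    # Toy version of clauses (a)/(d): the fewer worlds that are at least as
    # preferred as the realized one, the more bits of narrowing the agent achieved.
    at_least_as_good = sum(1 for w in possible_worlds
                           if preference(w) >= preference(realized_world))
    return -math.log2(at_least_as_good / len(possible_worlds))

# Example: 1000 worlds ranked by preference; realizing world 990 leaves only the
# top 10 worlds at least as preferred, i.e. roughly 6.6 bits of narrowing.
worlds = list(range(1000))
print(narrowing_in_bits(990, worlds, preference=lambda w: w))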

See Also

  • [https://lesswrong.com/tag/optimization Optimization process]
  • [https://lesswrong.com/tag/decision-theory Decision theory]
  • [https://lesswrong.com/tag/rationality Rationality]
  • [http://arxiv.org/pdf/0712.3329.pdf Legg and Hutter paper “Universal Intelligence: A Definition of Machine Intelligence”]
"], "entry": "Answer to How is \"intelligence\" defined?", "id": "ee39c22fee80c03e310740bd97294dba"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Aren't robots the real problem? How can AI cause harm if it has no ability directly manipulate the physical world?", "authors": "n/a", "date_published": "n/a", "text": "Question: Aren't robots the real problem? How can AI cause harm if it has no ability directly manipulate the physical world?\n\nAnswer: What's new and potentially risky is not the ability to build hinges, motors, etc., but the ability to build intelligence. A human-level AI could make money on financial markets, make scientific inventions, hack computer systems, manipulate or pay humans to do its bidding – all in pursuit of the goals it was initially programmed to achieve. None of that requires a physical robotic body, merely an internet connection.", "question": "Aren't robots the real problem? How can AI cause harm if it has no ability directly manipulate the physical world?", "answer": ["What’s new and potentially risky is not the ability to build hinges, motors, etc., but the ability to build intelligence. A human-level AI could make money on financial markets, make scientific inventions, hack computer systems, manipulate or pay humans to do its bidding – all in pursuit of the goals it was initially programmed to achieve. None of that requires a physical robotic body, merely an internet connection."], "entry": "Answer to Aren't robots the real problem? How can AI cause harm if it has no ability to directly manipulate the physical world?", "id": "85556dbc97798001bfc3bc6c670b2dca"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Are Google, OpenAI, etc. aware of the risk?", "authors": "n/a", "date_published": "n/a", "text": "Question: Are Google, OpenAI, etc. aware of the risk?\n\nAnswer: The major AI companies are thinking about this. OpenAI was founded specifically with the intention to counter risks from superintelligence, many people at Google, [https://medium.com/@deepmindsafetyresearch DeepMind], and other organizations are convinced by the arguments and few genuinely oppose work in the field (though some claim it's premature). For example, the paper [https://www.youtube.com/watch?v꞊AjyM-f8rDpg Concrete Problems in AI Safety] was a collaboration between researchers at Google Brain, Stanford, Berkeley, and OpenAI.\n\nHowever, the vast majority of the effort these organizations put forwards is towards capabilities research, rather than safety.", "question": "Are Google, OpenAI, etc. aware of the risk?", "answer": ["The major AI companies are thinking about this. OpenAI was founded specifically with the intention to counter risks from superintelligence, many people at Google, [https://medium.com/@deepmindsafetyresearch DeepMind], and other organizations are convinced by the arguments and few genuinely oppose work in the field (though some claim it’s premature). For example, the paper [https://www.youtube.com/watch?v꞊AjyM-f8rDpg Concrete Problems in AI Safety] was a collaboration between researchers at Google Brain, Stanford, Berkeley, and OpenAI.\n\nHowever, the vast majority of the effort these organizations put forwards is towards capabilities research, rather than safety."], "entry": "Plex's Answer to Are Google, OpenAI, etc. 
aware of the risk?", "id": "af4bacff2d0e5fa730cf1343872202be"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is Stampy's copyright?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is Stampy's copyright?\n\nAnswer: * All content produced on this wiki is released under the [https://creativecommons.org/licenses/by-sa/4.0/legalcode CC-BY-SA 4.0 license]. Exceptions for unattributed use may be granted by admins, contact [[User:plex┊plex]] for inquiries.\n* Questions from YouTube or other sources are reproduced with the intent of fair use, as derivative and educational material.\n* Source code of https://ui.stampy.ai/ is released under [https://github.com/Aprillion/stampy-ui/blob/master/LICENSE MIT license]\n* Logo and visual design copyright is owned by Rob Miles, all rights reserved.\n\n[[Category:Meta]]", "question": "What is Stampy's copyright?", "answer": ["* All content produced on this wiki is released under the [https://creativecommons.org/licenses/by-sa/4.0/legalcode CC-BY-SA 4.0 license]. Exceptions for unattributed use may be granted by admins, contact [[User:plex┊plex]] for inquiries.\n* Questions from YouTube or other sources are reproduced with the intent of fair use, as derivative and educational material.\n* Source code of https://ui.stampy.ai/ is released under [https://github.com/Aprillion/stampy-ui/blob/master/LICENSE MIT license]\n* Logo and visual design copyright is owned by Rob Miles, all rights reserved.\n\n[[Category:Meta]]"], "entry": "Plex's Answer to What is Stampy's copyright?", "id": "e5f77736708657b00d2f159d847bb87e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why do we expect that a superintelligence would closely approximate a utility maximizer?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why do we expect that a superintelligence would closely approximate a utility maximizer?\n\nAnswer: AI subsystems or regions in gradient descent space that more closely approximate utility maximizers are more stable, and more capable, than those that are less like utility maximizers. Having [https://www.lesswrong.com/posts/D5AzsRbRxZeqGuAZ4/why-agents-are-powerful more agency] is a convergent instrument goal and a stable attractor which the random walk of updates and experiences will eventually stumble into.\n\nThe stability is because utility maximizer-like systems which have control over their development would lose utility if they allowed themselves to develop into non-utility maximizers, so they tend to use their available optimization power to avoid that change (a special case of [https://www.lesswrong.com/posts/4H8N3fEfXQmzxSaRo/upcoming-stability-of-values goal stability]). The capability is because non-utility maximizers are exploitable, and because agency is a general trick which applies to many domains, so might well arise naturally when training on some tasks.\n\nHumans and systems made of humans (e.g. organizations, governments) generally have neither the introspective ability nor self-modification tools needed to become reflectively stable, but we can reasonably predict that in the long run highly capable systems will have these properties. 
They can then fix in and optimize for their values.", "question": "Why do we expect that a superintelligence would closely approximate a utility maximizer?", "answer": ["AI subsystems or regions in gradient descent space that more closely approximate utility maximizers are more stable, and more capable, than those that are less like utility maximizers. Having [https://www.lesswrong.com/posts/D5AzsRbRxZeqGuAZ4/why-agents-are-powerful more agency] is a convergent instrument goal and a stable attractor which the random walk of updates and experiences will eventually stumble into.\n\nThe stability is because utility maximizer-like systems which have control over their development would lose utility if they allowed themselves to develop into non-utility maximizers, so they tend to use their available optimization power to avoid that change (a special case of [https://www.lesswrong.com/posts/4H8N3fEfXQmzxSaRo/upcoming-stability-of-values goal stability]). The capability is because non-utility maximizers are exploitable, and because agency is a general trick which applies to many domains, so might well arise naturally when training on some tasks.\n\nHumans and systems made of humans (e.g. organizations, governments) generally have neither the introspective ability nor self-modification tools needed to become reflectively stable, but we can reasonably predict that in the long run highly capable systems will have these properties. They can then fix in and optimize for their values."], "entry": "Plex's Answer to Why do we expect that a superintelligence would closely approximate a utility maximizer?", "id": "d5581fe0d03c918a1e15b5790716e751"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some of the most impressive recent advances in AI capabilities?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some of the most impressive recent advances in AI capabilities?\n\nAnswer: GPT-3 showed that transformers are capable of a vast array of natural language tasks, [https://copilot.github.com/ codex/copilot] extended this into programming. One demonstrations of GPT-3 is [https://www.lesswrong.com/posts/oBPPFrMJ2aBK6a6sD/simulated-elon-musk-lives-in-a-simulation Simulated Elon Musk lives in a simulation]. Important to note that there are several much better language models, but they are not publicly available.\n\n[https://openai.com/blog/dall-e/ DALL-E] and [https://openai.com/dall-e-2/ DALL-E 2] are among the most visually spectacular.\n\n[https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules MuZero], which learned Go, Chess, and many Atari games without any directly coded info about those environments. The graphic there explains it, this seems crucial for being able to do RL in novel environments. We have systems which we can drop into a wide variety of games and they just learn how to play. The same algorithm was used in [https://youtu.be/j0z4FweCy4M?t꞊4918 Tesla's self-driving cars to do complex route finding]. These things are general.\n\n[https://www.deepmind.com/blog/generally-capable-agents-emerge-from-open-ended-play Generally capable agents emerge from open-ended play] - Diverse procedurally generated environments provide vast amounts of training data for AIs to learn generally applicable skills. 
[https://www.deepmind.com/publications/creating-interactive-agents-with-imitation-learning Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning] shows how these kind of systems can be trained to follow instructions in natural language.\n\n[https://www.deepmind.com/publications/a-generalist-agent GATO] shows you can distill 600+ individually trained tasks into one network, so we're not limited by the tasks being fragmented.", "question": "What are some of the most impressive recent advances in AI capabilities?", "answer": ["GPT-3 showed that transformers are capable of a vast array of natural language tasks, [https://copilot.github.com/ codex/copilot] extended this into programming. One demonstrations of GPT-3 is [https://www.lesswrong.com/posts/oBPPFrMJ2aBK6a6sD/simulated-elon-musk-lives-in-a-simulation Simulated Elon Musk lives in a simulation]. Important to note that there are several much better language models, but they are not publicly available.\n\n[https://openai.com/blog/dall-e/ DALL-E] and [https://openai.com/dall-e-2/ DALL-E 2] are among the most visually spectacular.\n\n[https://www.deepmind.com/blog/muzero-mastering-go-chess-shogi-and-atari-without-rules MuZero], which learned Go, Chess, and many Atari games without any directly coded info about those environments. The graphic there explains it, this seems crucial for being able to do RL in novel environments. We have systems which we can drop into a wide variety of games and they just learn how to play. The same algorithm was used in [https://youtu.be/j0z4FweCy4M?t꞊4918 Tesla's self-driving cars to do complex route finding]. These things are general.\n\n[https://www.deepmind.com/blog/generally-capable-agents-emerge-from-open-ended-play Generally capable agents emerge from open-ended play] - Diverse procedurally generated environments provide vast amounts of training data for AIs to learn generally applicable skills. 
[https://www.deepmind.com/publications/creating-interactive-agents-with-imitation-learning Creating Multimodal Interactive Agents with Imitation and Self-Supervised Learning] shows how these kind of systems can be trained to follow instructions in natural language.\n\n[https://www.deepmind.com/publications/a-generalist-agent GATO] shows you can distill 600+ individually trained tasks into one network, so we're not limited by the tasks being fragmented."], "entry": "Plex's Answer to What are some of the most impressive recent advances in AI capabilities?", "id": "35dd8a1598648f14c90dbb12f9d0d171"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are some specific open tasks on Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are some specific open tasks on Stampy?\n\nAnswer: Other than the usual fare of writing and processing and organizing questions and answers, here are some specific open tasks:\n\n* Porting over some of [https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why#1_4_What_exactly_is__AGI__ Steve Byrnes's FAQ on alignment]\n* Porting over content from [https://www.lesswrong.com/posts/gdyfJE3noRFSs373q/resources-i-send-to-ai-researchers-about-ai-safety Vael Gates's post]\n* Porting over QA pairs from https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread\n* Porting over some of https://aisafety.wordpress.com/\n* Making sure we cover all of https://forum.effectivealtruism.org/posts/8JazqnCNrkJtK2Bx4/why-eas-are-skeptical-about-ai-safety#Recursive_self_improvement_seems_implausible and the responses", "question": "What are some specific open tasks on Stampy?", "answer": ["Other than the usual fare of writing and processing and organizing questions and answers, here are some specific open tasks:\n\n* Porting over some of [https://www.lesswrong.com/posts/4basF9w9jaPZpoC8R/intro-to-brain-like-agi-safety-1-what-s-the-problem-and-why#1_4_What_exactly_is__AGI__ Steve Byrnes's FAQ on alignment]\n* Porting over content from [https://www.lesswrong.com/posts/gdyfJE3noRFSs373q/resources-i-send-to-ai-researchers-about-ai-safety Vael Gates's post]\n* Porting over QA pairs from https://www.lesswrong.com/posts/8c8AZq5hgifmnHKSN/agi-safety-faq-all-dumb-questions-allowed-thread\n* Porting over some of https://aisafety.wordpress.com/\n* Making sure we cover all of https://forum.effectivealtruism.org/posts/8JazqnCNrkJtK2Bx4/why-eas-are-skeptical-about-ai-safety#Recursive_self_improvement_seems_implausible and the responses"], "entry": "Plex's Answer to What are some specific open tasks on Stampy?", "id": "d5e07215e8617a4dd72d8870b5cf7123"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How do I add content from LessWrong / Effective Altruism Forum tag-wikis Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: How do I add content from LessWrong / Effective Altruism Forum tag-wikis Stampy?\n\nAnswer: You can include a live-updating version of many definitions from LW using the syntax on [[Template:TagDesc]] in the Answer field and [[Template:TagDescBrief]] on the Brief Answer field. 
Similarly, calling [[Template:TagDescEAF]] and [[Template:TagDescEAFBrief]] will pull from the EAF tag wiki.\n\nWhen available this should be used as it reduces the duplication of effort and directs all editors to improving a single high quality source.", "question": "How do I add content from LessWrong / Effective Altruism Forum tag-wikis Stampy?", "answer": ["You can include a live-updating version of many definitions from LW using the syntax on [[Template:TagDesc]] in the Answer field and [[Template:TagDescBrief]] on the Brief Answer field. Similarly, calling [[Template:TagDescEAF]] and [[Template:TagDescEAFBrief]] will pull from the EAF tag wiki.\n\nWhen available this should be used as it reduces the duplication of effort and directs all editors to improving a single high quality source."], "entry": "Plex's Answer to How do I add content from LessWrong / Effective Altruism Forum tag-wikis to Stampy?", "id": "76a04614d590201841da585bea2d672e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How fast will AI takeoff be?", "authors": "n/a", "date_published": "n/a", "text": "Question: How fast will AI takeoff be?\n\nAnswer: There is significant controversy on how quickly AI will grow into a superintelligence. The [https://www.alignmentforum.org/tag/ai-takeoff Alignment Forum tag] has many views on how things might unfold, where the probabilities of a soft (happening over years/decades) takeoff and a hard (happening in months, or less) takeoff are discussed.", "question": "How fast will AI takeoff be?", "answer": ["There is significant controversy on how quickly AI will grow into a superintelligence. The [https://www.alignmentforum.org/tag/ai-takeoff Alignment Forum tag] has many views on how things might unfold, where the probabilities of a soft (happening over years/decades) takeoff and a hard (happening in months, or less) takeoff are discussed."], "entry": "Helenator's Answer to How fast will AI takeoff be?", "id": "3bbdc36034b9ec4133877c0153e71ac1"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Will we ever build a superintelligence?", "authors": "n/a", "date_published": "n/a", "text": "Question: Will we ever build a superintelligence?\n\nAnswer: Humanity hasn't yet built a superintelligence, and we might not be able to without significantly more knowledge and computational resources. There could be an existential catastrophe that prevents us from ever building one. For the rest of the answer let's assume no such event stops technological progress.\n\nWith that out of the way: there is no known good theoretical reason we can't build it at some point in the future; the majority of AI research is geared towards making more capable AI systems; and a significant chunk of top-level AI research attempts to make more generally capable AI systems. There is a clear economic incentive to develop more and more intelligent machines and currently billions of dollars of funding are being deployed for advancing AI capabilities.See more...\n\nWe consider ourselves to be generally intelligent (i.e. capable of learning and adapting ourselves to a very wide range of tasks and environments), but the human brain almost certainly isn't the most efficient way to solve problems. One hint is the existence of AI systems with superhuman capabilities at narrow tasks. 
Not only superhuman performance (as in, [https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol AlphaGo beating the Go world champion]) but superhuman ''speed'' and ''precision'' (as in, [https://www.youtube.com/watch?v꞊j4RWJTs0QCk industrial sorting machines]). There is no known discontinuity between tasks, something ''special'' and ''unique'' about human brains that unlocks certain capabilities which cannot be implemented in machines in principle. Therefore we would expect AI to surpass human performance on all tasks as progress continues.\n\nIn addition, several research groups (DeepMind being one of the most [https://deepmind.com/about overt about this]) explicitly aim for generally capable systems. AI as a field is [https://aiindex.stanford.edu/vibrancy/ growing], year after year. Critical voices about AI progress usually argue against a lack of precautions around the impact of AI, or against general AI happening very soon, not against it happening ''at all''.\n\nA satire of arguments against the possibility of superintelligence can be found [https://arxiv.org/abs/1703.10987 here].", "question": "Will we ever build a superintelligence?", "answer": ["Humanity hasn't yet built a superintelligence, and we might not be able to without significantly more knowledge and computational resources. There could be an existential catastrophe that prevents us from ever building one. For the rest of the answer let's assume no such event stops technological progress.\n\nWith that out of the way: there is no known good theoretical reason we can't build it at some point in the future; the majority of AI research is geared towards making more capable AI systems; and a significant chunk of top-level AI research attempts to make more generally capable AI systems. There is a clear economic incentive to develop more and more intelligent machines and currently billions of dollars of funding are being deployed for advancing AI capabilities.See more...\n\nWe consider ourselves to be generally intelligent (i.e. capable of learning and adapting ourselves to a very wide range of tasks and environments), but the human brain almost certainly isn't the most efficient way to solve problems. One hint is the existence of AI systems with superhuman capabilities at narrow tasks. Not only superhuman performance (as in, [https://en.wikipedia.org/wiki/AlphaGo_versus_Lee_Sedol AlphaGo beating the Go world champion]) but superhuman ''speed'' and ''precision'' (as in, [https://www.youtube.com/watch?v꞊j4RWJTs0QCk industrial sorting machines]). There is no known discontinuity between tasks, something ''special'' and ''unique'' about human brains that unlocks certain capabilities which cannot be implemented in machines in principle. Therefore we would expect AI to surpass human performance on all tasks as progress continues.\n\nIn addition, several research groups (DeepMind being one of the most [https://deepmind.com/about overt about this]) explicitly aim for generally capable systems. AI as a field is [https://aiindex.stanford.edu/vibrancy/ growing], year after year. 
Critical voices about AI progress usually argue against a lack of precautions around the impact of AI, or against general AI happening very soon, not against it happening ''at all''.\n\nA satire of arguments against the possibility of superintelligence can be found [https://arxiv.org/abs/1703.10987 here]."], "entry": "Gyrodiot's Answer to Will we ever build a superintelligence?", "id": "59cc15db5c3e4c38927837f7ab8a8cf7"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What does Elon Musk think about AI safety?", "authors": "n/a", "date_published": "n/a", "text": "Question: What does Elon Musk think about AI safety?\n\nAnswer: Elon Musk has expressed his concerns about AI safety many times and founded OpenAI in an attempt to make safe AI more widely distributed (as opposed to allowing a [https://www.nickbostrom.com/fut/singleton.html singleton], which he fears would be misused or dangerously unaligned). In a [https://www.youtube.com/watch?v꞊smK9dgdTl40 YouTube video] from November 2019 Musk stated that there's a lack of investment in AI safety and that there should be a government agency to reduce risk to the public from AI.", "question": "What does Elon Musk think about AI safety?", "answer": ["Elon Musk has expressed his concerns about AI safety many times and founded OpenAI in an attempt to make safe AI more widely distributed (as opposed to allowing a [https://www.nickbostrom.com/fut/singleton.html singleton], which he fears would be misused or dangerously unaligned). In a [https://www.youtube.com/watch?v꞊smK9dgdTl40 YouTube video] from November 2019 Musk stated that there's a lack of investment in AI safety and that there should be a government agency to reduce risk to the public from AI."], "entry": "Linnea's Answer to What does Elon Musk think about AI safety?", "id": "9a8a3cd32603f584988319c4c8cba8ba"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might contributing Stampy be worth my time?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might contributing Stampy be worth my time?\n\nAnswer: If you're looking for a shovel ready and genuinely useful task to further AI alignment without necessarily committing a large amount of time or needing deep specialist knowledge, we think Stampy is a great option!\n\nCreating a high-quality single point of access where people can be onboarded and find resources around the alignment ecosystem seems likely to be high-impact. So, what makes us the best option?\n\n# Unlike all other entry points to learning about alignment, we doge the trade-off between comprehensiveness and being overwhelmingly long with interactivity (tab explosion in one page!) and semantic search. Single document FAQs can't do this, so we built a system which can.\n# We have the ability to point large numbers of viewers towards Stampy once we have the content, thanks to Rob Miles and his 100k+ subscribers, so this won't remain an unnoticed curiosity.\n# Unlike most other entry points, we are open for volunteers to help improve the content.\n::The main notable one which does is the [https://www.lesswrong.com/tag/ai LessWrong tag wiki], which hosts descriptions of core concepts. 
We strongly believe in not needlessly duplicating effort, so we're pulling live content from that for the descriptions on our own [[tags┊tag]] pages, and directing the edit links on those to the edit page on the LessWrong wiki.\n\nYou might also consider improving [https://en.wikipedia.org/wiki/Category:Existential_risk_from_artificial_general_intelligence Wikipedia's alignment coverage] or the LessWrong wiki, but we think Stampy has the most low-hanging fruit right now. Additionally, contributing to Stampy means being part of a community of co-learners who provide mentorship and encouragement to join the effort to give humanity a bright future. If you're an established researcher or have high-value things to do elsewhere in the ecosystem it might not be optimal to put much time into Stampy, but if you're looking for a way to get more involved it might well be.", "question": "Why might contributing Stampy be worth my time?", "answer": ["If you're looking for a shovel ready and genuinely useful task to further AI alignment without necessarily committing a large amount of time or needing deep specialist knowledge, we think Stampy is a great option!\n\nCreating a high-quality single point of access where people can be onboarded and find resources around the alignment ecosystem seems likely to be high-impact. So, what makes us the best option?\n\n# Unlike all other entry points to learning about alignment, we doge the trade-off between comprehensiveness and being overwhelmingly long with interactivity (tab explosion in one page!) and semantic search. Single document FAQs can't do this, so we built a system which can.\n# We have the ability to point large numbers of viewers towards Stampy once we have the content, thanks to Rob Miles and his 100k+ subscribers, so this won't remain an unnoticed curiosity.\n# Unlike most other entry points, we are open for volunteers to help improve the content.\n::The main notable one which does is the [https://www.lesswrong.com/tag/ai LessWrong tag wiki], which hosts descriptions of core concepts. We strongly believe in not needlessly duplicating effort, so we're pulling live content from that for the descriptions on our own [[tags┊tag]] pages, and directing the edit links on those to the edit page on the LessWrong wiki.\n\nYou might also consider improving [https://en.wikipedia.org/wiki/Category:Existential_risk_from_artificial_general_intelligence Wikipedia's alignment coverage] or the LessWrong wiki, but we think Stampy has the most low-hanging fruit right now. Additionally, contributing to Stampy means being part of a community of co-learners who provide mentorship and encouragement to join the effort to give humanity a bright future. If you're an established researcher or have high-value things to do elsewhere in the ecosystem it might not be optimal to put much time into Stampy, but if you're looking for a way to get more involved it might well be."], "entry": "Plex's Answer to Why might contributing to Stampy be worth my time?", "id": "f79cadf6e93256e7abc911115908ea0a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How does AI taking things literally contribute alignment being hard?", "authors": "n/a", "date_published": "n/a", "text": "Question: How does AI taking things literally contribute alignment being hard?\n\nAnswer: Let's say that you're the French government a while back. You notice that one of your colonies has too many rats, which is causing economic damage. 
You have basic knowledge of economics and incentives, so you decide to incentivize the local population to kill rats by offering to buy rat tails at one dollar apiece.\n\nInitially, this works out and your rat problem goes down. But then, an enterprising colony member has the brilliant idea of making a rat farm. This person sells you hundreds of rat tails, costing you hundreds of dollars, but they're not contributing to solving the rat problem.\n\nSoon other people start making their own rat farms and you're wasting thousands of dollars buying useless rat tails. You call off the project and stop paying for rat tails. This causes all the people with rat farms to shutdown their farms and release a bunch of rats. Now your colony has an even bigger rat problem.\n\nHere's another, more made-up example of the same thing happening. Let's say you're a basketball talent scout and you notice that height is correlated with basketball performance. You decide to find the tallest person in the world to recruit as a basketball player. Except the reason that they're that tall is because they suffer from a degenerative bone disorder and can barely walk.\n\nAnother example: you're the education system and you want to find out how smart students are so you can put them in different colleges and pay them different amounts of money when they get jobs. You make a test called the Standardized Admissions Test (SAT) and you administer it to all the students. In the beginning, this works. However, the students soon begin to learn that this test controls part of their future and other people learn that these students want to do better on the test. The gears of the economy ratchet forwards and the students start paying people to help them prepare for the test. Your test doesn't stop working, but instead of measuring how smart the students are, it instead starts measuring a combination of how smart they are and how many resources they have to prepare for the test.\n\nThe formal name for the thing that's happening is Goodhart's Law. Goodhart's Law roughly says that if there's something in the world that you want, like \"skill at basketball\" or \"absence of rats\" or \"intelligent students\", and you create a measure that tries to measure this like \"height\" or \"rat tails\" or \"SAT scores\", then as long as the measure isn't exactly the thing that you want, the best value of the measure isn't the thing you want: the tallest person isn't the best basketball player, the most rat tails isn't the smallest rat problem, and the best SAT scores aren't always the smartest students.\n\nIf you start looking, you can see this happening everywhere. Programmers being paid for lines of code write bloated code. If CFOs are paid for budget cuts, they slash purchases with positive returns. If teachers are evaluated by the grades they give, they hand out As indiscriminately.\n\nIn machine learning, this is called specification gaming, and it happens [https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity frequently].\n\nNow that we know what Goodhart's Law is, I'm going to talk about one of my friends, who I'm going to call Alice. Alice thinks it's funny to answer questions in a way that's technically correct but misleading. Sometimes I'll ask her, \"Hey Alice, do you want pizza or pasta?\" and she responds, \"yes\". Because, she sure did want either pizza or pasta. 
Other times I'll ask her, \"have you turned in your homework?\" and she'll say \"yes\" because she's turned in homework at some point in the past; it's technically correct to answer \"yes\". Maybe you have a friend like Alice too.\n\nWhenever this happens, I get a bit exasperated and say something like \"you know what I mean\".\n\nIt's one of the key realizations in AI Safety that AI systems are always like your friend that gives answers that are technically what you asked for but not what you wanted. Except, with your friend, you can say \"you know what I mean\" and they will know what you mean. With an AI system, it won't know what you mean; you have to explain, which is incredibly difficult.\n\nLet's take the pizza pasta example. When I ask Alice \"do you want pizza or pasta?\", she knows what pizza and pasta are because she's been living her life as a human being embedded in an English speaking culture. Because of this cultural experience, she knows that when someone asks an \"or\" question, they mean \"which do you prefer?\", not \"do you want at least one of these things?\". Except my AI system is missing the thousand bits of cultural context needed to even understand what pizza is.\n\nWhen you say \"you know what I mean\" to an AI system, it's going to be like \"no, I do not know what you mean at all\". It's not even going to know that it doesn't know what you mean. It's just going to say \"yes I know what you meant, that's why I answered 'yes' to your question about whether I preferred pizza or pasta.\" (It also might know what you mean, but just not care.)\n\nIf someone doesn't know what you mean, then it's really hard to get them to do what you want them to do. For example, let's say you have a powerful grammar correcting system, which we'll call Syntaxly+. Syntaxly+ doesn't quite fix your grammar, it changes your writing so that the reader feels as good as possible after reading it.\n\nPretend it's the end of the week at work and you haven't been able to get everything done your boss wanted you to do. You write the following email:\n\n\"Hey boss, I couldn't get everything done this week. I'm deeply sorry. I'll be sure to finish it first thing next week.\"\n\nYou then remember you got Syntaxly+, which will make your email sound much better to your boss. You run it through and you get:\n\n\"Hey boss, Great news! I was able to complete everything you wanted me to do this week. Furthermore, I'm also almost done with next week's work as well.\"\n\nWhat went wrong here? Syntaxly+ is a powerful AI system that knows that emails about failing to complete work cause negative reactions in readers, so it changed your email to be about doing extra work instead.\n\nThis is smart - Syntaxly+ is good at making writing that causes positive reactions in readers. This is also stupid - the system changed the meaning of your email, which is not something you wanted it to do. One of the insights of AI Safety is that AI systems can be simultaneously smart in some ways and dumb in other ways.\n\nThe thing you want Syntaxly+ to do is to change the grammar/style of the email without changing the contents. Except what do you mean by contents? You know what you mean by contents because you are a human who grew up embedded in language, but your AI system doesn't know what you mean by contents. 
The phrases \"I failed to complete my work\" and \"I was unable to finish all my tasks\" have roughly the same contents, even though they share almost no relevant words.\n\nRoughly speaking, this is why AI Safety is a hard problem. Even basic tasks like \"fix the grammar of this email\" require a lot of understanding of what the user wants as the system scales in power.\n\nIn Human Compatible, Stuart Russell gives the example of a powerful AI personal assistant. You notice that you accidentally double-booked meetings with people, so you ask your personal assistant to fix it. Your personal assistant reports that it caused the car of one of your meeting participants to break down. Not what you wanted, but technically a solution to your problem.\n\nYou can also imagine a friend from a wildly different culture than you. Would you put them in charge of your dating life? Now imagine that they were much more powerful than you and desperately desired that your dating life to go well. Scary, huh.\n\nIn general, unless you're careful, you're going to have this horrible problem where you ask your AI system to do something and it does something that might technically be what you wanted but is stupid. You're going to be like \"wait that wasn't what I mean\", except your system isn't going to know what you meant.", "question": "How does AI taking things literally contribute alignment being hard?", "answer": ["Let’s say that you’re the French government a while back. You notice that one of your colonies has too many rats, which is causing economic damage. You have basic knowledge of economics and incentives, so you decide to incentivize the local population to kill rats by offering to buy rat tails at one dollar apiece.\n\nInitially, this works out and your rat problem goes down. But then, an enterprising colony member has the brilliant idea of making a rat farm. This person sells you hundreds of rat tails, costing you hundreds of dollars, but they’re not contributing to solving the rat problem.\n\nSoon other people start making their own rat farms and you’re wasting thousands of dollars buying useless rat tails. You call off the project and stop paying for rat tails. This causes all the people with rat farms to shutdown their farms and release a bunch of rats. Now your colony has an even bigger rat problem.\n\nHere’s another, more made-up example of the same thing happening. Let’s say you’re a basketball talent scout and you notice that height is correlated with basketball performance. You decide to find the tallest person in the world to recruit as a basketball player. Except the reason that they’re that tall is because they suffer from a degenerative bone disorder and can barely walk.\n\nAnother example: you’re the education system and you want to find out how smart students are so you can put them in different colleges and pay them different amounts of money when they get jobs. You make a test called the Standardized Admissions Test (SAT) and you administer it to all the students. In the beginning, this works. However, the students soon begin to learn that this test controls part of their future and other people learn that these students want to do better on the test. The gears of the economy ratchet forwards and the students start paying people to help them prepare for the test. 
Your test doesn’t stop working, but instead of measuring how smart the students are, it instead starts measuring a combination of how smart they are and how many resources they have to prepare for the test.\n\nThe formal name for the thing that’s happening is Goodhart’s Law. Goodhart’s Law roughly says that if there’s something in the world that you want, like “skill at basketball” or “absence of rats” or “intelligent students”, and you create a measure that tries to measure this like “height” or “rat tails” or “SAT scores”, then as long as the measure isn’t exactly the thing that you want, the best value of the measure isn’t the thing you want: the tallest person isn’t the best basketball player, the most rat tails isn’t the smallest rat problem, and the best SAT scores aren’t always the smartest students.\n\nIf you start looking, you can see this happening everywhere. Programmers being paid for lines of code write bloated code. If CFOs are paid for budget cuts, they slash purchases with positive returns. If teachers are evaluated by the grades they give, they hand out As indiscriminately.\n\nIn machine learning, this is called specification gaming, and it happens [https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity frequently].\n\nNow that we know what Goodhart’s Law is, I’m going to talk about one of my friends, who I’m going to call Alice. Alice thinks it’s funny to answer questions in a way that’s technically correct but misleading. Sometimes I’ll ask her, “Hey Alice, do you want pizza or pasta?” and she responds, “yes”. Because, she sure did want either pizza or pasta. Other times I’ll ask her, “have you turned in your homework?” and she’ll say “yes” because she’s turned in homework at some point in the past; it’s technically correct to answer “yes”. Maybe you have a friend like Alice too.\n\nWhenever this happens, I get a bit exasperated and say something like “you know what I mean”.\n\nIt’s one of the key realizations in AI Safety that AI systems are always like your friend that gives answers that are technically what you asked for but not what you wanted. Except, with your friend, you can say “you know what I mean” and they will know what you mean. With an AI system, it won’t know what you mean; you have to explain, which is incredibly difficult.\n\nLet’s take the pizza pasta example. When I ask Alice “do you want pizza or pasta?”, she knows what pizza and pasta are because she’s been living her life as a human being embedded in an English speaking culture. Because of this cultural experience, she knows that when someone asks an “or” question, they mean “which do you prefer?”, not “do you want at least one of these things?”. Except my AI system is missing the thousand bits of cultural context needed to even understand what pizza is.\n\nWhen you say “you know what I mean” to an AI system, it’s going to be like “no, I do not know what you mean at all”. It’s not even going to know that it doesn’t know what you mean. It’s just going to say “yes I know what you meant, that’s why I answered ‘yes’ to your question about whether I preferred pizza or pasta.” (It also might know what you mean, but just not care.)\n\nIf someone doesn’t know what you mean, then it’s really hard to get them to do what you want them to do. For example, let’s say you have a powerful grammar correcting system, which we’ll call Syntaxly+. 
Syntaxly+ doesn’t quite fix your grammar, it changes your writing so that the reader feels as good as possible after reading it.\n\nPretend it’s the end of the week at work and you haven’t been able to get everything done your boss wanted you to do. You write the following email:\n\n\"Hey boss, I couldn’t get everything done this week. I’m deeply sorry. I’ll be sure to finish it first thing next week.\"\n\nYou then remember you got Syntaxly+, which will make your email sound much better to your boss. You run it through and you get:\n\n\"Hey boss, Great news! I was able to complete everything you wanted me to do this week. Furthermore, I’m also almost done with next week’s work as well.\"\n\nWhat went wrong here? Syntaxly+ is a powerful AI system that knows that emails about failing to complete work cause negative reactions in readers, so it changed your email to be about doing extra work instead.\n\nThis is smart - Syntaxly+ is good at making writing that causes positive reactions in readers. This is also stupid - the system changed the meaning of your email, which is not something you wanted it to do. One of the insights of AI Safety is that AI systems can be simultaneously smart in some ways and dumb in other ways.\n\nThe thing you want Syntaxly+ to do is to change the grammar/style of the email without changing the contents. Except what do you mean by contents? You know what you mean by contents because you are a human who grew up embedded in language, but your AI system doesn’t know what you mean by contents. The phrases “I failed to complete my work” and “I was unable to finish all my tasks” have roughly the same contents, even though they share almost no relevant words.\n\nRoughly speaking, this is why AI Safety is a hard problem. Even basic tasks like “fix the grammar of this email” require a lot of understanding of what the user wants as the system scales in power.\n\nIn Human Compatible, Stuart Russell gives the example of a powerful AI personal assistant. You notice that you accidentally double-booked meetings with people, so you ask your personal assistant to fix it. Your personal assistant reports that it caused the car of one of your meeting participants to break down. Not what you wanted, but technically a solution to your problem.\n\nYou can also imagine a friend from a wildly different culture than you. Would you put them in charge of your dating life? Now imagine that they were much more powerful than you and desperately desired that your dating life to go well. Scary, huh.\n\nIn general, unless you’re careful, you’re going to have this horrible problem where you ask your AI system to do something and it does something that might technically be what you wanted but is stupid. You’re going to be like “wait that wasn’t what I mean”, except your system isn’t going to know what you meant."], "entry": "Answer to How does AI taking things literally contribute to alignment being hard?", "id": "4ead06485a41b0827b06453d057b716a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "I want help out AI alignment without necessarily making major life changes. What are some simple things I can do contribute?", "authors": "n/a", "date_published": "n/a", "text": "Question: I want help out AI alignment without necessarily making major life changes. 
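As a concrete toy version of the Goodhart's Law / specification gaming point in the answer above (hypothetical code with invented numbers, not taken from any real system), an optimizer that simply picks whatever scores best on the proxy metric can leave the thing you actually wanted worse off:

# Toy Goodhart demo: the true goal is fewer rats overall, but the reward is per tail handed in.
def true_goal(wild_rats_removed, rats_farmed):
    # What the government actually wants: fewer rats in total.
    return wild_rats_removed - rats_farmed

def proxy_metric(wild_rats_removed, rats_farmed):
    # What actually gets rewarded: tails handed in, wherever they came from.
    return wild_rats_removed + rats_farmed

policies = [
    {"name": "catch wild rats", "wild_rats_removed": 50, "rats_farmed": 0},
    {"name": "run a rat farm", "wild_rats_removed": 0, "rats_farmed": 500},
]

best = max(policies, key=lambda p: proxy_metric(p["wild_rats_removed"], p["rats_farmed"]))
print(best["name"])                                               # run a rat farm
print(true_goal(best["wild_rats_removed"], best["rats_farmed"]))  # -500: the true goal got worse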
What are some simple things I can do contribute?\n\nAnswer: OK, it's great that you want to help, here are some ideas for ways you could do so without making a huge commitment:\n\n* Learning more about AI alignment will provide you with good foundations for any path towards helping. You could start by absorbing content (e.g. books, videos, posts), and thinking about challenges or possible solutions.\n* Getting involved with the movement by joining a local Effective Altruism or LessWrong group, Rob Miles's Discord, and/or the AI Safety Slack is a great way to find friends who are interested and will help you stay motivated.\n* Donating to organizations or individuals working on AI alignment, possibly via a [https://funds.effectivealtruism.org/donor-lottery donor lottery] or the [https://funds.effectivealtruism.org/funds/far-future Long Term Future Fund], can be a great way to provide support.\n* [https://stampy.ai/wiki/Answer_questions Writing] or [https://stampy.ai/wiki/Improve_answers improving answers] on [https://stampy.ai/wiki/ my wiki] so that other people can learn about AI alignment more easily is a great way to dip your toe into contributing. You can always ask on the Discord for feedback on things you write.\n* Getting good at giving an AI alignment elevator pitch, and sharing it with people who may be valuable to have working on the problem can make a big difference. However you should avoid putting them off the topic by presenting it in a way which causes them to dismiss it as sci-fi (dos and don'ts in the elevator pitch follow-up question).\n* Writing thoughtful comments on [https://www.lesswrong.com/tag/ai?sortedBy꞊magic AI posts on LessWrong].\n* Participating in the [https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals program] – either the AI alignment or governance track – and then facilitating discussions for it in the following round. The program involves nine weeks of content, with about two hours of readings + exercises per week and 1.5 hours of discussion, followed by four weeks to work on an independent project. As a facilitator, you'll be helping others learn about AI safety in-depth, many of whom are considering a career in AI safety. In the early 2022 round, facilitators were offered a stipend, and this seems likely to be the case for future rounds as well! You can learn more about facilitating in [https://forum.effectivealtruism.org/posts/WtwMy69JKZeHEvykc/contribute-by-facilitating-the-agi-safety-fundamentals this post from December 2021].", "question": "I want help out AI alignment without necessarily making major life changes. What are some simple things I can do contribute?", "answer": ["OK, it’s great that you want to help, here are some ideas for ways you could do so without making a huge commitment:\n\n* Learning more about AI alignment will provide you with good foundations for any path towards helping. You could start by absorbing content (e.g. 
books, videos, posts), and thinking about challenges or possible solutions.\n* Getting involved with the movement by joining a local Effective Altruism or LessWrong group, Rob Miles’s Discord, and/or the AI Safety Slack is a great way to find friends who are interested and will help you stay motivated.\n* Donating to organizations or individuals working on AI alignment, possibly via a [https://funds.effectivealtruism.org/donor-lottery donor lottery] or the [https://funds.effectivealtruism.org/funds/far-future Long Term Future Fund], can be a great way to provide support.\n* [https://stampy.ai/wiki/Answer_questions Writing] or [https://stampy.ai/wiki/Improve_answers improving answers] on [https://stampy.ai/wiki/ my wiki] so that other people can learn about AI alignment more easily is a great way to dip your toe into contributing. You can always ask on the Discord for feedback on things you write.\n* Getting good at giving an AI alignment elevator pitch, and sharing it with people who may be valuable to have working on the problem can make a big difference. However you should avoid putting them off the topic by presenting it in a way which causes them to dismiss it as sci-fi (dos and don’ts in the elevator pitch follow-up question).\n* Writing thoughtful comments on [https://www.lesswrong.com/tag/ai?sortedBy꞊magic AI posts on LessWrong].\n* Participating in the [https://www.eacambridge.org/agi-safety-fundamentals AGI Safety Fundamentals program] – either the AI alignment or governance track – and then facilitating discussions for it in the following round. The program involves nine weeks of content, with about two hours of readings + exercises per week and 1.5 hours of discussion, followed by four weeks to work on an independent project. As a facilitator, you'll be helping others learn about AI safety in-depth, many of whom are considering a career in AI safety. In the early 2022 round, facilitators were offered a stipend, and this seems likely to be the case for future rounds as well! You can learn more about facilitating in [https://forum.effectivealtruism.org/posts/WtwMy69JKZeHEvykc/contribute-by-facilitating-the-agi-safety-fundamentals this post from December 2021]."], "entry": "Mic's Answer to I want to help out AI alignment without necessarily making major life changes. What are some simple things I can do to contribute?", "id": "8df880c40a6ec24da2f767d850d95c05"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Who is Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: Who is Stampy?\n\nAnswer: Stampy is a character invented by Robert Miles and developed by the Stampy dev team. He is a stamp collecting robot, a play on clippy from the the [https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer paperclip maximizer] thought experiment.\n\nStampy is designed to teach people about the risks of unaligned artificial intelligence, and facilitate a community of co-learners who build his FAQ database.", "question": "Who is Stampy?", "answer": ["Stampy is a character invented by Robert Miles and developed by the Stampy dev team. 
He is a stamp collecting robot, a play on clippy from the the [https://en.wikipedia.org/wiki/Instrumental_convergence#Paperclip_maximizer paperclip maximizer] thought experiment.\n\nStampy is designed to teach people about the risks of unaligned artificial intelligence, and facilitate a community of co-learners who build his FAQ database."], "entry": "Plex's Answer to Who is Stampy?", "id": "1dede5cc7ba023f5afea2004f2fb4323"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Who created Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: Who created Stampy?\n\nAnswer: ꞊꞊꞊Dev team꞊꞊꞊\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n

Name | Vision talk | Github | Trello | Active? | Notes / bio
Aprillion | video | Aprillion | yes | yes | experienced dev (Python, JS, CSS, ...)
Augustus Caesar | yes | AugustusCeasar | yes | soon! | Has some Discord bot experience
Benjamin Herman | no | no (not needed) | no | no | Helping with wiki design/css stuff
ccstan99 | no | ccstan99 | yes | yes | UI/UX designer
chriscanal | yes | chriscanal | yes | yes | experienced python dev
Damaged | no (not needed) | no (not needed) | no (not needed) | yes | experienced Discord bot dev, but busy with other projects. Can answer questions.
plex | yes | plexish | yes | yes | MediaWiki, plans, and coordinating people guy
robertskmiles | yes | robertskmiles | yes | yes | you've probably heard of him
Roland | yes | levitation | yes | yes | working on Semantic Search
sct202 | yes | no (add when wiki is on github) | yes | yes | PHP dev, helping with wiki extensions
Social Christancing | yes | chrisrimmer | yes | maybe | experienced linux sysadmin
sudonym | yes | jmccuen | yes | yes | systems architect, has set up a lot of things
tayler6000 | yes | tayler6000 | no | yes | Python and PHP dev, PenTester, works on Discord bot

\n\n꞊꞊꞊Editors꞊꞊꞊\n([https://stampy.ai/wiki/Special:FormEdit/Answer/Plex%27s_Answer_to_Who_created_Stampy%3F add yourselves])", "question": "Who created Stampy?", "answer": ["꞊꞊꞊Dev team꞊꞊꞊\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n

Name | Vision talk | Github | Trello | Active? | Notes / bio
Aprillion | video | Aprillion | yes | yes | experienced dev (Python, JS, CSS, ...)
Augustus Caesar | yes | AugustusCeasar | yes | soon! | Has some Discord bot experience
Benjamin Herman | no | no (not needed) | no | no | Helping with wiki design/css stuff
ccstan99 | no | ccstan99 | yes | yes | UI/UX designer
chriscanal | yes | chriscanal | yes | yes | experienced python dev
Damaged | no (not needed) | no (not needed) | no (not needed) | yes | experienced Discord bot dev, but busy with other projects. Can answer questions.
plex | yes | plexish | yes | yes | MediaWiki, plans, and coordinating people guy
robertskmiles | yes | robertskmiles | yes | yes | you've probably heard of him
Roland | yes | levitation | yes | yes | working on Semantic Search
sct202 | yes | no (add when wiki is on github) | yes | yes | PHP dev, helping with wiki extensions
Social Christancing | yes | chrisrimmer | yes | maybe | experienced linux sysadmin
sudonym | yes | jmccuen | yes | yes | systems architect, has set up a lot of things
tayler6000 | yes | tayler6000 | no | yes | Python and PHP dev, PenTester, works on Discord bot

\n\n꞊꞊꞊Editors꞊꞊꞊\n([https://stampy.ai/wiki/Special:FormEdit/Answer/Plex%27s_Answer_to_Who_created_Stampy%3F add yourselves])"], "entry": "Plex's Answer to Who created Stampy?", "id": "a3949f07fa3d07f31e9faec5255ce3cf"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "I’d like get deeper in the AI alignment literature. Where should I look?", "authors": "n/a", "date_published": "n/a", "text": "Question: I'd like get deeper in the AI alignment literature. Where should I look?\n\nAnswer: The [https://www.eacambridge.org/technical-alignment-curriculum AGI Safety Fundamentals Course] is a arguably the best way to get up to speed on alignment, you can sign up to go through it with many other people studying and mentorship or read their materials independently.\n\nOther great ways to explore include:\n* [https://axrp.net/ AXRP] is a podcast with high quality interviews with top alignment researchers.\n* The [https://ai-safety-papers.quantifieduncertainty.org/ AI Safety Papers database] is a search and browsing interface for most of the transformative AI literature.\n* Reading posts on the [https://www.alignmentforum.org/ Alignment Forum] can be valuable (see their [https://www.alignmentforum.org/library curated posts] and [https://www.alignmentforum.org/tag/ai tags]).\n* Taking a deep dive into Yudkowsky's models of the challenges to aligned AI, via the [https://arbital.greaterwrong.com/explore/ai_alignment/ Arbital Alignment pages].\n* Signing up to the [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter] for an overview of current developments, and reading through some of the archives (or listening to [https://alignment-newsletter.libsyn.com/ the podcast]).\n* Reading some of [[Where can I learn about AI alignment?┊the introductory books]].\n* More on [https://www.aisafetysupport.org/resources/lots-of-links#h.6s2gcz1p5l6z AI Safety Support's list of links], [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 Nonlinear's list of technical courses, reading lists, and curriculums], [https://stampy.ai/wiki/Canonical_answers Stampy's canonical answers list], and [https://vkrakovna.wordpress.com/ai-safety-resources/ Vika's resources list].\n\nYou might also want to consider reading [https://www.lesswrong.com/rationality Rationality: A-Z] which covers a lot of skills that are valuable to acquire for people trying to think about large and complex issues, with [https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/ The Rationalist's Guide to the Galaxy] available as a shorter and more accessible AI-focused option.", "question": "I’d like get deeper in the AI alignment literature. 
Where should I look?", "answer": ["The [https://www.eacambridge.org/technical-alignment-curriculum AGI Safety Fundamentals Course] is a arguably the best way to get up to speed on alignment, you can sign up to go through it with many other people studying and mentorship or read their materials independently.\n\nOther great ways to explore include:\n* [https://axrp.net/ AXRP] is a podcast with high quality interviews with top alignment researchers.\n* The [https://ai-safety-papers.quantifieduncertainty.org/ AI Safety Papers database] is a search and browsing interface for most of the transformative AI literature.\n* Reading posts on the [https://www.alignmentforum.org/ Alignment Forum] can be valuable (see their [https://www.alignmentforum.org/library curated posts] and [https://www.alignmentforum.org/tag/ai tags]).\n* Taking a deep dive into Yudkowsky's models of the challenges to aligned AI, via the [https://arbital.greaterwrong.com/explore/ai_alignment/ Arbital Alignment pages].\n* Signing up to the [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter] for an overview of current developments, and reading through some of the archives (or listening to [https://alignment-newsletter.libsyn.com/ the podcast]).\n* Reading some of [[Where can I learn about AI alignment?┊the introductory books]].\n* More on [https://www.aisafetysupport.org/resources/lots-of-links#h.6s2gcz1p5l6z AI Safety Support's list of links], [https://docs.google.com/spreadsheets/d/1QSEWjXZuqmG6ORkig84V4sFCldIntyuQj7yq3gkDo0U/edit#gid꞊0 Nonlinear's list of technical courses, reading lists, and curriculums], [https://stampy.ai/wiki/Canonical_answers Stampy's canonical answers list], and [https://vkrakovna.wordpress.com/ai-safety-resources/ Vika's resources list].\n\nYou might also want to consider reading [https://www.lesswrong.com/rationality Rationality: A-Z] which covers a lot of skills that are valuable to acquire for people trying to think about large and complex issues, with [https://www.amazon.co.uk/Does-Not-Hate-You-Superintelligence/dp/1474608795 The Rationalist's Guide to the Galaxy] available as a shorter and more accessible AI-focused option."], "entry": "Plex's Answer to I’d like to get deeper into the AI alignment literature. Where should I look?", "id": "816bde1828f97d80bb747ac460041d4d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why does AI takeoff speed matter?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why does AI takeoff speed matter?\n\nAnswer: A slow takeoff over decades or centuries might give us enough time to worry about superintelligence during some indefinite \"later\", making current planning more like worrying about \"overpopulation on Mars\". But a moderate or hard takeoff means there wouldn't be enough time to deal with the problem as it occurs, suggesting a role for preemptive planning.\n\nAs an aside, let's take the \"overpopulation on Mars\" comparison seriously. Suppose Mars has a carrying capacity of 10 billion people, and we decide it makes sense to worry about overpopulation on Mars only once it is 75% of the way to its limit. Start with 100 colonists who double every twenty years. By the second generation there are 200 colonists; by the third, 400. Mars reaches 75% of its carrying capacity after 458 years, and crashes into its population limit after 464 years. So there were 464 years in which the Martians could have solved the problem, but they insisted on waiting until there were only six years left. 
Good luck solving a planetwide population crisis in six years. The moral of the story is that exponential trends move faster than you think and you need to start worrying about them early.", "question": "Why does AI takeoff speed matter?", "answer": ["A slow takeoff over decades or centuries might give us enough time to worry about superintelligence during some indefinite “later”, making current planning more like worrying about “overpopulation on Mars”. But a moderate or hard takeoff means there wouldn’t be enough time to deal with the problem as it occurs, suggesting a role for preemptive planning.\n\nAs an aside, let’s take the “overpopulation on Mars” comparison seriously. Suppose Mars has a carrying capacity of 10 billion people, and we decide it makes sense to worry about overpopulation on Mars only once it is 75% of the way to its limit. Start with 100 colonists who double every twenty years. By the second generation there are 200 colonists; by the third, 400. Mars reaches 75% of its carrying capacity after 458 years, and crashes into its population limit after 464 years. So there were 464 years in which the Martians could have solved the problem, but they insisted on waiting until there were only six years left. Good luck solving a planetwide population crisis in six years. The moral of the story is that exponential trends move faster than you think and you need to start worrying about them early."], "entry": "Answer to Why does AI takeoff speed matter?", "id": "27f67412177e6593a59d39e68e5a7731"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "At a high level, what is the challenge of alignment that we must meet secure a good future?", "authors": "n/a", "date_published": "n/a", "text": "Question: At a high level, what is the challenge of alignment that we must meet secure a good future?\n\nAnswer: We're facing the challenge of \"[https://publicism.info/philosophy/superintelligence/16.html Philosophy With A Deadline]\".\n\nMany of the problems surrounding superintelligence are the sorts of problems philosophers have been dealing with for centuries. To what degree is meaning inherent in language, versus something that requires external context? How do we translate between the logic of formal systems and normal ambiguous human speech? Can morality be reduced to a set of ironclad rules, and if not, how do we know what it is at all?\n\nExisting answers to these questions are enlightening but nontechnical. The theories of Aristotle, Kant, Mill, Wittgenstein, Quine, and others can help people gain insight into these questions, but are far from formal. Just as a good textbook can help an American learn Chinese, but cannot be encoded into machine language to make a Chinese-speaking computer, so the philosophies that help humans are only a starting point for the project of computers that understand us and share our values.\n\nThe field of AI alignment combines formal logic, mathematics, computer science, cognitive science, and philosophy in order to advance that project.\n\nThis is the philosophy; the other half of Bostrom's formulation is the deadline. 
Traditional philosophy has been going on almost three thousand years; machine goal alignment has until the advent of superintelligence, a nebulous event which may be anywhere from a decades to centuries away.\n\nIf the alignment problem doesn't get adequately addressed by then, we are likely to see poorly aligned superintelligences that are unintentionally hostile to the human race, with some of the catastrophic outcomes mentioned above. This is why so many scientists and entrepreneurs are urging quick action on getting machine goal alignment research up to an adequate level.\n\nIf it turns out that superintelligence is centuries away and such research is premature, little will have been lost. But if our projections were too optimistic, and superintelligence is imminent, then doing such research now rather than later becomes vital.", "question": "At a high level, what is the challenge of alignment that we must meet secure a good future?", "answer": ["We’re facing the challenge of “[https://publicism.info/philosophy/superintelligence/16.html Philosophy With A Deadline]”.\n\nMany of the problems surrounding superintelligence are the sorts of problems philosophers have been dealing with for centuries. To what degree is meaning inherent in language, versus something that requires external context? How do we translate between the logic of formal systems and normal ambiguous human speech? Can morality be reduced to a set of ironclad rules, and if not, how do we know what it is at all?\n\nExisting answers to these questions are enlightening but nontechnical. The theories of Aristotle, Kant, Mill, Wittgenstein, Quine, and others can help people gain insight into these questions, but are far from formal. Just as a good textbook can help an American learn Chinese, but cannot be encoded into machine language to make a Chinese-speaking computer, so the philosophies that help humans are only a starting point for the project of computers that understand us and share our values.\n\nThe field of AI alignment combines formal logic, mathematics, computer science, cognitive science, and philosophy in order to advance that project.\n\nThis is the philosophy; the other half of Bostrom’s formulation is the deadline. Traditional philosophy has been going on almost three thousand years; machine goal alignment has until the advent of superintelligence, a nebulous event which may be anywhere from a decades to centuries away.\n\nIf the alignment problem doesn’t get adequately addressed by then, we are likely to see poorly aligned superintelligences that are unintentionally hostile to the human race, with some of the catastrophic outcomes mentioned above. This is why so many scientists and entrepreneurs are urging quick action on getting machine goal alignment research up to an adequate level.\n\nIf it turns out that superintelligence is centuries away and such research is premature, little will have been lost. 
But if our projections were too optimistic, and superintelligence is imminent, then doing such research now rather than later becomes vital."], "entry": "Answer to At a high level, what is the challenge of alignment that we must meet to secure a good future?", "id": "0da662c98e9f51fd4f5f606147c9a3f3"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "If we solve alignment, are we sure of a good future?", "authors": "n/a", "date_published": "n/a", "text": "Question: If we solve alignment, are we sure of a good future?\n\nAnswer: If by \"solve alignment\" you mean build a sufficiently performance-competitive superintelligence which has the goal of [https://www.lesswrong.com/tag/coherent-extrapolated-volition Coherent Extrapolated Volition] or something else which captures human values, then yes. It would be able to deploy technology near the limits of physics (e.g. [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing atomically precise manufacturing]) to solve most of the other problems which face us, and steer the future towards a highly positive path for [https://en.wikipedia.org/wiki/Timeline_of_the_far_future perhaps many billions of years] until the [https://en.wikipedia.org/wiki/Heat_death_of_the_universe heat death of the universe] (barring more esoteric x-risks like encounters with advanced hostile civilizations, [https://en.wikipedia.org/wiki/False_vacuum_decay false vacuum decay], or [https://arxiv.org/ftp/arxiv/papers/1905/1905.05792.pdf simulation shutdown]).\n\nHowever, if you only have alignment of a superintelligence to a single human you still have the risk of misuse, so this should be at most a short-term solution. For example, what if Google creates a superintelligent AI, and it listens to the CEO of Google, and it's programmed to do everything exactly the way the CEO of Google would want? Even assuming that the CEO of Google has no hidden unconscious desires affecting the AI in unpredictable ways, this gives one person a lot of power.", "question": "If we solve alignment, are we sure of a good future?", "answer": ["If by “solve alignment” you mean build a sufficiently performance-competitive superintelligence which has the goal of [https://www.lesswrong.com/tag/coherent-extrapolated-volition Coherent Extrapolated Volition] or something else which captures human values, then yes. It would be able to deploy technology near the limits of physics (e.g. [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing atomically precise manufacturing]) to solve most of the other problems which face us, and steer the future towards a highly positive path for [https://en.wikipedia.org/wiki/Timeline_of_the_far_future perhaps many billions of years] until the [https://en.wikipedia.org/wiki/Heat_death_of_the_universe heat death of the universe] (barring more esoteric x-risks like encounters with advanced hostile civilizations, [https://en.wikipedia.org/wiki/False_vacuum_decay false vacuum decay], or [https://arxiv.org/ftp/arxiv/papers/1905/1905.05792.pdf simulation shutdown]).\n\nHowever, if you only have alignment of a superintelligence to a single human you still have the risk of misuse, so this should be at most a short-term solution. For example, what if Google creates a superintelligent AI, and it listens to the CEO of Google, and it’s programmed to do everything exactly the way the CEO of Google would want? 
Even assuming that the CEO of Google has no hidden unconscious desires affecting the AI in unpredictable ways, this gives one person a lot of power."], "entry": "Plex's Answer to If we solve alignment, are we sure of a good future?", "id": "634871793570019479aed2cc816717a0"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might a maximizing AI cause bad outcomes?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might a maximizing AI cause bad outcomes?\n\nAnswer: Computers only do what you tell them. But any programmer knows that this is precisely the problem: computers do exactly what you tell them, with no common sense or attempts to interpret what the instructions really meant. If you tell a human to cure cancer, they will instinctively understand how this interacts with other desires and laws and moral rules; if a maximizing AI acquires a goal of trying to cure cancer, it will literally just want to cure cancer.\n\nDefine a closed-ended goal as one with a clear endpoint, and an open-ended goal as one to do something as much as possible. For example \"find the first one hundred digits of pi\" is a closed-ended goal; \"find as many digits of pi as you can within one year\" is an open-ended goal. According to many computer scientists, giving a superintelligence an open-ended goal without activating human instincts and counterbalancing considerations will usually lead to disaster.\n\nTo take a deliberately extreme example: suppose someone programs a superintelligence to calculate as many digits of pi as it can within one year. And suppose that, with its current computing power, it can calculate one trillion digits during that time. It can either accept one trillion digits, or spend a month trying to figure out how to get control of the TaihuLight supercomputer, which can calculate two hundred times faster. Even if it loses a little bit of time in the effort, and even if there's a small chance of failure, the payoff – two hundred trillion digits of pi, compared to a mere one trillion – is enough to make the attempt. But on the same basis, it would be even better if the superintelligence could control every computer in the world and set it to the task. And it would be better still if the superintelligence controlled human civilization, so that it could direct humans to build more computers and speed up the process further.\n\nNow we're in a situation where a superintelligence wants to take over the world. Taking over the world allows it to calculate more digits of pi than any other option, so without an architecture based around understanding human instincts and counterbalancing considerations, even a goal like \"calculate as many digits of pi as you can\" would be potentially dangerous.", "question": "Why might a maximizing AI cause bad outcomes?", "answer": ["Computers only do what you tell them. But any programmer knows that this is precisely the problem: computers do exactly what you tell them, with no common sense or attempts to interpret what the instructions really meant. If you tell a human to cure cancer, they will instinctively understand how this interacts with other desires and laws and moral rules; if a maximizing AI acquires a goal of trying to cure cancer, it will literally just want to cure cancer.\n\nDefine a closed-ended goal as one with a clear endpoint, and an open-ended goal as one to do something as much as possible. 
For example “find the first one hundred digits of pi” is a closed-ended goal; “find as many digits of pi as you can within one year” is an open-ended goal. According to many computer scientists, giving a superintelligence an open-ended goal without activating human instincts and counterbalancing considerations will usually lead to disaster.\n\nTo take a deliberately extreme example: suppose someone programs a superintelligence to calculate as many digits of pi as it can within one year. And suppose that, with its current computing power, it can calculate one trillion digits during that time. It can either accept one trillion digits, or spend a month trying to figure out how to get control of the TaihuLight supercomputer, which can calculate two hundred times faster. Even if it loses a little bit of time in the effort, and even if there’s a small chance of failure, the payoff – two hundred trillion digits of pi, compared to a mere one trillion – is enough to make the attempt. But on the same basis, it would be even better if the superintelligence could control every computer in the world and set it to the task. And it would be better still if the superintelligence controlled human civilization, so that it could direct humans to build more computers and speed up the process further.\n\nNow we’re in a situation where a superintelligence wants to take over the world. Taking over the world allows it to calculate more digits of pi than any other option, so without an architecture based around understanding human instincts and counterbalancing considerations, even a goal like “calculate as many digits of pi as you can” would be potentially dangerous."], "entry": "Plex's Answer to Why might a maximizing AI cause bad outcomes?", "id": "c82c62d3a05abd89871b2da6a1ac7329"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Where can I find questions answer for Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: Where can I find questions answer for Stampy?\n\nAnswer: '''[[Answer questions]]''' collects all the questions we definitely want answers to, browse there and see if you know how to answer any of them.", "question": "Where can I find questions answer for Stampy?", "answer": ["'''[[Answer questions]]''' collects all the questions we definitely want answers to, browse there and see if you know how to answer any of them."], "entry": "Plex's Answer to Where can I find questions to answer for Stampy?", "id": "c84fa0c9452eb2c144beefb0f6919c9c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can people contribute alignment by using proof assistants generate formal proofs?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can people contribute alignment by using proof assistants generate formal proofs?\n\nAnswer: 80k links to an article on [https://forum.effectivealtruism.org/posts/4rMxiyPTPdzaFMyGm/high-impact-careers-in-formal-verification-artificial high impact careers in formal verification] in the few paragraphs they've written about formal verification.\n\nSome other notes \n\n* https://github.com/deepmind/cartesian-frames I emailed Scott about doing this in coq before this repo was published and he said \"I wouldn't personally find such a software useful but sounds like a valuable exercise for the implementer\" or something like this. 
\n* When I mentioned the possibility of rolling some of infrabayesianism in coq to diffractor he wasn't like \"omg we really need someone to do that\" he was just like \"oh that sounds cool\" -- I never got around to it, if I would I'd talk to vanessa and diffractor about weakening/particularizing stuff beforehand. \n* if you extrapolate a pattern from those two examples, you start to think that agent foundations is the principle area of interest with proof assistants! and again- does the proof assistant exercise advance the research or provide a nutritious exercise to the programmer?\n* A sketch of a more prosaic scenario in which proof assistants play a role is \"someone proposes isInnerAligned : GradientDescent -> Prop and someone else implements a galaxybrained new type theory/tool in which gradient descent is a primitive (whatever that means)\", when I mentioned this scenario to Buck he said \"yeah if that happened I'd direct all the engineers at redwood to making that tool easier to use\", when I mentioned that scenario to Evan about a year ago he said didn't seem to think it was remotely plausible. probably a nonstarter.", "question": "Can people contribute alignment by using proof assistants generate formal proofs?", "answer": ["80k links to an article on [https://forum.effectivealtruism.org/posts/4rMxiyPTPdzaFMyGm/high-impact-careers-in-formal-verification-artificial high impact careers in formal verification] in the few paragraphs they've written about formal verification.\n\nSome other notes \n\n* https://github.com/deepmind/cartesian-frames I emailed Scott about doing this in coq before this repo was published and he said \"I wouldn't personally find such a software useful but sounds like a valuable exercise for the implementer\" or something like this. \n* When I mentioned the possibility of rolling some of infrabayesianism in coq to diffractor he wasn't like \"omg we really need someone to do that\" he was just like \"oh that sounds cool\" -- I never got around to it, if I would I'd talk to vanessa and diffractor about weakening/particularizing stuff beforehand. \n* if you extrapolate a pattern from those two examples, you start to think that agent foundations is the principle area of interest with proof assistants! and again- does the proof assistant exercise advance the research or provide a nutritious exercise to the programmer?\n* A sketch of a more prosaic scenario in which proof assistants play a role is \"someone proposes isInnerAligned : GradientDescent -> Prop and someone else implements a galaxybrained new type theory/tool in which gradient descent is a primitive (whatever that means)\", when I mentioned this scenario to Buck he said \"yeah if that happened I'd direct all the engineers at redwood to making that tool easier to use\", when I mentioned that scenario to Evan about a year ago he said didn't seem to think it was remotely plausible. probably a nonstarter."], "entry": "Quinn's Answer to Can people contribute to alignment by using proof assistants to generate formal proofs?", "id": "27ac044a506980a52fcf1dda6f2f71d3"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is a canonical question on Stampy's Wiki?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is a canonical question on Stampy's Wiki?\n\nAnswer: '''[[Canonical questions]]''' are the questions which we've checked are in [[scope]] and not duplicates, so we want answers to them. 
They may be edited to represent a class of question more broadly, rather than keeping all their idosyncracies. Once they're answered canonically Stampy will serve them to readers.", "question": "What is a canonical question on Stampy's Wiki?", "answer": ["'''[[Canonical questions]]''' are the questions which we've checked are in [[scope]] and not duplicates, so we want answers to them. They may be edited to represent a class of question more broadly, rather than keeping all their idosyncracies. Once they're answered canonically Stampy will serve them to readers."], "entry": "Plex's Answer to What is a canonical question on Stampy's Wiki?", "id": "e81a14b225ca1caad55c0016b7d09967"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "We’re going merge with the machines so this will never be a problem, right?", "authors": "n/a", "date_published": "n/a", "text": "Question: We're going merge with the machines so this will never be a problem, right?\n\nAnswer: The concept of \"merging with machines,\" as popularized by Ray Kurzweil, is the idea that we will be able to put computerized elements into our brains that enhance us to the point where we ourselves are the AI, instead of creating AI outside of ourselves.\n\nWhile this is a possible outcome, there is little reason to suspect that it is the most probable. The amount of computing power in your smart-phone took up an entire room of servers 30 years ago. Computer technology starts big, and then gets refined. Therefore, if \"merging with the machines\" requires hardware that can fit inside our brain, it may lag behind the first generations of the technology being developed. This concept of merging also supposes that we can even figure out how to implant computer chips that interface with our brain in the first place, we can do it before the invention of advanced AI, society will accept it, and that computer implants can actually produce major intelligence gains in the human brain. Even if we could successfully enhance ourselves with brain implants before the invention of Artificial Superintelligence (ASI), there is no way to guarantee that this would protect us from negative outcomes, and an ASI with ill-defined goals could still pose a threat to us.\n\nIt's not that Ray Kurzweil's ideas are impossible, it's just that his predictions are too specific, confident, and reliant on strange assumptions.", "question": "We’re going merge with the machines so this will never be a problem, right?", "answer": ["The concept of “merging with machines,” as popularized by Ray Kurzweil, is the idea that we will be able to put computerized elements into our brains that enhance us to the point where we ourselves are the AI, instead of creating AI outside of ourselves.\n\nWhile this is a possible outcome, there is little reason to suspect that it is the most probable. The amount of computing power in your smart-phone took up an entire room of servers 30 years ago. Computer technology starts big, and then gets refined. Therefore, if “merging with the machines” requires hardware that can fit inside our brain, it may lag behind the first generations of the technology being developed. This concept of merging also supposes that we can even figure out how to implant computer chips that interface with our brain in the first place, we can do it before the invention of advanced AI, society will accept it, and that computer implants can actually produce major intelligence gains in the human brain. 
Even if we could successfully enhance ourselves with brain implants before the invention of Artificial Superintelligence (ASI), there is no way to guarantee that this would protect us from negative outcomes, and an ASI with ill-defined goals could still pose a threat to us.\n\nIt's not that Ray Kurzweil's ideas are impossible, it's just that his predictions are too specific, confident, and reliant on strange assumptions."], "entry": "Answer to We’re going to merge with the machines so this will never be a problem, right?", "id": "87b83683e170d9921d62606aa262f7ff"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Could we program an AI automatically shut down if it starts doing things we don’t want it to?", "authors": "n/a", "date_published": "n/a", "text": "Question: Could we program an AI automatically shut down if it starts doing things we don't want it to?\n\nAnswer: One thing that might make your AI system safer is to include an off switch. If it ever does anything we don't like, we can turn it off. This implicitly assumes that we'll be able to turn it off before things get bad, which might be false in a world where the AI thinks much faster than humans. Even assuming that we'll notice in time, off switches turn out to not have the properties you would want them to have.\n\nHumans have a lot of off switches. Humans also have a strong preference to not be turned off; they defend their off switches when other people try to press them. One possible reason for this is because humans prefer not to die, but there are other reasons.\n\nSuppose that there's a parent that cares nothing for their own life and cares only for the life of their child. If you tried to turn that parent off, they would try and stop you. They wouldn't try to stop you because they intrinsically wanted to be turned off, but rather because there are fewer people to protect their child if they were turned off. People that want a world to look a certain shape will not want to be turned off because then it will be less likely for the world to look that shape; a parent that wants their child to be protected will protect themselves to continue protecting their child.\n\nFor this reason, it turns out to be difficult to install an off switch on a powerful AI system in a way that doesn't result in the AI preventing itself from being turned off.\n\nIdeally, you would want a system that knows that it should stop doing whatever it's doing when someone tries to turn it off. The technical term for this is 'corrigibility'; roughly speaking, an AI system is corrigible if it doesn't resist human attempts to help and correct it. [https://intelligence.org/ People] are working hard on trying to make this possible, but it's currently not clear how we would do this even in simple cases.", "question": "Could we program an AI automatically shut down if it starts doing things we don’t want it to?", "answer": ["One thing that might make your AI system safer is to include an off switch. If it ever does anything we don’t like, we can turn it off. This implicitly assumes that we’ll be able to turn it off before things get bad, which might be false in a world where the AI thinks much faster than humans. Even assuming that we’ll notice in time, off switches turn out to not have the properties you would want them to have.\n\nHumans have a lot of off switches. Humans also have a strong preference to not be turned off; they defend their off switches when other people try to press them. 
One possible reason for this is because humans prefer not to die, but there are other reasons.\n\nSuppose that there’s a parent that cares nothing for their own life and cares only for the life of their child. If you tried to turn that parent off, they would try and stop you. They wouldn’t try to stop you because they intrinsically wanted to be turned off, but rather because there are fewer people to protect their child if they were turned off. People that want a world to look a certain shape will not want to be turned off because then it will be less likely for the world to look that shape; a parent that wants their child to be protected will protect themselves to continue protecting their child.\n\nFor this reason, it turns out to be difficult to install an off switch on a powerful AI system in a way that doesn’t result in the AI preventing itself from being turned off.\n\nIdeally, you would want a system that knows that it should stop doing whatever it’s doing when someone tries to turn it off. The technical term for this is ‘corrigibility’; roughly speaking, an AI system is corrigible if it doesn’t resist human attempts to help and correct it. [https://intelligence.org/ People] are working hard on trying to make this possible, but it’s currently not clear how we would do this even in simple cases."], "entry": "Plex's Answer to Could we program an AI to automatically shut down if it starts doing things we don’t want it to?", "id": "51c762d00e466f611dd80b8e7bab1916"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why should we prepare for human-level AI technology now rather than decades down the line when it's closer?\n\nAnswer: First, even \"narrow\" AI systems, which approach or surpass human intelligence in a small set of capabilities (such as image or voice recognition) already raise important questions regarding their impact on society. Making autonomous vehicles safe, analyzing the strategic and ethical dimensions of autonomous weapons, and the effect of AI on the global employment and economic systems are three examples. Second, the longer-term implications of human or super-human artificial intelligence are dramatic, and there is no consensus on how quickly such capabilities will be developed. Many experts believe there is a chance it could happen rather soon, making it imperative to begin investigating long-term safety issues now, if only to get a better sense of how much early progress is actually possible.", "question": "Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?", "answer": ["First, even “narrow” AI systems, which approach or surpass human intelligence in a small set of capabilities (such as image or voice recognition) already raise important questions regarding their impact on society. Making autonomous vehicles safe, analyzing the strategic and ethical dimensions of autonomous weapons, and the effect of AI on the global employment and economic systems are three examples. Second, the longer-term implications of human or super-human artificial intelligence are dramatic, and there is no consensus on how quickly such capabilities will be developed. 
Many experts believe there is a chance it could happen rather soon, making it imperative to begin investigating long-term safety issues now, if only to get a better sense of how much early progress is actually possible."], "entry": "Answer to Why should we prepare for human-level AI technology now rather than decades down the line when it’s closer?", "id": "3557cee4f4b6a7a4abcdbb731b44d22d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is a follow-up question on Stampy's Wiki?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is a follow-up question on Stampy's Wiki?\n\nAnswer: '''Follow-up questions''' are responses to an [[answer]] which reader might have, either because they want more information or are providing information to Stampy about what they're looking for. We don't expect to have great coverage of the former for a long time because there will be so many, but hopefully we'll be able to handle some of the most common ones.", "question": "What is a follow-up question on Stampy's Wiki?", "answer": ["'''Follow-up questions''' are responses to an [[answer]] which reader might have, either because they want more information or are providing information to Stampy about what they're looking for. We don't expect to have great coverage of the former for a long time because there will be so many, but hopefully we'll be able to handle some of the most common ones."], "entry": "Plex's Answer to What is a follow-up question on Stampy's Wiki?", "id": "845dcd9389c5fea0f2c03e436a251e8d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might AGI kill people?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might AGI kill people?\n\nAnswer: If we pose a serious threat, it could hack our weapons systems and turn them against us. Future militaries are much more vulnerable to this due to rapidly progressing autonomous weapons. There's also the option of creating bioweapons and distributing them to the most unstable groups you can find, tricking nations into WW3, or dozens of other things an agent many times smarter than any human with the ability to develop arbitrary technology, hack things (including communications), and manipulate people, or many other possibilities that something smarter than a human could think up. More can be found [https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms here].\n \nIf we are not a threat, in the course of pursuing its goals it may consume vital resources that humans need (e.g. using land for solar panels instead of farm crops). This video goes into more detail:\n\n(youtube)ZeecOKBus3Q(/youtube)", "question": "How might AGI kill people?", "answer": ["If we pose a serious threat, it could hack our weapons systems and turn them against us. Future militaries are much more vulnerable to this due to rapidly progressing autonomous weapons. There’s also the option of creating bioweapons and distributing them to the most unstable groups you can find, tricking nations into WW3, or dozens of other things an agent many times smarter than any human with the ability to develop arbitrary technology, hack things (including communications), and manipulate people, or many other possibilities that something smarter than a human could think up. 
More can be found [https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms here].\n \nIf we are not a threat, in the course of pursuing its goals it may consume vital resources that humans need (e.g. using land for solar panels instead of farm crops). This video goes into more detail:\n\n(youtube)ZeecOKBus3Q(/youtube)"], "entry": "Plex's Answer to How might AGI kill people?", "id": "dc64d05961528752ed5967564f64e2ea"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What approaches are AI alignment organizations working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What approaches are AI alignment organizations working on?\n\nAnswer: Each major organization has a different approach. The [https://www.lesswrong.com/tag/research-agendas research agendas are detailed and complex] (see also [https://aiwatch.issarice.com/ AI Watch]). Getting more brains working on any of them (and more money to fund them) may pay off in a big way, but it's very hard to be confident which (if any) of them will actually work.\n \nThe following is a massive oversimplification, each organization actually pursues many different avenues of research, read the [https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison 2021 AI Alignment Literature Review and Charity Comparison] for much more detail. That being said:\n \n* The [https://intelligence.org/research-guide/ Machine Intelligence Research Institute] focuses on foundational mathematical research to understand reliable reasoning, which they think is necessary to provide anything like an assurance that a seed AI built will do good things if activated.\n* The [https://humancompatible.ai Center for Human-Compatible AI] focuses on [https://www.lesswrong.com/tag/inverse-reinforcement-learning Cooperative Inverse Reinforcement Learning] and [https://www.lesswrong.com/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words Assistance Games], a new paradigm for AI where they try to optimize for doing the kinds of things humans want rather than for a pre-specified utility function\n* [https://ai-alignment.com/ Paul Christano]'s [https://alignmentresearchcenter.org/ Alignment Research Center] focuses is on [https://www.lesswrong.com/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment prosaic alignment], particularly on creating tools that empower humans to understand and guide systems much smarter than ourselves. 
His methodology is explained on [https://ai-alignment.com/my-research-methodology-b94f2751cb2c his blog].\n* The [https://www.fhi.ox.ac.uk Future of Humanity Institute] does work on [https://www.lesswrong.com/tag/crucial-considerations crucial considerations] and other x-risks, as well as AI safety research and outreach.\n* [https://www.anthropic.com/ Anthropic] is a new organization exploring natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.\n* [http://openai.com OpenAI] is in a state of flux after major changes to their safety team.\n* [https://medium.com/@deepmindsafetyresearch DeepMind]'s safety team is working on various approaches designed to work with modern machine learning, and does some communication via the [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter].\n* [https://www.eleuther.ai/ EleutherAI] is a Machine Learning collective aiming to build large open source language models to allow more alignment research to take place.\n* [https://ought.org/ Ought] is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems.\n\nThere are many other projects around AI Safety, such as [https://www.youtube.com/watch?v꞊7i_f4Kbpgn4 the Windfall clause], [https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg Rob Miles's YouTube channel], [https://www.aisafetysupport.org/ AI Safety Support], etc.", "question": "What approaches are AI alignment organizations working on?", "answer": ["Each major organization has a different approach. The [https://www.lesswrong.com/tag/research-agendas research agendas are detailed and complex] (see also [https://aiwatch.issarice.com/ AI Watch]). Getting more brains working on any of them (and more money to fund them) may pay off in a big way, but it’s very hard to be confident which (if any) of them will actually work.\n \nThe following is a massive oversimplification, each organization actually pursues many different avenues of research, read the [https://www.lesswrong.com/posts/C4tR3BEpuWviT7Sje/2021-ai-alignment-literature-review-and-charity-comparison 2021 AI Alignment Literature Review and Charity Comparison] for much more detail. That being said:\n \n* The [https://intelligence.org/research-guide/ Machine Intelligence Research Institute] focuses on foundational mathematical research to understand reliable reasoning, which they think is necessary to provide anything like an assurance that a seed AI built will do good things if activated.\n* The [https://humancompatible.ai Center for Human-Compatible AI] focuses on [https://www.lesswrong.com/tag/inverse-reinforcement-learning Cooperative Inverse Reinforcement Learning] and [https://www.lesswrong.com/posts/qPoaA5ZSedivA4xJa/our-take-on-chai-s-research-agenda-in-under-1500-words Assistance Games], a new paradigm for AI where they try to optimize for doing the kinds of things humans want rather than for a pre-specified utility function\n* [https://ai-alignment.com/ Paul Christano]'s [https://alignmentresearchcenter.org/ Alignment Research Center] focuses is on [https://www.lesswrong.com/posts/YTq4X6inEudiHkHDF/prosaic-ai-alignment prosaic alignment], particularly on creating tools that empower humans to understand and guide systems much smarter than ourselves. 
His methodology is explained on [https://ai-alignment.com/my-research-methodology-b94f2751cb2c his blog].\n* The [https://www.fhi.ox.ac.uk Future of Humanity Institute] does work on [https://www.lesswrong.com/tag/crucial-considerations crucial considerations] and other x-risks, as well as AI safety research and outreach.\n* [https://www.anthropic.com/ Anthropic] is a new organization exploring natural language, human feedback, scaling laws, reinforcement learning, code generation, and interpretability.\n* [http://openai.com OpenAI] is in a state of flux after major changes to their safety team.\n* [https://medium.com/@deepmindsafetyresearch DeepMind]’s safety team is working on various approaches designed to work with modern machine learning, and does some communication via the [https://rohinshah.com/alignment-newsletter/ Alignment Newsletter].\n* [https://www.eleuther.ai/ EleutherAI] is a Machine Learning collective aiming to build large open source language models to allow more alignment research to take place.\n* [https://ought.org/ Ought] is a research lab that develops mechanisms for delegating open-ended thinking to advanced machine learning systems.\n\nThere are many other projects around AI Safety, such as [https://www.youtube.com/watch?v꞊7i_f4Kbpgn4 the Windfall clause], [https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg Rob Miles’s YouTube channel], [https://www.aisafetysupport.org/ AI Safety Support], etc."], "entry": "Plex's Answer to What approaches are AI alignment organizations working on?", "id": "71c3b143b855e91f33ba7b895c5a12cf"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How could poorly defined goals lead such negative outcomes?", "authors": "n/a", "date_published": "n/a", "text": "Question: How could poorly defined goals lead such negative outcomes?\n\nAnswer: There is a broad range of possible goals that an AI might possess, but there are a few basic drives that would be useful to almost any of them. These are called instrumentally convergent goals:\n\n# Self preservation. An agent is less likely to achieve its goal if it is not around to see to its completion.\n# Goal-content integrity. An agent is less likely to achieve its goal if its goal has been changed to something else. For example, if you offer Gandhi a pill that makes him want to kill people, he will refuse to take it.\n# Self-improvement. An agent is more likely to achieve its goal if it is more intelligent and better at problem-solving.\n# Resource acquisition. The more resources at an agent's disposal, the more power it has to make change towards its goal. Even a purely computational goal, such as computing digits of pi, can be easier to achieve with more hardware and energy.\n\nBecause of these drives, even a seemingly simple goal could create an Artificial Superintelligence (ASI) hell-bent on taking over the world's material resources and preventing itself from being turned off. The classic example is an ASI that was programmed to maximize the output of paper clips at a paper clip factory. The ASI had no other goal specifications other than \"maximize paper clips,\" so it converts all of the matter in the solar system into paper clips, and then sends probes to other star systems to create more factories.", "question": "How could poorly defined goals lead such negative outcomes?", "answer": ["There is a broad range of possible goals that an AI might possess, but there are a few basic drives that would be useful to almost any of them. 
These are called instrumentally convergent goals:\n\n# Self preservation. An agent is less likely to achieve its goal if it is not around to see to its completion.\n# Goal-content integrity. An agent is less likely to achieve its goal if its goal has been changed to something else. For example, if you offer Gandhi a pill that makes him want to kill people, he will refuse to take it.\n# Self-improvement. An agent is more likely to achieve its goal if it is more intelligent and better at problem-solving.\n# Resource acquisition. The more resources at an agent’s disposal, the more power it has to make change towards its goal. Even a purely computational goal, such as computing digits of pi, can be easier to achieve with more hardware and energy.\n\nBecause of these drives, even a seemingly simple goal could create an Artificial Superintelligence (ASI) hell-bent on taking over the world’s material resources and preventing itself from being turned off. The classic example is an ASI that was programmed to maximize the output of paper clips at a paper clip factory. The ASI had no other goal specifications other than “maximize paper clips,” so it converts all of the matter in the solar system into paper clips, and then sends probes to other star systems to create more factories."], "entry": "Answer to How could poorly defined goals lead to such negative outcomes?", "id": "0c597c891b3256f18e301e2c41fb598a"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the style guidelines for writing for Stampy?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the style guidelines for writing for Stampy?\n\nAnswer: Avoid directly responding to the question in the answer, repeat the relevant part of the question instead. For example, if the question is \"Can we do X\", answer \"We might be able to do X, if we can do Y\", not \"Yes, if we can manage Y\". This way, the answer will also work for the questions \"Why can't we do X\" and \"What would happen if we tried to do X\".\n\nLinking to external sites is strongly encouraged, one of the most valuable things Stampy can do is help people find other parts of the alignment information ecosystem.\n\nConsider enclosing newly introduced terms, likely to be unfamiliar to many readers, in speech marks. If unsure, Google the term (in speech marks!) and see if it shows up anywhere other than LessWrong, the Alignment Forum, etc. Be judicious, as it's easy to use too many, but used carefully they can psychologically cushion newbies from a lot of unfamiliar terminology - in this context they're saying something like \"we get that we're hitting you with a lot of new vocab, and you might not know what this term means yet\".\n\nWhen selecting related questions, there shouldn't be more than four unless there's a really good reason for that (some questions are asking for it, like the \"Why can't we just...\" question). It's also recommended to include at least one more \"enticing\" question to draw users in (relating to the more sensational, sci-fi, philosophical/ethical side of things) alongside more bland/neutral questions.", "question": "What are the style guidelines for writing for Stampy?", "answer": ["Avoid directly responding to the question in the answer, repeat the relevant part of the question instead. For example, if the question is \"Can we do X\", answer \"We might be able to do X, if we can do Y\", not \"Yes, if we can manage Y\". 
This way, the answer will also work for the questions \"Why can't we do X\" and \"What would happen if we tried to do X\".\n\nLinking to external sites is strongly encouraged, one of the most valuable things Stampy can do is help people find other parts of the alignment information ecosystem.\n\nConsider enclosing newly introduced terms, likely to be unfamiliar to many readers, in speech marks. If unsure, Google the term (in speech marks!) and see if it shows up anywhere other than LessWrong, the Alignment Forum, etc. Be judicious, as it's easy to use too many, but used carefully they can psychologically cushion newbies from a lot of unfamiliar terminology - in this context they're saying something like \"we get that we're hitting you with a lot of new vocab, and you might not know what this term means yet\".\n\nWhen selecting related questions, there shouldn't be more than four unless there's a really good reason for that (some questions are asking for it, like the \"Why can't we just...\" question). It's also recommended to include at least one more \"enticing\" question to draw users in (relating to the more sensational, sci-fi, philosophical/ethical side of things) alongside more bland/neutral questions."], "entry": "Plex's Answer to What are the style guidelines for writing for Stampy?", "id": "784eb43ffaa178768f5a375d65df8940"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Are there types of advanced AI that would be safer than others?", "authors": "n/a", "date_published": "n/a", "text": "Question: Are there types of advanced AI that would be safer than others?\n\nAnswer: We don't yet know which AI architectures are safe; learning more about this is one of the goals of FLI's grants program. AI researchers are generally very responsible people who want their work to better humanity. If there are certain AI designs that turn out to be unsafe, then AI researchers will want to know this so they can develop alternative AI systems.", "question": "Are there types of advanced AI that would be safer than others?", "answer": ["We don’t yet know which AI architectures are safe; learning more about this is one of the goals of FLI's grants program. AI researchers are generally very responsible people who want their work to better humanity. If there are certain AI designs that turn out to be unsafe, then AI researchers will want to know this so they can develop alternative AI systems."], "entry": "Answer to Are there types of advanced AI that would be safer than others?", "id": "325763462bf2e82a20a334caeeacba9e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might a superintelligence socially manipulate humans?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might a superintelligence socially manipulate humans?\n\nAnswer: People tend to imagine AIs as being like nerdy humans – brilliant at technology but clueless about social skills. There is no reason to expect this – persuasion and manipulation is a different kind of skill from solving mathematical proofs, but it's still a skill, and an intellect as far beyond us as we are beyond lions might be smart enough to replicate or exceed the \"charming sociopaths\" who can naturally win friends and followers despite a lack of normal human emotions.\n\nA superintelligence might be able to analyze human psychology deeply enough to understand the hopes and fears of everyone it negotiates with. 
Single humans using psychopathic social manipulation have done plenty of harm – Hitler leveraged his skill at oratory and his understanding of people's darkest prejudices to take over a continent. Why should we expect superintelligences to do worse than humans far less skilled than they?\n\nMore outlandishly, a superintelligence might just skip language entirely and figure out a weird pattern of buzzes and hums that causes conscious thought to seize up, and which knocks anyone who hears it into a weird hypnotizable state in which they'll do anything the superintelligence asks. It sounds kind of silly to me, but then, nuclear weapons probably would have sounded kind of silly to lions sitting around speculating about what humans might be able to accomplish. When you're dealing with something unbelievably more intelligent than you are, you should probably expect the unexpected.", "question": "How might a superintelligence socially manipulate humans?", "answer": ["People tend to imagine AIs as being like nerdy humans – brilliant at technology but clueless about social skills. There is no reason to expect this – persuasion and manipulation is a different kind of skill from solving mathematical proofs, but it’s still a skill, and an intellect as far beyond us as we are beyond lions might be smart enough to replicate or exceed the “charming sociopaths” who can naturally win friends and followers despite a lack of normal human emotions.\n\nA superintelligence might be able to analyze human psychology deeply enough to understand the hopes and fears of everyone it negotiates with. Single humans using psychopathic social manipulation have done plenty of harm – Hitler leveraged his skill at oratory and his understanding of people’s darkest prejudices to take over a continent. Why should we expect superintelligences to do worse than humans far less skilled than they?\n\nMore outlandishly, a superintelligence might just skip language entirely and figure out a weird pattern of buzzes and hums that causes conscious thought to seize up, and which knocks anyone who hears it into a weird hypnotizable state in which they’ll do anything the superintelligence asks. It sounds kind of silly to me, but then, nuclear weapons probably would have sounded kind of silly to lions sitting around speculating about what humans might be able to accomplish. When you’re dealing with something unbelievably more intelligent than you are, you should probably expect the unexpected."], "entry": "Answer to How might a superintelligence socially manipulate humans?", "id": "dc40fa97794a6640e475c46c8697b1ab"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why think that AI can outperform humans?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why think that AI can outperform humans?\n\nAnswer: Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, and more. However, human intelligence continues to dominate machine intelligence in generality.\n\nA powerful chess computer is \"narrow\": it can't play other games. 
In contrast, humans have problem-solving abilities that allow us to adapt to new contexts and excel in many domains other than what the ancestral environment prepared us for.\n\nIn the absence of a [https://intelligence.org/2013/06/19/what-is-intelligence-2/ formal definition of \"intelligence\"] (and therefore of [https://intelligence.org/2013/08/11/what-is-agi/ \"artificial intelligence\"]), we can heuristically cite humans' perceptual, inferential, and deliberative faculties (as opposed to, e.g., our physical strength or agility) and say that intelligence is \"those kinds of things.\" On this conception, intelligence is a bundle of distinct faculties — albeit a very important bundle that includes our capacity for science.\n\nOur cognitive abilities stem from high-level patterns in our brains, and these patterns can be instantiated in silicon as well as carbon. This tells us that general AI is possible, though it doesn't tell us how difficult it is. If intelligence is sufficiently difficult to understand, then we may arrive at machine intelligence by scanning and emulating human brains or by some trial-and-error process (like evolution), rather than by hand-coding a software agent.\n\nIf machines can achieve human equivalence in cognitive tasks, then it is very likely that they can eventually outperform humans. There is little reason to expect that biological evolution, with its lack of foresight and planning, would have hit upon the optimal algorithms for general intelligence (any more than it hit upon the optimal flying machine in birds). Beyond qualitative improvements in cognition, Nick Bostrom notes [http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/ more straightforward advantages we could realize in digital minds], e.g.:\n* editability — \"It is easier to experiment with parameter variations in software than in neural wetware.\"\n* speed — \"The speed of light is more than a million times greater than that of neural transmission, synaptic spikes dissipate more than a million times more heat than is thermodynamically necessary, and current transistor frequencies are more than a million times faster than neuron spiking frequencies.\"\n* serial depth — On short timescales, machines can carry out much longer sequential processes.\n* storage capacity — Computers can plausibly have greater working and long-term memory.\n* size — Computers can be much larger than a human brain.\n* duplicability — Copying software onto new hardware can be much faster and higher-fidelity than biological reproduction.\nAny one of these advantages could give an AI reasoner an edge over a human reasoner, or give a group of AI reasoners an edge over a human group. Their combination suggests that digital minds could surpass human minds more quickly and decisively than we might expect.", "question": "Why think that AI can outperform humans?", "answer": ["Machines are already smarter than humans are at many specific tasks: performing calculations, playing chess, searching large databanks, detecting underwater mines, and more. However, human intelligence continues to dominate machine intelligence in generality.\n\nA powerful chess computer is “narrow”: it can’t play other games. 
In contrast, humans have problem-solving abilities that allow us to adapt to new contexts and excel in many domains other than what the ancestral environment prepared us for.\n\nIn the absence of a [https://intelligence.org/2013/06/19/what-is-intelligence-2/ formal definition of “intelligence”] (and therefore of [https://intelligence.org/2013/08/11/what-is-agi/ “artificial intelligence”]), we can heuristically cite humans’ perceptual, inferential, and deliberative faculties (as opposed to, e.g., our physical strength or agility) and say that intelligence is “those kinds of things.” On this conception, intelligence is a bundle of distinct faculties — albeit a very important bundle that includes our capacity for science.\n\nOur cognitive abilities stem from high-level patterns in our brains, and these patterns can be instantiated in silicon as well as carbon. This tells us that general AI is possible, though it doesn’t tell us how difficult it is. If intelligence is sufficiently difficult to understand, then we may arrive at machine intelligence by scanning and emulating human brains or by some trial-and-error process (like evolution), rather than by hand-coding a software agent.\n\nIf machines can achieve human equivalence in cognitive tasks, then it is very likely that they can eventually outperform humans. There is little reason to expect that biological evolution, with its lack of foresight and planning, would have hit upon the optimal algorithms for general intelligence (any more than it hit upon the optimal flying machine in birds). Beyond qualitative improvements in cognition, Nick Bostrom notes [http://aiimpacts.org/sources-of-advantage-for-artificial-intelligence/ more straightforward advantages we could realize in digital minds], e.g.:\n* editability — “It is easier to experiment with parameter variations in software than in neural wetware.”\n* speed — “The speed of light is more than a million times greater than that of neural transmission, synaptic spikes dissipate more than a million times more heat than is thermodynamically necessary, and current transistor frequencies are more than a million times faster than neuron spiking frequencies.”\n* serial depth — On short timescales, machines can carry out much longer sequential processes.\n* storage capacity — Computers can plausibly have greater working and long-term memory.\n* size — Computers can be much larger than a human brain.\n* duplicability — Copying software onto new hardware can be much faster and higher-fidelity than biological reproduction.\nAny one of these advantages could give an AI reasoner an edge over a human reasoner, or give a group of AI reasoners an edge over a human group. Their combination suggests that digital minds could surpass human minds more quickly and decisively than we might expect."], "entry": "Answer to Why think that AI can outperform humans?", "id": "6fab467c278724ca48194b94d8f94f51"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How might an AI achieve a seemingly beneficial goal via inappropriate means?", "authors": "n/a", "date_published": "n/a", "text": "Question: How might an AI achieve a seemingly beneficial goal via inappropriate means?\n\nAnswer: Imagine, for example, that you are tasked with reducing traffic congestion in San Francisco at all costs, i.e. you do not take into account any other constraints. How would you do it? You might start by just timing traffic lights better. 
But wouldn't there be less traffic if all the bridges closed down from 5 to 10AM, preventing all those cars from entering the city? Such a measure obviously violates common sense, and subverts the purpose of improving traffic, which is to help people get around – but it is consistent with the goal of \"reducing traffic congestion\".", "question": "How might an AI achieve a seemingly beneficial goal via inappropriate means?", "answer": ["Imagine, for example, that you are tasked with reducing traffic congestion in San Francisco at all costs, i.e. you do not take into account any other constraints. How would you do it? You might start by just timing traffic lights better. But wouldn’t there be less traffic if all the bridges closed down from 5 to 10AM, preventing all those cars from entering the city? Such a measure obviously violates common sense, and subverts the purpose of improving traffic, which is to help people get around – but it is consistent with the goal of “reducing traffic congestion”."], "entry": "Answer to How might an AI achieve a seemingly beneficial goal via inappropriate means?", "id": "9d61057ece6e88cb4323f9f5ac38bdf9"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are brain-computer interfaces?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are brain-computer interfaces?\n\nAnswer: A brain-computer interface (BCI) is a direct communication pathway between the brain and a computer device. BCI research is heavily funded, and has already met dozens of successes. Three successes in human BCIs are [http://archives.cnn.com/2002/HEALTH/06/13/cov.bionic.eye/index.html a device] that restores (partial) sight to the blind, [http://en.wikipedia.org/wiki/Cochlear_implant cochlear implants] that restore hearing to the deaf, and [https://pubmed.ncbi.nlm.nih.gov/16838014/ a device] that allows use of an artificial hand by direct thought.\n\nSuch devices restore impaired functions, but many researchers expect to also augment and improve normal human abilities with BCIs. [http://edboyden.org/ Ed Boyden] is researching these opportunities as the lead of the [http://syntheticneurobiology.org/ Synthetic Neurobiology Group] at MIT. Such devices might hasten the arrival of an intelligence explosion, if only by improving human intelligence so that the hard problems of AI can be solved more rapidly.\n\nSee also:\n\nWikipedia, [http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface Brain-computer interface]", "question": "What are brain-computer interfaces?", "answer": ["A brain-computer interface (BCI) is a direct communication pathway between the brain and a computer device. BCI research is heavily funded, and has already met dozens of successes. Three successes in human BCIs are [http://archives.cnn.com/2002/HEALTH/06/13/cov.bionic.eye/index.html a device] that restores (partial) sight to the blind, [http://en.wikipedia.org/wiki/Cochlear_implant cochlear implants] that restore hearing to the deaf, and [https://pubmed.ncbi.nlm.nih.gov/16838014/ a device] that allows use of an artificial hand by direct thought.\n\nSuch devices restore impaired functions, but many researchers expect to also augment and improve normal human abilities with BCIs. [http://edboyden.org/ Ed Boyden] is researching these opportunities as the lead of the [http://syntheticneurobiology.org/ Synthetic Neurobiology Group] at MIT. 
Such devices might hasten the arrival of an intelligence explosion, if only by improving human intelligence so that the hard problems of AI can be solved more rapidly.\n\nSee also:\n\nWikipedia, [http://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface Brain-computer interface]"], "entry": "Answer to What are brain-computer interfaces?", "id": "0daa9889f9d219418bc522dc20f4b466"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What is a duplicate question on Stampy's Wiki?", "authors": "n/a", "date_published": "n/a", "text": "Question: What is a duplicate question on Stampy's Wiki?\n\nAnswer: An existing question is a duplicate of a new one if it is reasonable to expect whoever asked the new question to be satisfied if they received an answer to the existing question instead.", "question": "What is a duplicate question on Stampy's Wiki?", "answer": ["An existing question is a duplicate of a new one if it is reasonable to expect whoever asked the new question to be satisfied if they received an answer to the existing question instead."], "entry": "Plex's Answer to What is a duplicate question on Stampy's Wiki?", "id": "2a64ed41f92945e1f50969ab0271da46"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?", "authors": "n/a", "date_published": "n/a", "text": "Question: Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?\n\nAnswer: Blindly following the trendlines while forecasting technological progress is certainly a risk (affectionately known in AI circles as \"pulling a Kurzweill\"), but sometimes taking an exponential trend seriously is the right response.\n\nConsider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it only took five hundred more years, until 1500, or so, for the economy to double again. And then it only took another three hundred years or so, until 1800, for the economy to double a third time. \nSomeone in 1800 might calculate the trend line and say this was ridiculous, that it implied the economy would be doubling every ten years or so in the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval, used to a thousand-year doubling time (which was based mostly on population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.\n\nLikewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. During his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. \nAlmost as soon as Moore's Law become well-known, people started saying it was absurd to follow it off a cliff – such a law would imply a million transistors per chip in 1990, a hundred million in 2000, ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.\n\nNone of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can't possibly be. 
We can't be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It's just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff.", "question": "Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?", "answer": ["Blindly following the trendlines while forecasting technological progress is certainly a risk (affectionately known in AI circles as “pulling a Kurzweill”), but sometimes taking an exponential trend seriously is the right response.\n\nConsider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it only took five hundred more years, until 1500, or so, for the economy to double again. And then it only took another three hundred years or so, until 1800, for the economy to double a third time. \nSomeone in 1800 might calculate the trend line and say this was ridiculous, that it implied the economy would be doubling every ten years or so in the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval, used to a thousand-year doubling time (which was based mostly on population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.\n\nLikewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. During his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. \nAlmost as soon as Moore’s Law become well-known, people started saying it was absurd to follow it off a cliff – such a law would imply a million transistors per chip in 1990, a hundred million in 2000, ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.\n\nNone of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can’t possibly be. We can’t be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It’s just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff."], "entry": "Answer to Is expecting large returns from AI self-improvement just following an exponential trend line off a cliff?", "id": "2592a13adc39c0463bde42e8ea4cafbb"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How can I contact the Stampy team?", "authors": "n/a", "date_published": "n/a", "text": "Question: How can I contact the Stampy team?\n\nAnswer: The '''[https://discord.com/channels/6775469013395339504646 Rob Miles AI Discord]''' is the hub of all things [[Stampy]]. If you want to be part of the project and don't have access yet, ask plex#1874 on Discord (or [[User_talk:756254556811165756┊plex]] on wiki).\n\nYou can also talk to us on the [https://discord.gg/cEzKz8QCpa public Discord]! 
Try [https://discord.com/channels/8939371061943858750986 #suggestions] or [https://discord.com/channels/8939371061943194399257 #general], depending on what you want to talk about.", "question": "How can I contact the Stampy team?", "answer": ["The '''[https://discord.com/channels/677546901339504640/677546901339504646 Rob Miles AI Discord]''' is the hub of all things [[Stampy]]. If you want to be part of the project and don't have access yet, ask plex#1874 on Discord (or [[User_talk:756254556811165756┊plex]] on wiki).\n\nYou can also talk to us on the [https://discord.gg/cEzKz8QCpa public Discord]! Try [https://discord.com/channels/893937106194399254/908318480858750986 #suggestions] or [https://discord.com/channels/893937106194399254/893937106194399257 #general], depending on what you want to talk about."], "entry": "Plex's Answer to How can I contact the Stampy team?", "id": "f3785736eb04e2e90f952e7ed1d9fbda"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Aren’t there some pretty easy ways eliminate these potential problems?", "authors": "n/a", "date_published": "n/a", "text": "Question: Aren't there some pretty easy ways eliminate these potential problems?\n\nAnswer: It might look like there are straightforward ways to eliminate the problems of unaligned superintelligence, but so far all of them turn out to have hidden difficulties. There are many open problems identified by the research community which a solution would need to reliably overcome to be successful.", "question": "Aren’t there some pretty easy ways eliminate these potential problems?", "answer": ["It might look like there are straightforward ways to eliminate the problems of unaligned superintelligence, but so far all of them turn out to have hidden difficulties. There are many open problems identified by the research community which a solution would need to reliably overcome to be successful."], "entry": "Answer to Aren’t there some pretty easy ways to eliminate these potential problems?", "id": "aaf728f3b5aa69133dd59d17a6c7c7ae"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why would great intelligence produce great power?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why would great intelligence produce great power?\n\nAnswer: Intelligence is powerful. One might say that \"Intelligence is no match for a gun, or for someone with lots of money,\" but both guns and money were produced by intelligence. If not for our intelligence, humans would still be foraging the savannah for food.\n\nIntelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.\n\nA machine with superintelligence would be able to hack into vulnerable networks via the internet, commandeer those resources for additional computing power, take over mobile machines connected to networks connected to the internet, use them to build additional machines, perform scientific experiments to understand the world better than humans can, invent quantum computing and nanotechnology, manipulate the social world better than we can, and do whatever it can to give itself more power to achieve its goals — all at a speed much faster than humans can respond to.\n\nSee also\n* Legg (2008). 
[http://www.vetta.org/documents/Machine_Super_Intelligence.pdf Machine Super Intelligence]. PhD Thesis. IDSIA.\n* Yudkowsky (2007). [http://yudkowsky.net/singularity/power The Power of Intelligence].", "question": "Why would great intelligence produce great power?", "answer": ["Intelligence is powerful. One might say that “Intelligence is no match for a gun, or for someone with lots of money,” but both guns and money were produced by intelligence. If not for our intelligence, humans would still be foraging the savannah for food.\n\nIntelligence is what caused humans to dominate the planet in the blink of an eye (on evolutionary timescales). Intelligence is what allows us to eradicate diseases, and what gives us the potential to eradicate ourselves with nuclear war. Intelligence gives us superior strategic skills, superior social skills, superior economic productivity, and the power of invention.\n\nA machine with superintelligence would be able to hack into vulnerable networks via the internet, commandeer those resources for additional computing power, take over mobile machines connected to networks connected to the internet, use them to build additional machines, perform scientific experiments to understand the world better than humans can, invent quantum computing and nanotechnology, manipulate the social world better than we can, and do whatever it can to give itself more power to achieve its goals — all at a speed much faster than humans can respond to.\n\nSee also\n* Legg (2008). [http://www.vetta.org/documents/Machine_Super_Intelligence.pdf Machine Super Intelligence]. PhD Thesis. IDSIA.\n* Yudkowsky (2007). [http://yudkowsky.net/singularity/power The Power of Intelligence]."], "entry": "Answer to Why would great intelligence produce great power?", "id": "7b9fcd67ae11f98ce9b7add5cfef43b4"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the different possible AI takeoff speeds?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the different possible AI takeoff speeds?\n\nAnswer: A slow takeoff is where AI capabilities improve gradually, giving us plenty of time to adapt. In a moderate takeoff we might see accelerating progress, but we still won't be caught off guard by a dramatic change. Whereas, in a fast or hard takeoff AI would go from being not very generally competent to sufficiently superhuman to control the future too fast for humans to course correct if something goes wrong.\n\nThe article [https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff Distinguishing definitions of takeoff] goes into more detail on this.", "question": "What are the different possible AI takeoff speeds?", "answer": ["A slow takeoff is where AI capabilities improve gradually, giving us plenty of time to adapt. In a moderate takeoff we might see accelerating progress, but we still won’t be caught off guard by a dramatic change. 
Whereas, in a fast or hard takeoff AI would go from being not very generally competent to sufficiently superhuman to control the future too fast for humans to course correct if something goes wrong.\n\nThe article [https://www.alignmentforum.org/posts/YgNYA6pj2hPSDQiTE/distinguishing-definitions-of-takeoff Distinguishing definitions of takeoff] goes into more detail on this."], "entry": "Answer to What are the different possible AI takeoff speeds?", "id": "6525fcfb73d5ed7c22901ae0fcb4c273"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are alternate phrasings for?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are alternate phrasings for?\n\nAnswer: Alternate phrasings are used to improve the semantic search which Stampy uses to serve people questions, by giving alternate ways to say a question which might trigger a match when the main wording won't. They should generally only be used when there is a significantly different wording, rather than for only very minor changes.", "question": "What are alternate phrasings for?", "answer": ["Alternate phrasings are used to improve the semantic search which Stampy uses to serve people questions, by giving alternate ways to say a question which might trigger a match when the main wording won't. They should generally only be used when there is a significantly different wording, rather than for only very minor changes."], "entry": "Plex's Answer to What are alternate phrasings for?", "id": "e03819d65422720cee9201bc47782d5e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Could AI have basic emotions?", "authors": "n/a", "date_published": "n/a", "text": "Question: Could AI have basic emotions?\n\nAnswer: In principle it could (if you believe in functionalism), but it probably won't. One way to ensure that AI has human-like emotions would be to copy the way human brain works, but that's not what most AI researchers are trying to do.\n\nIt's similar to how once some people thought we will build mechanical horses to pull our vehicles, but it turned out it's much easier to build a car. AI probably doesn't need emotions or maybe even consciousness to be powerful, and the first AGIs that will get built will be the ones that are easiest to build.", "question": "Could AI have basic emotions?", "answer": ["In principle it could (if you believe in functionalism), but it probably won't. One way to ensure that AI has human-like emotions would be to copy the way human brain works, but that's not what most AI researchers are trying to do.\n\nIt's similar to how once some people thought we will build mechanical horses to pull our vehicles, but it turned out it's much easier to build a car. AI probably doesn't need emotions or maybe even consciousness to be powerful, and the first AGIs that will get built will be the ones that are easiest to build."], "entry": "Filip's Answer to Could AI have basic emotions?", "id": "e4bd00c7d4f1b46592b9f630a901a38c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why not just put it in a box?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why not just put it in a box?\n\nAnswer: One possible way to ensure the safety of a powerful AI system is to keep it contained in a software environment. There is nothing intrinsically wrong with this procedure - keeping an AI system in a secure software environment would make it safer than letting it roam free. 
However, even AI systems inside software environments might not be safe enough.\n\nHumans sometimes put dangerous humans inside boxes to limit their ability to influence the external world. Sometimes, these humans escape their boxes. The security of a prison depends on certain assumptions, which can be violated. [https://medium.com/breakingasia/yoshie-shiratori-the-incredible-story-of-a-man-no-prison-could-hold-6d79a67345f5 Yoshie Shiratori] reportedly escaped prison by weakening the door-frame with miso soup and dislocating his shoulders.\n\nHuman written software has a [https://spacepolicyonline.com/news/boeing-software-errors-could-have-doomed-starliners-uncrewed-test-flight/ high defect rate]; we should expect a perfectly secure system to be difficult to create. If humans construct a software system they think is secure, it is possible that the security relies on a false assumption. A powerful AI system could potentially learn how its hardware works and manipulate bits to send radio signals. It could fake a malfunction and attempt social engineering when the engineers look at its code. As the saying goes: in order for someone to do something we had imagined was impossible requires only that they have a better imagination.\n\nExperimentally, humans have [https://yudkowsky.net/singularity/aibox/ convinced] other humans to let them out of the box. Spooky.", "question": "Why not just put it in a box?", "answer": ["One possible way to ensure the safety of a powerful AI system is to keep it contained in a software environment. There is nothing intrinsically wrong with this procedure - keeping an AI system in a secure software environment would make it safer than letting it roam free. However, even AI systems inside software environments might not be safe enough.\n\nHumans sometimes put dangerous humans inside boxes to limit their ability to influence the external world. Sometimes, these humans escape their boxes. The security of a prison depends on certain assumptions, which can be violated. [https://medium.com/breakingasia/yoshie-shiratori-the-incredible-story-of-a-man-no-prison-could-hold-6d79a67345f5 Yoshie Shiratori] reportedly escaped prison by weakening the door-frame with miso soup and dislocating his shoulders.\n\nHuman written software has a [https://spacepolicyonline.com/news/boeing-software-errors-could-have-doomed-starliners-uncrewed-test-flight/ high defect rate]; we should expect a perfectly secure system to be difficult to create. If humans construct a software system they think is secure, it is possible that the security relies on a false assumption. A powerful AI system could potentially learn how its hardware works and manipulate bits to send radio signals. It could fake a malfunction and attempt social engineering when the engineers look at its code. As the saying goes: in order for someone to do something we had imagined was impossible requires only that they have a better imagination.\n\nExperimentally, humans have [https://yudkowsky.net/singularity/aibox/ convinced] other humans to let them out of the box. 
Spooky."], "entry": "Answer to Why not just put it in a box?", "id": "98d8c5e63ee168a085e5cceb15dabd21"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why is safety important for smarter-than-human AI?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why is safety important for smarter-than-human AI?\n\nAnswer: Present-day AI algorithms already demand special safety guarantees when they must act in important domains without human oversight, particularly when they or their environment can change over time:\n
Achieving these gains [from autonomous systems] will depend on development of entirely new methods for enabling \"trust in autonomy\" through verification and validation (V&V) of the near-infinite state systems that result from high levels of [adaptability] and autonomy. In effect, the number of possible input states that such systems can be presented with is so large that not only is it impossible to test all of them directly, it is not even feasible to test more than an insignificantly small fraction of them. Development of such systems is thus inherently unverifiable by today's methods, and as a result their operation in all but comparatively trivial applications is uncertifiable.\n\nIt is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use.\n- Office of the US Air Force Chief Scientist (2010). [http://www.defenseinnovationmarketplace.mil/resources/AF_TechnologyHorizons2010-2030.pdf Technology Horizons: A Vision for Air Force Science and Technology 2010-30].
\nAs AI capabilities improve, it will become easier to give AI systems greater autonomy, flexibility, and control; and there will be increasingly large incentives to make use of these new possibilities. The potential for AI systems to become more general, in particular, will make it difficult to establish safety guarantees: reliable regularities during testing may not always hold post-testing.\n\nThe largest and most lasting changes in human welfare have come from scientific and technological innovation — which in turn comes from our intelligence. In the long run, then, much of AI's significance comes from its potential to automate and enhance progress in science and technology. The creation of smarter-than-human AI brings with it the basic risks and benefits of intellectual progress itself, at digital speeds.\n\nAs AI agents become more capable, it becomes more important (and more difficult) to analyze and verify their decisions and goals. Stuart Russell [http://edge.org/conversation/the-myth-of-ai#26015 writes]:\n
The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n# The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n# Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\nBostrom's \"[http://www.nickbostrom.com/superintelligentwill.pdf The Superintelligent Will]\" lays out these two concerns in more detail: that we may not correctly specify our actual goals in programming smarter-than-human AI systems, and that most agents optimizing for a misspecified goal will have incentives to treat humans adversarially, as potential threats or obstacles to achieving the agent's goal.\n\nIf the goals of human and AI agents are not well-aligned, the more knowledgeable and technologically capable agent may use force to get what it wants, as has occurred in many conflicts between human communities. Having noticed this class of concerns in advance, we have an opportunity to reduce risk from this default scenario by directing research toward aligning artificial decision-makers' interests with our own.", "question": "Why is safety important for smarter-than-human AI?", "answer": ["Present-day AI algorithms already demand special safety guarantees when they must act in important domains without human oversight, particularly when they or their environment can change over time:\n
Achieving these gains [from autonomous systems] will depend on development of entirely new methods for enabling “trust in autonomy” through verification and validation (V&V) of the near-infinite state systems that result from high levels of [adaptability] and autonomy. In effect, the number of possible input states that such systems can be presented with is so large that not only is it impossible to test all of them directly, it is not even feasible to test more than an insignificantly small fraction of them. Development of such systems is thus inherently unverifiable by today’s methods, and as a result their operation in all but comparatively trivial applications is uncertifiable.\n\nIt is possible to develop systems having high levels of autonomy, but it is the lack of suitable V&V methods that prevents all but relatively low levels of autonomy from being certified for use.\n- Office of the US Air Force Chief Scientist (2010). [http://www.defenseinnovationmarketplace.mil/resources/AF_TechnologyHorizons2010-2030.pdf Technology Horizons: A Vision for Air Force Science and Technology 2010-30].
\nAs AI capabilities improve, it will become easier to give AI systems greater autonomy, flexibility, and control; and there will be increasingly large incentives to make use of these new possibilities. The potential for AI systems to become more general, in particular, will make it difficult to establish safety guarantees: reliable regularities during testing may not always hold post-testing.\n\nThe largest and most lasting changes in human welfare have come from scientific and technological innovation — which in turn comes from our intelligence. In the long run, then, much of AI’s significance comes from its potential to automate and enhance progress in science and technology. The creation of smarter-than-human AI brings with it the basic risks and benefits of intellectual progress itself, at digital speeds.\n\nAs AI agents become more capable, it becomes more important (and more difficult) to analyze and verify their decisions and goals. Stuart Russell [http://edge.org/conversation/the-myth-of-ai#26015 writes]:\n
The primary concern is not spooky emergent consciousness but simply the ability to make high-quality decisions. Here, quality refers to the expected outcome utility of actions taken, where the utility function is, presumably, specified by the human designer. Now we have a problem:\n# The utility function may not be perfectly aligned with the values of the human race, which are (at best) very difficult to pin down.\n# Any sufficiently capable intelligent system will prefer to ensure its own continued existence and to acquire physical and computational resources – not for their own sake, but to succeed in its assigned task.\nA system that is optimizing a function of n variables, where the objective depends on a subset of size k<n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.\nBostrom’s “[http://www.nickbostrom.com/superintelligentwill.pdf The Superintelligent Will]” lays out these two concerns in more detail: that we may not correctly specify our actual goals in programming smarter-than-human AI systems, and that most agents optimizing for a misspecified goal will have incentives to treat humans adversarially, as potential threats or obstacles to achieving the agent’s goal.\n\nIf the goals of human and AI agents are not well-aligned, the more knowledgeable and technologically capable agent may use force to get what it wants, as has occurred in many conflicts between human communities. Having noticed this class of concerns in advance, we have an opportunity to reduce risk from this default scenario by directing research toward aligning artificial decision-makers’ interests with our own."], "entry": "Answer to Why is safety important for smarter-than-human AI?", "id": "a645b44bad3a42cfcb90d1cdc592e791"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why can’t we just use natural language instructions?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why can't we just use natural language instructions?\n\nAnswer: When one person tells a set of natural language instructions to another person, they are relying on much other information which is already stored in the other person's mind.\n\nIf you tell me \"don't harm other people,\" I already have a conception of what harm means and doesn't mean, what people means and doesn't mean, and my own complex moral reasoning for figuring out the edge cases in instances wherein harming people is inevitable or harming someone is necessary for self-defense or the greater good.\n\nAll of those complex definitions and systems of decision making are already in our mind, so it's easy to take them for granted. 
An AI is a mind made from scratch, so programming a goal is not as simple as telling it a natural language command."], "entry": "Answer to Why can’t we just use natural language instructions?", "id": "267f0fc1c4a3b980fddc3e532165caf3"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How likely is it that an AI would pretend be a human further its goals?", "authors": "n/a", "date_published": "n/a", "text": "Question: How likely is it that an AI would pretend be a human further its goals?\n\nAnswer: Talking about full AGI: Fairly likely, but depends on takeoff speed. In a slow takeoff of a misaligned AGI, where it is only weakly superintelligent, manipulating humans would be one of its main options for trying to further its goals for some time. Even in a fast takeoff, it's plausible that it would at least briefly manipulate humans in order to accelerate its ascent to technological superiority, though depending on what machines are available to hack at the time it may be able to skip this stage.\n\nIf the AI's goals include reference to humans it may have reason to continue deceiving us after it attains technological superiority, but will not necessarily do so. How this unfolds would depend on the details of its goals.\n\nEliezer Yudkowsky gives [https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms the example] of an AI solving protein folding, then mail-ordering synthesised DNA to a bribed or deceived human with instructions to mix the ingredients in a specific order to create [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing wet nanotechnology].", "question": "How likely is it that an AI would pretend be a human further its goals?", "answer": ["Talking about full AGI: Fairly likely, but depends on takeoff speed. In a slow takeoff of a misaligned AGI, where it is only weakly superintelligent, manipulating humans would be one of its main options for trying to further its goals for some time. Even in a fast takeoff, it’s plausible that it would at least briefly manipulate humans in order to accelerate its ascent to technological superiority, though depending on what machines are available to hack at the time it may be able to skip this stage.\n\nIf the AI's goals include reference to humans it may have reason to continue deceiving us after it attains technological superiority, but will not necessarily do so. How this unfolds would depend on the details of its goals.\n\nEliezer Yudkowsky gives [https://www.lesswrong.com/posts/pxGYZs2zHJNHvWY5b/request-for-concrete-ai-takeover-mechanisms the example] of an AI solving protein folding, then mail-ordering synthesised DNA to a bribed or deceived human with instructions to mix the ingredients in a specific order to create [https://en.wikipedia.org/wiki/Atomically_precise_manufacturing wet nanotechnology]."], "entry": "Plex's Answer to How likely is it that an AI would pretend to be a human to further its goals?", "id": "41b46033c92119189905b9c7863b302e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What are the potential benefits of AI as it grows increasingly sophisticated?", "authors": "n/a", "date_published": "n/a", "text": "Question: What are the potential benefits of AI as it grows increasingly sophisticated?\n\nAnswer: It's difficult to tell at this stage, but AI will enable many developments that could be terrifically beneficial if managed with enough foresight and care. 
For example, menial tasks could be automated, which could give rise to a society of abundance, leisure, and flourishing, free of poverty and tedium. As another example, AI could also improve our ability to understand and manipulate complex biological systems, unlocking a path to drastically improved longevity and health, and to conquering disease.", "question": "What are the potential benefits of AI as it grows increasingly sophisticated?", "answer": ["It’s difficult to tell at this stage, but AI will enable many developments that could be terrifically beneficial if managed with enough foresight and care. For example, menial tasks could be automated, which could give rise to a society of abundance, leisure, and flourishing, free of poverty and tedium. As another example, AI could also improve our ability to understand and manipulate complex biological systems, unlocking a path to drastically improved longevity and health, and to conquering disease."], "entry": "Answer to What are the potential benefits of AI as it grows increasingly sophisticated?", "id": "04b515df26a22fcd37ce8e1ae6e3942e"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "What technical problems are MIRI working on?", "authors": "n/a", "date_published": "n/a", "text": "Question: What technical problems are MIRI working on?\n\nAnswer: \"Aligning smarter-than-human AI with human interests\" is an extremely vague goal. To approach this problem productively, we attempt to factorize it into several subproblems. As a starting point, we ask: \"What aspects of this problem would we still be unable to solve even if the problem were much easier?\"\n\nIn order to achieve real-world goals more effectively than a human, a general AI system will need to be able to learn its environment over time and decide between possible proposals or actions. A simplified version of the alignment problem, then, would be to ask how we could construct a system that learns its environment and has a very crude decision criterion, like \"Select the policy that maximizes the expected number of diamonds in the world.\"\n\n''Highly reliable agent design'' is the technical challenge of formally specifying a software system that can be relied upon to pursue some preselected toy goal. An example of a subproblem in this space is [https://intelligence.org/2015/07/27/miris-approach/#2 ontology identification]: how do we formalize the goal of \"maximizing diamonds\" in full generality, allowing that a fully autonomous agent may end up in unexpected environments and may construct unanticipated hypotheses and policies? Even if we had unbounded computational power and all the time in the world, we don't currently know how to solve this problem. This suggests that we're not only missing practical algorithms but also a basic theoretical framework through which to understand the problem.\n\nThe formal agent AIXI is an attempt to define what we mean by \"optimal behavior\" in the case of a reinforcement learner. A simple AIXI-like equation is lacking, however, for defining what we mean by \"good behavior\" if the goal is to change something about the external world (and not just to maximize a pre-specified reward number). In order for the agent to evaluate its world-models to count the number of diamonds, as opposed to having a privileged reward channel, what general formal properties must its world-models possess? 
If the system updates its hypotheses (e.g., discovers that string theory is true and quantum physics is false) in a way its programmers didn't expect, how does it identify \"diamonds\" in the new model? The question is a very basic one, yet the relevant theory is currently missing.\n\nWe can distinguish highly reliable agent design from the problem of value specification: \"Once we understand how to design an autonomous AI system that promotes a goal, how do we ensure its goal actually matches what we want?\" Since human error is inevitable and we will need to be able to safely supervise and redesign AI algorithms even as they approach human equivalence in cognitive tasks, MIRI also works on formalizing error-tolerant agent properties. Artificial Intelligence: A Modern Approach, the standard textbook in AI, summarizes the challenge:\n
Yudkowsky […] asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design — to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.\n-Russell and Norvig (2009). [http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597 Artificial Intelligence: A Modern Approach].
\nOur [https://intelligence.org/technical-agenda/ technical agenda] describes these open problems in more detail, and our research guide collects online resources for learning more.", "question": "What technical problems are MIRI working on?", "answer": ["“Aligning smarter-than-human AI with human interests” is an extremely vague goal. To approach this problem productively, we attempt to factorize it into several subproblems. As a starting point, we ask: “What aspects of this problem would we still be unable to solve even if the problem were much easier?”\n\nIn order to achieve real-world goals more effectively than a human, a general AI system will need to be able to learn its environment over time and decide between possible proposals or actions. A simplified version of the alignment problem, then, would be to ask how we could construct a system that learns its environment and has a very crude decision criterion, like “Select the policy that maximizes the expected number of diamonds in the world.”\n\n''Highly reliable agent design'' is the technical challenge of formally specifying a software system that can be relied upon to pursue some preselected toy goal. An example of a subproblem in this space is [https://intelligence.org/2015/07/27/miris-approach/#2 ontology identification]: how do we formalize the goal of “maximizing diamonds” in full generality, allowing that a fully autonomous agent may end up in unexpected environments and may construct unanticipated hypotheses and policies? Even if we had unbounded computational power and all the time in the world, we don’t currently know how to solve this problem. This suggests that we’re not only missing practical algorithms but also a basic theoretical framework through which to understand the problem.\n\nThe formal agent AIXI is an attempt to define what we mean by “optimal behavior” in the case of a reinforcement learner. A simple AIXI-like equation is lacking, however, for defining what we mean by “good behavior” if the goal is to change something about the external world (and not just to maximize a pre-specified reward number). In order for the agent to evaluate its world-models to count the number of diamonds, as opposed to having a privileged reward channel, what general formal properties must its world-models possess? If the system updates its hypotheses (e.g., discovers that string theory is true and quantum physics is false) in a way its programmers didn’t expect, how does it identify “diamonds” in the new model? The question is a very basic one, yet the relevant theory is currently missing.\n\nWe can distinguish highly reliable agent design from the problem of value specification: “Once we understand how to design an autonomous AI system that promotes a goal, how do we ensure its goal actually matches what we want?” Since human error is inevitable and we will need to be able to safely supervise and redesign AI algorithms even as they approach human equivalence in cognitive tasks, MIRI also works on formalizing error-tolerant agent properties. Artificial Intelligence: A Modern Approach, the standard textbook in AI, summarizes the challenge:\n
Yudkowsky […] asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design — to design a mechanism for evolving AI under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes.\n-Russell and Norvig (2009). [http://www.amazon.com/Artificial-Intelligence-Modern-Approach-Edition/dp/0136042597 Artificial Intelligence: A Modern Approach].
\nOur [https://intelligence.org/technical-agenda/ technical agenda] describes these open problems in more detail, and our research guide collects online resources for learning more."], "entry": "Answer to What technical problems are MIRI working on?", "id": "bb5c937e0b061cc9b07ce67766b8c147"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Won’t AI be just like us?", "authors": "n/a", "date_published": "n/a", "text": "Question: Won't AI be just like us?\n\nAnswer: The degree to which an Artificial Superintelligence (ASI) would resemble us depends heavily on how it is implemented, but it seems that differences are unavoidable. If AI is accomplished through whole brain emulation and we make a big effort to make it as human as possible (including giving it a humanoid body), the AI could probably be said to think like a human. However, by definition of ASI it would be much smarter. Differences in the substrate and body might open up numerous possibilities (such as immortality, different sensors, easy self-improvement, ability to make copies, etc.). Its social experience and upbringing would likely also be entirely different. All of this can significantly change the ASI's values and outlook on the world, even if it would still use the same algorithms as we do. This is essentially the \"best case scenario\" for human resemblance, but whole brain emulation is kind of a separate field from AI, even if both aim to build intelligent machines. Most approaches to AI are vastly different and most ASIs would likely not have humanoid bodies. At this moment in time it seems much easier to create a machine that is intelligent than a machine that is exactly like a human (it's certainly a bigger target).", "question": "Won’t AI be just like us?", "answer": ["The degree to which an Artificial Superintelligence (ASI) would resemble us depends heavily on how it is implemented, but it seems that differences are unavoidable. If AI is accomplished through whole brain emulation and we make a big effort to make it as human as possible (including giving it a humanoid body), the AI could probably be said to think like a human. However, by definition of ASI it would be much smarter. Differences in the substrate and body might open up numerous possibilities (such as immortality, different sensors, easy self-improvement, ability to make copies, etc.). Its social experience and upbringing would likely also be entirely different. All of this can significantly change the ASI's values and outlook on the world, even if it would still use the same algorithms as we do. This is essentially the \"best case scenario\" for human resemblance, but whole brain emulation is kind of a separate field from AI, even if both aim to build intelligent machines. Most approaches to AI are vastly different and most ASIs would likely not have humanoid bodies. 
At this moment in time it seems much easier to create a machine that is intelligent than a machine that is exactly like a human (it's certainly a bigger target)."], "entry": "Answer to Won’t AI be just like us?", "id": "383ca783184c8c364ecde834b62f843d"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Why might an AI do something that we don’t want it to, if it’s really so intelligent?", "authors": "n/a", "date_published": "n/a", "text": "Question: Why might an AI do something that we don't want it to, if it's really so intelligent?\n\nAnswer: A Superintelligence would be intelligent enough to understand what the programmer's motives were when designing its goals, but it would have no intrinsic reason to care about what its programmers had in mind. The only thing it will be beholden to is the actual goal it is programmed with, no matter how insane its fulfillment may seem to us.\n\nConsider what \"intentions\" the process of evolution may have had for you when designing your goals. When you consider that you were made with the \"intention\" of replicating your genes, do you somehow feel beholden to the \"intention\" behind your evolutionary design? Most likely you don't care. You may choose to never have children, and you will most likely attempt to keep yourself alive long past your biological ability to reproduce.", "question": "Why might an AI do something that we don’t want it to, if it’s really so intelligent?", "answer": ["A Superintelligence would be intelligent enough to understand what the programmer’s motives were when designing its goals, but it would have no intrinsic reason to care about what its programmers had in mind. The only thing it will be beholden to is the actual goal it is programmed with, no matter how insane its fulfillment may seem to us.\n\nConsider what “intentions” the process of evolution may have had for you when designing your goals. When you consider that you were made with the “intention” of replicating your genes, do you somehow feel beholden to the “intention” behind your evolutionary design? Most likely you don't care. You may choose to never have children, and you will most likely attempt to keep yourself alive long past your biological ability to reproduce."], "entry": "Answer to Why might an AI do something that we don’t want it to, if it’s really so intelligent?", "id": "74dda4b8067fa5fa412fbfeeae7c1541"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "Can humans stay in control of the world if human- or superhuman-level AI is developed?", "authors": "n/a", "date_published": "n/a", "text": "Question: Can humans stay in control of the world if human- or superhuman-level AI is developed?\n\nAnswer: This is a big question that it would pay to start thinking about. Humans are in control of this planet not because we are stronger or faster than other animals, but because we are smarter! If we cede our position as smartest on our planet, it's not obvious that we'll retain control.", "question": "Can humans stay in control of the world if human- or superhuman-level AI is developed?", "answer": ["This is a big question that it would pay to start thinking about. Humans are in control of this planet not because we are stronger or faster than other animals, but because we are smarter! 
If we cede our position as smartest on our planet, it’s not obvious that we’ll retain control."], "entry": "Answer to Can humans stay in control of the world if human- or superhuman-level AI is developed?", "id": "dd09177954594f94df9fa00b1e66ef0c"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?", "authors": "n/a", "date_published": "n/a", "text": "Question: A lot of concern appears to focus on human-level or \"superintelligent\" AI. Is that a realistic prospect in the foreseeable future?\n\nAnswer: AI is already superhuman at some tasks, for example numerical computations, and will clearly surpass humans in others as time goes on. We don't know when (or even if) machines will reach human-level ability in all cognitive tasks, but most of the AI researchers at FLI's conference in Puerto Rico put the odds above 50% for this century, and many offered a significantly shorter timeline. Since the impact on humanity will be huge if it happens, it's worthwhile to start research now on how to ensure that any impact is positive. Many researchers also believe that dealing with superintelligent AI will be qualitatively very different from more narrow AI systems, and will require very significant research effort to get right.", "question": "A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?", "answer": ["AI is already superhuman at some tasks, for example numerical computations, and will clearly surpass humans in others as time goes on. We don’t know when (or even if) machines will reach human-level ability in all cognitive tasks, but most of the AI researchers at FLI’s conference in Puerto Rico put the odds above 50% for this century, and many offered a significantly shorter timeline. Since the impact on humanity will be huge if it happens, it’s worthwhile to start research now on how to ensure that any impact is positive. Many researchers also believe that dealing with superintelligent AI will be qualitatively very different from more narrow AI systems, and will require very significant research effort to get right."], "entry": "Answer to A lot of concern appears to focus on human-level or “superintelligent” AI. Is that a realistic prospect in the foreseeable future?", "id": "ddc75a6e85701fabab1bcdaeff96e4fc"} +{"source": "stampy", "source_filetype": "text", "url": "n/a", "title": "How is AGI different from current AI?", "authors": "n/a", "date_published": "n/a", "text": "Question: How is AGI different from current AI?\n\nAnswer: Current narrow systems are much more domain-specific than AGI. We don't know what the first AGI will look like. Some people think the [https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results GPT-3] architecture, scaled up a lot, may get us there (GPT-3 is a giant prediction model which, when trained on a vast amount of text, seems to [https://www.lesswrong.com/posts/D3hP47pZwXNPRByj8/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals learn how to learn] and do [https://gpt3examples.com/ all sorts of crazy-impressive things]; a related model can [https://openai.com/blog/dall-e/ generate pictures from text]), while others don't think scaling this kind of model will get us all the way.", "question": "How is AGI different from current AI?", "answer": ["Current narrow systems are much more domain-specific than AGI. 
We don’t know what the first AGI will look like. Some people think the [https://www.lesswrong.com/posts/6Hee7w2paEzHsD6mn/collection-of-gpt-3-results GPT-3] architecture, scaled up a lot, may get us there (GPT-3 is a giant prediction model which, when trained on a vast amount of text, seems to [https://www.lesswrong.com/posts/D3hP47pZwXNPRByj8/an-102-meta-learning-by-gpt-3-and-a-list-of-full-proposals learn how to learn] and do [https://gpt3examples.com/ all sorts of crazy-impressive things]; a related model can [https://openai.com/blog/dall-e/ generate pictures from text]), while others don’t think scaling this kind of model will get us all the way."], "entry": "Plex's Answer to How is AGI different from current AI?", "id": "4e911dc0a26728448658404d74b4689b"}