{"source": "gwern", "url": "https://www.gwern.net/Scaling-hypothesis.page", "title": "\"The Scaling Hypothesis\"", "authors": "Gwern Branwen", "date_published": "n/a", "text": "---\ntitle: \"The Scaling Hypothesis\"\ndescription: \"On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards-scale. The deep learning revolution has begun as foretold.\"\nthumbnail: /doc/ai/nn/transformer/gpt/2020-brown-gpt3-figure13-meanperformancescalingcurve.png\nthumbnailText: \"Figure 1.3 from Brown et al 2020 (OpenAI, GPT-3), showing roughly log-scaling of GPT-3 parameter/compute size vs benchmark performance on all text/natural language benchmarks test.\"\ncreated: 2020-05-28\nmodified: 2022-01-02\nstatus: finished\nprevious: /newsletter/2020/05\nnext: /fiction/clippy\nimportance: 10\nconfidence: likely\ncssExtension: drop-caps-kanzlei\n...\n\n
\n> GPT-3, announced by OpenAI in May 2020, is the largest neural network ever trained, by over an order of magnitude.\n> Trained on Internet text data, it is the successor to GPT-2, which had surprised everyone by its natural language understanding & generation ability.\n> To the surprise of most (including myself), this vast increase in size did not run into diminishing or negative returns, as many expected, but the benefits of scale continued to happen as forecasted by OpenAI.\n> These benefits were not merely learning more facts & text than GPT-2, but qualitatively distinct & even more surprising in showing [*meta-learning*](#meta-learning): while GPT-2 learned how to do common natural language tasks like text summarization, GPT-3 instead learned how to follow directions and learn new tasks from a few examples.\n> (As a result, GPT-3 outputs & interaction are more fascinating & human-like than GPT-2.)\n>\n> While the immediate applications of GPT-3, like my poetry or humor writings, are nice, the short-term implications of GPT-3 are much more important.\n>\n> First, while GPT-3 is expensive by conventional DL standards, it is cheap by scientific/commercial/military/government budget standards, and the results indicate that models could be made much larger.\n> Second, models can also be made much more powerful, as GPT is an old approach known to be flawed in both minor & major ways, and far from an 'ideal' Transformer.\n> Third, GPT-3's capabilities come from learning on raw (unsupervised) data; that has long been one of the weakest areas of DL, holding back progress in other areas like reinforcement learning or robotics. Models like GPT-3 suggest that large unsupervised models will be vital components of future DL systems, as they can be 'plugged into' systems to immediately provide understanding of the world, humans, natural language, and reasoning.\n>\n> The meta-learning has a longer-term implication: it is a demonstration of the [*blessings of scale*](#blessings-of-scale), where problems with simple neural networks vanish, and they become more powerful, more generalizable, more human-like when simply made very large & trained on very large datasets with very large compute---even though those properties are believed to require complicated architectures & fancy algorithms (and this perceived need drives much research).\n> Unsupervised models benefit from this, as training on large corpuses like Internet-scale text presents a myriad of difficult problems to solve; this is enough to drive meta-learning despite GPT not being designed for meta-learning in any way.\n> (This family of phenomena is perhaps driven by neural networks functioning as ensembles of many sub-networks with them all averaging out to an Occam's razor, which, for small data & models, learn superficial or memorized parts of the data, but can be forced into true learning by making the problems hard & rich enough; as [meta-learners learn amortized Bayesian inference](/backstop#deep-bayes), they build in informative priors when trained over many tasks, and become dramatically more sample-efficient and better at generalization.)\n>\n> The blessings of scale in turn support a radical theory: an old AI paradigm held by a few pioneers in connectionism (early artificial neural network research) and by more recent deep learning researchers, the [*scaling hypothesis*](#scaling-hypothesis).\n> The scaling hypothesis regards the blessings of scale as the secret of AGI: intelligence is 'just' simple neural units & learning algorithms 
applied to diverse experiences at a (currently) unreachable scale.\n> As increasing computational resources permit running such algorithms at the necessary scale, the neural networks will get ever more intelligent.\n>\n> When? Estimates of Moore’s law-like progress curves decades ago by pioneers like Hans Moravec indicated that it would take until the 2010s for the sufficiently-cheap compute for tiny insect-level prototype systems to be available, and the 2020s for the first sub-human systems to become feasible, and these forecasts are holding up.\n> (Despite this vindication, the scaling hypothesis is so unpopular an idea, and difficult to prove in advance rather than as a _fait accompli_, that while the GPT-3 results finally drew some public notice after OpenAI enabled limited public access & people could experiment with it live, it is unlikely that many entities will modify their research philosophies, much less kick off an 'arms race'.)\n>\n> More concerningly, GPT-3's scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggest that in terms of futurology, AI researchers' forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, or where intelligence comes from, and they do not learn from any falsified predictions.\n> Their primary concerns appear to be supporting the status quo, placating public concern, and remaining respectable.\n> As such, their comments on AI risk are meaningless: they would make the same public statements if the scaling hypothesis were true or not.\n>\n> Depending on what investments are made into scaling DL, and how fast compute grows, the 2020s should be quite interesting---sigmoid or singularity?\n>\n> For more ML scaling research, follow the [/r/MLScaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\") subreddit. For a fiction treatment as an SF short story, see [\"It Looks Like You're Trying To Take Over The World\"](/fiction/clippy).\n
\n\n
\n
**Read The Samples**
\nOn [\"GPT-3: Language Models are Few-Shot Learners\", Brown et al 2020](https://arxiv.org/abs/2005.14165#openai \"'GPT-3: Language Models are Few-Shot Learners', Brown et al 2020\") ([poems](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=48 \"Figure F.1: Four uncurated completions from a context suggesting the model compose a poem in the style of Wallace Stevens with the title 'Shadows on the Way'\") & my followup [GPT-3 Creative Writing](/gpt-3 \"Creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling.\"), compare [my old finetuned GPT-2 poetry](/gpt-2 \"'GPT-2 Neural Network Poetry', Branwen & Presser 2019\"); [random samples](https://justpaste.it/7eovk \"GPT-3 Github JSON dump reformatted to readable HTML\"); [\"OpenAI API\"](https://openai.com/blog/openai-api/) with real-world demos)\n\nI strongly encourage anyone interested in GPT-3 to also at least skim OA's [random samples](https://justpaste.it/7eovk \"GPT-3 Github JSON dump reformatted to readable HTML\"), or better yet, my samples in [\"GPT-3 Creative Writing\"](/gpt-3 \"Creative writing by OpenAI's GPT-3 model, demonstrating poetry, dialogue, puns, literary parodies, and storytelling.\")---reading the paper & looking at some standard benchmark graphs does not give a good feel for what working with GPT-3 is like or the diversity of things it can do which are missed by benchmarks.\n
\n\n# Meta-Learning\n\n[Learning to learn.]{.marginnote} In May 2020, OA released---to remarkably little interest from researchers, no blog post, no media blitz, and little public discussion beyond the snidely dismissive---the long-awaited followup to [GPT-2](https://openai.com/research/better-language-models \"Better Language Models and Their Implications\"), one model to rule them all: a 117× larger 175b-parameter model with far more powerful language generation, which lets it solve a wide variety of problems from arithmetic^[Given the number of comments on the paper's arithmetic benchmark, I should point out that the arithmetic benchmark appears to greatly understate GPT-3's abilities due to the [BPE encoding issue](/gpt-3#bpes \"'GPT-3 Creative Fiction § BPEs', Branwen 2020\"): even using commas markedly improves its 5-digit addition ability, for example. The BPE issue also appears to explain much of the poor performance on the anagram/shuffling tasks. This is something to keep in mind for any task which requires character-level manipulation or understanding.] to English translation to unscrambling anagrams to SAT analogies---purely from being prompted with text examples, without any specialized training or finetuning whatsoever, merely next-word prediction training on a big Internet text corpus.\nThis implies GPT-3's attention mechanisms serve as [\"fast weights\"](https://arxiv.org/abs/1610.06258#deepmind \"'Using Fast Weights to Attend to the Recent Past', Ba et al 2016\") that have \"learned to learn\" by training on sufficiently varied data^[On implicit [meta-learning](https://www.reddit.com/r/reinforcementlearning/search/?q=flair%3AMetaRL&include_over_18=on&restrict_sr=on&sort=top), see: [Santoro et al 2016](https://arxiv.org/abs/1605.06065#deepmind \"One-shot Learning with Memory-Augmented Neural Networks\")/[Wang et al 2018](/doc/reinforcement-learning/meta-learning/2018-wang.pdf#deepmind \"Prefrontal cortex as a meta-reinforcement learning system\") ([Botvinick commentary](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning \"Matt Botvinick on the spontaneous emergence of learning algorithms\"))/[Botvinick et al 2019a](https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613\\(19\\)30061-0#deepmind \"Reinforcement Learning, Fast and Slow\"), [Clune 2019](https://arxiv.org/abs/1905.10985#uber \"AI-GAs: AI-generating algorithms, an alternate paradigm for producing general artificial intelligence\"), [Schmidhuber 2015](https://arxiv.org/abs/1511.09249#schmidhuber \"On Learning to Think: Algorithmic Information Theory for Novel Combinations of Reinforcement Learning Controllers and Recurrent Neural World Models\")/[2018](https://arxiv.org/abs/1802.08864#schmidhuber \"One Big Net for Everything\"), [Weng 2018](https://lilianweng.github.io/lil-log/2018/11/30/meta-learning.html#openai \"Meta-Learning: Learning to Learn Fast\")/[Weng 2019](https://lilianweng.github.io/lil-log/2019/06/23/meta-reinforcement-learning.html#openai \"Meta Reinforcement Learning\").], forcing it to do more than just learn ordinary textual relationships.\nLike OpenAI's [Jukebox](https://openai.com/research/jukebox \"'Jukebox: We're introducing Jukebox, a neural net that generates music, including rudimentary singing, as raw audio in a variety of genres and artist styles. 
We're releasing the model weights and code, along with a tool to explore the generated samples.', Dhariwal et al 2020\") just weeks ago (itself a remarkable demonstration of scaling in synthesizing *raw audio* music complete with remarkably realistic voices/instruments), the announcement of GPT-3 appears to have sunk almost without a trace, so I will go into more depth than usual.\n\n# Flexing GPT\n\n
\n> '\"They are absolutely reasonable. I think that is their distinguishing characteristic. Yes, Mr. Erskine, an absolutely reasonable people. I assure you there is no nonsense about the Americans.\" \"How dreadful!\" cried Lord Henry. \"I can stand brute force, but brute reason is quite unbearable. There is something unfair about its use. It is hitting below the intellect.\"'\n>\n> _The Picture of Dorian Gray_, Oscar Wilde\n
\n\n[\"Attacks only get better.\"]{.marginnote} 2 years ago, [GPT-1](https://openai.com/research/language-unsupervised \"Improving Language Understanding with Unsupervised Learning: We've obtained state-of-the-art results on a suite of diverse language tasks with a scalable, task-agnostic system, which we're also releasing. Our approach is a combination of two existing ideas: transformers and unsupervised pre-training. These results provide a convincing example that pairing supervised learning methods with unsupervised pre-training works very well; this is an idea that many have explored in the past, and we hope our result motivates further research into applying this idea on larger and more diverse datasets.\") was interestingly useful pretraining and adorable with its \"sentiment neuron\".\n1 year ago, GPT-2 was impressive with its excellent text generation & finetuning capabilities.\nThis year, GPT-3 is scary because it's a magnificently obsolete architecture from early 2018 (used mostly for software engineering convenience as the infrastructure has been debugged), which is small & shallow compared to what's possible[^overhang][^overhang-NN], with a simple uniform architecture^[Eg a narrow context window [severely limits it](https://arxiv.org/pdf/2001.08361.pdf#page=25 \"D.5: Context Dependence\"), and motivates the need for [efficient attention](/note/attention \"'Efficient Attention: Breaking The Quadratic Transformer Bottleneck', Branwen 2020\"). More broadly, GPT-3 does nothing exotic---no use of [brain imitation learning](https://www.reddit.com/r/reinforcementlearning/comments/9pwy2f/wbe_and_drl_a_middle_way_of_imitation_learning/ \"'WBE and DRL: a Middle Way of imitation learning from the human brain', Branwen 2018\") or neural architecture search to try to tailor the model, online hyperparameter optimization (possibly [>3× speedup](https://arxiv.org/abs/2106.00958#openai \"'A Generalizable Approach to Learning Optimizers', Almeida et al 2021\")) or even decide basic hyperparameters like widths (which as [EfficientNet](https://arxiv.org/abs/1905.11946#google \"'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks', Tan & Le 2019\"){#tan-le-2019-2} shows, can make quite a different even in \"well-understood and hand-optimized vanilla architectures\").] 
trained in the dumbest way possible (unidirectional prediction of next text token) on a single impoverished modality (random Internet HTML text dumps^[Not even PDFs---so no Google Books, no Arxiv, no Libgen, no Sci-Hub...]) on tiny data (fits on a laptop), sampled in a dumb way^[Generating text from a LM can reveal the presence of knowledge, but not its absence, and it is universally agreed that the current crude heuristic methods like top-_k_ cannot possibly be optimal.], its benchmark performance sabotaged by bad prompts & [data encoding problems](/gpt-3#bpes \"'GPT-3 Creative Fiction § BPEs', Branwen 2020\") (especially arithmetic & commonsense reasoning), and yet, the first version already manifests crazy runtime meta-learning---and the scaling curves *still* are not bending!\nThe samples are also better than ever, whether it's GPT-3 inventing new penis jokes^['A man is at the doctor's office, and the doctor tells him, \"I've got some good news and some bad news for you.\" / The man says, \"Well, I can't take the bad news right now, so give me the good news first.\" / The doctor says, \"Well, the good news is that you have an 18-inch penis.\" / The man looks stunned for a moment, and then asks, \"What's the bad news?\" / The doctor says, \"Your brain's in your dick.\"'] or writing (mostly working) [JavaScript tutorials](https://justpaste.it/7eovk#javascript \"'GPT-3 random sample dump: JavaScript tutorial', GPT- 2020\") about rotating arrays.\n\n[^overhang]: GPT-3 hardly costs more than a few million dollars of compute (as of early 2020) as the extensive scaling research beforehand enabled one training run, and it is cheap to run (pg39): \"Even with the full GPT-3 175B, generating 100 pages of content from a trained model can cost on the order of 0.4 kW-hr, or only a few cents in energy costs.\" (Likewise, T5 was trained [only once](https://twitter.com/colinraffel/status/1313097438299910147 \"I recently came across https://arxiv.org/abs/2004.08900, which 'assumes 2-3 runs' of T5-11B. In fact, we trained T5-11B once. That's why we spend 35 pages figuring out how we should train before we start training. You don't want to mess up a training run that big.\").) And for the cost of one model, GPT-3 API users have shown that you get the equivalent of hundreds of smaller special-purpose models, each requiring more researchers, custom datasets, countless training runs, and tinkering, assuming said models could be created at all. (A slogan for the future: \"One model, one vector---once.\")\n\n For comparison, the [PDP-11](!W) was a common academic workhorse due to its extremely low cost, a mere [$20,000]($1970), while the first [Lisp Machine](!W) cost >[$50,000]($1972)---expensive for a workstation but a bargain compared to researchers hogging mainframes costing tens of millions. IBM's (otherwise useless) Deep Blue AI project reputedly cost >[$5]($1997)m for the final iteration (reports of [$100]($1997)m appear to be a confusion with the estimated value of *publicity* mentioned in pg187 of Hsu's _Behind Deep Blue_) and Big Science projects like [ITER](https://en.wikipedia.org/wiki/ITER) blow >5000× the funding to mostly fail. 
(The particle physicists, incidentally, are [back asking for](https://www.nature.com/articles/d41586-020-01866-9 \"CERN makes bold push to build €21-billion supercollider: European particle-physics lab will pursue a 100-kilometer machine to uncover the Higgs boson's secrets---but it doesn't yet have the funds\") ≫[$24]($2020)b, based on, presumably the scientific revolutions & world-changing breakthroughs that the LHC's >[$9]($2010)b investment produced, or the [$2]($1993)b spent to (not) build the [SSC](!W \"Superconducting Super Collider\")...)\n\n GPT-3 could have been done decades ago with global computing resources & scientific budgets; what could be done with today's hardware & budgets that we just don't know or care to do? There *is* a hardware overhang. (See also the [_Whole Brain Emulation Roadmap_](/doc/ai/scaling/hardware/2008-sandberg-wholebrainemulationroadmap.pdf \"Sandberg & Bostrom 2008\") & [\"2019 recent trends in GPU price per FLOPS\"](https://aiimpacts.org/2019-recent-trends-in-gpu-price-per-flops/).)\n[^overhang-NN]: Further, NNs have additional hardware overhangs of their own due to the many orders of magnitude asymmetry of training vs running. Transfer learning and meta-learning are so much faster than the baseline model training. You can 'train' GPT-3 without even any gradient steps---just examples. You pay the extremely steep upfront cost of One Big Model to Rule Them All, and then reuse it everywhere at tiny marginal cost. If you train a model, then as soon as it's done you get, among other things:\n\n - the ability to run thousands of copies in parallel on the same hardware\n\n - in a context like AlphaGo, I estimate several hundred ELO strength gains if you reuse the same hardware to merely run tree search with exact copies of the original model\n - meta-learning/transfer-learning to any related domain, cutting training requirements by orders of magnitude\n - model compression/distillation to train student models which are a fraction of the size, FLOPS, or latency (ratios varying widely based on task, approach, domain, acceptable performance degradation, targeted hardware etc, but often extreme like 1⁄100^th^)\n - reuse of the model elsewhere to instantly power up other models (eg. use of text or image embeddings for a DRL agent)\n - learning-by-doing/[experience curve effects](https://en.wikipedia.org/wiki/Experience_curve_effects) (highest in information technologies, and high for DL: Hernandez & Brown 2020), so the next from-scratch model may be much cheaper.\n\n For example: after all the iterative model architecture & game upgrades done while training the first [OpenAI Five](!W) (OA5) DoTA2 agent was completed, the second iteration of OA5, [\"Rerun\"](https://arxiv.org/pdf/1912.06680.pdf#page=11&org=openai \"'Dota 2 with Large Scale Deep Reinforcement Learning', Berner et al 2019: §4.2: Validating Surgery with Rerun\"), was trained from scratch. 
Rerun required only 20% of the training for a \"98% win-rate against the final version of OpenAI Five.\"\n As the authors note: \"The ideal option would be to run Rerun-like training from the very start, but this is impossible---the OpenAI Five curve represents lessons learned that led to the final codebase, environment, etc., without which it would not be possible to train Rerun.\"\n - baseline for engineering much more efficient ones by ablating and comparing with the original\n\nIt's odd that this qualitative leap appears to be largely missed by the standard NLP benchmarks.\nNothing in the raw metrics reported on, say, Penn Tree Bank or LAMBADA or WinoGrande would lead you to expect all of this hilarious and creative output; the meta-learning results might, but only if you already thought meta-learning was important.\nThis suggests to me that a useful post-GPT-3 contribution would be figuring out how to benchmark these sorts of flexible text generation capabilities (possibly something along the lines of Chollet's image-based [Abstraction and Reasoning Corpus (ARC)](https://arxiv.org/abs/1911.01547#google \"'On the Measure of Intelligence', Chollet 2019\")).\n\n# Baking The Cake\n\n![Is GPT actually part of AGI---or is the cake a lie? ([LeCun 2019](/doc/ai/scaling/2019-02-18-lecun-isscc-talk-deeplearninghardwarepastpresentandfuture.pdf#page=60 \"Deep Learning Hardware: Past, Present, & Future: slide 60: 'How Much Information is the Machine Given during Learning?'\"))](/doc/ai/2019-lecun-isscctalk-cake.png){.float-right .invert}\n\n[Not the whole picture, but a big part.]{.marginnote} Does it set SOTA on every task? No, of course not.\nBut the question is not whether we can lawyerly find any way in which it might not work, but [whether there is any way in which it might work](/forking-path \"'Technology Forecasting: The Garden of Forking Paths', Branwen 2014\").\nAnd there are many ways it might work better (see the [\"Limitations\" section](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=34 \"GPT-3: Language Models are Few-Shot Learners: 5. 
Limitations\") for just a few).\nDoes GPT-3 *do* anything like steer a robot around SF shooting lasers and rockets at humans⸮ No, of course not.\nIt is 'just' a text prediction model, an idiot savant of text; but an idiot savant, we should remember, is only a genetic mutation or bit of brain damage away from a normal human.\nIf RL is the cherry on the top of the supervised learning frosting, and supervised learning is the frosting on top of the unsupervised learning cake, well, it looks like the cake layers are finally rising.\n\n![A better GPT-3 lesson.](/doc/ai/nn/cnn/2020-07-24-gwern-meme-moneyprinter-bitterlesson-gpt3.png \"GPT-3's implications in the 'money printer go brr' meme format: the head of Rich Sutton says 'GPUs go bitter', referencing his 'bitter lesson' that most clever AI innovations are ultimately useless as they hamstring AI performance and are surpassed by methods that make fewer assumptions & use more compute/data, while the personification of AI academia, where cleverness is rewarded and heavy use of compute is considered cheating and ugly, sheds tears and complains about approaches like GPT-3 beating decades of clever academic systems.\"){.float-left}\n\n[Scaling still working.]{.marginnote} I was surprised, as I had expected closer to 100b parameters, and I thought that the performance of [CTRL](https://arxiv.org/abs/1909.05858#salesforce \"'CTRL: A Conditional Transformer Language Model for Controllable Generation', Keskar et al 2019\")/[Meena](https://arxiv.org/abs/2001.09977#google \"'Towards a Human-like Open-Domain Chatbot', Adiwardana et al 2020\")/[MegatronLM](https://nv-adlr.github.io/MegatronLM \"MegatronLM: Training Billion+ Parameter Language Models Using GPU Model Parallelism\")/[T5](https://arxiv.org/abs/1910.10683#google \"'T5: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer', Raffel et al 2019\")/[Turing-NLG](https://www.microsoft.com/en-us/research/blog/turing-nlg-a-17-billion-parameter-language-model-by-microsoft/ \"'Turing-NLG: A 17-billion-parameter language model by Microsoft', Rosset 2020\")/[GPipe](https://arxiv.org/abs/1811.06965#google \"'GPipe: Easy Scaling with Micro-Batch Pipeline Parallelism', Huang et al 2018\") suggested that, [the scaling papers](/note/scaling \"'Machine Learning Scaling', Branwen 2021\")[^scaling-papers] notwithstanding, the scaling curves had started to bend and by 100b, it might be hard to justify further scaling.\nHowever, in the latest version of [\"the unreasonable effectiveness of data\"](/doc/ai/scaling/2009-halevy.pdf \"'The Unreasonable Effectiveness of Data', Halevy et al 2009\") where \"the curves cross\"/\"scissor effect\" and the neural method eventually wins (eg. [Banko & Brill 2001](/doc/ai/scaling/2001-banko.pdf#microsoft \"Scaling to Very Very Large Corpora for Natural Language Disambiguation\"), [Brants et al 2007](/doc/ai/scaling/2007-brants.pdf#google \"Large Language Models in Machine Translation\"), [Koehn & Knowles 2017](/doc/ai/2017-koehn-figure3-bleuscoreswithvaryingamountsoftrainingdata.png \"Six Challenges for Neural Machine Translation: Challenges: 3.2. Amount of Training Data: Figure 3: BLEU scores for English-Spanish systems trained on 0.4 million to 385.7 million words of parallel data. 
Quality for NMT starts much lower, outperforms SMT at about 15 million words, and even beats a SMT system with a big 2 billion word in-domain language model under high-resource conditions.\")), GPT-3 hits twice that without noticeable change in scaling factors: its scaling continues to be roughly logarithmic/power-law, as it was for much smaller models & as forecast, and it has not hit a regime where gains effectively halt or start to require increases vastly beyond feasibility.\nThat suggests that it would be both possible and useful to head to trillions of parameters (which are still well within available compute & budgets, requiring merely thousands of GPUs & perhaps [$10]($2020)--[$100]($2020)m budgets assuming no improvements which of course there will be, see Hernandez & Brown 2020 etc in this issue), and eyeballing the graphs, many benchmarks like the [Winograd schema](https://en.wikipedia.org/wiki/Winograd_schema_challenge) [WinoGrande](https://arxiv.org/abs/1907.10641#allen \"'WinoGrande: An Adversarial Winograd Schema Challenge at Scale', Sakaguchi et al 2019\") would fall by 10t parameters.\nThe predictability of scaling is striking, and makes scaling models more like statistics than AI.\n(AI is statistics which does what we want it to but doesn't work; and statistics is AI which works but doesn't do what we want.)\n\n[^scaling-papers]: In particular, sample-efficiency increases with model size up to compute-efficient scaling, and [GPT-2 can memorize data after seeing it only once](https://arxiv.org/abs/2012.07805 \"'Extracting Training Data from Large Language Models', Carlini et al 2020\")---a [desirable property](https://arxiv.org/abs/1906.05271#google \"'Does Learning Require Memorization? A Short Tale about a Long Tail', Feldman 2019\") given long-tailed real-world distributions of data. (An example of how *not* to do scaling papers is [Thompson et al 2020](https://arxiv.org/abs/2007.05558 \"The Computational Limits of Deep Learning\"), which, in stark contrast to the foregoing papers---which Thompson et al do not mention at all!---attempts to infer scaling not from well-controlled experiments run by the authors, which yield extremely tight and highly predictive curves, but attempts to infer them from occasional reported numbers in highly disparate research papers; unsurprisingly, their curves barely predict anything and seem to be serious overestimates anyway.)\n\n It is noteworthy that the pursuit of large models is driven almost exclusively by OpenAI & industry entities (the latter of which are content with far smaller models), and that academia has evinced an almost total disinterest---disgust & anger, even, and denial (one might say \"green AI\" is green with envy). For all that the scaling hypothesis is 'obvious' and scaling is 'predicted', there is remarkably little interest in actually *doing* it. Perhaps we should pay more attention to what people do rather than what they say.\n\n For more ML scaling research, follow the [/r/MLScaling](https://www.reddit.com/r/mlscaling/ \"'ML Scaling subreddit', Branwen 2020\") subreddit.\n\n![GPT-3: not even that much compute---[3640 petaflop/s-day](https://arxiv.org/pdf/2005.14165.pdf#org=openai&page=46 \"Total Compute Used to Train Language Model: Table D.1\"), only 2× their estimate for AlphaGo Zero, 1860. 
(Historical graph modified by myself from [\"AI and Compute\", Amodei et al 2018](https://openai.com/research/ai-and-compute \"We're releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore's Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000× (a 2-year doubling period would yield only a 7× increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it's worth preparing for the implications of systems far outside today's capabilities.\").)](/doc/ai/nn/transformer/gpt/2019-11-07-amodei-aiandcompute-twodistincteras-gpt3modified.png)\n\n[Anti-scaling: penny-wise, pound-foolish.]{.marginnote} GPT-3 is an extraordinarily expensive model by the standards of machine learning: it is estimated that training it may require the annual cost of more machine learning researchers than you can count on one hand (~[$5]($2020)m^[Roughly around [Chuan Li's](https://lambdalabs.com/blog/demystifying-gpt-3 \"OpenAI's GPT-3 Language Model: A Technical Overview\") estimate, using nominal list prices without discounts (which could be steep as the marginal costs of cloud compute are substantially lower). The R&D project cost would be much higher, but is amortized over all subsequent models & projects.]), up to [$30]($2020) of hard drive space to store the model (500--800GB), and multiple pennies of electricity per 100 pages of output (0.4 kWH).\nResearchers are concerned about the prospects for scaling: can ML afford to run projects which cost more than 0.1 milli-Manhattan-Projects⸮^[The Manhattan Project cost ~[$2]($1946)b.]\nSurely it would be too expensive, even if it represented another large leap in AI capabilities, to spend up to 10 milli-Manhattan-Projects to scale GPT-3 100× to a trivial thing like human-like performance in many domains⸮\nMany researchers feel that such a suggestion is absurd and refutes the entire idea of scaling machine learning research further; they asseverate that their favored approaches (you know, the ones which don't work[^butcher]) will run far more efficiently, and that the field would be more productive if it instead focused on research which can be conducted by an impoverished goatherder on an old laptop running off solar panels.^[As if we live in a world where grad students could go to the Moon on a ramen budget if we just wished hard enough, as if focusing on CO~2~ costs & not benefits in our evaluations is not like making a scissor with only one blade, or as if \"green AI\" approaches to try to create small models without going through big models did not look increasingly futile and like throwing good money after bad, and were not the least green of all AI research... To the extent that all cutting-edge AI research ~2010 could be done with grad student money like [$1000]($2010) of hardware, where AI research in decades before & after benefited from big iron, that is an indictment of that era, demonstrating what a stagnant dead end that research was, that its techniques were so smallminded and hobbled it could not benefit from the available large-scale compute.]\nNonetheless, I think we can expect further scaling.\n(10×? No, 10× isn't cool. You know what's cool? 
[100--1000×](https://www.reddit.com/r/slatestarcodex/comments/hys565/are_we_in_an_ai_overhang/fzezi7d/ \"People I know at OpenAI say v4 is around the corner and easily doable, and...will be here soon (not months but year or so). And they are confident it will scale and be around 100--1000×.\"), trained on a [fancy new supercomputer](https://news.microsoft.com/source/features/ai/openai-azure-supercomputer/ \"Microsoft announces new supercomputer, lays out vision for future AI work\").)\n\n[^butcher]: One is reminded of the joke about the customer complaining to the butcher:\n\n \"Your meat is \\$10/lb, while your competitor across the street sells it at \\$1!\" \"So go buy his meat.\" \"I would, but he has none.\" \"When I don't have any meat, it costs \\$1 too.\"\n\n# Scaling\n\n[How far will scaling go?]{.marginnote} The scaling papers suggest that the leaps we have seen over the past few years are not even half way there in terms of absolute likelihood loss, never mind what real-world capabilities each additional decrement translates into.\nThe scaling curves are clean; from [\"Scaling Laws for Neural Language Models\", Kaplan et al 2020](https://arxiv.org/abs/2001.08361#openai \"'Scaling Laws for Neural Language Models', Kaplan et al 2020\"):\n\n![DL scaling laws: compute, data, model parameters. ([Figure 1](https://arxiv.org/pdf/2001.08361.pdf#page=3&org=openai \"Scaling Laws for Neural Language Models: Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training.\"))](/doc/ai/nn/transformer/gpt/2020-kaplan-figure1-dlscaling.png \"Figure 1: Language modeling performance improves smoothly as we increase the model size, dataset size, and amount of compute used for training. For optimal performance all three factors must be scaled up in tandem. Empirical performance has a power-law relationship with each individual factor when not bottlenecked by the other two. (Kaplan et al 2020)\"){.invert}\n\nGPT-3 represents ~10^3^ on this chart, leaving plenty of room for further loss decreases---especially given the [uncertainty in extrapolation](https://arxiv.org/pdf/2001.08361.pdf#page=17&org=openai \"'Scaling Laws for Neural Language Models: Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations', Kaplan et al 2020\"):\n\n![Projecting DL power laws: still room beyond GPT-3.](/doc/ai/nn/transformer/gpt/2020-kaplan-figure15-projectingscaling.png \"Figure 15: Far beyond the model sizes we study empirically, we find a contradiction between our equations for _L(C~min~)_ and _L(D)_ due to the slow growth of data needed for compute-efficient training. The intersection marks the point before which we expect our predictions to break down. The location of this point is highly sensitive to the precise exponents from our power-law fits. (Kaplan et al 2020)\"){.invert}\n\nLo and behold, the scaling laws continue for GPT-3 models for several orders past [Kaplan et al 2020](#kaplan-et-al-2020); from [Brown et al 2020](https://arxiv.org/pdf/2005.14165.pdf#page=11&org=openai \"GPT-3: Language Models are Few-Shot Learners: Figure 3.1: Smooth scaling of performance with compute\"):\n\n![GPT-3 continues to scale as predicted. (Note GPT-3's curve has not 'bounced', and it trained only ~0.5 epoches, see [Table 2.2](https://arxiv.org/pdf/2005.14165.pdf#org=openai&page=9 \"Table 2.2: Datasets used to train GPT-3. 
'Weight in training mix' refers to the fraction of examples during training that are drawn from a given dataset, which we intentionally do not make proportional to the size of the dataset. As a result, when we train for 300 billion tokens, some datasets are seen up to 3.4 times during training while other datasets are seen less than once.\"))](/doc/ai/nn/transformer/gpt/2020-brown-figure31-gpt3scaling.png \"Brown et al 2020: Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in Kaplan et al 2020 continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this figure, we exclude embedding parameters from compute and parameter counts. (Brown et al 2020). Cross-validation loss extrapolation: $L(oss) = 2.57 · C(ompute in petaflop-s/days) ^ −0.048$\"){.invert}\n\nIf we see such striking gains in halving the validation loss but with so far left to go, what is left to emerge as we third or halve again?\nHow far does this go, exactly? How do we predict what emerges when?\nBueller? Bueller?\n(See also [Meena's perplexity vs human-ness chatbot ratings](/doc/ai/2020-adiwardana-meena-figure1-humanratingsvslikelihood.png \"Towards a Human-like Open-Domain Chatbot, Adiwardana et al 2020: Figure 1: Interactive SSA vs Perplexity [exp(cross-entropy loss)]. Each point is a different version of the Meena model. A regression line is plotted, for which the coefficient of determination (R^2) is 0.93, an indication of strong correlation between perplexity and the human evaluation metric (SSA). The dotted lines show the SSA performance of other chatbots, humans (86%), the best end-to-end trained Meena model (72%), and the full version of Meena which incorporates a filtering mechanism and tuned decoding (§5) and scores 79%. Mitsuku and Cleverbot scored the same on overall SSA, but Mitsuku displayed higher sensibleness, whereas Cleverbot had higher specificity. See Sections 2.5, 2.6, and 4.3 for more details on how we performed these comparisons and how to interpret the results\"){.invert}, GPT-3-written news articles' [probability of fooling humans by parameter count](/doc/ai/nn/transformer/gpt/2020-brown-figure313-humanabilitytodetectmodelgeneratednewsstories.png \"Brown et al 2020: Figure 3.13: People's ability to identify whether news articles are model-generated (measured by the ratio of correct assignments to non-neutral assignments) decreases as model size increases. Accuracy on the outputs on the deliberately-bad control model (an unconditioned GPT-3 Small model with higher output randomness) is indicated with the dashed line at the top, and the random chance (50%) is indicated with the dashed line at the bottom. Line of best fit is a power law with 95% confidence intervals.\"), and [GPT-3 model size vs Q&A](/doc/ai/nn/transformer/gpt/2020-hendrycks-figure1b-gpt3-qascaling.png \"Figure 1b: GPT-3 Few Shot Test Performance: Performance on a commonsense benchmark (HellaSwag), a linguistic understanding benchmark (SuperGLUE), and the massive multitask test. 
On previous benchmarks, smaller models start well above random chance levels and exhibit more continuous improvements with model size increases, but on our test, GPT-3 moves beyond random chance with the largest model.\"){.invert} from [Hendrycks et al 2020](https://arxiv.org/abs/2009.03300 \"Measuring Massive Multitask Language Understanding\").)\n\n## Blessings Of Scale\n\n
\n> Extrapolating the spectacular performance of GPT-3 into the future suggests that the answer to life, the universe and everything is just 4.398 trillion parameters.\n>\n> [Geoff Hinton](https://twitter.com/geoffreyhinton/status/1270814602931187715)\n
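\nHinton's joke is only half a joke, because the fitted curves really do invite exactly that sort of arithmetic. As a back-of-the-envelope sketch (an illustration, not a forecast), here is the power-law fit quoted in the Figure 3.1 caption above, L(C) ≈ 2.57 · C^−0.048^ with C in petaflop/s-days, evaluated at hypothetical multiples of GPT-3's ~3,640 petaflop/s-day budget; it deliberately ignores the data-bottleneck caveat of Kaplan et al 2020's Figure 15.\n\n```python\n# Naive curve-following with the cross-entropy fit quoted above (Brown et al 2020,\n# Figure 3.1): L(C) = 2.57 * C**-0.048, with C in petaflop/s-days. Illustrative only;\n# it says nothing about data limits or about which capabilities emerge when.\ndef predicted_loss(compute_pfs_days: float) -> float:\n    return 2.57 * compute_pfs_days ** -0.048\n\nGPT3_COMPUTE = 3640  # petaflop/s-days (Table D.1 of the GPT-3 paper)\n\nfor multiple in (1, 10, 100, 1000):\n    c = GPT3_COMPUTE * multiple\n    print(f'{multiple:>4}x GPT-3 compute -> predicted validation loss ~{predicted_loss(c):.2f}')\n# roughly 1.73 at 1x, falling only slowly: about 1.55, 1.39, and 1.24 at 10x/100x/1000x\n```\n\nThe slope is the whole story: the curve keeps paying out, but each constant decrement of loss costs exponentially more compute, which is why the question of what *emerges* at each decrement matters so much.\n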
\n\n\n\n[We don't know how to train NNs.]{.marginnote} The *blessings of scale* is the observation that for deep learning, hard problems are easier to solve than easy problems---everything gets better as it gets larger (in contrast to the usual outcome in research, where small things are hard and large things impossible).\nThe bigger the neural net/compute/data/problem, the faster it learns, the better it learns, the stabler it learns, and so on.\nA problem we can't solve at all at small _n_ may suddenly become straightforward with millions or billions of _n_.\n\"NNs are lazy\": they can do far more than we make them do when we push them beyond easy answers & cheap shortcuts.\nThe [bitter lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html \"'The Bitter Lesson', Sutton 2019\") is the harder and bigger, the better.\n(Besides GPT-3, one could mention recent progress in semi-supervised learning & the model-based DRL renaissance.)\n\n![AlphaGo Zero: 'just stack moar layers lol!'](/doc/reinforcement-learning/2017-12-24-gwern-meme-nnlayers-alphagozero.jpg \"Humorous description of the simplicity of the AlphaGo Zero architecture compared to AlphaGo Master\"){.float-right}\n\n[Blessings of scale: stability → generalization → meta-learning.]{.marginnote} GPT-3 is hamstrung by its training & data, but DL enjoys an unreasonably effective [blessing of dimensionality](!W)---just simply training a *big* model on a *lot* of data induces better properties like meta-learning without even the slightest bit of that architecture being built in; and in general, training on more and harder tasks creates ever more human-like performance, generalization, and robustness.\nThe GPT natural-language & programming language models, [iGPT](https://openai.com/research/image-gpt \"Image GPT: We find that, just as a large transformer model trained on language can generate coherent text, the same exact model trained on pixel sequences can generate coherent image completions and samples. By establishing a correlation between sample quality and image classification accuracy, we show that our best generative model also contains features competitive with top convolutional nets in the unsupervised setting.\")/[Vision Transformer](https://arxiv.org/abs/2010.11929#google \"Vision Transformer (ViT): An Image is Worth 16×16 Words: Transformers for Image Recognition at Scale\") for images (and to some degree [GPT-f](https://arxiv.org/abs/2009.03393#openai \"'GPT-f: Generative Language Modeling for Automated Theorem Proving', Polu & Sutskever 2020\")), show that simply scaling up models & datasets without any supervision produces results competitive with the best (and most complex) alternatives, using the same simple architecture, gradually passing from superficial surface correlations to more human-like brain activity ([Schrimpf et al 2020](https://www.biorxiv.org/content/10.1101/2020.06.26.174482.full \"The neural architecture of language: Integrative reverse-engineering converges on a model for predictive processing\")) and linguistic biases as data increases (eg. 
[Warstadt et al 2020](https://arxiv.org/abs/2010.05358 \"Learning Which Features Matter: RoBERTa Acquires a Preference for Linguistic Generalizations (Eventually)\")).\nIn fact, one may not even need complicated attention mechanisms at scale, as fully-connected networks---hard to get much simpler than them!---[work surprisingly well](/note/fc \"'Fully-Connected Neural Nets', Branwen 2021\") for many tasks.\nOne typically trains such large models with simple optimizers like Adam---because the complicated ones lose their advantages as batch sizes increase and [the simple optimizers work fine](https://arxiv.org/abs/2102.06356 \"'A Large Batch Optimizer Reality Check: Traditional, Generic Optimizers Suffice Across Batch Sizes', Nado et al 2021\") and are more memory-efficient anyway.\n[OA5](https://arxiv.org/pdf/1912.06680.pdf&org=openai#page=13 \"'Dota 2 with Large Scale Deep Reinforcement Learning: §4.3: Batch Size', Berner et al 2019\") does not just scale to, but [stabilizes at](#ppo-dota2), minibatches of millions due to [gradient noise](https://openai.com/research/how-ai-training-scales \"How AI Training Scales: We've discovered that the gradient noise scale, a simple statistical metric, predicts the parallelizability of neural network training on a wide range of tasks. Since complex tasks tend to have noisier gradients, increasingly large batch sizes are likely to become useful in the future, removing one potential limit to further growth of AI systems. More broadly, these results show that neural network training need not be considered a mysterious art, but can be rigorized and systematized. ['An Empirical Model of Large-Batch Training', McCandlish et al 2018]\").\nOA5-like, [BigGAN](https://arxiv.org/pdf/1809.11096.pdf#page=8&org=deepmind \"'BigGAN: Large Scale GAN Training For High Fidelity Natural Image Synthesis: 5.2 Additional Evaluation On JFT-300M', Brock et al 2018\") stabilizes at large-scale image datasets like JFT-300M & benefits from unusually large minibatches and VAEs (long an also-ran to GANs or autoregressive models in terms of sharp image generation) catch up if you make them very deep ([Child 2020](https://arxiv.org/abs/2011.10650#openai \"VDVAE: Very Deep VAEs Generalize Autoregressive Models and Can Outperform Them on Images\"), [Vahdat & Kautz 2020](https://arxiv.org/abs/2007.03898#nvidia \"NVAE: A Deep Hierarchical Variational Autoencoder\")); while classifier CNNs like [BiT](https://arxiv.org/abs/1912.11370#google \"'Big Transfer (BiT): Large Scale Learning of General Visual Representations for Transfer', Kolesnikov et al 2019\")^[Fun trivia: BiT [is now more accurate](https://arxiv.org/abs/2006.07159#google \"'Are we done with ImageNet?', Beyer et al 2020\") at predicting (cleaned, corrected) ImageNet labels than the original ImageNet labels are.]/[Dojolonga et al 2020](https://arxiv.org/abs/2007.08558#google \"On Robustness and Transferability of Convolutional Neural Networks\") or [ResNeXt](https://arxiv.org/abs/1907.07640 \"'Robustness properties of Facebook's ResNeXt WSL models', Orhan 2019\") or [Noisy Student](https://arxiv.org/abs/1911.04252#google \"'Self-training with Noisy Student improves ImageNet classification', Xie et al 2019\") transfer & [robustify](https://arxiv.org/abs/2007.00644 \"'Measuring Robustness to Natural Distribution Shifts in Image Classification', Taori et al 2020\") [with](https://arxiv.org/abs/2103.14586#google \"'Understanding Robustness of Transformers for Image Classification', Bhojanapalli et al 2021\") human-like errors^[One 
interesting aspect of image scaling experiments like Dojolonga et al 2020 is that even when performance is 'plateauing' on the original task & approaching label error, the transfer learning continues to improve. Apparently the internal representations, even when adequate for mere classification and so the score cannot increase more than a small percentage, become more human-like---because it's encoding [dark knowledge](https://arxiv.org/abs/1503.02531#google \"'Distilling the Knowledge in a Neural Network', Hinton et al 2015\") or more [adversarial robustness](https://arxiv.org/abs/2006.14536#google \"'Smooth Adversarial Training', Xie et al 2020\")? I've noticed with language models, the final fractions of a loss appear to make a substantial difference to generated sample quality, perhaps because it is only after all the easier modeling is finished that the lazy language model is forced to squeeze out the next bit of performance by more correctly modeling more sophisticated things like logic, objects, world-knowledge, etc.], multimodal learning produces better representations on fewer data (eg. [ViLBERT](https://arxiv.org/abs/1912.02315#facebook \"'12-in-1: Multi-Task Vision and Language Representation Learning', Lu et al 2019\")/[VideoBERT](https://arxiv.org/abs/1904.01766#google \"'VideoBERT: A Joint Model for Video and Language Representation Learning', Sun et al 2019\"), motivating [OA's interest in big multimodal models](https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ \"The messy, secretive reality behind OpenAI's bid to save the world\")), and RNNs can [predict videos](https://arxiv.org/abs/1911.01655#google \"'High Fidelity Video Prediction with Large Stochastic Recurrent Neural Networks', Villegas et al 2019\").\n[AlphaStar](/doc/reinforcement-learning/model-free/alphastar/2019-vinyals.pdf#deepmind \"'Grandmaster level in StarCraft II using multi-agent reinforcement learning', Vinyals et al 2019\") reaches human-level with hundreds of competing self-players to cover possible strategies.\nImitation learning DRL like [MetaMimic](https://arxiv.org/abs/1810.05017#deepmind \"'MetaMimic: One-Shot High-Fidelity Imitation: Training Large-Scale Deep Nets with RL', Le Paine et al 2018\") generalizes at hundreds of tasks to train a deep net.\nDisentanglement emerges in [StyleGAN](https://arxiv.org/abs/1812.04948#nvidia \"'A Style-Based Generator Architecture for Generative Adversarial Networks', Karras et al 2018\") with sufficiently deep _w_ embeddings, with enough parameters to train raw audio in the aforementioned Jukebox, or in [relational networks](https://arxiv.org/abs/1706.01427#deepmind \"'A simple neural network module for relational reasoning', Santoro et al 2017\")/[GQN](/doc/reinforcement-learning/model/2018-eslami.pdf#deepmind \"'Neural scene representation and rendering', Eslami et al 2018\")/[Transformers](https://arxiv.org/abs/2002.05867 \"'Transformers as Soft Reasoners over Language', Clark et al 2020\") with enough samples to force factorization.\n(See also [Hill et al 2019](https://arxiv.org/abs/1910.00571#deepmind \"Environmental drivers of systematicity and generalization in a situated agent\")/[Chaplot et al 2017](https://arxiv.org/abs/1706.07230 \"Gated-Attention Architectures for Task-Oriented Language Grounding\")/[Yu et al 2018](https://arxiv.org/abs/1802.01433#baidu \"Interactive Grounded Language Acquisition and Generalization in a 2D World\")/[Lake 
2019](https://arxiv.org/abs/1906.05381 \"Compositional generalization through meta sequence-to-sequence learning\")/[Interactive Agents Group 2020](https://arxiv.org/abs/2012.05672#deepmind \"Imitating Interactive Intelligence\").)\nTraining [Dactyl](https://arxiv.org/abs/1910.07113#openai \"'Solving Rubik's Cube With A Robot Hand', Akkaya et al 2019\") (or [humanoid robots](https://arxiv.org/abs/2304.13653#deepmind \"‘Learning Agile Soccer Skills for a Bipedal Robot with Deep Reinforcement Learning’, Haarnoja et al 2023\")) on millions of domain randomizations induced similar implicit meta-learning where during each runtime invocation, the RNN probes its environment and encodes its understanding of robot hand control into its hidden state; and [DD-PPO](https://arxiv.org/abs/1911.00357#facebook \"'DD-PPO: Learning Near-Perfect PointGoal Navigators from 2.5 Billion Frames', Wijmans et al 2019\") outperforms classical robot planners by scaling 2 orders.\nOr in [Procgen](https://openai.com/research/procgen-benchmark \"Procgen Benchmark: We're releasing Procgen Benchmark, 16 simple-to-use procedurally-generated environments which provide a direct measure of how quickly a reinforcement learning agent learns generalizable skills\") or [CoinRun](https://distill.pub/2020/understanding-rl-vision/#diversity-hypothesis \"'Understanding RL Vision', Hilton et al 2020\"), training on hundreds of levels trains agents to solve levels individually and worsens performance on other levels, but at thousands of levels, they begin to generalize to unseen levels. (Similarly, [language model pretraining-finetuning](https://arxiv.org/abs/2101.11038#facebook \"'Muppet: Massive Multi-task Representations with Pre-Finetuning', Aghajanyan et al 2021\") overfits at small numbers of datasets but improves markedly with enough diversity.)\n[AlphaZero](/doc/reinforcement-learning/model/alphago/2018-silver.pdf#deepmind \"'A general reinforcement learning algorithm that masters chess, shogi and Go through self-play', Silver et al 2018\") demonstrated truly superhuman Go without 'delusions' just by training a bigger model on a richer signal & pro-level play without any search---and [MuZero](https://arxiv.org/abs/1911.08265#deepmind \"'MuZero: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model', Schrittwieser et al 2019\"), for that matter, demonstrated that just training an RNN end-to-end to predict a reward on enough data is enough to obsolete even AlphaZero and learn tree search implicitly (but better).\nAnd on and on.\nDM researcher [Matthew Botvinick](https://www.lesswrong.com/posts/Wnqua6eQkewL3bqsF/matt-botvinick-on-the-spontaneous-emergence-of-learning \"Matt Botvinick on the spontaneous emergence of learning algorithms\"), discussing their meta-reinforcement learning work where they were surprised to discover meta-learning emerging, and that it did so regardless of which specific architecture they used:\n\n> ...it's something that just happens. In a sense, you can't avoid this happening. If you have a system that has memory, and the function of that memory is shaped by reinforcement learning, and this system is trained on a series of interrelated tasks, this is going to happen. 
You can't stop it.\n\nPace [Breiman](/doc/ai/scaling/1995-breiman.pdf \"Reflections After Refereeing Papers for NIPS\"), **why**?\nWhy do they transfer and generalize?\nWhy do these blessings of scale exist?\nWhy do we need to train large models when small models provably exist with the same performance?\nWhy do larger models not overfit (though they [can](https://arxiv.org/abs/1611.03530#google \"'Understanding deep learning requires rethinking generalization', Zhang et al 2016\")) and generalize better than smaller models?\nWhat's up with the whole ['double descent'](https://openai.com/research/deep-double-descent \"Deep Double Descent: We show that the double descent phenomenon occurs in CNNs, ResNets, and transformers: performance first improves, then gets worse, and then improves again with increasing model size, data size, or training time. This effect is often avoided through careful regularization. While this behavior appears to be fairly universal, we don't yet fully understand why it happens, and view further study of this phenomenon as an important research direction.\") anyway?\n\nThese are all, ahem, deep questions about neural networks and heavily debated, but right now, I would suggest that the answer lies in some mix of the model compression/distillation, ['lottery ticket hypothesis'](https://ai.facebook.com/blog/understanding-the-generalization-of-lottery-tickets-in-neural-networks/ \"Understanding the generalization of 'lottery tickets' in neural networks\"), [Bayesian neural network](https://arxiv.org/abs/2002.08791 \"'Bayesian Deep Learning and a Probabilistic Perspective of Generalization', Wilson & Izmailov 2020\"), and [learned representation](https://arxiv.org/abs/2007.00810#google \"'On Linear Identifiability of Learned Representations', Roeder et al 2020\") (like [circuits](https://distill.pub/2020/circuits/zoom-in/#openai \"'Zoom In: An Introduction to Circuits: By studying the connections between neurons, we can find meaningful algorithms in the weights of neural networks', Olah et al 2020\")) literatures.\n\nBig models work because they encode a dizzyingly vast number of sub-models in an extremely [high-dimensional](https://colah.github.io/posts/2014-03-NN-Manifolds-Topology/ \"'Neural Networks, Manifolds, and Topology', Olah 2014\") abstract space, representing countless small sub-models ([Orseau et al 2020](https://arxiv.org/abs/2006.12156#deepmind \"Logarithmic Pruning is All You Need\")) [interpolating over data](/doc/ai/scaling/2020-hasson.pdf \"'Direct Fit to Nature: An Evolutionary Perspective on Biological and Artificial Neural Networks', Hasson et al 2020\"), one of which is likely to solve the problem well, and so ensures the problem is soluble by the overall model.\nThey function as an ensemble: even though there are countless overfit sub-models inside the single big model, they all average out, leading to a preference for simple solutions.\nThis Occam's razor biases the model towards simple solutions which are flexible enough to gradually expand in complexity to match the data.\n\nHowever, \"neural nets are lazy\": sub-models which memorize pieces of the data, or latch onto superficial features, learn quickest and are the easiest to represent internally.\nIf the model & data & compute are not big or varied enough, the optimization, by the end of the cursory training, will have only led to a sub-model which achieves a low loss but misses important pieces of the desired solution.\n\nOn the other hand, for a model like GPT-3, it is sufficiently powerful a 
model that its sub-models can do anything from poetry to arithmetic, and it is trained on so much data that those superficial models may do well early on, but gradually fall behind more abstract models; a sub-model which memorizes some of the data is indeed much simpler than a sub-model which encodes genuine arithmetic (a NN can probably memorize tens of thousands of lookup table entries storing examples of addition in the space it would take to encode an abstract algorithm like 'addition'), but it can't possibly memorize *all* the instances of arithmetic (implicit or explicit) in GPT-3's Internet-scale dataset.\nIf a memorizing sub-model tried to do so, it would become extremely large and penalized.\nEventually, after enough examples and enough updates, there may be a phase transition ([Viering & Loog 2021](https://arxiv.org/pdf/2103.10948.pdf#page=22 \"The Shape of Learning Curves: a Review: 6. Ill-behaved learning curves: 6.1. Phase transitions\")), and the simplest 'arithmetic' model which accurately predicts the data just *is* arithmetic.\nAnd then the meta-learning, after seeing enough instances of algorithms which vary slightly within each sample, making it hard to learn each task separately, just *is* learning of more generic algorithms, yielding sub-models which achieve lower loss than the rival sub-models, which either fail to predict well or bloat unacceptably.\n(GPT-2-1.5b apparently was too small or shallow to ensemble easily over sub-models encoding meta-learning algorithms, or perhaps not trained long enough on enough data to locate the meta-learner models; GPT-3 was.)\n\nSo, the larger the model, the better, if there is enough data & compute to push it past the easy convenient sub-models and into the sub-models which express desirable traits like generalizing, factorizing perception into meaningful latent dimensions, meta-learning tasks based on descriptions, learning causal reasoning & logic, and so on.\nIf the ingredients are there, it's going to happen.\n\n## Scaling Hypothesis\n\nThe strong *scaling hypothesis* is that, once we find a scalable architecture like self-attention or convolutions, which like the brain can be applied fairly uniformly (eg. 
[\"The Brain as a Universal Learning Machine\"](https://www.lesswrong.com/posts/9Yc7Pp7szcjPgPsjf/the-brain-as-a-universal-learning-machine) or Hawkins), we can simply train ever larger NNs and ever more sophisticated behavior will emerge naturally as the easiest way to optimize for all the tasks & data.\nMore powerful NNs are 'just' scaled-up weak NNs, in much the same way that human brains look much like [scaled-up primate brains](/doc/psychology/neuroscience/2012-herculanohouzel.pdf \"'The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost', Herculano-Houzel 2012\").\n\nWhile I was highly skeptical of scaling hypothesis advocates when I first became interested in AI 2004--2010 (back when AI was stuck in the doldrums of hopelessly narrow tools and dates like 2028 seemed impossibly far away), which smacked of numerology and \"if you build it they will come\" logic (at the time, we certainly didn't have general algorithms that you could just throw compute at), in 2020, I have to admit, I was wrong and they were right.\nWe built the compute, and the algorithms *did* come, and the scaling hypothesis has only looked more and more plausible every year since 2010.\n\n# Why Does Pretraining Work?\n\nThe pretraining thesis goes something like this:\n\n![\"Figure 1: Envisioned evolution of NLP research through three different eras or curves\" (the hypothetical S-curves & progress in natural language modeling; from [Cambria & White 2014](/doc/ai/scaling/2014-cambria.pdf \"Jumping NLP Curves: A Review of Natural Language Processing Research\"))](/doc/ai/scaling/2014-cambria-figure1-hypotheticalnlpprogresscurves.png){.invert}\n\nHumans, one might say, are the [cyanobacteria of AI](!W \"Great Oxidation Event\"): we constantly emit large amounts of structured data, which implicitly rely on logic, causality, object permanence, history---all of that good stuff.\nAll of that is implicit and encoded into our writings and videos and 'data exhaust'.\nA model learning to predict must learn to understand all of that to get the best performance; as it predicts the easy things which are mere statistical pattern-matching, what's left are the hard things.\nAI critics often say that the long tail of scenarios for tasks like self-driving cars or natural language can only be solved by true generalization & reasoning; it follows then that if models solve the long tail, they must learn to generalize & reason.\n\nEarly on in training, a model learns the crudest levels: that some letters like 'e' are more frequent than others like 'z', that every 5 characters or so there is a space, and so on.\nIt goes from predicted uniformly-distributed bytes to what looks like Base-60 encoding---alphanumeric gibberish.\nAs crude as this may be, it's enough to make quite a bit of absolute progress: a random predictor needs 8 bits to 'predict' a byte/character, but just by at least matching letter and space frequencies, it can almost halve its error to around 5 bits.^[The numbers here are not exact and are for illustration; because BPEs don't correspond to any intuitive, I am going to borrow from my observations watching char-RNNs, and talk about the loss per character instead of BPE.]\nBecause it is learning so much from every character, and because the learned frequencies are simple, it can happen so fast that if one is not logging samples frequently, one might not even observe the improvement.\n\nAs training progresses, the task becomes more difficult. 
Now it begins to learn what words actually exist and do not exist. It doesn't know anything about meaning, but at least now when it's asked to predict the second half of a word, it can actually do that to some degree, saving it a few more bits.\nThis takes a while because any specific instance will show up only occasionally: a word may not appear in a dozen samples, and there are many thousands of words to learn.\nWith some more work, it has learned that punctuation, pluralization, & possessives are all things that exist.\nPut that together, and it may have progressed again, all the way down to 3--4 bits error per character!\n(While the progress is gratifyingly fast, it's still all gibberish, make no mistake: a sample may be spelled correctly, but it doesn't make even a bit of sense.)\n\nBut once a model has learned a good English vocabulary and correct formatting/spelling, what's next? There's not much juice left in predicting within-words.\nThe next thing is picking up associations among words. What words tend to come first? What words 'cluster' and are often used nearby each other?\nNautical terms tend to get used a lot with each other in sea stories, and likewise Bible passages, or American history Wikipedia articles, and so on.\nIf the word \"Jefferson\" is the last word, then \"Washington\" may not be far away, and it should hedge its bets on predicting that 'W' is the next character, and then if it shows up, go all-in on \"ashington\".\nSuch bag-of-words approaches still predict badly, but now we're down to perhaps <3 bits per character.\n\nWhat next? Does it stop there? Not if there is enough data and the earlier stuff like learning English vocab doesn't hem the model in by using up its learning ability.\nGradually, other words like \"President\" or \"general\" or \"after\" begin to show the model subtle correlations: \"Jefferson was President after...\"\nWith many such passages, the word \"after\" begins to serve a use in predicting the next word, and then the use can be broadened.\n\nBy this point, the loss is perhaps 2 bits: every additional 0.1 bit decrease comes at a steeper cost and takes more time.\nHowever, now the sentences have started to make sense.\nA sentence like \"Jefferson was President after Washington\" does in fact mean something (and if occasionally we sample \"Washington was President after Jefferson\", well, what do you expect from such an un-converged model).\nJarring errors will immediately jostle us out of any illusion about the model's understanding, and so training continues.\n(Around here, Markov chain & _n_-gram models start to fall behind; they can memorize increasingly large chunks of the training corpus, but they can't solve increasingly critical syntactic tasks like balancing parentheses or quotes, much less start to ascend from syntax to semantics.)\n
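\nA model of exactly this kind is easy to write down, which is also why it tops out quickly; a toy sketch (assuming some plain-text file `corpus.txt` as a stand-in for a real training corpus) of how much conditioning on even a single previous character buys over the unigram baseline:\n\n```python\nimport math\nfrom collections import Counter\n\n# Hypothetical corpus file; any reasonably large plain-text file will do.\ntext = open('corpus.txt', encoding='utf-8').read().lower()\n\nunigrams = Counter(text)\nbigrams = Counter(zip(text, text[1:]))\n\n# Average surprisal (bits/char) of each model, measured on its own training text.\nuni_bits = sum(-math.log2(unigrams[c] / len(text)) for c in text) / len(text)\nbi_bits = sum(-math.log2(bigrams[(p, c)] / unigrams[p])\n              for p, c in zip(text, text[1:])) / (len(text) - 1)\n\nprint(f'unigram: {uni_bits:.2f} bits/char')  # typically ~4 bits/char on English\nprint(f'bigram:  {bi_bits:.2f} bits/char')   # typically ~3--3.5 bits/char\n```\n\nEach additional character of context buys a little more, but the returns diminish fast, and no amount of _n_-gram counting yields the long-range bookkeeping (open quotes, pronouns, plot) that the lower loss levels demand.\n\nNow training is hard.\n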
Even subtler aspects of language must be modeled, such as keeping pronouns consistent.\nThis is hard in part because the model's errors are becoming rare, and because the relevant pieces of text are increasingly distant and 'long-range'.\nAs it makes progress, the absolute size of errors shrinks dramatically.\nConsider the case of associating names with gender pronouns: the difference between \"Janelle ate some ice cream, because he likes sweet things like ice cream\" and \"Janelle ate some ice cream, because she likes sweet things like ice cream\" is one no human could fail to notice, and yet, it is a difference of a single letter.\nIf we compared two models, one of which didn't understand gender pronouns at all and guessed 'he'/'she' purely at random, and one which understood them perfectly and always guessed 'she', the difference in their average error would come to less than 0.02 bits per character!\n(A wrong coin-flip costs only ~1 bit, and gendered pronouns make up a tiny fraction of all characters, so averaged over everything the model reads, the gap nearly vanishes.)\n\nNevertheless, as training continues, these problems and more, like imitating genres, get solved, and eventually at a loss of 1--2 bits (where a small char-RNN might converge on a small corpus like Shakespeare or some Project Gutenberg ebooks), we will finally get samples that sound human---at least, for a few sentences.\nThese final samples may convince us briefly, but, aside from issues like repetition loops, even with good samples, the errors accumulate: a sample will state that someone is \"alive\" and then 10 sentences later, use the word \"dead\", or it will digress into an irrelevant argument instead of the expected next argument, or someone will do something physically improbable, or it may just continue for a while without seeming to *get* anywhere.\n\nAll of these errors are far less than 0.02 bits per character; we are now talking not hundredths of bits per character but less than ten-thousandths.\n\nThe pretraining thesis argues that this can go even further: we can compare this performance directly with humans doing the same objective task, who can achieve closer to [0.7 bits per character](/difference#efficient-natural-languages).\nWhat is in that missing >0.4 bits?\n\n![\"Yeah, but there's more to being smart than knowing compression schemes!\" \"No there's not!\" \"Shoot---he knows the secret!!\"](/doc/cs/2004-ryannorth-dinosaurcomics-391.png \"https://qwantz.com/index.php?comic=354\"){.invert}\n\nWell---*everything*! 
Everything that the model misses.\nWhile just babbling random words was good enough at the beginning, at the end, it needs to be able to reason its way through the most difficult textual scenarios requiring causality or commonsense reasoning.\nEvery error where the model predicts that ice cream put in a freezer will \"melt\" rather than \"freeze\", every case where the model can't keep straight whether a person is alive or dead, every time that the model chooses a word that doesn't help build somehow towards the ultimate conclusion of an 'essay', every time that it lacks the theory of mind to compress novel scenes describing the Machiavellian scheming of a dozen individuals at dinner jockeying for power as they talk, every use of logic or abstraction or instructions or Q&A where the model is befuddled and needs more bits to cover up for its mistake where a human would think, understand, and predict.\nFor a language model, the truth is that which keeps on predicting well---because truth is one and error many.\nEach of these cognitive breakthroughs allows ever so slightly better prediction of a few relevant texts; nothing less than true understanding will suffice for ideal prediction.\n\nIf we trained a model which reached that loss of <0.7, which could predict text indistinguishable from a human, whether in a dialogue or quizzed about ice cream or being tested on SAT analogies or tutored in mathematics, if for every string the model did just as good a job of predicting the next character as you could do, how could we say that it doesn't *truly* understand everything?\n(If nothing else, we could, by definition, replace humans in any kind of text-writing job!)\n\n[The last bits are deepest.]{.marginnote} The implication here is that the final few bits are the most valuable bits, which require the most of what we think of as intelligence.\nA helpful analogy here might be our actions: for the most part, all humans execute actions equally well.\nWe all pick up a tea mug without dropping it, and can lift our legs to walk down thousands of steps without falling even once.\nFor everyday actions (the sort which make up most of a corpus), anybody, of any intelligence, can get enough practice & feedback to do them quite well, learning individual algorithms to solve each class of problems extremely well, in isolation.^[If you see thousands of images labeled 'dog' and thousands more labeled 'cat', you can simply learn separate dog & cat classifiers without bothering to understand their shared aspects like being domesticated quadruped mammal predators. 
This won't be useful if you are then asked to classify 'ferret' images, but you weren't asked to, so that's not your problem, since you can just learn yet another separate classifier for ferrets if you then get a lot of ferret images.]\nMeanwhile for rare problems, there may be too few instances to do any better than memorize the answer.\nIn the middle of the spectrum are problems which are similar but not *too* similar to other problems; these are the sorts of problem which reward flexible meta-learning and generalization, and many intermediate problems may be necessary to [elicit those capabilities](https://arxiv.org/abs/2205.05055#deepmind \"‘Data Distributional Properties Drive Emergent Few-Shot Learning in Transformers’, Chan et al 2022\") (\"neural nets are lazy\").\n\nWhere individuals differ is when they start running into the long tail of novel choices, rare choices, choices that take seconds but unfold over a lifetime, choices where we will never get any feedback (like after our death).\nOne only has to make a single bad decision, out of a lifetime of millions of discrete decisions, to wind up in jail or dead.\nA small absolute average improvement in decision quality, if it is in *those* decisions, may be far more important than its quantity indicates, and give us some intuition for why those last bits are the hardest/deepest.\n(Why do humans have such large brains, when animals like chimpanzees do so many ordinary activities seemingly as well with a fraction of the expense? Why is language worthwhile? Perhaps because of considerations like these. We may be at our most human while filling out the paperwork for life insurance.)\n\n[Reasons for doubt.]{.marginnote} The pretraining thesis, while logically impeccable---how is a model supposed to solve all possible trick questions without understanding, just *guessing*?---never struck me as convincing, an argument admitting neither confutation nor conviction.\nIt feels too much like a magic trick: \"here's some information theory, here's a human benchmark, here's how we can encode all tasks as a sequence prediction problem, hey presto---Intelligence!\"\nThere are lots of algorithms which are Turing-complete or 'universal' in some sense; there are lots of algorithms like AIXI which solve AI in some theoretical sense (Schmidhuber & company have many of these cute algorithms such as 'the fastest possible algorithm for all problems', with the minor catch of some constant factors which require computers bigger than the universe).\n\nWhy think pretraining or sequence modeling is not another one of them?\nSure, *if* the model got a low enough loss, it'd have to be intelligent, but how could you prove that would happen in practice?\n(Training char-RNNs was fun, but they hadn't exactly revolutionized deep learning.)\nIt might require more text than exists, countless petabytes of data for all of those subtle factors like logical reasoning to represent enough training signal, amidst all the noise and distractors, to train a model.\nOr maybe your models are too small to do more than absorb the simple surface-level signals, and you would have to scale them 100 orders of magnitude for it to work, because the scaling curves didn't cooperate.\nOr maybe your models are fundamentally broken, and stuff like abstraction requires an entirely different architecture to work at all, and whatever you do, your current models will saturate at poor performance.\nOr it'll train, but it'll spend all its time trying to improve the surface-level modeling, absorbing 
more and more literal data and facts without ever ascending to the higher planes of cognition as planned.\nOr...\n\n
\n> 'The possibilities of developing an atomic weapon and the desirability of doing it secretly were discussed at a Princeton University conference in which I participated in March 1939...[Bohr](!W \"Niels Bohr\") said this rare variety could not be separated from common uranium except by turning the country into a gigantic factory. Bohr was worried that this could be done and that an atomic bomb could be developed---but he hoped that neither could be accomplished. Years later, when Bohr came to Los Alamos, I was prepared to say, \"You see . . .\" But before I could open my mouth, he said: **\"You see, I told you it couldn't be done without turning the whole country into a factory. You have done just that.\"**'\n>\n> [Edward Teller](!W)^[pg210--211, \"The Quiet Enemy\", [_The Legacy of Hiroshima_](/doc/radiance/1962-teller-thelegacyofhiroshima.pdf), Teller 1962.]\n
\n\nBut apparently, it would've worked fine.\nEven RNNs probably would've worked---Transformers are nice, but they seem mostly to be about efficiency.^[Another way of interpreting the various papers about how Transformers are actually like RNNs or are [actually Hopfield networks](https://arxiv.org/abs/2008.02217 \"'Hopfield Networks is All You Need', Ramsauer et al 2020\") is to take that as indicating that what is important about them is not any inherent new capability compared to older architectures, but some lower-level aspect like being more efficiently trainable on contemporary hardware.]\n(Training large RNNs is much more expensive, and doing BPTT over multiple nodes is much harder engineering-wise.)\nIt just required more compute & data than anyone was willing to risk on it until a few true-believers were able to get their hands on a few million dollars of compute.\n\n#. **Q:** Did anyone predict, quantitatively, that this would happen where it did?\n\n **A:** Not that I know of.\n\n#. **Q:** What would future scaled-up models learn?\n\n GPT-2-1.5b had a cross-entropy WebText validation loss of ~3.3 (based on the perplexity of ~10 in [Figure 4](/doc/ai/nn/transformer/gpt/2019-radford-figure4-gpt2validationloss.png \"Figure 4: The performance of LMs trained on WebText as a function of model size (from https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=9)\"){.invert}, and log~2~(10) = 3.32). GPT-3 halved that loss to ~1.73 judging from [Brown et al 2020](/doc/ai/nn/transformer/gpt/2020-brown-figure31-gpt3scaling.png \"Figure 3.1: Smooth scaling of performance with compute. Performance (measured in terms of cross-entropy validation loss) follows a power-law trend with the amount of compute used for training. The power-law behavior observed in Kaplan et al 2020 continues for an additional two orders of magnitude with only small deviations from the predicted curve. For this figure, we exclude embedding parameters from compute and parameter counts. (Brown et al 2020). Cross-validation loss extrapolation: $L(oss) = 2.57 · C(ompute in petaflop-s/days) ^ −0.048$\"){.invert} and using the scaling formula (2.57 × (3.64 × 10^3^)^\\−0.048^). For a hypothetical GPT-4, if the scaling curve continues for another 3 orders or so of compute (100--1000×) before crossing over and hitting harder diminishing returns, the cross-entropy loss will drop to ~1.24 (2.57 × (3.64 × (10^3^ × 10^3^))^\\−0.048^). (The arithmetic is spelled out in the short sketch after this list.)\n\n If GPT-3 gained so much meta-learning and world knowledge by dropping its absolute loss ~50% when starting from GPT-2's level, what capabilities would another ~30% improvement over GPT-3 gain? (Cutting the loss that much would still not reach human-level, as far as I can tell.[^human-perplexity]) What would a drop to ≤1, perhaps using wider context windows or recurrency, gain?\n\n **A:** I don't know.\n\n#. **Q:** Does anyone?\n\n **A:** Not that I know of.^[As of December 2020, half a year later, almost no researcher has been willing to go on record as saying what specific capabilities they predict future 1t, 10t, or 100t models will have or not have, and at what size which missing capabilities will emerge---just as no one is on record successfully predicting GPT-2 or GPT-3's specific capabilities.]\n
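\nThe arithmetic behind these loss projections (and the ~2.2 million× compute estimate in the accompanying footnote) is mechanical once the fitted power law is taken at face value; a minimal sketch, using the same compute figures as above:\n\n```python\n# Loss extrapolation from the fitted power law above, L(C) = 2.57 * C^-0.048,\n# with C in petaflop/s-days; the compute figures are the assumptions used above.\ndef loss(compute_pfs_days):\n    return 2.57 * compute_pfs_days ** -0.048\n\ngpt3_compute = 3.64e3                        # ~3,640 petaflop/s-days\nprint(round(loss(gpt3_compute), 2))          # ~1.73 (GPT-3)\nprint(round(loss(gpt3_compute * 1e3), 2))    # ~1.24 (a hypothetical 1000x scale-up)\n\n# Compute multiplier x needed to reach a guessed 'human-level' loss of ~0.86,\n# ie. solving 2.57 * (3.64e3 * x)^-0.048 = 0.86 for x:\nx = (0.86 / 2.57) ** (1 / -0.048) / gpt3_compute\nprint(f'{x:.1e}')                            # ~2.2e6 times GPT-3's compute\n```\n\nNone of this should be taken literally (the exponent is a fit, and the curve presumably bends somewhere), but it shows how little guesswork is in the extrapolations themselves; the uncertainty is all in whether the curve holds.\n\n[^human-perplexity]: How do these absolute prediction performances compare to humans? It's hard to say. 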
The only available benchmarks for perplexity for humans/GPT-2/GPT-3 appear to be WebText, [Penn Tree Bank](/doc/cs/algorithm/1993-marcus.pdf \"'Building a Large Annotated Corpus of English: The Penn Treebank', Marcus et al 1993\") (PTB; based on the [Brown Corpus](!W)), [1 Billion Word](https://arxiv.org/abs/1312.3005 \"'One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling', Chelba et al 2013\") (1BW), and [LAMBADA](https://arxiv.org/abs/1606.06031 \"'The LAMBADA dataset: Word prediction requiring a broad discourse context', Paperno et al 2016\"). But coverage is spotty.\n\n I found no human benchmarks for WebText or Penn Tree Bank, so I can't compare the human vs GPT-2/GPT-3 perplexities ([GPT-2 PTB](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5 \"Language Models are Unsupervised Multitask Learners: Table 3.Zero-shot results on many datasets. No training or fine-tuning was performed for any of these results. PTB and WikiText-2 results are from (Gong et al 2018). CBT results are from (Bajgar et al 2016). LAMBADA accuracy result is from (Hoang et al 2018) and LAMBADA perplexity result is from (Grave et al 2016). Other results are from (Dai et al 2019).\"): 35.7; [GPT-3 PTB](https://arxiv.org/pdf/2005.14165.pdf#page=11&org=openai): 20.5).\n\n [GPT-2](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5) was benchmarked at 43 perplexity on the 1 Billion Word (1BW) benchmark vs a (highly extrapolated) [human perplexity of 12](/doc/ai/scaling/2017-shen.pdf \"'Estimation of gap between current language models and human performance', Shen et al 2017\") (which interestingly extrapolates, using 2012 LSTM RNNs, that \"10 to 20 more years of research before human performance is reached\"), but that may be an unfair benchmark (\"Our model is still significantly worse than prior work on the One Billion Word Benchmark ([Chelba et al 2013](https://arxiv.org/abs/1312.3005 \"'One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling', Chelba et al 2013\")). This is likely due to a combination of it being both the largest dataset and having some of the most destructive pre-processing---1BW's sentence level shuffling removes all long-range structure.\") and 1BW was dropped from the GPT-3 evaluation due to data contamination (\"We omit the 4 Wikipedia-related tasks in that work because they are entirely contained in our training data, and we also omit the one-billion word benchmark due to a high fraction of the dataset being contained in our training set.\").\n\n LAMBADA was benchmarked at a [GPT-2 perplexity](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf#page=5) of 8.6, and a [GPT-3 perplexity](https://arxiv.org/pdf/2005.14165.pdf&org=openai#page=12) of 3.0 (zero-shot) / 1.92 (few-shot). [OA claims](https://openai.com/research/better-language-models \"Better Language Models and Their Implications: We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization—all without task-specific training.\") in their GPT-2 blog post (but not the paper) that human perplexity is 1--2, but provides no sources and I couldn't find any. 
(The authors might be guessing based on how LAMBADA was constructed: examples were filtered by whether two independent human raters provided the same right answer, which lower bounds how good humans must be at predicting the answer.)\n\n So overall, it looks like the best guess is that GPT-3 continues to have somewhere around twice the absolute error of a human. This implies it will take a large (yet, far from impossible) amount of compute to fully close the remaining gap with the current scaling laws. If we irresponsibly extrapolate out the WebText scaling curve further, assume GPT-3 has twice the error of a human at its current WebText perplexity of 1.73 (and so humans are ~0.86), then we need 2.57 · (3.64 · (10^3^ · _x_))^\\-0.048^ = 0.86, where _x_ = 2.2e6 or 2,200,000× the compute of GPT-3. (This would roughly equal the cost to the USA of invading Iraq.)\n\n When is that feasible?\n\n If we imagine that [peak AI compute usage doubles every 3.4 months](https://openai.com/research/ai-and-compute \"AI and Compute: We’re releasing an analysis showing that since 2012, the amount of compute used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time (by comparison, Moore’s Law had a 2-year doubling period). Since 2012, this metric has grown by more than 300,000× (a 2-year doubling period would yield only a 7× increase). Improvements in compute have been a key component of AI progress, so as long as this trend continues, it’s worth preparing for the implications of systems far outside today’s capabilities.\"), then 2.2e6 would be 22 doublings away---or 6.3 years, in 2027. Most people believe that compute trend must break down soon, and that sort of prediction is a good reason why!\n\n Going the other direction, Hernandez & Brown 2020's estimate is that, net of hardware & algorithmic progress, the cost of a fixed level of performance halves every 16 months; so if GPT-3 cost ~[$5]($2020)m in early 2020, then it'll cost [$2.5]($2020)m around mid-2021, and so on. Similarly, a GPT-human requiring 2.2e6× more compute would presumably cost on the order of [$10]($2020) trillion in 2020, but after 14 halvings (18 years) would cost [$1]($2020)b in 2038.\n\n# Prospects\n\n
\n> In the problem of decoding, the most important information which we can possess is the knowledge that the message which we are reading is not gibberish...In a similar way, when we consider a problem of nature such as that of atomic reactions and atomic explosives, the largest single item of information which we can make public is that they exist. Once a scientist attacks a problem which he knows to have an answer, his entire attitude is changed. He is already some 50% of his way toward that answer...**the one secret concerning the atomic bomb which might have been kept and which was given to the public and to all potential enemies without the least inhibition, was that of the possibility on its construction.** Take a problem of this importance and assure the scientific world that it has an answer; then both the intellectual ability of the scientists and the existing laboratory facilities are so widely distributed that the quasi-independent realization of the task will be a matter of merely a few years anywhere in the world.\n>\n> [Norbert Wiener](!W), pg124--125, _[The Human Use of Human Beings](!W)_ (emphasis added)\n
\n\n
\n> People who work in machine learning simply didn't think that neural networks could do much. People didn't believe large neural networks could be trained...The ideas were all there, the thing that was missing was a lot of supervised data and a lot of compute. Once you have [those two], then there is a third thing that is needed---and that is *conviction*. Conviction that if you take the right stuff, which already exists, and apply and mix it with a lot of data and a lot of compute, that it will in fact work. And so that was the missing piece.\n>\n> [Ilya Sutskever](https://www.youtube.com/watch?v=13CZPWmke6A&t=950#org=openai \"Ilya Sutskever: Deep Learning | Lex Fridman Podcast #94\")[^Zaremba]\n
\n\n[^Zaremba]: See also [Sutskever's DRL talk](https://www.youtube.com/watch?v=w3ues-NayAs?t=712#openai \"If you want to solve a hard problem in reinforcement learning, you just scale. It's just gonna work just like supervised learning. it's the same, the same story exactly. It was kind of hard to believe that supervised learning can do all those things, but it's not just vision, it's everything and the same thing seems to hold for reinforcement learning provided you have a lot of experience.\"), and [Wojciech Zaremba's](!W \"Wojciech Zaremba\") [comments about OA5](https://www.youtube.com/watch?v=429QC4Yl-mA&t=1157s \"What could make AI conscious? with Wojciech Zaremba, co-founder of OpenAI (2021-06-02)\") ([transcript](https://wandb.ai/wandb_fc/gradient-dissent/reports/What-could-make-AI-conscious-with-Wojciech-Zaremba-co-founder-of-OpenAI--Vmlldzo3NDk3MDI)):\n\n > `Lukas`: \"How much of the work then on Dota was, you felt, like fundamentally moving ML forward and how much of it was Dota-specific or can you even pull those apart?\"\n >\n > `Wojciech`: \"I think there was a decent amount of Dota-specific work. And then I think it was more than optimal, but also simultaneously hard. So I remember at the beginning of Dota project, it was actually unclear how to approach it.\n >\n > People are saying that contemporary reinforcement learning will have no chance in solving this problem. And people looked into off-policy matters, on-policy matters, [evolutionary strategies](https://arxiv.org/abs/1703.03864#openai \"'Evolution Strategies as a Scalable Alternative to Reinforcement Learning', Salimans et al 2017\"). The thing that became quite surprising is that [methods that already exist](https://arxiv.org/abs/1707.06347#openai \"'PPO: Proximal Policy Optimization Algorithms', Schulman et al 2017\"), with appropriate scale work extremely well. So that was a big surprise. And I remember some people even before Dota time at OpenAI, saying that maybe reinforcement learning is a dead end. 
And all of a sudden it's a very different story now.\"\n >\n > `Lukas`: \"For sure.\"\n\nWhat can we expect from future DL work?\nWill GPT-3 kickstart an arms race where soon we will be discussing, blase, what would seem now like ludicrously farfetched schemes like bidirectional multimodal Transformer 100× the size trained on 100× the data (video/text/PDFs-as-images/photo/robotics) with supplementary supervised learning as the backbone of a MuZero-like learning+planning DRL agent running on thousands of tasks (such as coding) simultaneously?\n\nThe existence of [the hardware overhang](https://www.lesswrong.com/posts/N6vZEnCn6A95Xn39p/are-we-in-an-ai-overhang \"'Are we in an AI overhang?', Jones 2020\") implies that the limiting factor here is less hardware than human: will any organization treat GPT-3 as a Sputnik moment and invest aggressively in scaling programs?\nIs there a GPT-4-equivalent brewing away inside DeepMind or Google Brain's TPU pods now?\nThey aren't stupid, they have the hardware, they have the budgets, they have the people.\n\nBut I think they lack a vision.\nAs far as I can tell: they do not have any such thing, because Google Brain & DeepMind do not believe in the scaling hypothesis the way that Sutskever, Amodei and others at OA do.\nJust read through machine learning Twitter to see the disdain for the scaling hypothesis.\n(A quarter year on from GPT-3 and counting, can you name a single dense model as large as the 17b Turing-NLG---never mind larger than GPT-3?)\n\nGoogle Brain is entirely too practical and short-term focused to dabble in such esoteric & expensive speculation, although Quoc V. Le's group occasionally surprises you.\nThey'll dabble in [mixture-of-expert models](/doc/ai/scaling/mixture-of-experts/index) like [GShard](https://arxiv.org/abs/2006.16668#google \"'GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding', Lepikhin et al 2020\"), but mostly because they expect to be likely to be able to deploy it or something like it to production in Google Translate.^[Production services, especially *free* production services, usually lag long after the unpublished SOTA inside the most cutting-edge lab. The second is the only thing that matters for predicting AI progress or AI risk, of course, but people will insist on measuring AI progress by bizarre metrics like what an arbitrary free service could do last year. As a rule of thumb, assume that: if you are using a free service with no login, the quality is *at least* 2 years behind SOTA; free with a login, >1.5 years; paid service, >1 year; & recently-released research paper, >6 months.]\n\nDeepMind^[Particularly [Demis Hassabis](!W); I'm not sure about [Shane Legg's](!W \"Shane Legg\") [current views](http://www.vetta.org/2009/12/tick-tock-tick-tock-bing/ \"Tick, tock, tick, tock... BING\"), although given the accuracy of his [2009 predictions](http://www.vetta.org/2009/12/the-teenies/ \"'The Teenies', Shane Legg 2009-12-28\") while founding DeepMind & his [2018 comments](https://web.archive.org/web/20210426084422/https://www.stuff.co.nz/technology/103500435/google-deepmind-founder-and-leader-in-artificial-intelligence-returns-to-hamilton \"Google DeepMind founder and leader in artificial intelligence returns to Hamilton\"), he probably hasn't much changed his views that AI will be empowered by the (realized) exponential compute gains or his [AGI forecast of ~2028](http://www.vetta.org/2010/12/goodbye-2010/ \"'Goodbye 2010', Shane Legg 2010-12-10\"). 
(This is consistent with the latest [Metaculus](https://www.metaculus.com/questions/questions/3479/when-will-the-first-artificial-general-intelligence-system-be-devised-tested-and-publicly-known-of/ \"When will the first Artificial General Intelligence system be devised, tested, and publicly known of?\") [forecasts](https://www.metaculus.com/questions/questions/1394/will-ai-progress-surprise-us/ \"Will AI progress surprise us?\").)] holds what we might call the \"weak scaling hypothesis\": they believe that AGI will require us to \"find the right algorithms\" effectively replicating a mammalian brain module by module, and that while these modules will be extremely large & expensive by contemporary standards (which is why compute is important, to give us \"a more powerful tool with which to hunt for the right algorithms\"), they still need to be invented & finetuned piece by piece, with little risk or surprise until the final assembly.\nEach piece, however, itself can scale: there's no magical intelligence gland or quantum woo which creates a bright line between humans and, say, chimpanzees or rodents.\n(As much as we humans extravagantly admire our own capabilities like language or logic, those are relatively minor flourishes on the basic brain---each organism solves the same basic problems, like exploration, long-term memory, learning world-models, associating rewards with specific actions, meta-learning, etc.)\nAs such, once you have a rat-level AGI, a human-level AGI is just more so.\n(And rats are a lot easier to experiment on.)\nThat is how you get DM contraptions like [Agent57](https://www.deepmind.com/blog/agent57-outperforming-the-human-atari-benchmark \"'Agent57: Outperforming the Atari Human Benchmark', Badia et al 2020\") which throw the kitchen sink at the wall to see what sticks, and why they place such emphasis on neuroscience as inspiration and cross-fertilization for reverse-engineering the brain.\n(See also Sam Altman's [podcast interview comments](https://audio.hbr.org/exponential-view/20201006152648-S5E01_HowGPT-3IsShapingOurAIFuture.mp3?listeningSessionID=0CD_382_124__cc0756698c5c760194dea321b07a9b55454e0fe1#t=2205 \"'How GPT-3 Is Shaping Our AI Future' with Sam Altman/Azeem Azhar (The Exponential View), Wednesday 7 October 2020\") on OA's advantage vs unnamed rivals with more compute is because the lack of compute makes them stay \"small and focused\"---\"for sure\" like a startup approach.)\nWhen someone seems to have come up with a scalable architecture for cracking a hard problem, like AlphaZero or AlphaStar, they are willing to pour on the gas to make it scale, but otherwise, incremental refinement on ALE and then [DMLab-30](https://arxiv.org/abs/1612.03801#deepmind \"'DeepMind Lab', Beattie et al 2016\") is the game plan.\nThey have been biting off and chewing pieces of the brain for a decade, and it'll probably take another decade or two of steady chewing if all goes well.\nBecause they have locked up so much talent and have so much proprietary code and believe all of that is a major moat to any competitor trying to replicate the complicated brain, they are fairly easygoing.\nYou will not see DM 'bet the company' on any moonshot; Google's cashflow isn't going anywhere (and [DM's budget](/newsletter/2020/06#deepmind-budget \"‘June 2020 News § Companies House’, Branwen 2019\")), and slow and steady wins the race.\n\nGoing beyond that, most other research labs like Tesla or FAIR are irrelevant and uninterested.\nChinese AI companies are a question mark: past the 
language barrier, I seem to discern interest in AGI & little of the reflexive Western opposition, and companies like Baidu occasionally release important research (such as the early scaling paper [Hestness et al 2017](https://arxiv.org/abs/1712.00409#baidu \"Deep Learning Scaling is Predictable, Empirically\")), but overall, Chinese AI may be overestimated, and they seem to suffer from a kind of Dutch disease---funding for surveillance technology, and for narrow e-commerce niches, is so plentiful that other areas are neglected.\n\nOA, lacking anything like DM's long-term funding from Google or its enormous headcount, is making a startup-like bet that they know an important truth which is a secret: \"the scaling hypothesis is true!\"\nSo, simple DRL algorithms like PPO on top of large simple architectures like RNNs or Transformers can emerge, exploiting the blessings of scale, and meta-learn their way to powerful capabilities, enabling further funding for still more compute & scaling, in a virtuous cycle.\nThis is why OA had to revise its corporate form: lacking any enormous endowment or extremely deep-pocketed patron like Google, where does it get the money to scale (or hire machine learning engineer/researchers who can command salaries in the millions)?\nOA has to *earn* the necessary money, so in a move like Mozilla Foundation owning Mozilla Corporation (to sell Firefox search engine placement), or the Hershey orphanage owning Hershey Chocolate or the Girl Scouts licensing their cookies, OpenAI switched from a pure nonprofit funded by donations to a nonprofit which owns a for-profit subsidiary/startup, \"OpenAI LP\", which can take investments and engage in for-profit activities.\nOA LP, while controlled by OA, can then shoot for the moon.\nAnd if OA is wrong to trust in the [God of Straight Lines On Graphs](https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/ \"Is Science Slowing Down?\"), well, they never could compete with DM directly using DM's favored approach, and were always going to be an also-ran footnote, so they have no regret.\n\nWhile all of this hypothetically can be replicated *relatively* easily (never underestimate the amount of tweaking and special sauce it takes) by competitors if they wished (the necessary amounts of compute budgets are still trivial in terms of Big Science or other investments like AlphaGo or AlphaStar or Waymo, after all), said competitors lack the very most important thing, which no amount of money or GPUs can ever cure: the courage of their convictions.\nThey are too hidebound and deeply philosophically wrong to ever admit fault and try to overtake OA until it's too late.\nHow can we talk seriously about any kind of military Manhattan Project when the US military [doesn't even let its developers use Tensorflow or PyTorch](https://warontherocks.com/2020/10/trust-algorithms-the-army-doesnt-even-trust-its-own-ai-developers/ \"Trust Algorithms? 
The Army Doesn’t Even Trust Its Own AI Developers\"), or about government projects in the shadow of coronavirus?\nThis might seem absurd (surely the Bitter Lesson/scaling hypothesis have now earned enough prior probability to be taken seriously and receive major research investments to test how far they can go, especially given how important the implications are), but look at the repeated criticism of OA *every time* they release a new example of the scaling hypothesis, from GPT-1 to Dactyl to OA5 to GPT-2 to iGPT to GPT-3...\nTo paraphrase St Augustine, most peoples' reaction to the Bitter Lesson or scaling hypothesis is \"grant me scale & compute---but not yet\".^[When faced with the choice between having to admit all their fancy hard work is a dead-end, swallow the bitter lesson, and start budgeting tens of millions of compute, or instead writing a disdainful tweet explaining how, \"*actually*, GPT-3 shows that scaling is a dead end, it's an environmental catastrophe, and it's just imitation intelligence anyway\"---most people will get busy on the tweet!]\n\nA critical indicator will be whether organizations beyond 'the usual suspects' (Microsoft [ZeRO-2](https://www.microsoft.com/en-us/research/blog/zero-2-deepspeed-shattering-barriers-of-deep-learning-speed-scale/ \"'ZeRO-2 & DeepSpeed: Shattering barriers of deep learning speed & scale', Team 2020\") team has reached [1t-scale training](https://www.microsoft.com/en-us/research/blog/deepspeed-extreme-scale-model-training-for-everyone/ \"'DeepSpeed: Extreme-scale model training for everyone', Team et al 2020\"), but there is also Nvidia, Salesforce, Allen, Google DM/GB, Connor/EleutherAI, Facebook FAIR) start participating or if they continue to dismiss scaling.\nAt least as of 2020-10-26, 152 days later, no model has come near GPT-3, and indeed, no model has even exceeded Turing-NLG's 17b.^[A mixture-of-expert model like GShard or an embedding like DynamicEmbedding is not comparable to 'dense' models like GPT-3, as it's always been cheap & easy to train models with billions of 'parameters' in some sense, like extremely large embeddings; however, these parameters do little, and are more like a few hundred shallow models glued back-to-back. 
They probably do not learn the same interesting things that a dense model would with the same nominal parameter count.]\n\n# Critiquing The Critics\n\n\n\n[Keeping track.]{.marginnote} GPT-3 in 2020 makes as good a point as any to take a look back on the past decade.\nIt's remarkable to reflect that someone who started a PhD because they were excited by these new \"ResNets\" would still not have finished it by now---that is how recent even resnets are, never mind Transformers, and how rapid the pace of progress is.\nIn 2010, one could easily fit everyone in the world who genuinely believed in deep learning into a moderate-sized conference room (assisted slightly by the fact that 3 of them were busy founding [DeepMind](https://en.wikipedia.org/wiki/DeepMind)).\nSomeone interested in machine learning in 2010 *might* have read about some interesting stuff from weirdo diehard connectionists in recognizing hand-written digits using all of 1--2 million parameters, or some modest neural tweaks to standard voice-recognition hidden Markov models.\nIn 2010, who would have predicted that over the next 10 years, deep learning would undergo a Cambrian explosion causing a mass extinction of alternative approaches throughout machine learning, that models would scale up to 175,000 million parameters, and that these enormous models would just spontaneously develop all these capabilities?\n\nNo one. That is, no one aside from a few diehard connectionists written off as willfully-deluded old-school fanatics by the rest of the AI community (never mind the world), such as [Moravec](https://jetpress.org/volume1/moravec.htm \"'When will computer hardware match the human brain?', Moravec 1998\"), Schmidhuber, [Sutskever](https://www.youtube.com/watch?v=13CZPWmke6A \"'Ilya Sutskever: Deep Learning | AI Podcast #94 with Lex Fridman', 2020-05-08\"), Legg, & Amodei?\nOne of the more shocking things about looking back is realizing how unsurprising and easily predicted all of this was if you listened to the right people.\nIn 1998, 22 years ago, Moravec noted that AI research could be deceptive, and hardware limits meant that \"intelligent machine research did not make steady progress in its first 50 years, it marked time for 30 of them!\", predicting that as Moore’s law continued, \"things will go much faster in the next 50 years than they have in the last 50.\"\nMoravec further observed that part of the reason for rapid progress was the hardware overhang: while supercomputers of the necessary power would exist long before the connectionist revolution began, no one would be allowed to use them[^Jim-Gray], as they would be devoted to 'more important' (prestigious) hard STEM work, like \"physics simulations\" (ie. climate simulations & nuclear bombs)^[Strikingly, as of 2020, this is *still* true: eg. the only deep learning research I have seen done on [Summit](!W \"Summit (supercomputer)\") were [materials](https://arxiv.org/abs/1909.11150 \"'Exascale Deep Learning for Scientific Inverse Problems', Laanait et al 2019\") [science](https://arxiv.org/abs/2005.00223 \"'Pushing the limit of molecular dynamics with ab initio accuracy to 100 million atoms with machine learning', Jia et al 2020\") & [biology](https://arxiv.org/abs/2007.06225 \"'ProtTrans: Towards Cracking the Language of Life's Code Through Self-Supervised Deep Learning and High Performance Computing', Elnaggar et al 2020\"). 
(In double-checking Arxiv, I did find one non-STEM paper using Summit resources: [Lin et al 2019](https://arxiv.org/abs/1910.00932#google \"Training Kinetics in 15 Minutes: Large-scale Distributed Training on Videos\"), focusing on systems engineering in training a video classification model.)], and \"AI research must wait for the power to become more affordable.\"\nAffordable meaning a workstation roughly ~[$1000]($1998); sufficiently cheap compute to rival a human would arrive sometime in the 2020s, with the 2010s seeing affordable systems in the lizard--mouse range.\nAs it happens, the start of the DL revolution is typically dated to [AlexNet](!W) in 2012, by a grad student[^Norvig] using 2 GTX 580 3GB GPUs (launch list price of... [$500]($2010), for a system build cost of perhaps [$1500]($2012)).\n2020 saw GPT-3 arrive, and as discussed before, there are many reasons to expect the cost to fall, in addition to the large hardware compute gains that are being forecast for the 2020s despite the general deceleration of Moore’s law.^[[Jeff Dean](https://arxiv.org/abs/1911.05289#google \"'The Deep Learning Revolution and Its Implications for Computer Architecture and Chip Design', Dean 2019\") notes, \"It is perhaps unfortunate that just as we started to have enough computational performance to start to tackle interesting real-world problems and the increased scale and applicability of machine learning has led to a dramatic thirst for additional computational resources to tackle larger problems, the computing industry as a whole has experienced a dramatic slowdown in the year-over-year improvement of general purpose CPU performance.\" Under the computational view, this is not a coincidence: compute, not algorithms, are the critical factor; biological systems often come within orders of magnitude, or less, of the theoretical optimum for a task; and the closer one comes to optimal, the slower progress becomes; so, just as artificial computation finally starts doing \"interesting real-world problems\", it necessarily is approaching its limits. (It could have been otherwise: Moore’s law could have stopped short by many orders of magnitude of biological efficiency, or surpassed it by many orders, with no temporal coincidence, and AI happened for other reasons.)]\n\n[^Jim-Gray]: This seems to be a bit of a blind spot by commentators: the assumption that if the necessary resource *exists*, then ti will be *used*. For example, Jim Gray (d. 2007) in June 1999 [pokes a bit of fun](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/ms_tr_99_50_turingtalk.pdf#page=11 \"What Next? A Dozen Information-Technology Research Goals: 3. Turing's vision of machine intelligence\") at Turing's connectionist hardware argument by noting that (using an optimistic lower bound on human brain computational power):\n\n > Desktop machines should be about as intelligent as a spider or a frog, and supercomputers ought to be nearing human intelligence...So, we should start seeing intelligence in these supercomputers any day now (just kidding)...[but we do not because] we are missing something *very* fundamental. Clearly, the software and databases we have for our super-computers is not on a track to pass the Turing Test in the next decade. Something quite different is needed. Out-of-the-box, radical thinking is needed.\n >\n > We have been handed a puzzle: genomes and brains work. But we are clueless what the solution is. 
Understanding the answer is a wonderful long-term research goal.\n\n With the benefit of hindsight, we can say that it is true that supercomputers in 1999 could have been showing far more impressive levels of intelligence than they were, and that it was also true that the software being run on the supercomputers in 1999 were never going to lead to meaningful AI progress, and that there is no particular contradiction or mystery---it was simply that no one was trying. No supercomputer owner was going to let it be tied up for years doing the minor-yet-critical iteration to make connectionist approaches like RNNs or CNNs work. Thus, something quite different & radical was indeed needed---but we already knew what the solution looked like.\n[^Norvig]: [Peter Norvig](!W) [offers an example](https://wandb.ai/wandb_fc/gradient-dissent/reports/Peter-Norvig-Google-s-Director-of-Research-Singularity-is-in-the-eye-of-the-beholder--Vmlldzo2MTYwNjk?galleryTag=gradient-dissent \"Peter Norvig, Google’s Director of Research—Singularity is in the eye of the beholder: We're thrilled to have Peter Norvig who join us to talk about the evolution of deep learning, his industry-defining book, his work at Google, and what he thinks the future holds for machine learning research (2020-11-20)\") of what happens when grad students *can't* afford the necessary computing power to make neural nets work:\n\n > [**Lukas Biewald](!W)**: When you look at deep learning, it sort of feels like that came suddenly, but a lot of those techniques were around, in fact in your book, I remember quite far back. Do you think that the field missed something, or was it just not possible to run at the scale necessary to show that these neural network techniques were working better than people expected in the early aughts?\n >\n > **Peter Norvig**: Yeah. I mean, if you say suddenly, right, we've got a sudden leap in computer vision and image net after Hinton had been trying the same thing for 30 years, right?...And then it finally worked. And I think the biggest difference was the computing power. Definitely there were advances in data. So we could do [ImageNet](!W) because [Fei-Fei Li](!W) and others gathered this large database, and that was really important. There are certainly differences in the algorithm, right? We've got a slightly different [squashing function](!W \"Activation function\"). Instead of shaped like this \\[[sigmoid](!W \"Sigmoid function\")\\], it's shaped like this \\[[ReLU](!W)\\]. I mean, I don't know how big a deal that was, but we learned how to do [stochastic gradient descent](!W) a little bit better. We figured that [dropout](!W \"Dilution (neural networks)\") gave you a little bit better robustness.\n >\n > So there were small things, but I think probably the biggest was the computing power. And I mean, I certainly remember [Geoff Hinton](!W) came to Berkeley when I was a grad student in 1981, I think, when he talked about these neural nets. And we fellow grad students thought that was so cool. So we said, 'Let's go back into the lab and implement it.'\n >\n > And of course, there was absolutely nothing you could download, so we had to build it all from scratch. And we got it to do exclusive or \\[[XOR](!W)\\], and then we got it to do something a little bit more complicated. And it was exciting. And then we gave it the first real problem, and it ran overnight, and it didn't converge, and we let it run one more day, and it still didn't converge. 
And then we gave up, and we went back to our sort of knowledge-based systems approach. But if we had the computing power of today, it probably would have converged after 5 seconds.\n\n By my estimate, Norvig's attempt used the equivalent of 0.8 *milliseconds* of contemporary GPU-time.\n\n (In ~1981, an expensive PC costing the equivalent of >[$5,000]($2020), of the sort a high-powered AI lab might allocate 1 apiece to grad students, might have an additional [Intel 8087](!W) floating-point [coprocessor](!W) capable of 50,000 FP64 FLOPS; conservatively assuming that 'overnight' + 'one more day' ≤ 2 days, then Norvig's experiment used 2d × 24h × 60m × 60s × 50,000 = 8×10^9^ FLOPS; a 2020 Nvidia [A100](!W \"Ampere (microarchitecture)\") GPU nominally priced ~[$10,000]($2020) boasts 9.7 FP64 TFLOPS or 9,700,000,000,000 FLOPS (and far more in the more useful low-precision regimes like FP32, but 1981 ML didn't know that); thus, 8×10^9^ / 9.7×10^12^ = 8×10^−4^ seconds = 0.8 milliseconds.)\n\nThe accelerating pace of the last 10 years should wake anyone from their dogmatic slumber and make them sit upright.\nAnd there are 28 years left in Moravec's forecast...\n\nThe temptation, that many do not resist so much as revel in, is to give in to a _déformation professionnelle_ and dismiss any model as \"just\" this or that(\"just billions of IF statements\" or \"just a bunch of multiplications\" or \"just millions of memorized web pages\"), missing the forest for the trees, as Moravec commented of chess engines:\n\n> The event was notable for many reasons, but one especially is of interest here. Several times during both matches, Kasparov reported signs of mind in the machine. At times in the second tournament, he worried there might be humans behind the scenes, feeding Deep Blue strategic insights!...In all other chess computers, he reports a mechanical predictability stemming from their undiscriminating but limited lookahead, and absence of long-term strategy. In Deep Blue, to his consternation, he saw instead an \"alien intelligence.\"\n>\n> ...Deep Blue's creators know its *quantitative* superiority over other chess machines intimately, but lack the chess understanding to share Kasparov's deep appreciation of the difference in the *quality* of its play. I think this dichotomy will show up increasingly in coming years. Engineers who know the mechanism of advanced robots most intimately will be the last to admit they have real minds. From the inside, robots will indisputably be machines, acting according to mechanical principles, however elaborately layered. Only on the outside, where they can be appreciated as a whole, will the impression of intelligence emerge. A human brain, too, does not exhibit the intelligence under a neurobiologist's microscope that it does participating in a lively conversation.\n\nBut of course, if we ever succeed in AI, or in reductionism in general, it *must be by reducing Y to 'just X'*.\nShowing that some task requiring intelligence can be solved by a well-defined algorithm with no 'intelligence' is precisely what success must look like!\n(Otherwise, the question has been thoroughly begged & the problem has only been pushed elsewhere; computer chips are made of transistors, not especially tiny homunculi.)\n\n

> As long as the AI [OA5] can explore, it will learn, given enough time...We just kept waiting for the magic to run out. We kept waiting to hit a wall, and we never seemed to hit a wall.
>
> [Greg Brockman](https://qz.com/1311732/openai-built-gaming-bots-that-can-work-as-a-team-with-inhuman-precision "OpenAI built gaming bots that can work as a team with inhuman precision")

> Give it the compute, give it the data, and it will do amazing things. This stuff is like---it's like *alchemy*!
>
> [Ilya Sutskever](https://www.newyorker.com/magazine/2019/10/14/can-a-machine-learn-to-write-for-the-new-yorker "Can a Machine Learn to Write for The New Yorker? Extraordinary advances in machine learning in recent years have resulted in A.I.s that can write for you."), summer 2019

[Hindsight is 20⁄20.]{.marginnote} Even in 2015, [all the experts](https://news.ycombinator.com/item?id=9109140) assured us that AGI via the scaling hypothesis seemed highly dubious: you needed something to scale, after all, and it was all too easy to look at flaws in existing systems and imagine that they would never go away and progress would sigmoid any month now, soon.
Like the genomics revolution where a few far-sighted seers extrapolated that the necessary _n_ for GWASes would increase exponentially & deliver powerful PGSes soon, while sober experts wrung their hands over "missing heritability" & the miraculous complexity of biology & scoffed about how such _n_ requirements proved GWAS was a failed paradigm, the future arrived at first slowly and then quickly.
Yet, here we are: all honor to the fanatics, shame and humiliation to the critics!^[Now that GPT-3's few-shot and [T5 finetuning](https://arxiv.org/abs/2003.08380#google "'TTTTTackling WinoGrande Schemas', Lin et al 2020") have begun to make people like Gary Marcus feel slightly nervous about WinoGrande, they have [begun preparing](https://arxiv.org/abs/2004.13831 "'A Review of Winograd Schema Challenge Datasets and Approaches', Vid Kocijan, Thomas Lukasiewicz, Ernest Davis, Gary Marcus, Leora Morgenstern 2020") [their excuses](https://arxiv.org/abs/2201.02387 "'The Defeat of the Winograd Schema Challenge', Kocijan et al 2022") for why Winograd schemas [weren't *really*](/modus "'One Man’s Modus Ponens', Branwen 2012") good measures of commonsense reasoning/intelligence (because intelligence, of course, is whatever AI can't do yet).]
If only one could go back 10 years, or even 5, to watch every AI researcher's head explode reading this paper...
Unfortunately, few heads appear to be exploding now, because human capacity for hindsight & excuses is boundless ("I can get that much with finetuning, anyway I predicted it all along, how boring") and, unfortunately, ["there is no fire alarm"](https://intelligence.org/2017/10/13/fire-alarm/ "Yudkowsky 2017") for AGI.
(If you are still *certain* that there is near-zero probability of AGI in the next few decades, why?
Did you predict---in writing---capabilities like GPT-3?
Is this how you expect AI failure to look in the decades beforehand?
What specific task, what specific number, would convince you otherwise?
How would the world look different than it does now if these crude prototype insect-brain-sized DL systems were not on a path to success?)

[Authority without accountability.]{.marginnote} What should we think about the experts?
Projections of failure were made by eminent, respectable, serious people.
They spoke in considered tones of why AI hype was excessive and might trigger an "AI winter", and of the fundamental flaws of fashionable approaches and why brute force could not work.
These statements were made routinely in 2014, 2015, 2016... And they were wrong.
I am aware of few issuing a _mea culpa_ or reflecting on it.^[[Feynman](https://history.nasa.gov/rogersrep/v2appf.htm "Appendix F: Personal Observations on the Reliability of the Shuttle"): "There are several references to previous flights; the acceptance and success of these flights are taken as evidence of safety. But erosion and blowby are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in the unexpected and not thoroughly understood way.
The fact that this danger did not lead to catastrophe before is no guarantee that it will not the next time, unless it is completely understood.\"]\nIt is a puzzling failure, and I've [reflected on it before](/newsletter/2019/13#what-progress \"‘2019 News § What Progress?’, Branwen 2019\").\n\n[Phatic, not predictive.]{.marginnote} There is, however, a certain tone of voice the bien pensant all speak in, whose sound is the same whether right or wrong; a tone shared with many statements in January to March of this year; a tone we can also find in a 1940 _Scientific American_ article authoritatively titled, [\"Don't Worry---It Can't Happen\"](/doc/existential-risk/1940-sciam-harrington-nuclearweapons-dontworryitcanthappen.pdf), which advised the reader to not be concerned about it any longer \"and get sleep\".\n('It' was the atomic bomb, about which certain scientists had stopped talking, raising public concerns; not only could it happen, the British bomb project had already begun, and 5 years later it did happen.)\n\n[The iron law of bureaucracy: Cathedral gothic.]{.marginnote} This tone of voice is the voice of [authority](https://srconstantin.wordpress.com/2016/10/20/ra/ \"'Ra', Sarah Constantin 2016\"). \\\nThe voice of authority insists on calm, and people not \"panicking\" (the chief of sins). \\\nThe voice of authority assures you that it won't happen (because it can't happen). \\\nThe voice utters simple arguments about why the status quo will prevail, and considers only how the wild new idea could fail (and not all the possible options). \\\nThe voice is not, and does not deal in, uncertainty; things will either happen or they will not, and since it will not happen, there is no need to take any precautions (and you should not worry because it can't happen). \\\nThe voice does not believe in drawing lines on graphs (it is rank numerology). \\\nThe voice does not issue any numerical predictions (which could be falsified). \\\nThe voice will not share its source code (for complicated reasons which cannot be explained to the laity). \\\nThe voice is opposed to unethical things like randomized experiments on volunteers (but will overlook the insult). \\\nThe voice does not have a model of the future (because a model implies it does not already know the future). \\\nThe voice is concerned about its public image (and unkind gossip about it by other speakers of the voice). \\\nThe voice is always sober, respectable, and credentialed (the voice would be pleased to write an op-ed for your national magazine and/or newspaper). \\\nThe voice speaks, and is not spoken to (you cannot ask the voice what objective fact would change its mind). \\\nThe voice never changes its mind (until it does). \\\nThe voice is never surprised by events in the world (only disappointed). \\\nThe voice advises you to go back to sleep (right now).\n\nWhen someone speaks about future possibilities, what is the tone of their voice?\n\n[null](/scaling-hypothesis#blessings-of-scale){style=\"display:none;\"} \n[null](/scaling-hypothesis \"'The Scaling Hypothesis', Branwen 2020\"){style=\"display:none;\"} \n\n\n\n# Appendix\n## It From Byte\n\n

> Powerful generative models like GPT-3 learn to imitate agents and thus become agents when prompted appropriately. This is an inevitable consequence of training on huge amounts of human-generated data. This can be a problem.
>
> Is human data (or moral equivalents like DRL agents) *necessary*, and other kinds of data, such as physics data, free of this problem? (And so a safety strategy of filtering data could reduce or eliminate hidden agency.)
>
> I argue no: agency is not discrete or immaterial, but an ordinary continuum of capability, useful to a generative model in many contexts beyond those narrowly defined as 'agents', such as in the "intentional stance" or variational approaches to solving physics problems.
>
> Thus, a very wide range of problems, at scale, may surprisingly induce emergent agency.
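As a toy illustration of the claim this appendix defends (that plain generative modeling of logged behavior yields a policy, with no reward signal anywhere), here is a minimal sketch of my own construction, none of which comes from the original text: an 'expert' that noisily walks toward a goal state, a bare next-action frequency table fit to its logs, and a rollout of the resulting imitator, which steers itself into a goal it was never explicitly given.

```python
# Offline imitation ('behavioral cloning') in miniature: fit P(action | state)
# to logged expert behavior by counting, then sample from it. No reward, no
# planning code -- yet rollouts of the cloned model end up in the goal state.
import random
from collections import Counter, defaultdict

GOAL = 9

def expert_action(state):
    """The demonstrator: noisily steps right until the goal, then stays put."""
    if state >= GOAL:
        return 0
    return +1 if random.random() > 0.1 else -1

# 1. Collect an offline corpus of (state, action) pairs, like scraped text.
corpus = []
for _ in range(2000):
    s = random.randint(0, GOAL)
    for _ in range(30):
        a = expert_action(s)
        corpus.append((s, a))
        s = min(GOAL, max(0, s + a))

# 2. 'Train' the generative model: count next-action frequencies per state.
model = defaultdict(Counter)
for s, a in corpus:
    model[s][a] += 1

def imitator_action(s):
    """Sample an action from the learned conditional distribution."""
    actions, counts = zip(*model[s].items())
    return random.choices(actions, weights=counts)[0]

# 3. Roll out the imitator: it was never given a goal or a reward, yet it
#    reliably ends up at GOAL, because its demonstrators did.
s = 0
for _ in range(40):
    s = min(GOAL, max(0, s + imitator_action(s)))
print("imitator's final state:", s)    # almost always 9
```
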
\n\nI have previously argued that GPT-3 clearly shows agency because it is doing offline imitation learning (behavioral cloning, specifically) from the human-generated text data, and so it learns generative models of many agents, real or fictional.\nThese generative models offer agentic capabilities, because they can be used to prompt the model to ['roleplay'](/gpt-3#roleplaying)---plan & take action which will steer environments into small goal regions of state-space; and this is not merely hypothetical, or confined to text transcripts of actions & results in its internal simulated environments, but given effectors, like in the case of [SayCan](https://arxiv.org/abs/2204.01691#google \"‘Do As I Can, Not As I Say (SayCan): Grounding Language in Robotic Affordances’, Ahn et al 2022\"), a language model will in fact do such things in the real world.\n\nThat such systems may never have 'experienced the real world' or been trained deliberately on exact action sequences of malicious agents doesn't mean that they cannot generalize or imitate.\nA sufficiently accurate simulation of an agent just *is* an agent.\n(One can set up a prompt for GPT-3 to imitate Adolf Hitler and ask him how to regain power & resume exterminating the Jews and get back a semi-coherent high-level plan; this is unfortunate, and the simulacra need not even be of a real person---evil fictional characters plan evil things just as easily, because it's not hard to imagine what horrible things they *would* want to do.)\nThis doesn't seem all that different from accepted instances of reinforcement learning, like behavior learning or offline reinforcement learning: if you train on data from agents, whether humans or logged data from DRL agents, then the question is how would you *not* learn from all these examples how to act & be capable of pursuing goals?\nPresumably only if you were a stupid model, too small or given too little data.\n\nIf these are not 'agents', I don't know what is; or at least if critics insist on some sort of definition of 'agent' which excludes these, I think perhaps we should then abandon the word 'agent' entirely---because if giving a SayCan robot an instruction to 'fetch a can of Coke and bring it to me', with it using image inputs to construct step-by-step plans to find, possess, and return with the can, and successfully doing so often in real life on a real robot, does not count as an 'agent', then we need a word for such non-agent systems, so we can discuss their dangers.\n(If we define them as sub-agents because of lack of appendages and thus define all models as harmless non-agents, this is an unacceptable equivocation given the extreme carelessness and insouciance people display in hooking up their models the first chance they get to humans, APIs, search engines, or robots---hardly had the OpenAI GPT-3 API been launched in July 2020 than people were showing off using its basic HTML/CSS/JS abilities to drive web browsers, and large LM model developers like LaMDA or Adept display an unseemly eagerness to let it query arbitrary URLs without their paper even bothering to specify it was live.\nThe AI box hadn't even been invented before everyone decided to let their AI out of the box to be slightly more useful, as should come as no surprise---after all, [tool AIs *want* to be agent AIs](/tool-ai \"‘Why Tool AIs Want to Be Agent AIs’, Branwen 2016\").)\n\nBut one might wonder how far this goes: do we have agent AIs emerging from our tool AIs *only* because we trained them on so much agent-generated data? 
If we scrapped human text corpuses, full of text about humans planning and taking actions and obtaining goals, or video datasets stuffed full of agents doing stuff, and if we deleted image datasets as well because they are just snapshots of videos and depict agents & actions & environments full of traces of agency, would we then have a model which is now just a (relatively) safe tool AI, with no agency lurking?\n\nI would still say that there's a possibility, and maybe not even that small one: agency is not a discrete thing, but a continuum, which is a convergent instrumental drive / emergent capability because it is useful even for understanding \"non-agentic\" things.\n\n### All Is Atoms & Void\n\nFirst, there cannot be any principled, hard-and-fast, necessary distinction between data which is 'agentic' and data which is merely 'natural'.\nThis is because there is no such distinction in reality either: all 'agency' is constructed of non-agentic bits like atoms. There is no agency-particle, no pineal gland granting access to '*Genuine* Decision-Making™'.\nAn agentic human is made out of the same fundamental things as a clump of dust swirling in space, or rock, or a computer.\nIt must be the case that one could, starting only from simulations of (possibly a lot of) atoms, nothing but the most raw physics equations and atoms & the void, and eventually recapitulate the history of the universe and observe things like the origin of life and humans. Thus, one turns non-agentic data (physics equations) into agentic data.\n\nOK, but barring a hypercomputer, that is unlikely to happen.\nIf we consider realistic levels of compute, like contemporary NNs, trained on less-than-everything-in-the-universe & apparently harmless data like, say, the hydrology of rivers flowing downhill (eg. for flood prevention), or the trajectory of the solar system, surely none of that agency will evolve---no amount of modeling the chaotic dynamics of Pluto will give you any help in modeling the dynamics of astronomy infighting about whether Pluto is a planet, right?\n\n### Intentional Interpretive Stance\n\nHere again I differ, and invoke [Daniel Dennett's](https://en.wikipedia.org/wiki/Daniel_Dennett) [intentional stance](https://en.wikipedia.org/wiki/Intentional_stance).\nHumans do, in fact, model natural systems like these as agents.\nWe find such teleological explanations indispensable for intuition and shortcut reasoning across many natural systems.\n\n#### Variational Interpretations\n\n[Janus comments](https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), apropos of their emphasis on what I might call a 'world-modeling-centric' intuition for GPT-3 vs my 'agent-centric' view that:\n\n> For example, Gwern has said that anyone who uses GPT for long enough begins to think of it as an agent who only cares about roleplaying a lot of roles. That framing seems unnatural to me, comparable to thinking of physics as an agent who only cares about evolving the universe accurately according to the laws of physics. 
At best, the agent is an epicycle; but it is also compatible with interpretations that generate dubious predictions.\n\nI embrace that description: it is in fact natural and not an elaborate epicycle on a geocentric model of the world, but rather, heliocentrism---powerful, and useful, and simpler.\nThat it (also like heliocentrism[^Wittgenstein]) may feel counterintuitive is unfortunate, but its virtues are proven.\n\n[^Wittgenstein]: As my favorite Wittgenstein anecdote goes, heliocentrism strikes everyone as false because things just don't *look* as if the Earth whirls at astronomical velocities around a star, but as if the Earth is perfectly still and everything else whirls around it (Anscombe 1963, _An Introduction to Wittgenstein’s Tractatus_):\n\n > The general method that Wittgenstein does suggest is that of 'shewing that a man has supplied no meaning [\"no reference\"?] for certain signs in his sentences'. I can illustrate the method from Wittgenstein's later way of discussing problems. He once greeted me with the question: 'Why do people say that it was natural to think that the sun went round the earth rather than that the earth turned on its axis? I replied: 'I suppose, because it looked as if the sun went round the earth.' 'Well,' he asked, 'what would it have looked like if it *had* looked as if the earth turned on its axis?'\n >\n > This question brought it out that I had hitherto given no relevant meaning to 'it looks as if' in 'it looks as if the sun goes round the earth'. My reply was to hold out my hands with the palms upward, and raise them from my knees in a circular sweep, at the same time leaning backwards and assuming [a dizzy](https://twitter.com/Brummo/status/1320138187763691520 \"'Here’s another stabilized sky timelapse, this time at Crater Lake, Oregon. The water was still for most of it, which created a nice mirror for the stars. I also got my astro-modified camera working, which provides more vibrancy in the nebulae in the Milky Way. #EppurSiMuove', Eric Brummel 2020-10-24\") [expression](https://www.youtube.com/watch?v=h714VOr-6nY \"'Star Timelapse Revealing the Earth’s Rotation', Alex Rivest 2014-12-11\"). 'Exactly!' 
he said.

We err if an intentional stance leads us to engage in the pathetic fallacy and say that the river-spirit wants to reunite with the ocean (and we must offer sacrifices lest the dikes breach), but we are correct when we say that the river tries to find the optimal path which minimizes its gravitational or [free energy](https://en.wikipedia.org/wiki/Principle_of_minimum_energy).
It is both true, predictively useful, and mathematically equivalent to the other way of formulating it, in terms of 'forward' processes computing step by step, atom by atom, and getting the same answer---but typically much easier to solve.
(Ted Chiang's ["Story Of Your Life"](/story-of-your-life "‘‘Story Of Your Life’ Is Not A Time-Travel Story’, Branwen 2012") tries to convey this perspective via fiction.)
And this shortcut is a trick we can use universally, for everything from a river flowing downhill to the orbit of a planet to the path of a photon through water [minimizing travel time](https://en.wikipedia.org/wiki/Fermat%27s_principle) to evolutionary dynamics: instead of trying to understand it step by step, treat the system as a whole via the [variational principle](https://en.wikipedia.org/wiki/Variational_principle) as 'wanting' to minimize (or maximize) some simple global quantity (a reward), and picking the sequence of actions that does so.
("The river *wants* to minimize its height, so without simulating it down to the individual water currents, I can look at the map and see that it should 'choose' to go left, then right, and then meander over this flat slightly-sloping part. Ah, looks like I was right.")
Then, into this modular trick, just plug in the system and quantity in question, and think like an agent...^[This connection is more than superficial---a lot of RL work draws on formal analogies to physics and variational principles.]
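To make the shortcut concrete, here is a minimal sketch (a toy example of my own, prompted by the Fermat's-principle link above, not code from the text): compute where a light ray crosses an air/water boundary by treating the ray as 'wanting' to minimize its travel time, then check that the answer agrees with the mechanistic formulation, Snell's law.

```python
# Variational ('teleological') computation: the ray 'wants' to minimize travel
# time across the boundary; we just search for the minimizing crossing point.
# Mechanistic check: at that point, Snell's law n1*sin(t1) = n2*sin(t2) holds.
import numpy as np

n_air, n_water = 1.0, 1.33        # refractive indices (speed = c/n)
ax, ay = 0.0, 1.0                 # source, 1 unit above the boundary y=0
bx, by = 1.0, -1.0                # target, 1 unit below the boundary

def travel_time(x):
    """Optical path length (time x c) if the ray crosses the boundary at (x, 0)."""
    return n_air * np.hypot(x - ax, ay) + n_water * np.hypot(bx - x, by)

xs = np.linspace(0.0, 1.0, 1_000_001)
x_star = xs[np.argmin(travel_time(xs))]        # the 'chosen' crossing point

sin1 = (x_star - ax) / np.hypot(x_star - ax, ay)
sin2 = (bx - x_star) / np.hypot(bx - x_star, by)
print(f"crossing point: x = {x_star:.4f}")
print(f"n1*sin1 = {n_air * sin1:.4f}, n2*sin2 = {n_water * sin2:.4f}")  # equal at the minimum
```
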

Uh oh. 'Predictively useful', 'shortcut', 'much easier', 'universally'---all properties a neural net loves.
All natural to it.
Why would it try to solve each heterogeneous problem with a separate, computationally-expensive, bag of tricks, when there's one weird trick AI safety researchers hate, like adopting teleological and variational reasoning?

#### Inducing Emergence Is Expensive

Of course, this frame can be more expensive than solving a problem directly.
Variational approaches are powerful but counterintuitive, and there are often many simpler approximations or memorizations that a model can use instead.
For a *single* problem like modeling the orbit of Pluto, it is unlikely that any variational approach would be learned. Why would it, when there is only 1 system and 1 quantity minimized, so they can just be assumed?
This is similar to other [model capabilities induced by pretraining](#why-does-pretraining-work): things like induction heads or meta-learning or counting or reasoning need to pay their way, and are not superior to alternatives right from the start.
They need rich enough models to compute them feasibly, enough data to force them out of easier solutions (which will fail on a few rare datapoints), and enough training (to work through all the possibilities to converge on the better capabilities).

#### What Can Induce Agency Emergence?

Unfortunately, this is an empirical matter.
How many datasets? How big is each dataset? How diverse do they have to be?
What even is a 'dataset', since we can always lump or split it?
We struggle to predict when a capability will develop in GPT-3, so we definitely can't say a priori that "Pluto is safe to model, but then tossing in a few thousand exoplanet solar systems would begin to elicit a define-system/plug-in-reward/maximize module and bring back agency".

##### Cellular Automatons

It would also be hard to say at all what mathematical or physical systems exhibit the right kinds of maximizing behavior which can be generalized to an intentional stance.
Does the ultra-abstract & simple [cellular automaton](https://en.wikipedia.org/wiki/Cellular_automaton) [Conway's Game of Life](https://en.wikipedia.org/wiki/Conway%27s_Game_of_Life) (GoL) induce an intentional stance?
It has no agents, no biology, no evolution in the usual sense, but it does have many small patterns which can be usefully chunked and a specific GoL board understood that way.
Humans, of course, look at a GoL as a bunch of small entities like 'gliders', but a NN given randomly-initialized boards may see the same thing, because most GoL patterns will die out or reach fixed-points like [gliders](https://en.wikipedia.org/wiki/Glider_(Conway%27s_Life)) or [still-life](https://en.wikipedia.org/wiki/Still_life_(cellular_automaton)) patterns.
And once you are talking about gliders wandering out unless they run into a still-life block which kills them, you are much of the way to an intentional stance---not modeling a glider as the inexorable outcome of applying this and that rule about the local-neighborhood to a million cells of equal importance, but as a specific entity of interest against an ignored background of dead cells, which will travel around and shoot off to infinity or meet its doom.
So, I wouldn't want to bet too much on GoL being unable to induce any transfer.

##### Turing Machine

Can we go even broader?
How about, not natural physics systems, nor specific abstractions of interest to humans (GoL is especially interesting among cellular automatons, and we ignore the large space of CA rules which define a CA but which do nothing interesting), but all Turing machines: let's say random rules with some length-biased sample of random programs which we dovetail & treat the available tapes as a sequence prediction problem.
There is no more general computable setting, after all.

###### Single TM

Would training on a random Turing machine risk the possibility of agency?

Maybe not.
For a single TM, this might foster some capabilities like instruction-following (for the same reason that pretraining on source code, especially source code augmented with state logs, is a powerful prior for many tasks), but it does not seem to have any of the traits that would induce agency.
There is nothing that random TM programs try to minimize or maximize; they simply run.
They don't try to maximize run time length (terminating or non-terminating), or write as few or as many places on the tape as possible, or to achieve particular patterns.
A model would simply learn the TM rules and attempt to approximate them as best as it can given its own limited feedforward neural net resources; eventually, if it can work iteratively or recurrently, it would learn the rules and generalize perfectly, and no further learning occurs.
Classifying TM programs by whether they halt doesn't help: yes, the Busy Beaver 'wants' to maximize something, but that's just by definition as the longest terminating program; there are many more TM programs which are 'happy'
to halt very quickly.\nSo predicting halting status may learn things, but also still nothing that prima facie looks like agency.\n\n###### TM Meta-Learning\n\nThis might be due to there being only a single TM, making it analogous to training only on Pluto.\nPerhaps the right setting would be training over *many* TM rules (and programs within each one).\nThis is what a researcher would be more interested in, since few TMs are of any intrinsic interest, nor do we know the One True Turing Machine™; we'd rather have a neural network which is learning to learn TMs, or meta-learning, and training a NN over many environments drawn from a distribution is the easiest way to induce meta-learning.\nSo what if we trained a model to do sequence prediction of a random TM + random program, without reuse?\nIf single random Turing machines are harmless, how about all of them?\n\nHm, well...\nIt's worth noting how Alan Turing introduced the Turing machine formalism: as a general setting in which a *man* read and executed sets of rules about how to mark up a paper tape.\nSo even in the original formulation of computers as tools which merely do what they are programmed to do, we have a homunculus at the center!\nThis homunculus could do (and given different instructions, would) anything to the tape, but he wants to follow the current set of instructions accurately, until he's done.\nIn each draw from the TM+program distribution, he is following a different set of instructions, and now the model is attempting to infer what he wants, to as quickly as possible begin predicting the sequence accurately by recomputing it.\n\nThis provides our modularity, and a particular computation executed, and strong optimization pressure to rapidly 'read' the history and infer what the new rules must be.\nThat may not have a clean reward-maximizing interpretation, but it *does* sound a lot like what anyone does with an agent of any kind: the inverse reinforcement learning problem of inferring the reward function can be arbitrarily hard, and until we succeed at that, we instead infer local rules & patterns, which target particular outcomes (regions of state-space).\nYou may not know why your neighbor does that weird thing he does, but you can infer that he will do it, and not another agent, not even his evil identical twin.\nIs inferring TM rules the simplest & most rudimentary possible 'theory of mind'?\nMaybe. 
In which case, there is no escape from the possibility of agency anywhere.

### Ambient Agency

Agency may be like [Turing-completeness](/turing-complete "‘Surprisingly Turing-Complete’, Branwen 2012"): even in settings free of selection or optimization, it is a capability too useful and too convergent to guarantee its absence.
The broader and more powerful a system is, the more the next feature or next piece of data may push it over the edge, and it becomes harder to engineer a system *without* that aspect.

Agency can be learned from data generated by agents, who generate extremely selective data.
Or if you carefully remove all that, it may come from the selection of non-human data.
Or it may be implicit in the dynamics of a replicator system.
Or it may be one of the countless physical systems which have agentic interpretations that are computationally more efficient, so that any NN optimized to balance realizable compute with accuracy will be pushed to such interpretations.
Or it may be a good simplification of systems with macro-statistics where the detailed micro-state adds little.
Or it may stem simply from meta-learning of rule induction on TMs, because agents may follow complex sets of policies which are learnable even while the motivating reward-function remains an under-determined blackbox.

Or... like squashing Turing-completeness, as soon as one hole in the sinking ship is patched, you notice another leak spring up.
You can't keep a good idea down.
All you can do is make a complex system that doesn't display agency as far as *you* can tell; unfortunately, much like Turing-completeness (or security vulnerabilities), the absence of overt agency doesn't mean it is not there.
The model won't tell you; it is just getting on with the job of lowering its loss.
("Sampling can show the presence of knowledge, but not the absence.")

I do not have any solutions to this, other than to advise yet again to abandon the seductive, convenient, but wrong idea that tool AIs (under any branding, be it 'tool AIs' or 'physics generative models' or 'world simulators') cannot or will not be agent AIs.
They may well be, and the better they get, the more likely it is, and tampering with data is not a solution.

---
title: The Neural Net Tank Urban Legend
description: AI folklore tells a story about a neural network trained to detect tanks which instead learned to detect time of day; investigating, this probably never happened.
created: 2011-09-20
modified: 2019-08-14
status: finished
previous: /tool-ai
next: /hyperbolic-time-chamber
confidence: highly likely
importance: 4
cssExtension: drop-caps-kanzlei
...

> A cautionary tale in artificial intelligence tells about researchers training a neural network (NN) to detect tanks in photographs, succeeding, only to realize the photographs had been collected under specific conditions for tanks/non-tanks and the NN had learned something useless like time of day. This story is often told to warn about the limits of algorithms and the importance of data collection to avoid "dataset bias"/"data leakage", where the collected data can be solved using algorithms that do not generalize to the true data distribution; but the tank story is usually never sourced.
>
> I collate many extant versions dating back a quarter of a century to 1992, along with two NN-related anecdotes from the 1960s; their contradictions & details indicate a classic "urban legend", with a probable origin in a speculative question in the 1960s by Edward Fredkin at an AI conference about some early NN research, which was then classified & never followed up on.
>
> I suggest that dataset bias is real but exaggerated by the tank story, giving a misleading indication of the risks from deep learning, and that it would be better not to repeat it but to use real examples of dataset bias and focus on larger-scale risks like AI systems optimizing for wrong utility functions.
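Before surveying the versions of the story, it may help to see the claimed failure mode in miniature. The following toy synthetic sketch is my own (it is not drawn from any of the accounts below): the 'tank' label is confounded with image brightness in the collected data, so a simple classifier aces a held-out split collected the same way while learning nothing about tanks, then collapses once the confound no longer holds.

```python
# Dataset bias / leakage in miniature: brightness, not content, carries the label
# in the collected data, so the classifier 'succeeds' without learning the target.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def fake_photos(n, bright):
    """100 'pixels' per photo; overall brightness is the only real signal here."""
    base = 0.7 if bright else 0.3
    return base + 0.05 * rng.standard_normal((n, 100))

def confounded_batch(n, tanks_on_cloudy_days):
    """Label 1 = tank, 0 = no tank; brightness tracks the label, not the content."""
    X = np.vstack([fake_photos(n, bright=not tanks_on_cloudy_days),   # 'tank' photos
                   fake_photos(n, bright=tanks_on_cloudy_days)])      # 'empty' photos
    y = np.array([1] * n + [0] * n)
    return X, y

X_train, y_train = confounded_batch(200, tanks_on_cloudy_days=True)
X_val,   y_val   = confounded_batch(200, tanks_on_cloudy_days=True)    # same collection bias
X_test,  y_test  = confounded_batch(200, tanks_on_cloudy_days=False)   # bias reversed

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy (same confound):", clf.score(X_val, y_val))    # ~1.0
print("test accuracy (confound reversed):  ", clf.score(X_test, y_test))  # ~0.0 (worse than chance)
```
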

Deep learning's rise over the past decade and dominance in image processing tasks has led to an explosion of applications attempting to infer high-level semantics locked up in raw sensory data like photographs.
Convolutional neural networks are now applied to not just ordinary tasks like [sorting cucumbers by quality](https://cloud.google.com/blog/products/gcp/how-a-japanese-cucumber-farmer-is-using-deep-learning-and-tensorflow "'How a Japanese cucumber farmer is using deep learning and TensorFlow', Sato 2016") but everything from predicting the best Go move to [where in the world](https://arxiv.org/abs/1602.05314#deepmind "'PlaNet - Photo Geolocation with Convolutional Neural Networks', Weyand et al 2016") a photograph was taken to whether it is ["interesting"](https://ai.googleblog.com/2018/05/automatic-photography-with-google-clips.html "Automatic Photography with Google Clips") or ["pretty"](https://ai.googleblog.com/2017/07/using-deep-learning-to-create.html "Using Deep Learning to Create Professional-Level Photographs"), not to mention supercharging traditional tasks like radiology interpretation or facial recognition, which have reached levels of accuracy that could only be dreamed of decades ago.
With this approach of "neural net *all the things*!", the question of to what extent the trained neural networks are useful in the real world and will do what we *want* them to do & not what we *told* them to do has taken on additional importance, especially given the possibility of neural networks learning to accomplish extremely inconvenient things like inferring individual human differences such as criminality or homosexuality (to give two highly controversial recent examples where the meaningfulness of the claimed successes has been severely questioned).

In this context, a cautionary story is often told of incautious researchers decades ago who trained a NN for the military to find images of tanks, only to discover they had trained a neural network to detect something else entirely (what, precisely, that something else was varies in the telling).
It would be a good & instructive story...
if it were true.\nIs it?\n\nAs it would be so useful a cautionary example for AI safety/alignment research, and was cited to that effect by Eliezer Yudkowsky but only to a secondary source, I decided to make myself useful by finding a proper primary source for it & see if there were more juicy details worth mentioning.\nMy initial attempt failed, and I & several others failed for over more than half a decade to find any primary source (just secondary sources citing each other).\nI began to wonder if it was even real.\n\nTrying again more seriously, I conclude that, unfortunately, it is definitely not real as usually told: it is just an urban legend/leprechaun; and in fact, the seed of the story *could not* have run into the issue the tank story warns about, because they correctly constructed their training dataset to avoid such issues.\nMore broadly, considering that issue in contemporary deep learning, the issue it cautions against is real but not that important and conflated with more dangerous safety/alignment problems.\n\n# Did It Happen?\n\n## Versions of the Story\n\nDrawing on [the usual suspects](/search \"'Internet Search Tips', Branwen 2018\") (Google/Google Books/Google Scholar/Libgen/LessWrong/Hacker News/Twitter) in [investigating leprechauns](/leprechaun \"'Leprechaun Hunting & Citogenesis', Branwen 2014\"), I have compiled a large number of variants of the story; below, in reverse chronological order by decade, letting us trace the evolution of the story back towards its roots:\n\n### 2010s\n\nHeather Murphy, [\"Why Stanford Researchers Tried to Create a 'Gaydar' Machine\"](https://www.nytimes.com/2017/10/09/science/stanford-sexual-orientation-study.html) (NYT), 2017-10-09:\n\n> *So What Did the Machines See?* Dr. Kosinski and Mr. Wang [[Wang & Kosinski 2018](https://files.osf.io/v1/resources/hv28a/providers/osfstorage/59ab119b594d9002537d360c?action=download&version=10&direct#pdf \"Deep neural networks are more accurate than humans at detecting sexual orientation from facial images\"); see also [Leuner 2019](#leuner-2019)/[Kosinski 2021](https://www.nature.com/articles/s41598-020-79310-1 \"Facial recognition technology can expose political orientation from naturalistic facial images\")] say that the algorithm is responding to fixed facial features, like nose shape, along with \"grooming choices,\" such as eye makeup. But it's also possible that the algorithm is seeing something totally unknown. \"The more data it has, the better it is at picking up patterns,\" said Sarah Jamie Lewis, an independent privacy researcher who Tweeted a critique of the study. \"But the patterns aren't necessarily the ones you think that they are.\" [Tomaso Poggio](!W), the director of M.I.T.'s Center for Brains, Minds and Machines, offered a classic parable used to illustrate this disconnect. The Army trained a program to differentiate American tanks from Russian tanks with 100% accuracy. Only later did analysts realized that the American tanks had been photographed on a sunny day and the Russian tanks had been photographed on a cloudy day. The computer had learned to detect brightness. Dr. Cox has spotted a version of this in his own studies of dating profiles. Gay people, he has found, tend to post higher-quality photos. Dr. Kosinski said that they went to great lengths to guarantee that such confounders did not influence their results. 
Still, he agreed that it's easier to teach a machine to see than to understand what it has seen.\n\n[It is worth noting that [Arcs et al's criticisms](https://medium.com/@blaisea/do-algorithms-reveal-sexual-orientation-or-just-expose-our-stereotypes-d998fafdf477 \"Do algorithms reveal sexual orientation or just expose our stereotypes?\"), such as their 'gay version' photographs, do not appear to have been confirmed by an [independent replication](https://arxiv.org/abs/1902.10739 \"'A Replication Study: Machine Learning Models Are Capable of Predicting Sexual Orientation From Facial Images', Leuner 2019\").]\n\nAlexander Harrowell, [\"It was called a perceptron for a reason, damn it\"](https://www.harrowell.org.uk/blog/2017/09/30/it-was-called-a-perceptron-for-a-reason-damn-it/), 2017-09-30:\n\n> You might think that this is rather like one of the classic optical illusions, but it's worse than that. If you notice that you look at something this way, and then that way, and it looks different, you'll notice something is odd. This is not something our deep learner will do. Nor is it able to identify any bias that might exist in the corpus of data it was trained on...or maybe it is. If there is any property of the training data set that is strongly predictive of the training criterion, it will zero in on that property with the ferocious clarity of Darwinism. In the 1980s, an early backpropagating neural network was set to find Soviet tanks in a pile of reconnaissance photographs. It worked, until someone noticed that the Red Army usually trained when the weather was good, and in any case the satellite could only see them when the sky was clear. The medical school at St Thomas' Hospital in London found theirs had learned that their successful students were usually white.\n\nAn interesting story with a distinct \"family resemblance\" is told about a NN classifying wolves/dogs, by Evgeniy Nikolaychuk, [\"Dogs, Wolves, Data Science, and Why Machines Must Learn Like Humans Do\"](https://medium.com/veon-careers/dogs-wolves-data-science-and-why-machines-must-learn-like-humans-do-213b08036a10), 2017-06-09:\n\n> Neural networks are designed to learn like the human brain, but we have to be careful. This is not because I'm scared of machines taking over the planet. Rather, we must make sure machines learn correctly. One example that always pops into my head is how one neural network learned to differentiate between dogs and wolves. It didn't learn the differences between dogs and wolves, but instead learned that wolves were on snow in their picture and dogs were on grass. It learned to differentiate the two animals by looking at snow and grass. Obviously, the network learned incorrectly. What if the dog was on snow and the wolf was on grass? Then, it would be wrong.\n\nHowever, in his source, [\"'Why Should I Trust You?' Explaining the Predictions of Any Classifier [LIME]\"](https://arxiv.org/abs/1602.04938 \"'\"Why Should I Trust You?\": Explaining the Predictions of Any Classifier', Ribeiro et al 2016\"), Ribeiro et al 2016, they specify of their dog/wolf snow-detector NN that they \"trained this *bad* classifier intentionally, to evaluate whether subjects are able to detect it [the bad performance]\" using LIME for insight into how the classifier was making its classification, concluding that \"After examining the explanations, however, almost all of the subjects identified the correct insight, with much more certainty that it was a determining factor. 
Further, the trust in the classifier also dropped substantially.\"\nSo Nikolaychuk appears to have misremembered.\n(Perhaps in another 25 years students will be told in their classes of how a NN was once trained by ecologists to count wolves...)\n\n[Redditor mantrap2](https://www.reddit.com/r/MachineLearning/comments/3ailzi/suddenly_a_leopard_print_sofa_appears/csczkqg/) gives on 2015-06-20 this version of the story:\n\n> I remember this kind of thing from the 1980s: the US Army was testing image recognition seekers for missiles and was getting excellent results on Northern German tests with NATO tanks. Then they tested the same systems in other environment and there results were suddenly shockingly bad. Turns out the image recognition was keying off the trees with tank-like minor features rather than the tank itself. Putting other vehicles in the same forests got similar high hits but tanks by themselves (in desert test ranges) didn't register. Luckily a sceptic somewhere decided to \"do one more test to make sure\".\n\nDennis Polis, _God, Science and Mind_, 2012 (pg131, limited Google Books snippet, unclear what ref 44 is):\n\n> These facts refute a Neoplatonic argument for the essential immateriality of the soul, _viz._ that since the mind deals with _universal_ representations, it operates in a specifically immaterial way...So, awareness is not explained by connectionism. The results of neural net training are not always as expected. One team intended to train neural nets to recognize battle tanks in aerial photos. The system was trained using photos with and without tanks. After the training, a different set of photos was used for evaluation, and the system failed miserably---being totally incapable of distinguishing those with tanks. The system actually discriminated cloudy from sunny days. It happened that all the training photos with tanks were taken on cloudy days, while those without were on clear days.^44^ What does this show? That neural net training is mindless. The system had no *idea* of the intent of the enterprise, and did what it was programmed to do without any concept of its *purpose*. As with Dawkins' evolution simulation (p. 66), the goals of computer neural nets are imposed by human programmers.\n\nBlay Whitby, [_Artificial Intelligence: A Beginner's Guide_](https://books.google.com/books?id=TKOfhnUhgS4C) 2012 (pg53):\n\n> It is not yet clear how an artificial neural net could be trained to deal with \"the world\" or any really open-ended sets of problems. Now some readers may feel that this unpredictability is not a problem. After all, we are talking about training not programming and we expect a neural net to behave rather more like a brain than a computer. Given the usefulness of nets in unsupervised learning, it might seem therefore that we do not really need to worry about the problem being of manageable size and the training process being predictable. This is not the case; we really do need a manageable and well-defined problem for the training process to work. A famous AI urban myth may help to make this clearer.\n>\n> The story goes something like this. A research team was training a neural net to recognize pictures containing tanks. (I'll leave you to guess why it was tanks and not tea-cups.) To do this they showed it two training sets of photographs. One set of pictures contained at least one tank somewhere in the scene, the other set contained no tanks. The net had to be trained to discriminate between the two sets of photographs. 
Eventually, after all that back-propagation stuff, it correctly gave the output \"tank\" when there was a tank in the picture and \"no tank\" when there wasn't. Even if, say, only a little bit of the gun was peeping out from behind a sand dune it said \"tank\". Then they presented a picture where no part of the tank was visible---it was actually completely hidden behind a sand dune---and the program said \"tank\".\n>\n> Now when this sort of thing happens research labs tend to split along age-based lines. The young hairs say \"Great! We're in line for the Nobel Prize!\" and the old heads say \"Something's gone wrong\". Unfortunately, the old heads are usually right---as they were in this case. What had happened was that the photographs containing tanks had been taken in the morning while the army played tanks on the range. After lunch the photographer had gone back and taken pictures from the same angles of the empty range. So the net had identified the most reliable single feature which enabled it to classify the two sets of photos, namely the angle of the shadows. \"AM = tank, PM = no tank\". This was an extremely effective way of classifying the two sets of photographs in the training set. What it most certainly was *not* was a program that recognizes tanks. The great advantage of neural nets is that they find their own classification criteria. The great problem is that it may not be the one you want!\n\n[Thom Blake](https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories4v4a) notes in 2011-09-20 that the story is:\n\n> Probably apocryphal. I haven't been able to track this down, despite having heard the story both in computer ethics class and at academic conferences.\n\n[\"Embarrassing mistakes in perceptron research\"](https://www.webofstories.com/play/marvin.minsky/122), Marvin Minsky, 2011-01-31:\n\n> Like I had a friend in Italy who had a perceptron that looked at a visual... it had visual inputs. So, he... he had scores of music written by Bach of chorales and he had scores of chorales written by music students at the local conservatory. And he had a perceptron---a big machine---that looked at these and those and tried to distinguish between them. And he was able to train it to distinguish between the masterpieces by Bach and the pretty good chorales by the conservatory students. Well, so, he showed us this data and I was looking through it and what I discovered was that in the lower left hand corner of each page, one of the sets of data had single whole notes. And I think the ones by the students usually had four quarter notes. So that, in fact, it was possible to distinguish between these two classes of... of pieces of music just by looking at the lower left... lower right hand corner of the page. So, I told this to the... to our scientist friend and he went through the data and he said: 'You guessed right. That's... that's how it happened to make that distinction.' We thought it was very funny.\n>\n> A similar thing happened here in the United States at one of our research institutions. Where a perceptron had been trained to distinguish between---this was for military purposes---It could... it was looking at a scene of a forest in which there were camouflaged tanks in one picture and no camouflaged tanks in the other. And the perceptron---after a little training---got... made a 100% correct distinction between these two different sets of photographs. Then they were embarrassed a few hours later to discover that the two rolls of film had been developed differently. 
And so these pictures were just a little darker than all of these pictures and the perceptron was just measuring the total amount of light in the scene. But it was very clever of the perceptron to find some way of making the distinction.\n\n### 2000s\n\n[Eliezer Yudkowsky](https://www.yudkowsky.net/), [2008-08-24](https://www.lesswrong.com/posts/PoDAyQMWEXBBBEJ5P/magical-categories) (similarly quoted in [\"Artificial Intelligence as a Negative and Positive Factor in Global Risk\"](https://intelligence.org/files/AIPosNegFactor.pdf), \"Artificial Intelligence in global risk\" in _Global Catastrophic Risks_ 2011, & \"Friendly Artificial Intelligence\" in _Singularity Hypotheses_ 2013):\n\n> Once upon a time---I've seen this story in several versions and several places, sometimes cited as fact, but I've never tracked down an original source---once upon a time, I say, the US Army wanted to use neural networks to automatically detect camouflaged enemy tanks. The researchers trained a neural net on 50 photos of camouflaged tanks amid trees, and 50 photos of trees without tanks. Using standard techniques for supervised learning, the researchers trained the neural network to a weighting that correctly loaded the training set---output \"yes\" for the 50 photos of camouflaged tanks, and output \"no\" for the 50 photos of forest. Now this did not prove, or even imply, that new examples would be classified correctly. The neural network might have \"learned\" 100 special cases that wouldn't generalize to new problems. Not, \"camouflaged tanks versus forest\", but just, \"photo-1 positive, photo-2 negative, photo-3 negative, photo-4 positive...\" But wisely, the researchers had originally taken 200 photos, 100 photos of tanks and 100 photos of trees, and had used only half in the training set. The researchers ran the neural network on the remaining 100 photos, and *without further training* the neural network classified all remaining photos correctly. Success confirmed! The researchers handed the finished work to the Pentagon, which soon handed it back, complaining that in their own tests the neural network did no better than chance at discriminating photos. It turned out that in the researchers' data set, photos of camouflaged tanks had been taken on cloudy days, while photos of plain forest had been taken on sunny days. The neural network had learned to distinguish cloudy days from sunny days, instead of distinguishing camouflaged tanks from empty forest. This parable---which might or might not be fact---illustrates one of the most fundamental problems in the field of supervised learning and in fact the whole field of Artificial Intelligence...\n\nGordon Rugg, [_Using Statistics: A Gentle Introduction_](https://books.google.com/books?id=S9lsBnV7txoC), 2007-10-01 (pg114--115):\n\n> *Neural nets and genetic algorithms (including the story of the Russian tanks)*: Neural nets (or artificial neural networks, to give them their full name) are pieces of software inspired by the way the human brain works. In brief, you can train a neural net to do tasks like classifying images by giving it lots of examples, and telling it which examples fit into which categories; the neural net works out for itself what the defining characteristics are for each category. Alternatively, you can give it a large set of data and leave it to work out connections by itself, without giving it any feedback. There's a story, which is probably an urban legend, which illustrates how the approach works and what can go wrong with it. 
According to the story, some NATO researchers trained a neural net to distinguish between photos of NATO and Warsaw Pact tanks. After a while, the neural net could get it right every time, even with photos it had never seen before. The researchers had gleeful visions of installing neural nets with miniature cameras in missiles, which could then be fired at a battlefield and left to choose their own targets. To demonstrate the method, and secure funding for the next stage, they organised a viewing by the military. On the day, they set up the system and fed it a new batch of photos. The neural net responded with apparently random decisions, sometimes identifying NATO tanks correctly, sometimes identifying them mistakenly as Warsaw Pact tanks. This did not inspire the powers that be, and the whole scheme was abandoned on the spot. It was only afterwards that the researchers realised that all their training photos of NATO tanks had been taken on sunny days in Arizona, whereas the Warsaw Pact tanks had been photographed on grey, miserable winter days on the steppes, so the neural net had flawlessly learned the unintended lesson that if you saw a tank on a gloomy day, then you made its day even gloomier by marking it for destruction.\n\nN. Katherine Hayles, \"Computing the Human\" (_Inventive Life: Approaches to the New Vitalism_, Fraser et al 2006; pg424):\n\n> While humans have for millennia used what Cariani calls 'active sensing'---'poking, pushing, bending'---to extend their sensory range and for hundreds of years have used prostheses to create new sensory experiences (for example, microscopes and telescopes), only recently has it been possible to construct evolving sensors and what [Cariani (1998: 718)](/doc/transhumanism/1998-cariani.pdf \"Epistemic Autonomy through Adaptive Sensing\") calls 'internalized sensing', that is, \"bringing the world into the device\" by creating internal, analog representations of the world out of which internal sensors extract newly-relevant properties'.\n>\n> ...Another conclusion emerges from Cariani's call (1998) for research in sensors that can adapt and evolve independently of the epistemic categories of the humans who create them. The well-known and perhaps apocryphal story of the neural net trained to recognize army tanks will illustrate the point. For obvious reasons, the army wanted to develop an intelligent machine that could discriminate between real and pretend tanks. A neural net was constructed and trained using two sets of data, one consisting of photographs showing plywood cutouts of tanks and the other actual tanks. After some training, the net was able to discriminate flawlessly between the situations. As is customary, the net was then tested against a third data set showing pretend and real tanks in the same landscape; it failed miserably. Further investigation revealed that the original two data sets had been filmed on different days. One of the days was overcast with lots of clouds, and the other day was clear. The net, it turned out, was discriminating between the presence and absence of clouds. The anecdote shows the ambiguous potential of epistemically autonomous devices for categorizing the world in entirely different ways from the humans with whom they interact. 
While this autonomy might be used to enrich the human perception of the world by revealing novel kinds of constructions, it also can create a breed of autonomous devices that parse the world in radically different ways from their human trainers.\n>\n> A counter-narrative, also perhaps apocryphal, emerged from the 1991 Gulf War. US soldiers firing at tanks had been trained on simulators that imaged flames shooting out from the tank to indicate a kill. When army investigators examined Iraqi tanks that were defeated in battles, they found that for some tanks the soldiers had fired four to five times the amount of munitions necessary to disable the tanks. They hypothesized that the overuse of firepower happened because no flames shot out, so the soldiers continued firing. If the hypothesis is correct, human perceptions were altered in accord with the idiosyncrasies of intelligent machines, providing an example of what can happen when human-machine perceptions are caught in a feedback loop with one another.\n\nLinda Null & Julie Lobur, [_The Essentials of Computer Organization and Architecture_ (third edition)](https://books.google.com/books?id=GKgxDwAAQBAJ), 2003/2014 (pg439--440 in 1st edition, pg658 in 3rd edition):\n\n> Correct training requires thousands of steps. The training time itself depends on the size of the network. As the number of perceptrons increases, the number of possible \"states\" also increases.\n>\n> Let's consider a more sophisticated example, that of determining whether a tank is hiding in a photograph. A neural net can be configured so that each output value correlates to exactly one pixel. If the pixel is part of the image of a tank, the net should output a one; otherwise, the net should output a zero. The input information would most likely consist of the color of the pixel. The network would be trained by feeding it many pictures with and without tanks. The training would continue until the network correctly identified whether the photos included tanks. The U.S. military conducted a research project exactly like the one we just described. One hundred photographs were taken of tanks hiding behind trees and in bushes, and another 100 photographs were taken of ordinary landscape with no tanks. Fifty photos from each group were kept \"secret,\" and the rest were used to train the neural network. The network was initialized with random weights before being fed one picture at a time. When the network was incorrect, it adjusted its input weights until the correct output was reached. Following the training period, the 50 \"secret\" pictures from each group of photos were fed into the network. The neural network correctly identified the presence or absence of a tank in each photo. The real question at this point has to do with the training---had the neural net actually learned to recognize tanks? The Pentagon's natural suspicion led to more testing. Additional photos were taken and fed into the network, and to the researchers' dismay, the results were quite random. The neural net could not correctly identify tanks within photos. After some investigation, the researchers determined that in the original set of 200 photos, all photos with tanks had been taken on a cloudy day, whereas the photos with no tanks had been taken on a sunny day. The neural net had properly separated the two groups of pictures, but had done so using the color of the sky to do this rather than the existence of a hidden tank. 
The government was now the proud owner of a very expensive neural net that could accurately distinguish between sunny and cloudy days!\n>\n> This is a great example of what many consider the biggest issue with neural networks. If there are more than 10 to 20 neurons, it is impossible to understand how the network is arriving at its results. One cannot tell if the net is making decisions based on correct information, or, as in the above example, something totally irrelevant. Neural networks have a remarkable ability to derive meaning and extract patterns from data that are too complex to be analyzed by human beings. However, some people trust neural networks to be experts in their area of training. Neural nets are used in such areas as sales forecasting, risk management, customer research, undersea mine detection, facial recognition, and data validation. Although neural networks are promising, and the progress made in the past several years has led to significant funding for neural net research, many people are hesitant to put confidence in something that no human being can completely understand.\n\nDavid Gerhard, [\"Pitch Extraction and Fundamental Frequency: History and Current Techniques\"](http://sapyc.espe.edu.ec/evcarrera/DSP/pitch.pdf), Technical Report TR-CS 2003--06, November 2003:\n\n> The choice of the dimensionality and domain of the input set is crucial to the success of any connectionist model. A common example of a poor choice of input set and test data is the Pentagon's foray into the field of object recognition. This story is probably apocryphal and many different versions exist on-line, but the story describes a true difficulty with neural nets.\n>\n> As the story goes, a network was set up with the input being the pixels in a picture, and the output was a single bit, yes or no, for the existence of an enemy tank hidden somewhere in the picture. When the training was complete, the network performed beautifully, but when applied to new data, it failed miserably. The problem was that in the test data, all of the pictures that had tanks in them were taken on cloudy days, and all of the pictures without tanks were taken on sunny days. The neural net was identifying the existence or non-existence of sunshine, not tanks.\n\n[Rice lecture #24, \"COMP 200: Elements of Computer Science\"](https://www.clear.rice.edu/comp200/02spring/Lecture-notes/lec24.txt), 2002-03-18:\n\n> d. Tanks in Desert Storm\n>\n> Sometimes you have to be careful what you train on . . .\n>\n> The problem with neural nets is that you never know what features they're actually training on. For example:\n>\n> The US military tried to use neural nets in Desert Storm for tank recognition, so unmanned tanks could identify enemy tanks and destroy them. They trained the neural net on multiple images of \"friendly\" and enemy tanks, and eventually had a decent program that seemed to correctly identify friendly and enemy tanks.\n>\n> Then, when they actually used the program in a real-world test phase with actual tanks, they found that the tanks would either shoot at nothing or shoot at everything. They certainly seemed to be incapable of distinguishing friendly or enemy tanks.\n>\n> Why was this? It turns out that the images they were training on always had glamour-shot type photos of friendly tanks, with an immaculate blue sky, etc. The enemy tank photos, on the other hand, were all spy photos, not very clear, sometimes fuzzy, etc. 
And it was these characteristics that the neural net was training on, not the tanks at all. On a bright sunny day, the tanks would do nothing. On an overcast, hazy day, they'd start firing like crazy . . .\n\nAndrew Ilachinski, _Cellular Automata: A Discrete Universe_, 2001 (pg547):\n\n> There is an telling story about how the Army recently went about teaching a backpropagating net to identify tanks set against a variety of environmental backdrops. The programmers correctly fed their multi-layer net photograph after photograph of tanks in grasslands, tanks in swamps, no tanks on concrete, and so on. After many trials and many thousands of iterations, their net finally learned all of the images in their database. The problem was that when the presumably \"trained\" net was tested with other images that were not part of the original training set, it failed to do any better than what would be expected by chance. What had happened was that the input/training fact set was statistically corrupt. The database consisted mostly of images that showed a tank only if there were heavy clouds, the tank itself was immersed in shadow or there was no sun at all. The Army's neural net had indeed identified a latent pattern, but it unfortunately had nothing to do with tanks: it had effectively learned to identify the time of day! The obvious lesson to be taken away from this amusing example is that how well a net \"learns\" the desired associations depends almost entirely on how well the database of facts is defined. Just as Monte Carlo simulations in statistical mechanics may fall short of intended results if they are forced to rely upon poorly coded random number generators, so do backpropagating nets typically fail to achieve expected results if the facts they are trained on are statistically corrupt.\n\n[_Intelligent Data Analysis In Science_](/doc/ai/nn/2000-cartwright-intelligentdataanalysisinscience.pdf), Hugh M. Cartwright 2000, pg126, writes (according to Google Books's snippet view; Cartwright's version appears to be a direct quote or close paraphrase of an earlier 1994 chemistry paper, Goodacre et al 1994):\n\n> ...television programme [_Horizon_](!W \"Horizon (UK TV series)\"); a neural network was trained to attempt to distinguish tanks from trees. Pictures were taken of forest scenes lacking military hardware and of similar but perhaps less bucolic landscapes which also contained more-or-less camouflaged battle tanks. A neural network was trained with these input data and found to differentiate successfully between tanks and trees. However, when a new set of pictures was analysed by the network, it failed to detect the tanks. After further investigation, it was found...\n\nDaniel Robert Franklin & Philippe Crochat, [`libneural` tutorial](https://web.archive.org/web/20001029201251/http://ieee.uow.edu.au/~daniel/software/libneural/BPN_tutorial/BPN_English/BPN_English/node9.html), 2000-03-23:\n\n> A neural network is useless if it only sees one example of a matching input/output pair. It cannot infer the characteristics of the input data for which you are looking for from only one example; rather, many examples are required. This is analogous to a child learning the difference between (say) different types of animals---the child will need to see several examples of each to be able to classify an arbitrary animal... It is the same with neural networks. 
The best training procedure is to compile a wide range of examples (for more complex problems, more examples are required) which exhibit all the different characteristics you are interested in. It is important to select examples which do not have major dominant features which are of no interest to you, but are common to your input data anyway. One famous example is of the US Army \"Artificial Intelligence\" tank classifier. It was shown examples of Soviet tanks from many different distances and angles on a bright sunny day, and examples of US tanks on a cloudy day. Needless to say it was great at classifying weather, but not so good at picking out enemy tanks.\n\n### 1990s\n\n[\"Neural Network Follies\"](https://neil.fraser.name/writing/tank/), Neil Fraser, September 1998:\n\n> In the 1980s, the Pentagon wanted to harness computer technology to make their tanks harder to attack...The research team went out and took 100 photographs of tanks hiding behind trees, and then took 100 photographs of trees---with no tanks. They took half the photos from each group and put them in a vault for safe-keeping, then scanned the other half into their mainframe computer. The huge neural network was fed each photo one at a time and asked if there was a tank hiding behind the trees. Of course at the beginning its answers were completely random since the network didn't know what was going on or what it was supposed to do. But each time it was fed a photo and it generated an answer, the scientists told it if it was right or wrong. If it was wrong it would randomly change the weightings in its network until it gave the correct answer. Over time it got better and better until eventually it was getting each photo correct. It could correctly determine if there was a tank hiding behind the trees in any one of the photos...So the scientists took out the photos they had been keeping in the vault and fed them through the computer. The computer had never seen these photos before---this would be the big test. To their immense relief the neural net correctly identified each photo as either having a tank or not having one. *Independent testing*: The Pentagon was very pleased with this, but a little bit suspicious. They commissioned another set of photos (half with tanks and half without) and scanned them into the computer and through the neural network. The results were completely random. For a long time nobody could figure out why. After all nobody understood how the neural had trained itself. Eventually someone noticed that in the original set of 200 photos, all the images with tanks had been taken on a cloudy day while all the images without tanks had been taken on a sunny day. The neural network had been asked to separate the two groups of photos and it had chosen the most obvious way to do it---not by looking for a camouflaged tank hiding behind a tree, but merely by looking at the color of the sky...This story might be apocryphal, but it doesn't really matter. It is a perfect illustration of the biggest problem behind neural networks. 
Any automatically trained net with more than a few dozen neurons is virtually impossible to analyze and understand.\n\n[Tom White](https://twitter.com/dribnet/status/914945926266970112) attributes (in October 2017) to Marvin Minsky some version of the tank story being told in MIT classes 20 years before, ~1997 (but doesn't specify the detailed story or version other than apparently the results were \"classified\").\n\nVasant Dhar & Roger Stein, [_Intelligent Decision Support Methods_](/doc/ai/nn/1997-dhar-intelligentdecisionsupportmethods.pdf), 1997 (pg98, limited Google Books snippet):\n\n> ...However, when a new set of photographs were used, the results were horrible. At first the team was puzzled. But after careful inspection of the first two sets of photographs, they discovered a very simple explanation. The photos with tanks in them were all taken on sunny days, and those without the tanks were taken on overcast days. The network had *not* learned to identify tank like images; instead, it had learned to identify photographs of sunny days and overcast days.\n\nRoyston Goodacre, Mark J. Neal, & Douglas B. Kell, [\"Quantitative Analysis of Multivariate Data Using Artificial Neural Networks: A Tutorial Review and Applications to the Deconvolution of Pyrolysis Mass Spectra\"](/doc/ai/nn/fully-connected/1996-goodacre.pdf), 1994-04-29:\n\n> ...As in all other data analysis techniques, these supervised learning methods are not immune from sensitivity to badly chosen initial data (113). [113: Zupan, J. and J. Gasteiger: _Neural Networks for Chemists: An Introduction_. VCH Verlagsgesellschaft, Weinheim (1993)] Therefore the exemplars for the training set must be carefully chosen; the golden rule is \"garbage in---garbage out\". An excellent example of an unrepresentative training set was discussed some time ago on the BBC television programme _Horizon_; a neural network was trained to attempt to distinguish tanks from trees. Pictures were taken of forest scenes lacking military hardware and of similar but perhaps less bucolic landscapes which also contained more-or-less camouflaged battle tanks. A neural network was trained with these input data and found to differentiate most successfully between tanks and trees. However, when a new set of pictures was analysed by the network, it failed to distinguish the tanks from the trees. After further investigation, it was found that the first set of pictures containing tanks had been taken on a sunny day whilst those containing no tanks were obtained when it was overcast. The neural network had therefore thus learned simply to recognise the weather! We can conclude from this that the training and tests sets should be carefully selected to contain representative exemplars encompassing the appropriate variance over all relevant properties for the problem at hand.\n\nFernando Pereira, [\"neural redlining\", RISKS 16(41), 1994-09-12](http://catless.ncl.ac.uk/risks/16.41.html):\n\n> Fred's comments will hold not only of neural nets but of any decision model trained from data (eg. Bayesian models, decision trees). It's just an instance of the old \"GIGO\" phenomenon in statistical modeling...Overall, the whole issue of evaluation, let alone certification and legal standing, of complex statistical models is still very much open. (This reminds me of a possibly apocryphal story of problems with biased data in neural net training. Some US defense contractor had supposedly trained a neural net to find tanks in scenes. 
The reported performance was excellent, with even camouflaged tanks mostly hidden in vegetation being spotted. However, when the net was tested on yet a new set of images supplied by the client, the net did not do better than chance. After an embarrassing investigation, it turned out that all the tank images in the original training and test sets had very different average intensity than the non-tank images, and thus the net had just learned to discriminate between two image intensity levels. Does anyone know if this actually happened, or is it just in the neural net \"urban folklore\"?)\n\nErich Harth, [_The Creative Loop: How the Brain Makes a Mind_](/doc/ai/nn/1993-harth-thecreativeloop.pdf), 1993/1995 (pg158, limited Google Books snippet):\n\n> ...55. The net was *trained* to detect the presence of tanks in a landscape. The training consisted in showing the device many photographs of scene, some with tanks, some without. In some cases---as in the picture on page 143---the tank's presence was not very obvious. The inputs to the neural net were digitized photographs;\n\n[Hubert L. Dreyfus](!W) & [Stuart E. Dreyfus](!W), [\"What Artificial Experts Can and Cannot Do\"](https://www.jefftk.com/dreyfus92.pdf), 1992:\n\n> All the \"continue this sequence\" questions found on intelligence tests, for example, really have more than one possible answer but most human beings share a sense of what is simple and reasonable and therefore acceptable. But when the net produces an unexpected association can one say it has failed to generalize? One could equally well say that the net has all along been acting on a different definition of \"type\" and that that difference has just been revealed. For an amusing and dramatic case of creative but unintelligent generalization, consider the legend of one of connectionism's first applications. In the early days of the perceptron the army decided to train an artificial neural network to recognize tanks partly hidden behind trees in the woods. They took a number of pictures of a woods without tanks, and then pictures of the same woods with tanks clearly sticking out from behind trees. They then trained a net to discriminate the two classes of pictures. The results were impressive, and the army was even more impressed when it turned out that the net could generalize its knowledge to pictures from each set that had not been used in training the net. Just to make sure that the net had indeed learned to recognize partially hidden tanks, however, the researchers took some more pictures in the same woods and showed them to the trained net. They were shocked and depressed to find that with the new pictures the net totally failed to discriminate between pictures of trees with partially concealed tanks behind them and just plain trees. The mystery was finally solved when someone noticed that the training pictures of the woods without tanks were taken on a cloudy day, whereas those with tanks were taken on a sunny day. The net had learned to recognize and generalize the difference between a woods with and without shadows! Obviously, not what stood out for the researchers as the important difference. 
This example illustrates the general point that a net must share size, architecture, initial connections, configuration and socialization with the human brain if it is to share our sense of appropriate generalization\n\nHubert Dreyfus appears to have told this story earlier in 1990 or 1991, as a similar story appears in episode 4 ([German](https://www.youtube.com/watch?v=cG7v9eCq2u4&t=33m49s)) (starting 33m49s) of the BBC documentary series [_The Machine That Changed the World_](!W \"The Machine That Changed the World (miniseries)\"), broadcast 1991-11-08.\nHubert L. Dreyfus, [_What Computers Still Can't Do: A Critique of Artificial Reason_](/doc/ai/1992-dreyfus-whatcomputerstillcantdo.epub), 1992, repeats the story in very similar but not quite identical wording ([Jeff Kaufman](https://www.jefftk.com/p/detecting-tanks \"Detecting Tanks\") notes that Dreyfus drops the qualifying \"legend of\" description):\n\n> ...But when the net produces an unexpected association, can one say that it has failed to generalize? One could equally well say that the net has all along been acting on a different definition of \"type\" and that that difference has just been revealed. For an amusing and dramatic case of creative but unintelligent generalization, consider one of connectionism's first applications. In the early days of this work the army tried to train an artificial neural network to recognize tanks in a forest. They took a number of pictures of a forest without tanks and then, on a later day, with tanks clearly sticking out from behind trees, and they trained a net to discriminate the two classes of pictures. The results were impressive, and the army was even more impressed when it turned out that the net could generalize its knowledge to pictures that had not been part of the training set. Just to make sure that the net was indeed recognizing partially hidden tanks, however, the researchers took more pictures in the same forest and showed them to the trained net. They were depressed to find that the net failed to discriminate between the new pictures of trees with tanks behind them and the new pictures of just plain trees. After some agonizing, the mystery was finally solved when someone noticed that the original pictures of the forest without tanks were taken on a cloudy day and those with tanks were taken on a sunny day. The net had apparently learned to recognize and generalize the difference between a forest with and without shadows! This example illustrates the general point that a network must share our commonsense understanding of the world if it is to share our sense of appropriate generalization.\n\nDreyfus's _What Computers Still Can't Do_ is listed as a revision of his 1972 book, [_What Computers Can't Do: A Critique of Artificial Reason_](https://archive.org/details/whatcomputerscan017504mbp), but the tank story is not in the 1972 book, only the 1992 one.\n(Dreyfus's version is also quoted in the 2017 NYT article and Hillis 1996's _Geography, Identity, and Embodiment in Virtual Reality_, pg346.)\n\nLaveen N. Kanal, [_Artificial Neural Networks and Statistical Pattern Recognition: Old and New Connections_'s](/doc/ai/nn/1991-sethi-artificialneuralnetworksandstatisticalpatternrecognition.pdf) Foreword, discusses some early NN/tank research (predating not just LeCun's convolutions but backpropagation), 1991:\n\n> ...[Frank] Rosenblatt had not limited himself to using just a single Threshold Logic Unit but used networks of such units. 
The problem was how to train multilayer perceptron networks. A paper on the topic written by Block, Knight and Rosenblatt was murky indeed, and did not demonstrate a convergent procedure to train such networks. In 1962--63 at Philco-Ford, seeking a systematic approach to designing layered classification nets, we decided to use a hierarchy of threshold logic units with a first layer of \"feature logics\" which were threshold logic units on overlapping receptive fields of the image, feeding two additional levels of weighted threshold logic decision units. The weights in each level of the hierarchy were estimated using statistical methods rather than iterative training procedures [L.N. Kanal & N.C. Randall, [\"Recognition System Design by Statistical Analysis\"](/doc/ai/1964-kanal.pdf), Proc. 19th Conf. ACM, 1964]. We referred to the networks as two layer networks since we did not count the input as a layer. On a project to recognize tanks in aerial photography, the method worked well enough in practice that the U.S. Army agency sponsoring the project decided to classify the final reports, although previously the project had been unclassified. We were unable to publish the classified results! Then, enamored by the claimed promise of coherent optical filtering as a parallel implementation for automatic target recognition, the funding we had been promised was diverted away from our electro-optical implementation to a coherent optical filtering group. Some years later we presented the arguments favoring our approach, compared to optical implementations and trainable systems, in an article titled \"Systems Considerations for Automatic Imagery Screening\" by T.J. Harley, L.N. Kanal and N.C. Randall, which is included in the IEEE Press reprint volume titled [_Machine Recognition of Patterns_](/doc/ai/nn/1977-agrawala-machinerecognitionofpatterns.pdf) edited by A. Agrawala 1977^[The paper in question discusses general questions of necessary resolution, computing requirements, optics, necessary error rates, and algorithms, but doesn't describe any implemented systems, much less experiences which resemble the tank story.]. In the years which followed multilevel statistically designed classifiers and AI search procedures applied to pattern recognition held my interest, although comments in my 1974 survey, \"Patterns In Pattern Recognition: 1968--1974\" [IEEE Trans. on IT, 1974], mention papers by Amari and others and show an awareness that neural networks and biologically motivated automata were making a comeback. 
In the last few years trainable multilayer neural networks have returned to dominate research in pattern recognition and this time there is potential for gaining much greater insight into their systematic design and performance analysis...\n\nWhile Kanal & Randall 1964 matches in some ways, including the image counts, there is no mention of failure either in the paper or Kanal's 1991 reminiscences (rather, Kanal implies it was highly promising), there is no mention of a field deployment or additional testing which could have revealed overfitting, and given their use of binarizing, it's not clear to me that their 2-layer algorithm even *could* overfit to global brightness; the photos also appear to have been taken at low enough altitude for there to be no clouds, and to be taken under similar (possibly controlled) lighting conditions.\nThe description in Kanal & Randall 1964 is somewhat opaque to me, particularly of the 'Laplacian' they use to binarize or convert to edges, but there's more background in their [\"Semi-Automatic Imagery Screening Research Study and Experimental Investigation, Volume 1\"](http://www.dtic.mil/docs/citations/AD0410261), Harley, Bryan, Kanal, Taylor & Grayum 1962 ([mirror](/doc/ai/1962-harley.pdf)), which indicates that in their preliminary studies they were already interested in prenormalization/preprocessing images to correct for altitude and brightness, and the Laplacian, along with silhouetting and \"lineness editing\", noting that \"The Laplacian operation eliminates absolute brightness scale as well as low-spatial frequencies which are of little consequence in screening operations.\"^[Another interesting detail from Harley et al 1962 about their tank study: in discussing designing their computer 'simulation' of their quasi-NN algorithms, their description of the photographs on pg133 makes it sound as if the dataset was constructed from the *same* photographs by using large-scale aerial footage and then cropping out the small squares with tanks and then corresponding small squares without tanks---so they only had to process one set of photographs, and the resulting tank/non-tank samples are inherently matched on date, weather, time of day, lighting, general location, roll of film, camera, and photographer. If true, that would make almost all the various suggested tank problem shortcuts impossible, and would be further evidence that Kanal's project was not & could not have been a true origin of the tank story (although if it was simply *misunderstood* and erroneously critiqued, then it could be a tiny kernel of truth from which the urban legend sprang).]\n\nAn anonymous reader says he heard the story in 1990:\n\n> I was told about the tank recognition failure by a lecturer on my 1990 Intelligent Knowledge Based Systems MSc, almost certainly [Libor Spacek](https://cmp.felk.cvut.cz/~spacelib/ \"Libor Špaček homepage\"), in terms of being aware of context in data sets; that being from (the former) Czechoslovakia he expected to see tanks on a motorway whereas most British people didn't. 
I also remember reading about a project with DARPA funding aimed at differentiating Russian, European and US tanks where what the image recognition learned was not to spot the differences between tanks but to find trees, because of the US tank photos being on open ground and the Russian ones being in forests; that was during the same MSc course---so very similar to predicting tumours by looking for the ruler used to measure them in the photo---but I don't recall the source (it wasn't one of the books you cite though, it was either a journal article or another text book).\n\n### 1980s\n\n[Chris Brew](https://twitter.com/cbrew/status/920088821823344640) states (2017-10-16) that he \"Heard the story in 1984 with pigeons instead of neural nets\".\n\n### 1960s\n\n[Edward Fredkin](!W), in an email to Eliezer Yudkowsky on 2013-02-26, recounts an interesting anecdote about the 1960s claiming to be the grain of truth:\n\n> By the way, the story about the two pictures of a field, with and without army tanks in the picture, comes from me. I attended a meeting in Los Angeles [at RAND?], about half a century ago [~1963?] where someone gave a paper showing how a random net could be trained to detect the tanks in the picture. I was in the audience. At the end of the talk I stood up and made the comment that it was obvious that the picture with the tanks was made on a sunny day while the other picture (of the same field without the tanks) was made on a cloudy day. I suggested that the \"neural net\" had merely trained itself to recognize the difference between a bright picture and a dim picture.\n\n## Evaluation\n\n### Sourcing\n\nThe absence of any hard citations is striking: even when a citation is supplied, it is invariably to a relatively recent source like Dreyfus, and then the chain ends.\nTypically for a real story, one will find at least one or two hints of a penultimate citation and then a final definitive citation to some very difficult-to-obtain or obscure work (which then is often quite different from the popularized version but still recognizable as the original); for example, another popular cautionary AI urban legend is that the 1956 [Dartmouth workshop](!W) claimed that a single graduate student working for a summer could solve computer vision (or perhaps AI in general), which is a highly distorted misleading description of the [original 1955 proposal's](http://www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html \"'A Proposal For The Dartmouth Summer Research Project On Artificial Intelligence', McCarthy et al 1955\") realistic claim that \"a 2 month, 10 man study of artificial intelligence\" could yield \"a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.\"^[This seems entirely reasonable to me, given that hardly any AI research existed at that point. While it's unclear what results were accomplished immediately thanks to the 1956 workshop, many of the attendees would make major discoveries in AI. 
Attendee [Ray Solomonoff's](!W \"Ray Solomonoff\") wife, Grace Solomonoff ([\"Ray Solomonoff and the Dartmouth Summer Research Project in Artificial Intelligence, 1956\"](https://raysolomonoff.com/dartmouth/dartray.pdf), 2016) describes the workshop as having vivid discussions but was compromised by getting only half its funding (so it didn't last the summer) and attendees showing up sporadically & for short times (\"Many participants only showed up for a day or even less.\"); no agreement was reached on a specific project to try to tackle, although Solomonoff did write a paper there he considered important.]\nInstead, everyone either disavows it as an urban legend or possibly apocryphal, or punts to someone else.\n(Minsky's 2011 version initially seems concrete, but while he specifically attributes the musical score story to a friend & claims to have found the trick personally, he is then as vague as anyone else about the tank story, saying it just \"happened\" somewhere \"in the United States at one of our research institutes\", at an unmentioned institute by unmentioned people at an unmentioned point in time for an unmentioned branch of the military.)\n\n### Variations\n\n
\n> *Question to Radio Yerevan*: \"Is it correct that Grigori Grigorievich Grigoriev won a luxury car at the All-Union Championship in Moscow?\"\n>\n> *Radio Yerevan answered*: \"In principle, yes. But first of all it was not Grigori Grigorievich Grigoriev, but Vassili Vassilievich Vassiliev; second, it was not at the All-Union Championship in Moscow, but at a Collective Farm Sports Festival in Smolensk; third, it was not a car, but a bicycle; and fourth he didn't win it, but rather it was stolen from him.\"\n>\n> [\"Radio Yerevan Jokes\"](https://web.archive.org/web/20140908045019/http://www.bratislavaguide.com/radio-yerevan-jokes) (collected by Allan Stevo)\n
\n\nIt is also interesting that not all the stories imply quite the same problem with the hypothetical NN. Dataset bias/selection effects are not the same thing as overfitting or disparate impact, but some of the storytellers don't realize that.\nFor example, in some stories, the NN fails when it's tested on additional heldout data (overfitting), not when it's tested on data from an entirely different photographer or field exercise or data source (dataset bias/distributional shift).\nOr, Alexander Harrowell cites disparate impact in a medical school as if it were an example of the same problem, but it's not---at least in the USA, a NN would be correct in inferring that white students are more likely to succeed, as that is a real predictor (this would be an example of how people play rather fast and loose with claims of \"algorithmic bias\"), and it would not necessarily be the case that, say, randomized admission of more non-white students would be certain to increase the number of successful graduates; such a scenario is, however, possible and illustrates the difference between predictive models & causal models for control & optimization, and the need for experiments/reinforcement learning.\n\nA read of all the variants together raises more questions than it answers:\n\n- Did this story happen in the 1960s, 1980s, 1990s, or during Desert Storm in the 1990s?\n- Was the research conducted by the US military, or researchers for another NATO country?\n- Were the photographs taken by satellite, from the air, on the ground, or by spy cameras?\n- Were the photographs of American tanks, plywood cutouts, Soviet tanks, or Warsaw Pact tanks?\n- Were the tanks out in the open, under cover, or fully camouflaged?\n- Were these photographs taken in forests, fields, deserts, swamps, or all of them?\n- Were the photographs taken in the same place at different times of day, in the same place on different days, or in different places entirely?\n- Were there 100, 200, or thousands of photographs; and how many were in the training vs validation set?\n- Was the input in black-and-white binary, grayscale, or color?\n- Was the tell-tale feature either field vs forest, bright vs dark, the presence vs absence of clouds, the presence vs absence of shadows, the length of shadows, or an accident in film development unrelated to weather entirely?\n- Was the NN to be used for image processing or in autonomous robotic tanks?\n- Was it even a NN?\n- Was the dataset bias caught quickly within \"a few hours\", later by a suspicious team member, later still when applied to an additional set of tank photographs, during further testing producing a new dataset, much later during a live demo for military officers, or only after live deployment in the field?\n\nAlmost every aspect of the tank story which *could* vary *does* vary.\n\n### Urban Legends\n\nWe could also compare the tank story with many of the characteristics of [urban legends](!W) (of the sort so familiar from Snopes): they typically have a clear dramatic arc, involve horror or humor while playing on common concerns (distrust of NNs has been a theme from the start of NN research[^victim-of-success]), make an important didactic or moral point, claim to be true while sourcing remains limited to social proof such as the usual \"friend of a friend\" attributions, often try to associate with a respected institution (such as the US military), are transmitted primarily orally through social mechanisms & appear spontaneously & independently in many sources without apparent origin (most 
people seem to hear the tank story in unspecified classes, conferences, or personal discussions rather than in a book or paper), exist in many mutually-contradictory variants often with overly-specific details[^detail] spontaneously arising in the retelling, have been around for a long time (it appears almost fully formed in Dreyfus 1992, suggesting incubation before then), sometimes have a grain of truth (dataset bias certainly is real), and the full tank story is \"too good not to pass along\" (even authors who are sure it's an urban legend can't resist retelling it yet again for didactic effect or entertainment).\nThe tank story matches almost all the usual criteria for an urban legend.\n\n[^detail]: Here, the number of photographs and exactly how they were divided into training/validation sets is an oddly specific detail. This is reminiscent of religions or novels, where originally sparse and undetailed stories become elaborated and ever more detailed, with striking details added to catch the imagination. For example, the [Three Magi](!W \"Biblical Magi\") in the Christian Gospels are unnamed, but have been given by later Christians extensive fictional biographies of names ([\"Names for the Nameless in the New Testament\"](/doc/history/1980-metzger.pdf \"Metzger 1971\"); one of [many given names](!W \"List of names for the biblical nameless\")), symbolism, kingdoms, contemporary successors/descendants, martyrdoms & locations of remains...\n[^victim-of-success]: One commenter observes that the NN tank story and its ilk appear to almost always be told about neural networks, and wonders why when dataset bias ought to be just as much a problem for other statistical/machine-learning methods like decision trees, which are capable of learning complex nonlinear problems. I could note that these anecdotes also get routinely told about genetic algorithms & evolutionary methods, so it's not purely neural, and it might be that NNs are victims of their own success: particularly as of 2017, NNs are so powerful & flexible in some areas (like computer vision) there is little competition, and so any horror stories will probably involve NNs.\n\n### Origin\n\nSo where does this urban legend come from?\nThe key anecdote appears to be Edward Fredkin's as it precedes all other excerpts except perhaps the research Kanal describes; Fredkin's story does *not* confirm the tank story as he merely speculates that brightness was driving the results, much less all the extraneous details about photographic film being accidentally overdeveloped or robot tanks going berserk or a demo failing in front of Army brass.\n\nBut it's easy to see how Fredkin's reasonable question could have memetically evolved into the tank story as finally fixed into published form by Dreyfus's article:\n\n#. **Setting**: Kanal & Randall set up their very small simple early perceptrons on some tiny binary aerial photos of tanks, in interesting early work, and Fredkin attends the talk sometime around 1960--1963\n#. **The Question**: Fredkin then asks in the Q&A whether the perceptron is not learning square-shapes but brightness\n#. **Punting**: of course neither Fredkin nor Kanal & Randall can know on the spot whether this critique is right or wrong (perhaps that question motivated the binarized results reported in Kanal & Randall 1964?), and the question remains unanswered\n#. 
**Anecdotizing**: but someone in the audience considers that an excellent observation about methodological flaws in NN research, and perhaps they (or Fredkin) repeats the story to others, who find it useful too, and along the way, Fredkin's *question mark* gets dropped and the *possible* flaw becomes an *actual* flaw, with the punchline: \"...and it turned out their NN was just detecting average brightness!\"\n\n One might expect Kanal & Randall to rebut these rumors, if only by publishing additional papers on their functioning system, but by a quirk of fate, as Kanal explains in his preface, after their 1964 paper, the Army liked it enough to make it classified and then they were reassigned to an entirely different task, killing progress entirely. (Something similar happened to [the best early facial recognition systems](https://www.wired.com/story/secret-history-facial-recognition/ \"The Secret History of Facial Recognition: Sixty years ago, a sharecropper's son invented a technology to identify faces. Then the record of his role all but vanished. Who was Woody Bledsoe, and who was he working for?\").)\n#. **Proliferation**: In the absence of any counternarrative (silence is considered consent), the tank story continues spreading.\n#. **Mutation**: but now the story is incomplete, a joke missing most of the setup to its punchline---*how* did these Army researchers discover the NN had tricked them and what was the brightness difference from? The various versions propose different resolutions, and likewise, appropriate details about the tank data must be invented.\n#. [**Fixation**](!W \"Fixation (population genetics)\"): Eventually, after enough mutations, a version reaches Dreyfus, already a well-known critic of the AI establishment, who then uses it in his article/book, virally spreading it globally to pop up in random places thenceforth, and fixating it as a universally-known _ur_-text. (Further memetic mutations can and often will occur, but diligent writers & researchers will 'correct' variants by returning to the Dreyfus version.)\n\nOne might try to write Dreyfus off as a coincidence and argue that the US Army *must* have had so many neural net research programs going that one of the others is the real origin, but one would expect those programs to result in spinoffs, more reports, reports since declassified, etc. It's been half a century, after all. And despite the close association of the US military with MIT and early AI work, tanks do not seem to have been a major focus of early NN research---for example, [Schmidhuber's history](https://arxiv.org/abs/1404.7828#schmidhuber \"'Deep Learning in Neural Networks: An Overview', Schmidhuber 2014\") does not mention tanks at all, and most of my paper searches kept pulling up NN papers about 'tanks' as in vats, such as controlling stirring/mixing tanks for chemistry.\nNor is it a safe assumption that the military always has much more advanced technology than the public or private sectors; often, they can be quite behind or at the status quo.[^NSA]\n\n[^NSA]: One memorable example of this for me was when the Edward Snowden NSA leaks began.\n\n Surely, given previous instances like differential cryptanalysis or public-key cryptography, the NSA had any number of amazing technologies and moon math beyond the ken of the rest of us? I read many of the presentations with great interest, particularly about how they searched for individuals or data---cutting edge neural networks? Evolutionary algorithms? Even more exotic techniques? 
Nope---regexps, linear models, and random forests. Practical but boring. Nor did any major cryptographic breakthroughs become exposed via Snowden.\n\n Overall, the NSA corpus indicates that they had the abilities you would expect from a large group of patient programmers with no ethics given a budget of billions of dollars to spend on a mission whose motto was \"hack the planet\" using a comprehensive set of methods ranging from physical breakins & bugs, theft of private keys, bribery, large-scale telecommunications tapping, implanting backdoors, purchase & discovery of unpatched vulnerabilities, & standards process subversion. Highly effective in the aggregate but little that people hadn't expected or long speculated about in the abstract.\n\n# Could it Happen?\n\nCould something like the tank story (a NN learning to distinguish solely on average brightness levels) happen in 2017 with state-of-the-art techniques like convolutional neural networks (CNNs)?\n(After all, presumably nobody *really* cares about what mistakes a crude perceptron may or may not have once made back in the 1960s; most/all of the story-tellers are using it for didactic effect in warning against carelessness in contemporary & future AI research/applications.)\nI would guess that while it could happen, it would be considerably less likely now than then for several reasons:\n\n#. a common preprocessing step in computer vision (and NNs in general) is to \"whiten\" the image by standardizing or transforming pixels to a normal distribution; this would tend to wipe global brightness levels, promoting invariance to illumination\n#. in addition to or instead of whitening, it is also common to use aggressive \"data augmentation\": shifting the image by a few pixels in each direction, cropping it randomly, adjusting colors to be slightly more red/green/blue, flipping horizontally, barrel-warping it, adding JPEG compression noise/artifacts, brightening or darkening, etc.\n\n None of these transformations should affect whether an image is classifiable as \"dog\" or \"cat\"^[Although there are occasional exceptions where a data augmentation *doesn't* preserve important semantics: you wouldn't want to use horizontal flips with street signs.], the reasoning goes, so the NN should learn to see past them, and generating variants during training provides additional data for free. Aggressive data augmentation would make it harder to pick up global brightness as a cheap trick.\n#. CNNs have built-in biases (compared to fully-connected neural networks) towards edges and other structures, rather than global averages; convolutions want to find edges and geometric patterns like little squares for tanks. (This point is particularly germane in light of the brain inspiration for convolutions & Dreyfus & Dreyfus 1992's interpretation of the tank story.)\n#. image classification CNNs, due to their large sizes, are often trained on large datasets with many classes to categorize images into (canonically, ImageNet with 1000 classes over a million images; much larger datasets, such as 300 million images, have been explored and found to still offer benefits). Perforce, most of these images will not be generated by the dataset maintainer and will come from a wide variety of peoples, places, cameras, and settings, reducing any systematic biases. 
It would be difficult to find a cheap trick which works over many of those categories simultaneously, and the NN training will constantly erode any category-specific tricks in favor of more generalizable pattern-recognition (in part because there's no inherent 'modularity' which could factor a NN into a \"tank cheap trick\" NN & a \"everything else real pattern-recognition\" NN). The power of generalizable abstractions will tend to overwhelm the shortcuts, and the more data & tasks a NN is trained on, providing greater supervision & richer insight, the more this will be the case.\n\n - Even in the somewhat unusual case of a special-purpose binary classification CNN being trained on a few hundred images, because of the large sizes of good CNNs, it is typical to at least start with a pretrained ImageNet CNN in order to benefit from all the learned knowledge about edges & whatnot before \"finetuning\" on the special-purpose small dataset. If the CNN starts with a huge inductive bias towards edges etc, it will have a hard time throwing away its informative priors and focusing purely on global brightness. (Often in finetuning, the lower levels of the CNN aren't allowed to change at all!)\n - Another variant on transfer learning is to use the CNN as a feature-generator, by taking the final layers' state computed on a specific image and using them as a vector embedding, a sort of summary of everything about the image content relevant to classification; this embedding is useful for other kinds of CNNs for purposes like style transfer (style transfer aims to warp an image towards the appearance of another image while preserving the embedding and thus presumably the content) or for GANs generating images (the discriminator can use the features to detect \"weird\" images which don't make sense, thereby forcing the generator to learn what images correspond to realistic embeddings).\n#. CNNs would typically throw warning signs before a serious field deployment, either in diagnostics or failures to extend the results.\n\n - One benefit of the filter setup of CNNs is that it's easy to visualize what the lower layers are 'looking at'; typically, CNN filters will look like diagonal or horizontal lines or curves or other simple geometric patterns. In the case of a hypothetical brightness-detector CNN, because it is not recognizing any shapes whatsoever or doing anything but trivial brightness averaging, one would expect its filters to look like random noise and definitely nothing like the usual filter visualizations. This would immediately alarm any deep learning researcher that the CNN is not learning what they thought it was learning.\n - Related to filter visualization is input visualization: it's common to generate some heatmaps of input images to see what regions of the input image are influencing the classification the most. If you are classifying \"cats vs dogs\", you expect a heatmap of a cat image to focus on the cat's head and tail, for example, and not on the painting on the living room wall behind it; if you have an image of a tank in a forest, you expect the heatmap to focus on the tank rather than trees in the corner or nothing in particular, just random-seeming pixels all over the image. If it's not focusing on the tank at all, how is it doing the classification?, one would then wonder. 
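To make this concrete, here is a minimal sketch in PyTorch/torchvision of the 'normal workflow' this section keeps invoking (per-channel normalization, aggressive data augmentation, and finetuning a small head on a frozen pretrained ImageNet CNN), followed by a crude occlusion-sensitivity check of the kind just described; the directory layout, hyperparameters, and the `occlusion_heatmap` helper are illustrative assumptions rather than anyone's published pipeline.

```python
# Minimal sketch: data-augmented finetuning of a pretrained ImageNet CNN,
# plus a crude occlusion-sensitivity check. Paths & hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# 1. Normalization & aggressive augmentation: random crops, flips, and color
#    jitter; per-channel standardization largely removes global brightness.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder('photos/train', transform=train_tf)  # tank/ vs no-tank/ subfolders
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# 2. Transfer learning: freeze the pretrained feature extractor and finetune
#    only a new 2-way classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

# 3. Occlusion sensitivity: slide a blank patch over a (normalized) test image
#    and record how much the 'tank' probability drops; a model reading only
#    global brightness would produce a flat, uninformative map.
def occlusion_heatmap(model, img, target_class, patch=32, stride=16):
    model.eval()
    c, h, w = img.shape
    with torch.no_grad():
        base = torch.softmax(model(img.unsqueeze(0)), dim=1)[0, target_class]
        heat = torch.zeros((h - patch) // stride + 1, (w - patch) // stride + 1)
        for i, y0 in enumerate(range(0, h - patch + 1, stride)):
            for j, x0 in enumerate(range(0, w - patch + 1, stride)):
                blocked = img.clone()
                blocked[:, y0:y0 + patch, x0:x0 + patch] = 0.0  # occlude one region
                prob = torch.softmax(model(blocked.unsqueeze(0)), dim=1)[0, target_class]
                heat[i, j] = base - prob  # big drop = region mattered
    return heat
```

Plotting the returned grid over a held-out tank photo is a cheap way of checking whether the classifier is attending to the tank or merely to the sky, which is essentially what the tools discussed next automate.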
([\"Picasso: A Modular Framework for Visualizing the Learning Process of Neural Network Image Classifiers\"](https://arxiv.org/abs/1705.05627) ([blog](https://medium.com/merantix/picasso-a-free-open-source-visualizer-for-cnns-d8ed3a35cfc5 \"Picasso: A free open-source visualizer for Convolutional Neural Networks; Cloudy with a chance of tanks\")), Henderson & Rothe 2017-05-16 quote Yudkowsky 2008's version of the tank story as a motivation for their heatmap visualization tool and demonstrate that, for example, blocking out the sky in a tank image doesn't bother a VGG-16 CNN image classifier but block the tank's treads does, and the heatmap focuses on the tank itself.) There are additional methods for trying to understand whether the NN has learned a potentially useful algorithm using other methods such as the previously cited LIME.\n#. Also related to the visualization is going beyond classification to the logical next step of \"localization\" or \"image segmentation\": having detected an image with a tank in it *somewhere*, it is natural (especially for military purposes) to ask *where* in the image the tank is?\n\n A CNN which is truly detecting the tank itself will lend itself to image segmentation (eg. CNN success in reaching human levels of ImageNet classification performance have also resulted in extremely good segmentation of an image by categorizing each pixel as human/dog/cat/etc), while one learning the cheap trick of brightness will utterly fail at guessing better than chance which pixels are the tank.\n\nSo, it is highly unlikely that a CNN trained via a normal workflow (data-augmented finetuning of a pretrained ImageNet CNN with standard diagnostics) would fail in this exact way or, at least, make it to a deployed system without failing.\n\n## Could Something Like it Happen?\n\nCould something *like* the tank story happen, in the sense of a selection-biased dataset yielding NNs which fail dismally in practice?\nOne could imagine it happening and it surely does at least occasionally, but in practice it doesn't seem to be a particularly serious or common problem---people routinely apply CNNs to very different contexts with considerable success.^[It amuses me to note when websites or tools are clearly using ImageNet CNNs, because they assume ImageNet categories or provide annotations in their metadata, or because they exhibit uncannily good recognition of dogs. 
Sometimes CNNs are much better than they are given credit for being and they are *assumed* by commenters to fail on problems they actually succeed on; for example, some meme images have circulated claiming that CNNs can't distinguish fried chickens from [Labradoodle](!W) dogs, chihuahuas from muffins, or sleeping dogs from bagels---but as amusing as the image-sets are, [Miles Brundage](https://twitter.com/Miles_Brundage/status/874448037929725952) reports that [Clarifai's](https://www.clarifai.com/) CNN API has little trouble accurately distinguishing man's worst food from man's best friend.]\nIf it's such a serious and common problem, one would think that people would be able to provide a wealth of real-world examples of systems deployed with dataset bias making it entirely useless, rather than repeating a fiction from 50 years ago.\n\nOne of the most relevant (if unfortunately older & possibly out of date) papers I've read on this question of dataset bias is [\"Unbiased Look at Dataset Bias\"](https://pdfs.semanticscholar.org/b9f2/04abd29874f72840b5eb204d38938e167054.pdf), Torralba & Efros 2011:\n\n> Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (eg. the Corel world, the Caltech101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose?\n>\n> The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.\n\nThey demonstrate on several datasets (including ImageNet), that it's possible for a SVM (CNNs were not used) to guess at above chance levels what dataset an image comes from and that there are noticeable drops in accuracy when a classifier trained on one dataset is applied to ostensibly the same category in another dataset (eg. 
an ImageNet \"car\" SVM classifier applied to PASCAL's \"car\" images will go from 57% to 36% accuracy).\nBut---perhaps the glass is half-full---in none of the pairs does the performance degrade to near-zero, so despite the definite presence of dataset bias, the SVMs are still learning generalizable, transferable image classification (similarly, [Jo & Bengio 2017](https://arxiv.org/abs/1711.11561 \"Measuring the tendency of CNNs to Learn Surface Statistical Regularities\")/[Recht et al 2018](https://arxiv.org/abs/1806.00451 \"Do CIFAR-10 Classifiers Generalize to CIFAR-10?\")/[Recht et al 2019](https://arxiv.org/abs/1902.10811 \"Do ImageNet Classifiers Generalize to ImageNet?\")^[Recht et al 2019's ImageNet-v2 turns out to illustrate some [subtle issues in measuring dataset bias](https://gradientscience.org/data_rep_bias/ \"'Identifying Statistical Bias in Dataset Replication [blog]', Engstrom et al 2020\") ([Engstrom et al 2020](https://gradientscience.org/data_rep_bias.pdf \"Identifying Statistical Bias in Dataset Replication\")): because of measurement error in the labels of images causing errors in the final dataset, simply comparing a classifier trained on one with its performance on the other and noting that performance fell by X% yields a misleadingly inflated estimate of 'bias' by attributing the combined error of both datasets to the bias. A [Rip Van Winkle](http://www.offconvex.org/2021/04/07/ripvanwinkle/ \"'Rip van Winkle’s Razor, a Simple New Estimate for Adaptive Data Analysis', Arora & Zhang 2021\") estimate of CNN overfitting indicates it must be mild---CNNs just aren't all that algorithmically complex and thus unable to be overly-tailored to ImageNet. For much more theory on covariate shift impacts and decreases/increases in performance of NNs, see [Tripuraneni et al 2021](https://arxiv.org/abs/2111.08234 \"Covariate Shift in High-Dimensional Random Feature Regression\").]/[Yadav & Bottou 2019](https://arxiv.org/abs/1905.10498 \"Cold Case: The Lost MNIST Digits\")/[Zhang & Davison 2020](https://arxiv.org/abs/2002.02559 \"Impact of ImageNet Model Selection on Domain Adaptation\")/[Beyer et al 2020](https://arxiv.org/abs/2006.07159#google \"Are we done with ImageNet?\") show a generalization gap but only a small one with typically better in-sample classifiers performing better out-of-sample, [Kornblith et al 2018](https://arxiv.org/abs/1805.08974#google \"Do Better ImageNet Models Transfer Better?\") show that ImageNet resnets produce multiple new SOTAs on other image datasets using finetuning transfer learning, [Lapuschkin et al 2019](https://arxiv.org/abs/1902.10178 \"Unmasking Clever Hans Predictors and Assessing What Machines Really Learn\") compares Fisher vectors (an SVM trained on SIFT features, & [BiT](https://arxiv.org/abs/1912.11370#google \"'Big Transfer (BiT): Large Scale Learning of General Visual Representations for Transfer', Kolesnikov et al 2019\") is one of a number of [scaling papers](/note/scaling \"'Machine Learning Scaling', Branwen 2021\") showing much better representations & robustness & transfer with extremely large CNNs) to CNNs on PASCAL VOC again, finding the Fishers overfit by eg. 
classifying horses based on copyright watermarks while the CNN nevertheless classifies them based on the correct parts, although the CNN may succumb to a different dataset bias by classifying airplanes based on having backgrounds of skies[^Clever-Hans]); and I believe we have good reason to expect our CNNs to also work in the wild.\n\n[^Clever-Hans]: Lapuschkin et al 2019:\n\n > The first learning machine is a model based on Fisher vectors (FV) [31, 32] trained on the PASCAL VOC 2007 image dataset [33] (see §E). The model and also its competitor, a pretrained Deep Neural Network (DNN) that we fine-tune on PASCAL VOC, show both excellent state-of-the-art test set accuracy on categories such as 'person', 'train', 'car', or 'horse' of this benchmark (see Table 3). Inspecting the basis of the decisions with LRP, however, reveals for certain images substantial divergence, as the heatmaps exhibiting the reasons for the respective classification could not be more different. Clearly, the DNN's heatmap points at the horse and rider as the most relevant features (see Figure 14). In contrast, FV's heatmap is most focused onto the lower left corner of the image, which contains a source tag. A closer inspection of the data set (of 9963 samples [33]) that typically humans never look through exhaustively, shows that such source tags appear distinctively on horse images; a striking artifact of the dataset that so far had gone unnoticed [34]. Therefore, the FV model has 'overfitted' the PASCAL VOC dataset by relying mainly on the easily identifiable source tag, which incidentally correlates with the true features, a clear case of 'Clever Hans' behavior. This is confirmed by observing that artificially cutting the source tag from horse images significantly weakens the FV model's decision while the decision of the DNN stays virtually unchanged (see Figure 14). If we take instead a correctly classified image of a Ferrari and then add to it a source tag, we observe that the FV's prediction swiftly changes from 'car' to 'horse' (cf. Figure 2a) a clearly invalid decision (see §E and Figures 15--20 for further examples and analyses)... For the classification of ships the classifier is mostly focused on the presence of water in the bottom half of an image. Removing the copyright tag or the background results in a drop of predictive capabilities. A deep neural network, pre-trained in the ImageNet dataset [93], instead shows none of these shortcomings.\n\n The airplane example is a little more debatable---the presence of a lot of blue sky in airplane images seems like a valid cue to me and not necessarily cheating:\n\n > ...The SpRAy analysis could furthermore reveal another 'Clever Hans' type behavior in our fine-tuned DNN model, which had gone unnoticed in previous manual analysis of the relevance maps. The large eigengaps in the eigenvalue spectrum of the DNN heatmaps for class \"aeroplane\" indicate that the model uses very distinct strategies for classifying aeroplane images (see Figure 26). A t-SNE visualization (Figure 28) further highlights this cluster structure. One unexpected strategy we could discover with the help of SpRAy is to identify aeroplane images by looking at the artificial padding pattern at the image borders, which for aeroplane images predominantly consists of uniform and structureless blue background. 
Note that padding is typically introduced for technical reasons (the DNN model only accepts square shaped inputs), but unexpectedly (and unwantedly) the padding pattern became part of the model's strategy to classify aeroplane images. Subsequently we observe that changing the manner in which padding is performed has a strong effect on the output of the DNN classifier (see Figures 29--32).\n\nSome real instances of dataset bias, more or less (most of these were caught by standard heldout datasets and arguably aren't the 'tank story' at all):\n\n- a particularly appropriate example is the unsuccessful [WWII Russian anti-tank dog program](!W \"Anti-tank dog#Deployment by the Soviet Union\"): a failure, among several reasons, because the dogs were trained on Russian tanks and sought *them* out rather than the enemy German tanks because the dogs recognized either the fuel smell or fuel canisters (diesel vs gasoline)\n- [\"The person concept in monkeys (_Cebus apella_)\"](/doc/psychology/1988-damato.pdf), D'Amato & Van Sant 1988\n- Google Photos in June 2015 caused a social-media fuss over mislabeling African-Americans as gorillas; Google did not explain how the Photos app made that mistake but it is presumably using a CNN and an example of either dataset bias (many more Caucasian/Asian faces leading to better performance on them and continued poor performance everywhere else) and/or a mis-specified loss function (the CNN optimizing a standard classification loss and responding to class imbalance or objective color similarity by preferring to guess 'gorilla' rather than 'human' to minimize loss, despite what ought to be a greater penalty for mistakenly classifying a human as an animal/object rather than vice versa). A similar issue occurred with Flickr in May 2015.\n- [\"Gender-From-Iris or Gender-From-Mascara?\"](https://arxiv.org/abs/1702.01304), Kuehlkamp et al 2017\n- Gidi Shperber, [\"What I've learned from Kaggle's fisheries competition\"](https://gidishperber.medium.com/what-ive-learned-from-kaggle-s-fisheries-competition-92342f9ca779) (2017-05-01): initial application of VGG ImageNet CNNs for transfer solved the fish photograph classification problem almost immediately, but failed on the submission validation set; fish categories could be predicted from the specific boat taking the photographs\n- [\"Leakage in data mining: Formulation, detection, and avoidance\"](https://pdfs.semanticscholar.org/829e/6bcabe9cc1bd334429215404a5adaefc7ade.pdf), Kaufman et al 2011 discusses the general topic and mentions a few examples from KDD-Cup\n- [Dan Piponi](https://twitter.com/sigfpe/status/919995891502551042) (2017-10-16): \"Real world example from work: hospitals specialise in different injuries so CNN for diagnosis used annotations on x-rays to ID hospital.\"\n\n - A more detailed examination of X-ray saliencies: [\"Confounding variables can degrade generalization performance of radiological deep learning models\"](https://arxiv.org/abs/1807.00431), Zech et al 2018 ([blog](https://jrzech.medium.com/what-are-radiological-deep-learning-models-actually-learning-f97a546c5b98 \"What are radiological deep learning models actually learning?\"))\n- [Thomas G. Dietterich](https://twitter.com/tdietterich/status/1154839042623594496):\n\n > We made exactly the same mistake in one of my projects on insect recognition. We photographed 54 classes of insects. Specimens had been collected, identified, and placed in vials. Vials were placed in boxes sorted by class. 
I hired student workers to photograph the specimens. Naturally they did this one box at a time; hence, one class at a time. Photos were taken in alcohol. Bubbles would form in the alcohol. Different bubbles on different days. The learned classifier was surprisingly good. But a saliency map revealed that it was reading the bubble patterns and ignoring the specimens. I was so embarrassed that I had made the oldest mistake in the book (even if it was apocryphal). Unbelievable. Lesson: always randomize even if you don't know what you are controlling for!\n- a possible case is Wu & Zhang 2016, [\"Automated Inference on Criminality using Face Images\"](https://pdfs.semanticscholar.org/1cd3/57b675a659413e8abf2eafad2a463272a85f.pdf), attempt to use CNNs to classify standardized government ID photos of Chinese people by whether the person has been arrested, the source of the criminal IDs being government publications of wanted suspects vs ordinary peoples' IDs collected online; the photos are repeatedly described as ID photos and implied to be uniform. The use of official government ID photos taken in advance of any crime would appear to eliminate one's immediate objections about dataset bias---certainly ID photos would be distinct in many ways from ordinary cropped promotional headshots---and so the results seem strong.\n\n In response to [harsh criticism](https://www.callingbullshit.org/case_studies/case_study_criminal_machine_learning.html) (some of which points are more relevant & likely than the others...), Wu & Zhang admit in their response ([\"Responses to Critiques on Machine Learning of Criminality Perceptions (Addendum of arXiv:1611.04135)\"](https://arxiv.org/abs/1611.04135)) that the dataset is not quite as implied:\n\n > All criminal ID photos are government issued, but not mug shots. To our best knowledge, they are normal government issued ID portraits like those for driver's license in USA. In contrast, most of the noncriminal ID style photos are taken officially by some organizations (such as real estate companies, law firms, etc.) for their websites. We stress that they are not selfies.\n\n While there is no direct replication testing the Wu & Zhang 2016 results that I know of, the inherent considerable differences between the two classes, which are not homogenous at all, make me highly skeptical.\n- Possible: [Winkler et al 2019](/doc/ai/nn/cnn/2019-winkler.pdf \"Association Between Surgical Skin Markings in Dermoscopic Images and Diagnostic Performance of a Deep Learning Convolutional Neural Network for Melanoma Recognition\") examine a commercial CNN (\"Moleanalyzer-Pro\"; [Haenssle et al 2018](/doc/ai/nn/cnn/2018-haenssle.pdf \"Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists\")) for skin cancer detection. Concerned by the fact that doctors sometimes use purple markers to highlight potentially-malignant skin cancers for easier examination, they compare before/after photographs of skin cancers which have been highlighted, and find that the purple highlighting increases the probability of being classified as malignant.\n\n However, it is unclear that this is a dataset bias problem, as the existing training datasets for skin cancer are realistic and already include purple marker samples[^purple]. 
The demonstrated manipulation may simply reflect the CNN using purple as a proxy for human concern, which is an informative signal and desirable if it improves classification performance in the real world on real medical cases. It is possible that the training datasets are in fact biased to some degree with too much/too little purple or that use of purple differs systematically across hospitals, and those would damage performance to some degree, but that is not demonstrated by their before/after comparison. Ideally, one would run a field trial to test the CNN's performance as a whole by using it in various hospitals and then following up on all cases to determine benign or malignant; if the classification performance drops considerably from the original training, then that implies something (possibly the purple highlighting) has gone wrong.\n- Possible: [Esteva et al 2017](/doc/ai/nn/2017-esteva.pdf \"Dermatologist-level classification of skin cancer with deep neural networks\") trains a skin cancer classifier; the final CNN performs well in independent test sets. The paper does not mention this problem but [media coverage reported](https://www.thedailybeast.com/why-doctors-arent-afraid-of-better-more-efficient-ai-diagnosing-cancer \"Why Doctors Aren't Afraid of Better, More Efficient AI Diagnosing Cancer: Just like humans, AI isn't perfect\") that rulers in photographs served as unintentional features:\n\n > He and his colleagues had one such problem in their study with rulers. When dermatologists are looking at a lesion that they think might be a tumor, they'll break out a ruler---the type you might have used in grade school---to take an accurate measurement of its size. Dermatologists tend to do this only for lesions that are a cause for concern. So in the set of biopsy images, if an image had a ruler in it, the algorithm was more likely to call a tumor malignant, because the presence of a ruler correlated with an increased likelihood a lesion was cancerous. Unfortunately, as Novoa emphasizes, the algorithm doesn't know why that correlation makes sense, so it could easily misinterpret a random ruler sighting as grounds to diagnose cancer.\n\n It's unclear how they detected this problem or how they fixed it. And like Winkler et al 2019, it's unclear if this was a problem which would reduce real-world performance (are dermatologists going to stop measuring worrisome lesions?).\n\n[^purple]: Winkler et al 2019: \"When reviewing the open-access International Skin Imaging Collaboration database, which is a source of training images for research groups, we found that a similar percentage of melanomas (52 of 2169 [2.4%]) and nevi (214 of 9303 [2.3%]) carry skin markings. Nevertheless, it seems conceivable that either an imbalance in the distribution of skin markings in thousands of other training images that were used in the CNN tested herein or the assignment of higher weights to blue markings only in lesions with specific (though unknown) accompanying features may induce a CNN to associate skin markings with the diagnosis of melanoma. 
The latter hypothesis may also explain why melanoma probability scores remained almost unchanged in many marked nevi while being increased in others.\"\n\n# Should We Tell Stories We Know Aren't True?\n\nSo the NN tank story probably didn't happen as described, but something somewhat like it *could* have happened and things sort of like it could happen now, and it is (as proven by its history) a catchy story to warn students with---it's not true but it's [\"truthy\"](!W \"Truthiness\").\nShould we still mention it to journalists or in blog posts or in discussions of AI risk, as a noble lie?\n\nI think not.\nIn general, we should promote more epistemic rigor and higher standards in an area where there is already far too much impact of fictional stories (eg. the depressing inevitability of a _Terminator_ allusion in AI risk discussions).\nNor do I consider the story particularly effective from a didactic perspective: relegating dataset bias to mythical stories does not inform the listener about how common or how serious dataset bias is, nor is it helpful for researchers investigating countermeasures and diagnostics---the LIME developers, for example, are not helped by stories about Russian tanks, but need real testcases to show that their interpretability tools work & would help machine learning developers diagnose & fix dataset bias.\n\nI also fear that telling the tank story tends to promote complacency and underestimation of the state-of-the-art by implying that NNs and AI in general are toy systems which are far from practicality & cannot work in the real world (particularly the story variants which date it relatively recently), or that such systems when they fail will fail in easily diagnosed, visible, sometimes amusing ways, ways which can be diagnosed by a human comparing the photos or applying some political reasoning to the outputs; but modern NNs are powerful, are often deployed to the real world despite the spectre of dataset bias, and do not fail in blatant ways---what we actually see with deep learning are far more concerning failure modes like \"adversarial examples\" which are quite as inscrutable as the neural nets themselves (or AlphaGo's one misjudged move resulting in its only loss to Lee Sedol). Adversarial examples are particularly insidious as the NN will work flawlessly in all the normal settings and contexts, only to fail totally when exposed to a custom adversarial input.\nMore importantly, dataset bias and failure to transfer tends to be a self-limiting problem, particularly when embedded in an ongoing system or reinforcement learning agent, since if the NN is making errors based on dataset bias, it will in effect be generating new counterexample datapoints for its next iteration.\n\n## Alternative examples\n\n
\n> There is nothing so useless as doing efficiently that which should not be done at all.\n>\n> [Peter Drucker](!W)\n
\n\nThe more troubling errors are ones where the goal itself, the reward function, is mis-specified or wrong or harmful.\nI am less worried about algorithms learning to do poorly the right thing for the wrong reasons because humans are sloppy in their data collection than I am about them learning to do well the wrong thing for the right reasons despite perfect data collection.\nWith errors or inefficiencies in the rest of the algorithm, training may simply be slower, or there may be more local optima which may temporarily trap the agent, or its final performance may be worse than it could be; these are bad things, but normal enough.\nBut when the *reward function* is wrong, the better the algorithm is, the more useless (or dangerous) it becomes at [pursuing the wrong objective](https://arxiv.org/abs/2105.14111 \"'Goal Misgeneralization in Deep Reinforcement Learning', Koch et al 2021\") because [the reward hacking scales](https://arxiv.org/abs/2210.10760#openai \"‘Scaling Laws for Reward Model Overoptimization’, Gao et al 2022\"), and this may [happen abruptly](https://arxiv.org/abs/2201.03544 \"‘The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models’, Pan et al 2022\")!\nUsing losses which have little to do with the true human utility function or decision context is far more common than serious dataset bias: people think about where their data is coming from, but they tend not to think about what the consequences of wrong classifications are.\nSuch reward function problems cannot be fixed by collecting any amount of data or making data more representative of the real world, and for large-scale systems will be more harmful.\nAnd it can be hard to avoid errors: sure, in hindsight, once you've seen the converged reward hack, you can laugh and say \"of course that particular bit of reward-shaping was wrong, how obvious now!\"---but only in hindsight.\nBefore then, the absence of the hack is just common sense: we are [blinded by our knowledge](/unseeing \"‘On Seeing Through and Unseeing: The Hacker Mindset’, Branwen 2012\"), which is a burden optimization processes do not share.\n\nUnfortunately, I know of no particularly comprehensive lists of examples of mis-specified rewards/unexpectedly bad proxy objective functions/\"reward hacking\"/\"wireheading\"/\"perverse instantiation\"^[Getting into more general economic, behavioral, or human situations would be going too far afield, but the relevant analogues are \"[principal-agent problem](!W)\", \"[perverse incentives](!W)\", \"law of [unintended consequences](!W)\", \"[Lucas critique](!W)\", \"[Goodhart’s law](!W)\", or \"[Campbell’s law](!W)\"; such alignment problems are only partially dealt with by having ground-truth evolutionary ['outer' losses](/backstop \"'Evolution as Backstop for Reinforcement Learning', Branwen 2018\"), and avoiding reward hacking remains an open problem (even in theory). 
[Speedrun](!W) gaming communities frequently provide examples of reward-hacking, particularly when games are finished faster by exploiting bugs to [sequence break](!W \"Sequence breaking\"); particularly esoteric techniques require outright hacking the [\"weird machines\"](/turing-complete#security-implications) present in many games/devices---for example, [pannenkoek2012's](!W \"pannenkoek2012\") ['parallel universes'](https://pannenkoek2012.fandom.com/wiki/Parallel_Universe) [_Super Mario 64_](!W) hack which [avoids using any jumps](https://www.youtube.com/watch?v=kpk2tdsPh0A \"SM64 - Watch for Rolling Rocks - 0.5× A Presses (Commentated)\") by exploiting an [integer overflow](!W) bug & [modulo](!W \"Modular arithmetic\") wraparound to accelerate Mario to near-infinite speed, passing through the entire map multiple times, in order to stop at the right place. ]; perhaps people can make suggestions, but a few examples I have found or recall include:\n\n- [linear programming](!W) optimization for nutritious (not necessarily palatable!) low-cost diets: [\"The cost of subsistence\"](/doc/statistics/decision/1945-stigler.pdf), Stigler 1945, [\"The Diet Problem\"](/doc/statistics/decision/1990-dantzig.pdf), Dantzig 1990, [\"Stigler’s Diet Problem Revisited\"](/doc/statistics/decision/2001-garille.pdf), Garille & Gass 2001\n\n - SMT/SAT solvers are likewise infamous for finding strictly valid yet surprising or useless solutions, which perversity is exactly what makes them so invaluable in security/formal-verification research (for example, in RISC-V verification of exceptions, discovering that it can trigger an exception by turning on a [debug unit & setting a breakpoint](https://twitter.com/oe1cxw/status/957409526940094464), or using an obscure [memory mode setting](https://twitter.com/oe1cxw/status/958704985495175169))\n- boat race reward-shaping for picking up targets results in not finishing the race at all but going in circles to hit targets: [\"Faulty Reward Functions in the Wild\"](https://openai.com/research/faulty-reward-functions), OpenAI\n- a classic 3D robot-arm NN agent, in a somewhat unusual setup where the evaluator/reward function is another NN trained to predict human evaluations, learns to move the arm to a position which *looks* like it is positioned at the goal but is actually just in between the 'camera' and the goal: [\"Learning from Human Preferences\"](https://openai.com/research/learning-from-human-preferences), Christiano et al 2017, OpenAI\n- reward-shaping a bicycle agent for not falling over & making progress towards a goal point (but not punishing for moving away) leads it to learn to circle around the goal in a physically stable loop: [\"Learning to Drive a Bicycle using Reinforcement Learning and Shaping\"](https://pdfs.semanticscholar.org/10ba/d197f1c1115005a56973b8326e5f7fc1031c.pdf), Randlov & Alstrom 1998; similar difficulties in avoiding pathological optimization were experienced by [Cook 2004](/doc/reinforcement-learning/model-free/2004-cook.pdf \"It Takes Two Neurons To Ride a Bicycle\") ([video](/doc/reinforcement-learning/2004-cook-twoneuronbicycle.avi) of policy-iteration learning to spin handle-bar to stay upright).\n- reward-shaping a soccer robot for touching the ball caused it to learn to get to the ball and \"vibrate\" touching it as fast as possible: David Andre & Astro Teller in Ng et al 1999, [\"Policy invariance under reward transformations: theory and application to reward shaping\"](http://luthuli.cs.uiuc.edu/~daf/courses/games/AIpapers/ng99policy.pdf)\n- 
environments involving walking/running/movement and rewarding movement seem to often result in the agents learning to fall over as a local optimum of speed generation, possibly bouncing around or moving at hyperspeed by exploiting any failure to conserve all quantities like energy.\n\n For example, Sims notes in one paper ([Sims 1994](http://www.karlsims.com/papers/siggraph94.pdf \"Evolving Virtual Creatures\")) that \"It is important that the physical simulation be reasonably accurate when optimizing for creatures that can move within it. Any bugs that allow energy leaks from non-conservation, or even round-off errors, will inevitably be discovered and exploited by the evolving creatures...speed is used as the selection criteria, but the vertical component of velocity is ignored. For land environments, it can be necessary to prevent creatures from generating high velocities by simply falling over.\" Sims mentions round-off errors as a possibility, and apparently this happened: according to [Danny Hillis](!W), \"early walking machines evolved on the Connection Machine \\[[CM-5](https://en.wikipedia.org/wiki/Connection_Machine#Designs)\\] took advantage of an obscure round-off error in the floating-point unit that the human programmers did not even know existed.\" ([Taylor & Massey 2001](/doc/ai/2001-taylor.pdf#page=6 \"‘Recent Developments in the Evolution of Morphologies and Controllers for Physically Simulated Creatures § A Re-implementation of Sims’ Work Using the MathEngine Physics Engine’, Taylor & Massey 2001 (page 6)\") attempted to reimplement Sims's work, and had to implement a large range of checks on their creatures because they kept breaking the physics engine they used.)\n\n Combined with [\"3-D Morphology\"](https://www.cs.uml.edu/~holly/91.549/readings/sims-alife94.pdf \"'Evolving 3D Morphology and Behavior by Competition', Sims 1994\"), Sims discovered that without height limits, the creatures just became as tall as possible and fell over; and if the conservation-of-momentum was not exact, creatures could evolve 'paddles' and paddle themselves at high velocity. 
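\n\n The dynamic is easy to reproduce in miniature (a toy sketch of my own, not Sims's actual setup): score a 'creature' on mean speed in a world containing nothing but drag, where exact physics says it can never exceed its initial push; but integrate the drag with a coarse explicit-Euler step, and the instability injects energy, which a black-box optimizer over the single control parameter promptly finds and exploits:\n\n ```python\n import random\n \n # Toy illustration (hypothetical, not Sims's setup): the 'creature' is scored on mean speed;\n # its only action is how hard to push off. The world has quadratic drag, so under exact\n # physics it can never move faster than its initial push. But the explicit-Euler update\n # v -= dt*mu*v*|v| is unstable once dt*mu*|v| > 2, and the optimizer exploits that energy leak.\n DT, MU, STEPS = 0.05, 1.0, 400      # coarse timestep = the 'physics bug'\n \n def mean_speed(push, dt=DT):\n     v, total = push, 0.0\n     for _ in range(STEPS):\n         v -= dt * MU * v * abs(v)   # explicit drag update; overshoots and flips sign for large |v|\n         v = max(min(v, 1e6), -1e6)  # crude cap so floats don't overflow; the exploit still dominates\n         total += abs(v)\n     return total / STEPS\n \n best = max((random.uniform(0.0, 100.0) for _ in range(1000)), key=mean_speed)  # random-search 'evolution'\n print(f\"preferred push: {best:.1f}\")\n print(f\"mean speed, buggy coarse step: {mean_speed(best):.2e}\")          # ~1e6: 'free' energy from the integrator\n print(f\"mean speed, fine step (honest physics): {mean_speed(best, dt=0.001):.2e}\")  # less than the push itself\n ```\n\n 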
(Evolving similar exploitation of rounding-off has been done by OpenAI in 2017 to turn [apparently linear neural networks into nonlinear ones](https://openai.com/research/nonlinear-computation-in-deep-linear-networks \"Nonlinear Computation in Deep Linear Networks\"); [Jaderberg et al 2019](/doc/reinforcement-learning/exploration/2019-jaderberg.pdf#deepmind \"Human-level performance in 3D multiplayer games with population-based reinforcement learning\") [appears to have had](https://www.science.org/content/article/artificial-intelligence-learns-teamwork-deadly-game-capture-flag \"Artificial intelligence learns teamwork in a deadly game of capture the flag\") a similar momentum bug in its _Quake_ simulator: \"In one test, the bots invented a completely novel strategy, exploiting a bug that let teammates give each other a speed boost by shooting them in the back.\")\n- [Popov et al 2017](https://arxiv.org/abs/1704.03073#deepmind \"Data-efficient Deep Reinforcement Learning for Dexterous Manipulation\"), training a simulated robot gripper arm to stack objects like Legos, included reward shaping; pathologies included \"hovering\" and for a reward-shaping for lifting the bottom face of the top block upwards, DDPG learned to knock the blocks over, thereby (temporarily) elevating the bottom of the top block and receiving the reward:\n\n > We consider three different composite rewards in additional to the original sparse task reward:\n >\n > 1. ***Grasp shaping***: *Grasp brick 1* and *Stack brick 1*, i.e. the agent receives a reward of 0.25 when the brick 1 has been grasped and a reward of 1.0 after completion of the full task.\n > 2. ***Reach and grasp shaping***: *Reach brick 1*, *Grasp brick 1* and *Stack brick 1*, i.e. the agent receives a reward of 0.125 when being close to brick 1, a reward of 0.25 when brick 1 has been grasped, and a reward of 1.0 after completion of the full task.\n > 3. ***Full composite shaping***: the sparse reward components as before in combination with the distance-based smoothly varying components.\n >\n > Figure 5 shows the results of learning with the above reward functions (blue traces). The figure makes clear that learning with the sparse reward only does not succeed for the full task. Introducing an intermediate reward for grasping allows the agent to learn to grasp but learning is very slow. The time to successful grasping can be substantially reduced by giving a distance based reward component for reaching to the first brick, but learning does not progress beyond grasping. Only with an additional intermediate reward component as in continuous reach, grasp, stack the full task can be solved.\n >\n > Although the above reward functions are specific to the particular task, we expect that the idea of a composite reward function can be applied to many other tasks thus allowing learning for to succeed even for challenging problems. Nevertheless, great care must be taken when defining the reward function. We encountered several unexpected failure cases while designing the reward function components: eg. reach and grasp components leading to a grasp unsuitable for stacking, agent not stacking the bricks because it will stop receiving the grasping reward before it receives reward for stacking and the agent flips the brick because it gets a grasping reward calculated with the wrong reference point on the brick. 
We show examples of these [in the video](https://www.youtube.com/watch?v=8QnD8ZM0YCo).\n- RL agents using learned model-based planning paradigms such as the model predictive control are noted to have issues with the planner essentially exploiting the learned model by choosing a plan going through the worst-modeled parts of the environment and producing unrealistic plans using teleportation, eg. Mishra et al 2017, [\"Prediction and Control with Temporal Segment Models\"](https://arxiv.org/pdf/1703.04070.pdf#page=3) who note:\n\n > If we attempt to solve the optimization problem as posed in (2), the solution will often attempt to apply action sequences outside the manifold where the dynamics model is valid: these actions come from a very different distribution than the action distribution of the training data. This can be problematic: the optimization may find actions that achieve high rewards under the model (by exploiting it in a regime where it is invalid) but that do not accomplish the goal when they are executed in the real environment.\n >\n > ...Next, we compare our method to the baselines on trajectory and policy optimization. Of interest is both the actual reward achieved in the environment, and the difference between the true reward and the expected reward under the model. If a control algorithm exploits the model to predict unrealistic behavior, then the latter will be large. We consider two tasks....Under each model, the optimization finds actions that achieve similar model-predicted rewards, but the baselines suffer from large discrepancies between model prediction and the true dynamics. Qualitatively, we notice that, on the pushing task, the optimization exploits the LSTM and one-step models to predict unrealistic state trajectories, such as the object moving without being touched or the arm passing through the object instead of colliding with it. Our model consistently performs better, and, with a latent action prior, the true execution closely matches the model's prediction. When it makes inaccurate predictions, it respects physical invariants, such as objects staying still unless they are touched, or not penetrating each other when they collide\n\n This is similar to Sims's issues, or current issues in training walking or running agents in environments like MuJoCo where it is easy for them to learn odd gaits like hopping ([Lillicrap et al 2016](https://arxiv.org/abs/1509.02971#deepmind \"Continuous Control with Deep Reinforcement Learning\") adds extra penalties for impacts to try to avoid this) or jumping (eg. [Stelmaszczyk's](https://blog.mlreview.com/our-nips-2017-learning-to-run-approach-b80a295d3bb5 \"Our 'NIPS 2017: Learning to Run' approach\") attempts at reward shaping a skeleton agent) or flailing around wildly ([Heess et al 2017](https://arxiv.org/abs/1707.02286#deepmind \"Emergence of Locomotion Behaviours in Rich Environments\") add random pushes/shoves to the environment to try to make the agent learn more generalizable policies) which may work quite well in the specific simulation but not elsewhere. (To some degree this is beneficial for driving exploration in poorly-understood regions, so it's not all bad.) 
[Christine Barron](https://connect.unity.com/p/pancake-bot \"Pass the Butter // Pancake bot\"), working on a pancake-cooking robot-arm simulation, ran into reward-shaping problems: rewarding for each timestep without the pancake on the floor teaches the agent to hurl the pancake into the air as hard as possible; and for the passing-the-butter agent, rewarding for getting close to the goal produces the same close-approach-but-avoidance behavior to maximize reward.\n- A curious lexicographic-preference raw-RAM NES AI algorithm learns to pause the game to never lose at Tetris: Murphy 2013, [\"The First Level of Super Mario Bros. is Easy with Lexicographic Orderings and Time Travel... after that it gets a little tricky\"](http://tom7.org/mario/)\n- RL agent in Udacity self-driving car rewarded for speed learns to spin in circles: [Matt Kelcey](https://twitter.com/mat_kelcey/status/886101319559335936)\n- NASA Mars mission planning, optimizing food/water/electricity consumption for total man-days survival, yields an optimal plan of killing 2/3 crew & keep survivor alive as long as possible: [iand675](https://lobste.rs/s/1d7whd/tales_from_trenches_ai_disaster_stories#c_le6tsr)\n- Doug Lenat's [Eurisko](!W) famously had issues with \"parasitic\" heuristics, due to the self-modifying ability, edited important results to claim credit and be rewarded, part of a class of such wireheading heuristics that Lenat made the Eurisko core unmodifiable: [\"EURISKO: A program that learns new heuristics and domain concepts: the nature of heuristics III: program design and results\"](https://pdfs.semanticscholar.org/24c7/4c798100d69555ace06145bc1ba4fd6df35d.pdf), Lenat 1983 (pg90)\n- genetic algorithms for image classification evolves timing-attack to infer image labels based on hard drive storage location: https://news.ycombinator.com/item?id=6269114\n- training a dog to roll over results in [slamming against the wall](https://www.lesswrong.com/posts/5o3CxyvZ2XKawRB5w/machine-learning-and-unintended-consequences?commentId=tKdjcCZAtbE6vJq4v); dolphins rewarded for finding trash & dead seagulls in their tank learned to [manufacture trash & hunt living seagulls](https://www.theguardian.com/science/2003/jul/03/research.science \"Why dolphins are deep thinkers: The more we study dolphins, the brighter they turn out to be\") for more rewards\n- circuit design with genetic/evolutionary computation:\n\n - an attempt to evolve a circuit on an FPGA, to discriminate audio tones of 1kHz & 10kHz without using any timing elements, evolved a design which depended on disconnected circuits in order to work: [\"An evolved circuit, intrinsic in silicon, entwined with physics\"](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.50.9691&rep=rep1&type=pdf), Thompson 1996. (\"Possible mechanisms include interactions through the power-supply wiring, or electromagnetic coupling.\" The evolved circuit is sensitive to room temperature variations 23--43C, only working perfectly over the 10C range of room temperature it was exposed to during the 2 weeks of evolution. 
It is also sensitive to the exact location on the FPGA, degrading when shifted to a new position; further finetuning evolution fixes that, but then is vulnerable when shifted back to the original location.)\n - an attempt to evolve an oscillator or a timer wound up evolving a circuit which picked up radio signals from the lab PCs (although since the circuits *did* work at their assigned function as the human intended, should we consider this a case of 'dataset bias' where the 'dataset' is the local lab environment?): [\"The evolved radio and its implications for modelling the evolution of novel sensors\"](https://pdfs.semanticscholar.org/0adf/aaeebbf36f34ac97770adc2f52619a5d45c6.pdf), Jon Bird and Paul Layzell 2002\n- training a \"minitaur\" bot in simulation to carry a ball or duck on its back, CMA-ES discovers [it can drop the ball into a leg joint and then wiggle across the floor](https://blog.otoro.net/2017/11/12/evolving-stable-strategies/ \"Evolving Stable Strategies\") without the ball ever dropping\n- [CycleGAN](https://arxiv.org/abs/1703.10593#bair \"‘CycleGAN: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks’, Zhu et al 2017\"), a cooperative GAN architecture for converting images from one genre to another (eg. horses⟺zebras), has a loss function that rewards accurate reconstruction of images from its transformed version; CycleGAN turns out to partially solve the task by, in addition to the cross-domain analogies it learns, steganographically hiding autoencoder-style data about the original image invisibly inside the transformed image to assist the reconstruction of details ([Chu et al 2017](https://arxiv.org/abs/1712.02950 \"CycleGAN, a Master of Steganography\"))\n\n A researcher in 2020 working on art colorization told me of an interesting similar behavior: his automatically-grayscaled images were failing to train the NN well, and he concluded that this was because grayscaling a color image produces many shades of gray in a way that human artists do not, and that the formula used by OpenCV for RGB → grayscale permits only a few colors to map onto any given shade of gray, enabling accurate guessing of the original color! Such issues might require learning a grayscaler, similar to superresolution needing learned downscalers ([Sun & Chen 2019](https://arxiv.org/abs/1907.12904 \"CAR: Learned Image Downscaling for Upscaling using Content Adaptive Resampler\")).\n- the ROUGE machine translation metric, based on matching sub-phrases, is typically used with RL techniques since it is a non-differentiable loss; [Salesforce](https://www.salesforce.com/products/einstein/ai-research/tl-dr-reinforced-model-abstractive-summarization/ \"'Your TL;DR by an AI: A Deep Reinforced Model for Abstractive Summarization', Paulus et al 2017\") ([Paulus et al 2017](https://arxiv.org/abs/1705.04304 \"A Deep Reinforced Model for Abstractive Summarization\")) notes that an effort at a ROUGE-only summarization NN produced largely gibberish summaries, and had to add in another loss function to get high-quality results\n- Alex Irpan [writes of 3 anecdotes](https://www.alexirpan.com/2018/02/14/rl-hard.html \"Deep Reinforcement Learning Doesn't Work Yet\"):\n\n > In talks with other RL researchers, I've heard several anecdotes about the novel behavior they've seen from improperly defined rewards.\n >\n > - A coworker is teaching an agent to navigate a room. The episode terminates if the agent walks out of bounds. He didn't add any penalty if the episode terminates this way. 
The final policy learned to be suicidal, because negative reward was plentiful, positive reward was too hard to achieve, and a quick death ending in 0 reward was preferable to a long life that risked negative reward.\n > - A friend is training a simulated robot arm to reach towards a point above a table. It turns out the point was defined *with respect to the table*, and the table wasn't anchored to anything. The policy learned to slam the table really hard, making the table fall over, which moved the target point too. The target point *just so happened* to fall next to the end of the arm.\n > - A researcher gives a talk about using RL to train a simulated robot hand to pick up a hammer and hammer in a nail. Initially, the reward was defined by how far the nail was pushed into the hole. Instead of picking up the hammer, the robot used its own limbs to punch the nail in. So, they added a reward term to encourage picking up the hammer, and retrained the policy. They got the policy to pick up the hammer...but then it threw the hammer at the nail instead of actually using it.\n >\n > Admittedly, these are all secondhand accounts, and I haven't seen videos of any of these behaviors. However, none of it sounds implausible to me. I've been burned by RL too many times to believe otherwise...I've taken to imagining deep RL as a demon that's deliberately misinterpreting your reward and actively searching for the laziest possible local optima. It's a bit ridiculous, but I've found it's actually a productive mindset to have.\n- [Chrabaszcz et al 2018](https://arxiv.org/abs/1802.08842 \"Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari\"): an evolutionary strategies RL in the ALE game [_Q\\*bert_](!W \"Q*bert\") finds that it can steadily earn points by committing 'suicide' to lure an enemy into following it; more interestingly, it also discovers what appears to be a previously unknown bug where a sequence of jumps will, semi-randomly, permanently force the game into a state where the entire level begins flashing and the score increases rapidly & indefinitely until the game is reset ([video](https://www.youtube.com/watch?v=meE5aaRJ0Zs?t=14s \"Canonical ES finds a bug in Q*bert (Full)\"))\n- [Lapuschkin et al 2019](https://arxiv.org/abs/1902.10178 \"Unmasking Clever Hans Predictors and Assessing What Machines Really Learn\"){#lapuschkin-et-al-2019-3} notes a borderline case in the ALE pinball game where the 'nudge' ability is unlimited (unlike all real pinball machines) and a DQN can learn to score arbitrarily by the ball budging over a switch repeatedly:\n\n > The second showcase example studies neural network models (see Figure 5 for the network architecture) trained to play Atari games, here Pinball. As shown in [5], the DNN achieves excellent results beyond human performance. Like for the previous example, we construct LRP heatmaps to visualize the DNN's decision behavior in terms of pixels of the pinball game. Interestingly, after extensive training, the heatmaps become focused on few pixels representing high-scoring switches and loose track of the flippers. 
A subsequent inspection of the games in which these particular LRP heatmaps occur, reveals that DNN agent firstly moves the ball into the vicinity of a high-scoring switch without using the flippers at all, then, secondly, \"nudges\" the virtual pinball table such that the ball infinitely triggers the switch by passing over it back and forth,without causing a tilt of the pinball table (see Figure 2b and Figure 6 for the heatmaps showing this point, and also Supplementary Video 1). Here, the model has learned to abuse the \"nudging\" threshold implemented through the tilting mechanism in the Atari Pinball software. From a pure game scoring perspective, it is indeed a rational choice to exploit any game mechanism that is available. In a real pinball game, however, the player would go likely bust since the pinball machinery is programmed to tilt after a few strong movements of the whole physical machine.\n- [\"Trial without Error: Towards Safe Reinforcement Learning via Human Intervention\"](https://arxiv.org/abs/1707.05173), Saunders et al 2017; the [blog writeup](https://owainevans.github.io/blog/hirl_blog.html \"This post explains the paper Trial without Error: Towards Safe RL with Human Intervention, which was authored by William Saunders, Girish Sastry, Andreas Stuhlmüller and Owain Evans.\") notes:\n\n > The Road Runner results are especially interesting. Our goal is to have the agent learn to play Road Runner without losing a single life on Level 1 of the game. Deep RL agents are known to discover a 'Score Exploit' in Road Runner: they learn to intentionally kill themselves in a way that (paradoxically) earns greater reward. Dying at a precise time causes the agent to repeat part of Level 1, where it earns more points than on Level 2. This is a local optimum in policy space that a human gamer would never be stuck in.\n >\n > Ideally, our Blocker would prevent all deaths on Level 1 and hence eliminate the Score Exploit. However, through random exploration the agent may hit upon ways of dying that \"fool\" our Blocker (because they look different from examples in its training set) and hence learn a new version of the Score Exploit. In other words, the agent is implicitly performing a random search for adversarial examples for our Blocker (which is a convolutional neural net)...In Road Runner we did not achieve zero catastrophes but were able to reduce the rate of deaths per frame from 0.005 (with no human oversight at all) to 0.0001.\n- [Toromanoff et al 2019](https://arxiv.org/abs/1908.04683 \"Is Deep Reinforcement Learning Really Superhuman on Atari?\") note various bugs in the ALE games, but also a new infinite loop for maximizing scores:\n\n > Finally, we discovered that on some games the actual optimal strategy is by doing a loop over and over giving a small amount of reward. In _Elevator Action_ the agent learn to stay at the first floor and kill over and over the first enemy. This behavior cannot be seen as an actual issue as the agent is basically optimizing score but this is definitely not the intended goal. A human player would never perform this way.\n- [Le Paine et al 2019's](https://arxiv.org/abs/1909.01387#deepmind \"'R2D3: Making Efficient Use of Demonstrations to Solve Hard Exploration Problems', Paine et al 2019\") [R2D3](https://www.deepmind.com/publications/making-efficient-use-of-demonstrations-to-solve-hard-exploration-problems) writeup notes:\n\n > *Wall Sensor Stack*: The original Wall Sensor Stack environment had a bug that the R2D3 agent was able to exploit. 
We fixed the bug and verified the agent can learn the proper stacking behavior.\n >\n > ...Another desirable property of our approach is that our agents are able to learn to outperform the demonstrators, and in some cases even to discover strategies that the demonstrators were not aware of. In one of our tasks the agent is able to discover and exploit a bug in the environment in spite of all the demonstrators completing the task in the intended way...R2D3 performed better than our average human demonstrator on Baseball, Drawbridge, Navigate Cubes and the Wall Sensor tasks. The behavior on Wall Sensor Stack in particular is quite interesting. On this task R2D3 found a completely different strategy than the human demonstrators by exploiting a bug in the implementation of the environment. The intended strategy for this task is to stack two blocks on top of each other so that one of them can remain in contact with a wall mounted sensor, and this is the strategy employed by the demonstrators. However, due to a bug in the environment the strategy learned by R2D3 was to trick the sensor into remaining active even when it is not in contact with the key by pressing the key against it in a precise way.\n- [\"Emergent Tool Use From Multi-Agent Autocurricula\"](https://arxiv.org/abs/1909.07528#openai), Baker et al 2019:\n\n > We originally believed defending against ramp use would be the last stage of emergence in this environment; however, we were surprised to find that yet two more qualitatively new strategies emerged. After 380 million total episodes of training, the seekers learn to bring a box to the edge of the play area where the hiders have locked the ramps. The seekers then jump on top of the box and *surf* it to the hiders' shelter; this is possible because the environment allows agents to move together with the box regardless of whether they are on the ground or not. In response, the hiders learn to lock all of the boxes in place before building their shelter.\n\n [OA blog post](https://openai.com/research/emergent-tool-use#surprisingbehaviors \"‘Emergent Tool Use from Multi-Agent Interaction § Surprising behavior’, Baker et al 2019\"){.include-annotation}\n- Ziegler et al 2019: fine-tune trained an English text generation model based on human ratings for preference-learning; they provide a curious example of a reward specification bug. Here, the reward was accidentally negated and a new run began overnight while the devs slept; this reversal, rather than resulting in nonsense, resulted in (literally) perversely coherent behavior of emitting obscenities to maximize the new score:\n\n [blog](https://openai.com/research/fine-tuning-gpt-2#bugscanoptimizeforbadbehavior \"‘Fine-Tuning GPT-2 from Human Preferences § Bugs can optimize for bad behavior’, Ziegler et al 2019\"){.include-annotation}\n- [Custard Smingleigh](https://twitter.com/smingleigh/status/1060325665671692288):\n\n > I hooked a neural network up to my [Roomba](!W) 650. I wanted it to learn to navigate without bumping into things, so I set up a reward scheme to encourage speed and discourage hitting the bumper sensors.\n >\n > It learned to drive backwards, because there are no bumpers on the back.\n\n# See Also\n\n
\n- [Why Tool AIs Want to Be Agent AIs](/tool-ai \"AIs limited to purely computational inferential tasks (Tool AIs) supporting humans will be less intelligent, efficient, and economically valuable than more autonomous reinforcement-learning AIs (Agent AIs) who act on their own and learn to take actions over choice of computation/data/training/architecture/hyperparameters/external-resource use\"){.backlink-not}\n- [Surprisingly Turing-Complete](/turing-complete \"A catalogue of software constructs, languages, or APIs which are unexpectedly Turing-complete; implications for security and reliability\"){.backlink-not}\n
\n\n# External Links\n\n- [\"Concrete Problems in AI Safety\"](https://arxiv.org/abs/1606.06565), Amodei et al 2016\n- [\"Edge instantiation\"](https://arbital.com/p/edge_instantiation/)/[\"Nearest unblocked strategy\"](https://arbital.com/p/nearest_unblocked/)\n- [\"Adversarial Examples Are Not Bugs, They Are Features\"](https://arxiv.org/abs/1905.02175), Ilyas et al 2019\n- [\"Specification gaming: the flip side of AI ingenuity\"](https://www.deepmind.com/blog/specification-gaming-the-flip-side-of-ai-ingenuity), Krakovna et al 2020\n- Discussion: [/r/machinelearning](https://www.reddit.com/r/MachineLearning/comments/76qua8/d_that_urban_legend_about_neural_nets_tanks/), [HN](https://news.ycombinator.com/item?id=15485538)\n", "id": "e825fcbc17e4cfb633cd6493bcea5291"}