{"source": "gwern", "url": "https://www.gwern.net/Scaling-hypothesis.page", "title": "\"The Scaling Hypothesis\"", "authors": "Gwern Branwen", "date_published": "n/a", "text": "---\ntitle: \"The Scaling Hypothesis\"\ndescription: \"On GPT-3: meta-learning, scaling, implications, and deep theory. The scaling hypothesis: neural nets absorb data & compute, generalizing and becoming more Bayesian as problems get harder, manifesting new abilities even at trivial-by-global-standards-scale. The deep learning revolution has begun as foretold.\"\nthumbnail: /doc/ai/nn/transformer/gpt/2020-brown-gpt3-figure13-meanperformancescalingcurve.png\nthumbnailText: \"Figure 1.3 from Brown et al 2020 (OpenAI, GPT-3), showing roughly log-scaling of GPT-3 parameter/compute size vs benchmark performance on all text/natural language benchmarks test.\"\ncreated: 2020-05-28\nmodified: 2022-01-02\nstatus: finished\nprevious: /newsletter/2020/05\nnext: /fiction/clippy\nimportance: 10\nconfidence: likely\ncssExtension: drop-caps-kanzlei\n...\n\n