# The Bitter Lesson
## Rich Sutton
### March 13, 2019
The biggest lesson that can be read from 70 years of AI research is
that general methods that leverage computation are ultimately the most
effective, and by a large margin. The ultimate reason for this is
Moore's law, or rather its generalization of continued exponentially
falling cost per unit of computation. Most AI research has been
conducted as if the computation available to the agent were constant
(in which case leveraging human knowledge would be one of the only ways
to improve performance) but, over a slightly longer time than a typical
research project, massively more computation inevitably becomes
available. Seeking an improvement that makes a difference in the
shorter term, researchers seek to leverage their human knowledge of the
domain, but the only thing that matters in the long run is the
leveraging of computation. These two need not run counter to each
other, but in practice they tend to. Time spent on one is time not
spent on the other. There are psychological commitments to investment
in one approach or the other. And the human-knowledge approach tends to
complicate methods in ways that make them less suited to taking
advantage of general methods leveraging computation. There were
many examples of AI researchers' belated learning of this bitter
lesson,
and it is instructive to review some of the most prominent.
In computer chess, the methods that defeated the world champion,
Kasparov, in 1997, were based on massive, deep search. At the time,
this was looked upon with dismay by the majority of computer-chess
researchers who had pursued methods that leveraged human understanding
of the special structure of chess. When a simpler, search-based
approach with special hardware and software proved vastly more
effective, these human-knowledge-based chess researchers were not good
losers. They said that "brute force" search may have won this time,
but it was not a general strategy, and anyway it was not how people
played chess. These researchers wanted methods based on human input to
win and were disappointed when they did not.
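To make the contrast concrete, here is a minimal sketch of the kind of depth-limited game-tree search that such "brute force" programs scale up. The callbacks (`evaluate`, `moves`, `apply_move`) are hypothetical placeholders, not Deep Blue's actual design; the point is only that the method is general and gets stronger with more compute (deeper search), not with more chess knowledge.

```python
# A minimal sketch of depth-limited alpha-beta search. The state/move interface
# is an illustrative assumption, not any real chess engine's implementation.

def alphabeta(state, depth, alpha, beta, maximizing, evaluate, moves, apply_move):
    """Return a heuristic value for `state`, searching `depth` plies ahead.
    `moves(state)` must return a list of legal moves."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         evaluate, moves, apply_move))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the opponent will never allow this line
                break
        return value
    else:
        value = float("inf")
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         evaluate, moves, apply_move))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```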
A similar pattern of research progress was seen in computer Go, only
delayed by a further 20 years. Enormous initial efforts went into
avoiding search by taking advantage of human knowledge, or of the
special features of the game, but all those efforts proved irrelevant,
or worse, once search was applied effectively at scale. Also important
was the use of learning by self play to learn a value function (as it
was in many other games and even in chess, although learning did not
play a big role in the 1997 program that first beat a world champion).
Learning by self play, and learning in general, is like search in that
it enables massive computation to be brought to bear. Search and
learning are the two most important classes of techniques for utilizing
massive amounts of computation in AI research. In computer Go, as in
computer chess, researchers' initial effort was directed towards
utilizing human understanding (so that less search was needed) and only
much later was much greater success had by embracing search and
learning.
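As an illustration of the self-play idea, here is a toy temporal-difference learner for a value function. The linear value function, the random move choice standing in for search, and the `game` interface are all simplifying assumptions for this sketch; systems like AlphaGo combine learned values with large-scale search, and nothing here reflects their actual architecture.

```python
# A toy sketch of learning a value function from self-generated games via TD(0).
# The `game` interface and linear features are illustrative assumptions.

import random

def td_learn_value(game, features, weights, episodes=1000, lr=0.01):
    """TD(0) updates for a linear value function V(s) = w . phi(s),
    trained on games the program plays against itself."""
    for _ in range(episodes):
        state = game.initial_state()
        while not game.is_terminal(state):
            move = random.choice(game.legal_moves(state))  # stand-in for real search
            nxt = game.apply(state, move)
            v = sum(w * f for w, f in zip(weights, features(state)))
            # Bootstrap target: final outcome at the end of the game,
            # otherwise the current estimate of the successor state.
            target = (game.outcome(nxt) if game.is_terminal(nxt)
                      else sum(w * f for w, f in zip(weights, features(nxt))))
            td_error = target - v            # move V(state) toward the target
            phi = features(state)
            for i in range(len(weights)):
                weights[i] += lr * td_error * phi[i]
            state = nxt
    return weights
```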
In speech recognition, there was an early competition, sponsored by
DARPA, in the 1970s. Entrants included a host of special methods that
took
advantage of human knowledge---knowledge of words, of phonemes, of the
human vocal tract, etc. On the other side were newer methods that were
more statistical in nature and did much more computation, based on
hidden Markov models (HMMs). Again, the statistical methods won out
over the human-knowledge-based methods. This led to a major change in
all of natural language processing, gradually over decades, where
statistics and computation came to dominate the field. The recent rise
of deep learning in speech recognition is the most recent step in this
consistent direction. Deep learning methods rely even less on human
knowledge, and use even more computation, together with learning on
huge training sets, to produce dramatically better speech recognition
systems. As in the games, researchers always tried to make systems that
worked the way the researchers thought their own minds worked---they
tried to put that knowledge in their systems---but it proved ultimately
counterproductive, and a colossal waste of researchers' time, when,
through Moore's law, massive computation became available and a means
was found to put it to good use.
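For concreteness, here is the forward algorithm for a discrete hidden Markov model, the dynamic-programming core of those statistical recognizers. The two-state toy tables below are invented for illustration and bear no relation to a real acoustic model.

```python
# A compact sketch of the HMM forward algorithm: compute the probability of an
# observation sequence by summing over all hidden state paths.

def forward(observations, states, prior, trans, emit):
    """Return P(observations) under a discrete HMM via dynamic programming."""
    # Initialization: probability of starting in s and emitting the first symbol.
    alpha = {s: prior[s] * emit[s][observations[0]] for s in states}
    # Recursion: fold in one observation at a time.
    for obs in observations[1:]:
        alpha = {s: emit[s][obs] * sum(alpha[p] * trans[p][s] for p in states)
                 for s in states}
    return sum(alpha.values())

# Toy usage: two hidden phoneme-like states emitting two acoustic symbols.
states = ["A", "B"]
prior = {"A": 0.6, "B": 0.4}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
print(forward(["x", "y", "x"], states, prior, trans, emit))  # ~0.109
```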
In computer vision, there has been a similar pattern. Early methods
conceived of vision as searching for edges, or generalized cylinders,
or in terms of SIFT features. But today all this is discarded. Modern
deep-learning neural networks use only the notions of convolution and
certain kinds of invariances, and perform much better.
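As a concrete picture of the one notion that survived, here is convolution in its plainest form: the same small filter applied at every position of an image, which is what gives convolutional networks their translation invariance. This pure-Python sketch is for illustration only; real networks run highly optimized versions of this operation.

```python
# A minimal sketch of the convolution operation underlying modern vision
# networks (strictly, cross-correlation, as deep-learning libraries implement).

def conv2d(image, kernel):
    """Valid (no-padding) 2-D cross-correlation of `image` with `kernel`,
    both given as lists of lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out

# Toy usage: a vertical-edge detector over a tiny image.
image = [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
kernel = [[-1, 1], [-1, 1]]
print(conv2d(image, kernel))  # responds strongly at the 0 -> 1 boundary
```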
This is a big lesson. As a field, we still have not thoroughly learned
it, as we are continuing to make the same kind of mistakes. To see
this, and to effectively resist it, we have to understand the appeal of
these mistakes. We have to learn the bitter lesson that building in how
we think we think does not work in the long run. The bitter lesson is
based on the historical observations that 1) AI researchers have often
tried to build knowledge into their agents, 2) this always helps in the
short term, and is personally satisfying to the researcher, but 3) in
the long run it plateaus and even inhibits further progress, and 4)
breakthrough progress eventually arrives by an opposing approach based
on scaling computation by search and learning. The eventual success is
tinged with bitterness, and often incompletely digested, because it is
success over a favored, human-centric approach.
One thing that should be learned from the bitter lesson is the great
power of general purpose methods, of methods that continue to scale
with increased computation even as the available computation becomes
very great. The two methods that seem to scale arbitrarily in this way
are search and learning.
The second general point to be learned from the bitter lesson is that
the actual contents of minds are tremendously, irredeemably complex; we
should stop trying to find simple ways to think about the contents of
minds, such as simple ways to think about space, objects, multiple
agents, or symmetries. All these are part of the arbitrary,
intrinsically-complex, outside world. They are not what should be built
in, as their complexity is endless; instead we should build in only the
meta-methods that can find and capture this arbitrary complexity.
Essential to these methods is that they can find good approximations,
but the search for them should be by our methods, not by us. We want AI
agents that can discover like we can, not which contain what we have
discovered. Building in our discoveries only makes it harder to see how
the discovering process can be done.
# The Intelligence Age
September 23, 2024

In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents.
This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.
We are more capable not because of genetic change, but because we benefit from the infrastructure of society being way smarter and more capable than any one of us; in an important sense, society itself is a form of advanced intelligence. Our grandparents – and the generations that came before them – built and achieved great things. They contributed to the scaffolding of human progress that we all benefit from. AI will give people tools to solve hard problems and help us add new struts to that scaffolding that we couldn’t have figured out on our own. The story of progress will continue, and our children will be able to do things we can’t.
It won’t happen all at once, but we’ll soon be able to work with AI that helps us accomplish much more than we ever could without AI; eventually we can each have a personal AI team, full of virtual experts in different areas, working together to create almost anything we can imagine. Our children will have virtual tutors who can provide personalized instruction in any subject, in any language, and at whatever pace they need. We can imagine similar ideas for better healthcare, the ability to create any kind of software someone can imagine, and much more.
With these new abilities, we can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy – there are plenty of miserable rich people – but it would meaningfully improve the lives of people around the world.
Here is one narrow way to look at human history: after thousands of years of compounding scientific discovery and technological progress, we have figured out how to melt sand, add some impurities, arrange it with astonishing precision at extraordinarily tiny scale into computer chips, run energy through it, and end up with systems capable of creating increasingly capable artificial intelligence.
This may turn out to be the most consequential fact about all of history so far. It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.
How did we get to the doorstep of the next leap in prosperity?
In three words: deep learning worked.
In 15 words: deep learning worked, got predictably better with scale, and we dedicated increasing resources to it.
That’s really it; humanity discovered an algorithm that could really, truly learn any distribution of data (or really, the underlying “rules” that produce any distribution of data). To a shocking degree of precision, the more compute and data available, the better it gets at helping people solve hard problems. I find that no matter how much time I spend thinking about this, I can never really internalize how consequential it is.
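"Got predictably better with scale" refers to the empirical scaling laws, under which loss falls roughly as a power law in compute. The sketch below just evaluates that functional form; the exponent and coefficient are made-up illustrative assumptions, not measurements from any real model.

```python
# A toy power-law scaling curve: loss shrinks smoothly and predictably as
# compute grows. The constants a and b below are invented for illustration.

def predicted_loss(compute, a=10.0, b=0.05):
    """Hypothetical scaling law L(C) = a * C**(-b)."""
    return a * compute ** (-b)

for c in [1e18, 1e20, 1e22, 1e24]:  # training compute in FLOPs (illustrative)
    print(f"{c:.0e} FLOPs -> predicted loss {predicted_loss(c):.2f}")
```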
There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.
AI models will soon serve as autonomous personal assistants who carry out specific tasks on our behalf, like coordinating medical care. At some point further down the road, AI systems are going to get so good that they help us make better next-generation systems and make scientific progress across the board.
Technology brought us from the Stone Age to the Agricultural Age and then to the Industrial Age. From here, the path to the Intelligence Age is paved with compute, energy, and human will.
If we want to put AI into the hands of as many people as possible, we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.
We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.
I believe the future is going to be so bright that no one can do it justice by trying to write about it now; a defining characteristic of the Intelligence Age will be massive prosperity.
Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot.
As we have seen with other technologies, there will also be downsides, and we need to start working now to maximize AI’s benefits while minimizing its harms. As one example, we expect that this technology can cause a significant change in labor markets (good and bad) in the coming years, but most jobs will change more slowly than most people think, and I have no fear that we’ll run out of things to do (even if they don’t look like “real jobs” to us today). People have an innate desire to create and to be useful to each other, and AI will allow us to amplify our own abilities like never before. As a society, we will be back in an expanding world, and we can again focus on playing positive-sum games.
Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.