The costs of caution


Josh Cason on Twitter raised an objection to recent calls for a moratorium on AI development: pausing AI means delaying its benefits too, including medical breakthroughs that some people urgently need.

I’ve said that I think we should ideally move a lot slower on developing powerful AI systems. I still believe that. But I think Josh’s objection is important and deserves a full airing.

Approximately 150,000 people die worldwide every day. Nearly all of those deaths are, in some sense, preventable, with sufficiently advanced medical technology. Every year, five million families bury a child dead before their fifth birthday. Hundreds of millions of people live in extreme poverty. Billions more have far too little money to achieve their dreams and grow into their full potential. Tens of billions of animals are tortured on factory farms.

Scientific research and economic progress could make an enormous difference to all these problems. Medical research could cure diseases. Economic progress could make food, shelter, medicine, entertainment, and luxury goods accessible to people who can't afford them today. Progress in meat alternatives could allow us to shut down factory farms.

There are tens of thousands of scientists, engineers, and policymakers working on these kinds of problems — developing vaccines and antivirals, understanding and arresting aging, treating cancer, building cheaper and cleaner energy sources, and designing better crops, homes, and forms of transportation. But there are only so many people working on each problem. In each field, there are dozens of useful, interesting subproblems that no one is working on, because there aren’t enough people to do the work.

If we could train AI systems powerful enough to automate everything these scientists and engineers do, they could help.

As Tom discussed in a previous post, once we develop AI that does AI research as well as a human expert, it might not be long before we have AI that is way beyond human experts in all domains. That is, AI that is way better than the best humans at all aspects of medical research: thinking of new ideas, designing experiments to test those ideas, building new technologies, and navigating bureaucracies.

This means that rather than tens of thousands of top biomedical researchers, we could have hundreds of millions of significantly superhuman biomedical researchers.[1]

That’s more than a thousand times as much effort going into tackling humanity’s biggest killers. If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent of AI systems that can do the research work humans can do.[2]

[Comic: SMBC, "The Falling Problem" (2013-06-02), by Zach Weinersmith]

All this may be a massive underestimate. This envisions a world that’s pretty much like ours except that extraordinary talent is no longer scarce. But that feels, in some senses, like thinking about the advent of electricity purely in terms of ‘torchlight will no longer be scarce’. Electricity did make it very cheap to light our homes at night. But it also enabled vacuum cleaners, washing machines, cars, smartphones, airplanes, video recording, Twitter — entirely new things, not just cheaper access to things we already used.

If it goes well, I think developing AI that obsoletes humans will more or less bring the 24th century crashing down on the 21st. Some of the impacts of that are mostly straightforward to predict. We will almost certainly cure a lot of diseases and make many important goods much cheaper. Some of the impacts are pretty close to unimaginable.

Since I was fifteen years old, I have harbored the hope that scientific and technological progress would come fast enough: that advances in the science of aging would let my grandparents see their great-great-grandchildren get married.

Now my grandparents are in their nineties. I think hastening advanced AI might be their best shot at living longer than a few more years, but I’m still advocating for us to slow down. The risk of a catastrophe there’s no recovering from seems too high.[3] It’s worth going slowly to be more sure of getting this right, to better understand what we’re building and think about its effects.

But I’ve seen some people make the case for caution by asking, basically, ‘why are we risking the world for these trivial toys?’ And I want to make it clear that the assumption behind both AI optimism and AI pessimism is that these are not just goofy chatbots, but an early research stage towards developing a second intelligent species. Both AI fears and AI hopes rest on the belief that it may be possible to build alien minds that can do everything we can do and much more. What’s at stake, if that’s true, isn’t whether we’ll have fun chatbots. It’s the life-and-death consequences of delaying, and the possibility we’ll screw up and kill everyone.


  1. Tom argues that the compute needed to train GPT-6 would be enough to have it perform tens of millions of tasks in parallel. We expect that the training compute for superhuman AI will allow you to run many more copies still. ↩︎

  2. In fact, I think it might be even more explosive than that — even as these superhuman digital scientists conduct medical research for us, other AIs will be working on rapidly improving the capabilities of these digital biomedical researchers, and other AIs still will be improving hardware efficiency and building more hardware so that we can run increasing numbers of them. ↩︎

  3. This assumes we don’t make much progress on figuring out how to build such systems safely. Most of my hope is that we will slow down and figure out how to do this right (or be slowed down by external factors like powerful AI being very hard to develop), and if we give ourselves a lot more time, then I’m optimistic. ↩︎