Discussion about this post

Malcolm Sharpe

I've seen this view advanced before, that the only reason optimists aren't scared of ASI is that they secretly don't believe in rapidly accelerating technological progress, but it's false, at least for me. It _would_ be true if I believed "evil is optimal" (as Richard Sutton summed up the viewpoint), so that an ASI system would agree with Yudkowsky that its best course of action is to exterminate humanity to harvest its atoms. But I don't agree that evil is optimal, for a number of reasons.

To be clear, I'm somewhat skeptical of the "slow takeoff" curve plotted here. (It may not be physically possible, no matter how smart an AI system may be.) But if that speed of progress were to occur, I expect that it'd be a good thing and risk-reducing compared to slower progress.

Timothy B. Lee

If forced to pick, I'd be in the "no takeoff" camp here, but I think your Y axis is doing a lot of work. The "human progress rate" label implies there's a binary choice here, that humans have one fixed rate of progress without AI, and then a much faster rate with AI (or after being replaced by AI). But I think it makes more sense to think of AI as the latest in a series of tools that augment the pace of human progress.

Humans were able to make faster progress in 1926 with 1926 technology (electricity, internal combustion engines) than they could have if they still had 1876 technology. And they were able to make faster progress in 1976 with 1976 technology (jet airplanes, container ships, electron microscopes, digital computers) than they could have if they'd still had 1926 technology. By the same token, we're able to make faster technological progress today (with the Internet, smartphones, CRISPR, etc.) than we could if we only had 1976 technology.

We needed more and more powerful tools to continue making scientific and technological progress because the problems we were trying to solve in 1976 were a lot harder than the ones in 1926 or 1876. The problems we're trying to solve are harder still, and the problems we face in 2036 and 2046 will be even harder. So I absolutely expect we'll see faster progress in 2036 than we would have gotten in a hypothetical world with no AI, but I don't think that implies an exponential takeoff. Rather, I think that the no-AI counterfactual for the 2030s would be that progress slows to a crawl because most of the problems we didn't solve prior to 2026 were very difficult to solve with pre-2026 tools (that's why we didn't solve them!).


