Discussion about this post

Malcolm Sharpe

I've seen this view advanced before, that the only reason optimists aren't scared of ASI is that they secretly don't believe in rapidly accelerating technological progress, but it's false, at least for me. It _would_ be true if I believed "evil is optimal" (as Richard Sutton summed up the viewpoint), so that an ASI system would agree with Yudkowsky that its best course of action is to exterminate humanity to harvest its atoms. But I don't agree that evil is optimal, for a number of reasons.

To be clear, I'm somewhat skeptical of the "slow takeoff" curve plotted here. (It may not be physically possible, no matter how smart an AI system may be.) But if that speed of progress were to occur, I expect that it'd be a good thing and risk-reducing compared to slower progress.

Mark

I guess the next logical question is what evidence and frameworks we can use to reason about takeoff speeds.

Biological anchors and scaling laws were solid ways of reasoning about progress in intelligence, but they don't seem to tell us much about takeoff speeds?
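One way to see why: a scaling law maps compute to capability, but it says nothing about how quickly compute or algorithmic efficiency grow over time, and that growth rate is what a takeoff curve describes. A minimal sketch (the power-law form and every coefficient below are illustrative assumptions, not fitted values):

```python
# Toy Chinchilla-style scaling law: loss falls as a power of training compute.
# All constants here are made up for illustration, not fitted to real runs.
A, ALPHA, L_INF = 20.0, 0.05, 1.7

def predicted_loss(compute_flops: float) -> float:
    """Predicted training loss at a given compute budget (toy scaling law)."""
    return L_INF + A * compute_flops ** (-ALPHA)

# The law tells us capability at a given compute level, but takeoff speed
# depends on how fast compute grows over time, which the law does not specify.
for flops in (1e22, 1e24, 1e26):
    print(f"{flops:.0e} FLOP -> predicted loss {predicted_loss(flops):.2f}")
```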


