I've seen this view advanced before: that the only reason optimists aren't scared of ASI is that they secretly don't believe in rapidly accelerating technological progress. But it's false, at least for me. It _would_ be true if I believed "evil is optimal" (as Richard Sutton summed up the viewpoint), i.e. that an ASI system would agree with Yudkowsky that its best course of action is to exterminate humanity to harvest its atoms. But I don't agree that evil is optimal, for a number of reasons.
To be clear, I'm somewhat skeptical of the "slow takeoff" curve plotted here. (It may not be physically possible, no matter how smart an AI system may be.) But if that speed of progress were to occur, I expect that it'd be a good thing and risk-reducing compared to slower progress.
I don't think it's the only crux: for AI takeover specifically, there's also the crux of whether AI systems will develop motives that incentivize takeover in the first place. But IME takeoff speed (in the sense of what level of exotic tech we'll develop, and when) is a big cross-cutting crux for concern about AI takeover, human takeover enabled by an AI advantage, geopolitical destabilization or war over AI, and risks from destructive tech like bioweapons.
Evil is not optimal. Indifference is optimal. A misaligned AI would not need to hate humans in order to take action against them, in the same way that humans don't hate the ants they build cities on top of: they just don't care.
> Indifference is optimal. A misaligned AI would not need to hate humans in order to take action against them
You're making the "evil is optimal" assumption in two parts here:
- You're assuming that indifference is optimal.
- You're assuming that indifference leads to taking actions that are (net) harmful to humans.
If forced to pick I'd be in the "no takeoff" camp here, but I think your y-axis is doing a lot of work. The "human progress rate" label implies a binary choice: humans have one fixed rate of progress without AI, and a much faster rate with AI (or after being replaced by AI). But I think it makes more sense to think of AI as the latest in a series of tools that augment the pace of human progress.
Humans were able to make faster progress in 1926 with 1926 technology (electricity, internal combustion engines) than they could have if they still had 1876 technology. And they were able to make faster progress in 1976 with 1976 technology (jet airplanes, container ships, electron microscopes, digital computers) than they could have if they'd still had 1926 technology. By the same token, we're able to make faster technological progress today (with the Internet, smartphones, CRISPR, etc.) than we could if we only had 1976 technology.
We needed more and more powerful tools to continue making scientific and technological progress because the problems we were trying to solve in 1976 were a lot harder than the ones in 1926 or 1876. The problems we're trying to solve are harder still, and the problems we face in 2036 and 2046 will be even harder. So I absolutely expect we'll see faster progress in 2036 than we would have gotten in a hypothetical world with no AI, but I don't think that implies an exponential takeoff. Rather, I think that the no-AI counterfactual for the 2030s would be that progress slows to a crawl because most of the problems we didn't solve prior to 2026 were very difficult to solve with pre-2026 tools (that's why we didn't solve them!).
This graph highlights the terrifying math of velocity, but the critical variable missing from the safety discussion is Governance Latency: the time it takes human institutions to notice a change and respond to it.
In a 'Fast Takeoff' scenario (the red line), the AI improves faster than human OODA loops (observe, orient, decide, act) can cycle. If we rely on Human-in-the-Loop safety mechanisms (legislative pauses, manual kill switches, oversight boards), we lose by default, because our reaction time is biological while the risk is digital.
To survive the Red Line, we need Machine-Speed Governance: constraints baked into the OS layer that trigger automatically when capability thresholds are breached.
We filed Patent 118 (Autonomous Physical Layer Severance) specifically for this scenario. It grants the system a constitutional 'Survival Instinct' to physically air-gap itself if it detects a rapid capability spike or alignment drift.
Safety must be faster than the intelligence it governs.
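To make the idea concrete, here's a minimal toy sketch of such a tripwire (illustrative only, not our actual implementation; `measure_capability` stands in for whatever eval signal the deployment exposes, and the threshold value is invented):

```python
# Toy sketch of a machine-speed tripwire: a watchdog polls a capability
# signal and severs the network link at the OS layer the instant a
# threshold is crossed, with no human in the loop.
import subprocess
import time
from typing import Callable

CAPABILITY_THRESHOLD = 0.9  # assumed normalized eval score
POLL_INTERVAL_S = 1.0       # machine-speed: seconds, not committee meetings

def sever_network(interface: str = "eth0") -> None:
    # Bring the interface down at the OS layer (Linux iproute2; needs root).
    subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)

def watchdog(measure_capability: Callable[[], float]) -> None:
    # `measure_capability` is the hypothetical hook into an eval harness.
    while True:
        if measure_capability() >= CAPABILITY_THRESHOLD:
            sever_network()
            return
        time.sleep(POLL_INTERVAL_S)
```

The point of the sketch is only that the decision loop runs in seconds rather than at the speed of an oversight board.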
I guess the next logical question is: what evidence and frameworks can we use to reason about takeoff speeds?
Biological anchors and scaling laws were solid ways of reasoning about progress in intelligence. But they don't seem to tell us much about takeoff speeds?
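To spell out what I mean: a Chinchilla-style scaling law (Hoffmann et al., 2022) has roughly the form

$$L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}$$

where N is parameter count, D is training tokens, and E, A, B, α, β are empirically fitted constants. That maps training inputs to capability, but it's silent on the feedback loop from capability back into the rate of further progress, and the strength of that feedback is what actually determines takeoff speed.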
My view on takeoff speeds is that the smooth curves used in most models, including the ones sketched here, smuggle in a crucial hidden assumption: that society remains a passive backdrop while AI capability increases. In reality, technological development is not an exogenous variable operating in a vacuum. Capability gains provoke countervailing forces (regulatory, cultural, institutional), and those forces can dramatically reshape the trajectory in ways a smooth extrapolation will never capture.
We already have live examples of this dynamic playing out. Consider AI in fiction writing. If you modeled takeoff speed for AI's role in commercial publishing by simply extrapolating from the capability curve (models went from producing incoherent prose to writing passable short stories in just a few years) you'd predict rapid, near-total automation of the writing pipeline. But in practice, essentially every major commercial publishing avenue has imposed blanket bans on AI-generated fiction. The capability is there; the adoption is blocked by an institutional immune response that emerged because of the capability increase.
And this pattern isn't limited to publishing. The EU AI Act, chip export controls, the Hollywood writers' strike provisions on AI, Italy temporarily banning ChatGPT, school districts banning AI tools: these are all instances of the same phenomenon, capability increases triggering societal antibodies that actively slow deployment and integration. None of these show up in a smooth curve.
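A toy model makes the gap visible. Here's a deliberately crude sketch (all parameters invented for illustration) in which raw capability grows exponentially but deployed capability is multiplied by an institutional throttle that tightens as capability rises:

```python
# Crude illustration: smooth capability growth vs. deployment throttled
# by a societal response that strengthens as capability increases.
# Every number here is invented; only the qualitative gap matters.
import math

GROWTH_RATE = 0.5        # assumed raw capability growth per year
RESPONSE_STRENGTH = 2.0  # assumed strength of the institutional response

for year in range(0, 15, 2):
    capability = math.exp(GROWTH_RATE * year)        # the smooth extrapolation
    throttle = 1.0 / (1.0 + RESPONSE_STRENGTH * math.log1p(capability))
    deployed = capability * throttle                 # what society permits
    print(f"year {year:2d}: capability {capability:9.1f}, deployed {deployed:9.1f}")
```

Even in this crude model, deployed capability falls further and further behind the raw curve; that widening gap is exactly what a pure capability extrapolation misses.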
This matters enormously for the physical-technology takeoff that's central to the post's argument. Even if we develop AI that could automate scientific R&D across hard-tech fields, the path from "could" to "does" runs through export controls on critical hardware, biosafety review boards, NRC licensing for nuclear research, FDA approval pipelines, and a dozen other institutional gatekeepers, each of which is likely to become more restrictive as AI capabilities grow, not less. The bottleneck to an unrecognizably sci-fi world isn't just "can AI do the cognitive work," it's "will the surrounding society permit the results to be deployed at the speed the capability curve implies."
History suggests the answer is often no.