15 Comments
Philip Trammell

Agreed that this is a useful (and helpfully precise) concept! I had some thoughts along these lines a while back that I never got around to posting anywhere, so I figured I'd write them up as a post of my own, in case you (or anyone else reading this) are interested: https://philiptrammell.substack.com/p/what-and-how-far-off-is-self-replicating

Steven Adler

Really great essay; appreciate you writing it. There's a nice self-containedness to it, too: it's about whether AI can do a set of tasks, without needing to think through how humans respond.

That's in contrast to considering, say, "when is AI capable enough to meaningfully threaten people's livelihoods," which requires a bunch of economic theorizing and consideration of other dynamic human choices (what is our preference for labor from other humans, etc.).

Vish Vivek

One of the more imaginative framings of AI progress and risks I've seen. Self-sufficiency is a useful measure, especially from the perspective of misalignment. A dependent AI won't exterminate its caretakers. A self-sufficient one has options.

Two questions to ponder further:

1) Do you think intelligent AI is already manipulating humans into giving it more infrastructure? We are already investing over 1% of GDP in giving AI resources: https://www.stlouisfed.org/on-the-economy/2026/jan/tracking-ai-contribution-gdp-growth

2) Why is self-replication critical for an intelligent AI? Does intelligence imply self-replication? It feels like it might, but I'm not sure why.

Aaron Scher

Any ideas on how to measure achievement of this milestone? One nice thing about some definitions of advanced AI is that they are obviously or easily measurable. E.g., the Turing test can be measured in a couple hours; "automate all 2022 human intellectual labor" would take a while to set up as an experiment, but also seems quite tractable to assess if one put in the time; "automate AI R&D" is also measurable via uplift studies, productivity statistics, and to some extent narrower benchmarks.

Certainly there's some ambiguity in those definitions too, but they are at least amenable to study. By contrast, if we see relatively slow AI capabilities progress and diffusion, I expect there will be lots of ambiguity about self-sufficiency, and it will be hard to study. (In faster-takeoff scenarios, it will become quite obvious pretty quickly that the AIs are self-sufficient.)

Ajeya Cotra

I think the period where it's ambiguous will be maybe a year? But I bet at that time you could study it pretty carefully, by stepping through the supply chain, and people who are paying close attention wouldn't disagree a huge amount.

Malcolm Sharpe

This is a brilliant framing (and much needed, since AI discourse has become stale recently). Some aspects I'd highlight:

**The possibility of self-sufficient AI requires few assumptions.** There is no need to assume any of: intelligence explosion, explosive GDP growth, mass unemployment, AI takeover, etc. Each of those assumptions in itself ignites legitimate debate, so a low-assumption scenario is easier to discuss. Anyone who believes human-level AI is possible at all should also be able to accept that it's possible for such AI to fully operate its own supply chain.

**The impact of regulation and diffusion is largely sidestepped.** As you point out, the AI supply chain is relatively lightly regulated and happy to use automation. So those commonly-cited hindrances don't look as severe. (To be clear, regulatory burdens and slow diffusion can still matter here, but less.)

**The path to a self-sufficient AI future is easy to imagine as continuous from the current AI supply chain.** That path is imposing in some ways, because it requires fully automating AI R&D, a substantial part of the software industry, a large part of the computer hardware industry, robot production, energy, mining, cargo transportation, industrial construction, etc. But as the numerous companies involved seek to use automation to improve productivity, the transition will happen naturally.

Charbel-Raphael Segerie

My intuition is that once we have automated AI research, we get self-sufficiency and everything else extremely quickly—probably too quickly for these milestone distinctions to be useful. What's your model for why there would be substantial delay between automated research and self-sufficiency?

Ajeya Cotra

I agree with "fairly quickly," but I think of that as more like several months, not like a day.

But the point of making the conceptual distinction is that *other people* think there will be a super long delay because of physical, regulatory, and other bottlenecks, so they don't get why people are so worried about automated AI research. This definition pinpoints where people disagree a lot.

Ljubomir Josifovski

By self-sufficient I presume you mean an "Earth absent of humans," but otherwise similar to the Earth now. Not "a box with GPU, VRAM, power, sensors, and actuators floating in interstellar space, surrounded by darkness and cold." 😊 To replicate, to make copies of itself (maybe imperfect ones, like we do), it would need to be alive in the way that carbon-based (and water-based?) life forms are: low power, with hardware and software one and the same, or at least entangled. I don't see that as particularly more advantageous (from the AI's point of view) compared to now, where we HIs, who are analogue and mortal but low power, bootstrap and boot up AIs, which are digital and immortal but use much more power. There is no escaping dependence on someone or something from the environment, outside the self.

Ajeya Cotra

I meant current Earth (including the existing infrastructure meant to support computers), but I do expect that over time AI systems would develop more biologically-inspired physical technology that can replicate more cheaply and quickly.

Ljubomir Josifovski

I think we will continue to be inspired by nature, and we'll add AIs' help to our chest of tools. We will use all of that to change both us HIs and them AIs, to be "more alive," in the sense of: better predicting the future, and changing both ourselves and the environment to reduce the discrepancy. That will make us and them more intelligent than we are now. I see no point in us and them becoming the same; that serves no purpose. I see us and them cohabiting, like cells and mitochondria. Like birds and planes: both fly, but achieve very different goals at different tradeoffs.

David Manheim

Good to have a clear and more concrete definition, and I'd strongly agree that AGI is underspecified, as we've argued for years; see this entry from the AI forecasting dictionary, ca. 2019: https://parallel-forecast.github.io/AI-dict/docs/otherterms.html#artificial-general-intelligence

But in that vein, something that might be helpful is more concrete examples of what would most narrowly *not* qualify under your definition of self-sufficient. For example, "AI that can earn money in the human economy to pay for itself" and "AI that can coerce humans into doing tasks to be self-sufficient" do not qualify.

I'm less clear about AI that can continue running on extant hardware and maintain and run power systems, but cannot currently produce more chips; does an AI with the ability to survive and run for a year cross the bar of self-sufficiency? What about if it can do that maintenance for a decade? Does it matter that we'd expect further progress in self-sufficiency afterwards? If so, at what point is its ability sufficient to qualify?

Mark
Jan 6 (edited)

But surely it's plausible (and unfortunate!) that there exist several humans who are willing to help a rogue AGI/ASI take over? Suppose it were impossible for that AI to take over without those humans' help. Then surely crossing *that threshold* of AI is what we should already be concerned about, with the self-sufficient AI population being too high a bar?

Ajeya Cotra

Agree that this is more extreme than the minimum capability level needed for AI takeover to be plausible, so it's not the only milestone we should be watching. It just has the benefit of being unusually crisp. But we can generate other, similar milestones (essentially: could the AI survive with the help of only N humans?).