Planned Obsolescence
OpenAI's CBRN tests seem unclear
OpenAI says o1-preview can't meaningfully help novices make chemical and biological weapons. Their test results don’t clearly establish this.
Nov 21, 2024 • Luca Righetti
August 2024
Dangerous capability tests should be harder
We should spend less time proving that today’s AIs are safe and more time figuring out how to tell if tomorrow’s AIs are dangerous.
Aug 20, 2024 • Luca Righetti
October 2023
Scale, schlep, and systems
This startlingly fast progress in LLMs was driven both by scaling up LLMs and doing schlep to make usable systems out of them. We think scale and schlep…
Oct 10, 2023 • Ajeya Cotra
August 2023
Language models surprised us
Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their…
Aug 29, 2023 • Ajeya Cotra
June 2023
Could AI accelerate economic growth?
Most new technologies don’t accelerate the pace of economic growth. But advanced AI might do this by massively increasing the research effort going into…
Jun 6, 2023 • Tom Davidson
May 2023
The costs of caution
If you thought we might be able to cure cancer in 2200, then I think you ought to expect there’s a good chance we can do it within years of the advent…
May 1, 2023 • Kelsey Piper
April 2023
Continuous doesn’t mean slow
Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they…
Apr 12, 2023 • Tom Davidson
AIs accelerating AI research
Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop…
Apr 4, 2023 • Ajeya Cotra
March 2023
Is it time for a pause?
The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up…
Mar 30, 2023 • Kelsey Piper
The ethics of AI red-teaming
If we’ve decided we’re collectively fine with unleashing millions of spam bots, then the least we can do is actually study what they can – and can’t …
Mar 26, 2023 • Kelsey Piper
Alignment researchers disagree a lot
Many fellow alignment researchers may be operating under radically different assumptions from you.
Mar 26, 2023 • Ajeya Cotra
Training AIs to help us align AIs
If we can accurately recognize good performance on alignment, we could elicit lots of useful alignment work from our models, even if they're playing the…
Mar 26, 2023 • Ajeya Cotra