#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan?

Today’s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.

She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ring-side seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:

  • Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
  • Microbiology enabled bioweapons, but also faster vaccine development.
  • The internet allowed lies to disseminate faster, but also allowed fact checks to spread just as quickly.

But she also thinks this case will be much harder. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.

But if we find some clever technique to overcome that, we could still fail — because the companies simply don’t follow through on their promises. They say redirecting resources to alignment and security is their strategy for dealing with the risks generated by their research — but none have quantitative commitments about what fraction of AI labour they’ll redirect during crunch time. And the competitive pressures during a recursive self-improvement loop could be irresistible.

In today’s conversation, Ajeya and Rob discuss what assumptions this plan requires, the specific problems AI could help solve during crunch time, and why — even if we pull it off — we’ll be white-knuckling it the whole way through.


Links to learn more, video, and full transcript: https://80k.info/ac26

This episode was recorded on October 20, 2025.

Chapters:

  • Cold open (00:00:00)
  • Ajeya’s strong track record for identifying key AI issues (00:00:43)
  • The 1,000-fold disagreement about AI's effect on economic growth (00:02:30)
  • Could any evidence actually change people's minds? (00:22:48)
  • The most dangerous AI progress might remain secret (00:29:55)
  • White-knuckling the 12-month window after automated AI R&D (00:46:16)
  • AI help is most valuable right before things go crazy (01:10:36)
  • Foundations should go from paying researchers to paying for inference (01:23:08)
  • Will frontier AI even be for sale during the explosion? (01:30:21)
  • Pre-crunch prep: what we should do right now (01:42:10)
  • A grantmaking trial by fire at Coefficient Giving (01:45:12)
  • Sabbatical and reflections on effective altruism (02:05:32)
  • The mundane factors that drive career satisfaction (02:34:33)
  • EA as an incubator for avant-garde causes others won't touch (02:44:07)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (333)

We just put up a new compilation of ten core episodes of the show

We recently launched a new podcast feed that might be useful to you and people you know. It's called Effective Altruism: Ten Global Problems, and it's a collection of ten top episodes of this show, se...

20 Oct 2021 · 3min

#113 – Varsha Venugopal on using gossip to help vaccinate every child in India

Our failure to make sure all kids globally get all of their basic vaccinations leads to 1.5 million child deaths every year. According to today’s guest, Varsha Venugopal, for the great majority this ha...

18 Oct 2021 · 2h 5min

#112 – Carl Shulman on the common-sense case for existential risk work and its practical implications

Preventing the apocalypse may sound like an idiosyncratic activity, and it sometimes is justified on exotic grounds, such as the potential for humanity to become a galaxy-spanning civilisation. But the...

5 Oct 2021 · 3h 48min

#111 – Mushtaq Khan on using institutional economics to predict effective government reforms

If you’re living in the Niger Delta in Nigeria, your best bet at a high-paying career is probably ‘artisanal refining’ — or, in plain language, stealing oil from pipelines. The resulting oil spills da...

10 Sep 2021 · 3h 20min

#110 – Holden Karnofsky on building aptitudes and kicking ass

Holden Karnofsky helped create two of the most influential organisations in the effective philanthropy world. So when he outlines a different perspective on career advice than the one we present at 80...

26 Aug 2021 · 2h 46min

#109 – Holden Karnofsky on the most important century

Will the future of humanity be wild, or boring? It's natural to think that if we're trying to be sober and measured, and predict what will really happen rather than spin an exciting story, it's more l...

19 Aug 2021 · 2h 19min

#108 – Chris Olah on working at top AI labs without an undergrad degree

Chris Olah has had a fascinating and unconventional career path. Most people who want to pursue a research career feel they need a degree to get taken seriously. But Chris not only doesn't have a P...

11 Aug 2021 · 1h 33min

#107 – Chris Olah on what the hell is going on inside neural networks

Big machine learning models can identify plant species better than any human, write passable essays, beat you at a game of Starcraft 2, figure out how a photo of Tobey Maguire and the word 'spider' ar...

4 Aug 2021 · 3h 9min
