#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan?

Today’s guest, Ajeya Cotra, recently placed third out of 413 participants in a tournament forecasting AI developments, and is among the most thoughtful and respected commentators on where the technology is going.

She thinks there’s a meaningful chance we’ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. Ajeya doesn’t reach this conclusion lightly: she’s had a ring-side seat to the growth of all the major AI companies for 10 years — first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.

So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?

Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:

  • Cars enabled carjackings and drive-by shootings, but also faster police pursuits.
  • Microbiology enabled bioweapons, but also faster vaccine development.
  • The internet allowed lies to spread faster, but did exactly the same for fact checks.

But she also thinks AI will be a much harder case. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief — perhaps a year or less. In that narrow window, we’d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.

The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don’t trust to make sure that same AI benefits humanity.

But if we find some clever technique to overcome that, we could still fail — because the companies simply don’t follow through on their promises. They say redirecting resources to alignment and security is their strategy for dealing with the risks generated by their research — but none have quantitative commitments about what fraction of AI labour they’ll redirect during crunch time. And the competitive pressures during a recursive self-improvement loop could be irresistible.

In today’s conversation, Ajeya and Rob discuss what assumptions this plan requires, the specific problems AI could help solve during crunch time, and why — even if we pull it off — we’ll be white-knuckling it the whole way through.


Links to learn more, video, and full transcript: https://80k.info/ac26

This episode was recorded on October 20, 2025.

Chapters:

  • Cold open (00:00:00)
  • Ajeya’s strong track record for identifying key AI issues (00:00:43)
  • The 1,000-fold disagreement about AI's effect on economic growth (00:02:30)
  • Could any evidence actually change people's minds? (00:22:48)
  • The most dangerous AI progress might remain secret (00:29:55)
  • White-knuckling the 12-month window after automated AI R&D (00:46:16)
  • AI help is most valuable right before things go crazy (01:10:36)
  • Foundations should go from paying researchers to paying for inference (01:23:08)
  • Will frontier AI even be for sale during the explosion? (01:30:21)
  • Pre-crunch prep: what we should do right now (01:42:10)
  • A grantmaking trial by fire at Coefficient Giving (01:45:12)
  • Sabbatical and reflections on effective altruism (02:05:32)
  • The mundane factors that drive career satisfaction (02:34:33)
  • EA as an incubator for avant-garde causes others won't touch (02:44:07)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore
