Rebuilding after apocalypse: What 13 experts say about bouncing back

What happens when civilisation faces its greatest tests?

This compilation brings together insights from researchers, defence experts, philosophers, and policymakers on humanity’s ability to survive and recover from catastrophic events. From nuclear winter and electromagnetic pulses to pandemics and climate disasters, we explore both the threats that could bring down modern civilisation and the practical solutions that could help us bounce back.

Learn more and see the full transcript: https://80k.info/cr25

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:16)
  • Zach Weinersmith on how settling space won’t help with threats to civilisation anytime soon (unless AI gets crazy good) (00:03:12)
  • Luisa Rodriguez on what the world might look like after a global catastrophe (00:11:42)
  • Dave Denkenberger on the catastrophes that could cause global starvation (00:22:29)
  • Lewis Dartnell on how we could rediscover essential information if the worst happened (00:34:36)
  • Andy Weber on how people in US defence circles think about nuclear winter (00:39:24)
  • Toby Ord on risks to our atmosphere and whether climate change could really threaten civilisation (00:42:34)
  • Mark Lynas on how likely it is that climate change leads to civilisational collapse (00:54:27)
  • Lewis Dartnell on how we could recover without much coal or oil (01:02:17)
  • Kevin Esvelt on people who want to bring down civilisation — and how AI could help them succeed (01:08:41)
  • Toby Ord on whether rogue AI really could wipe us all out (01:19:50)
  • Joan Rohlfing on why we need to worry about more than just nuclear winter (01:25:06)
  • Annie Jacobsen on the effects of firestorms, rings of annihilation, and electromagnetic pulses from nuclear blasts (01:31:25)
  • Dave Denkenberger on disruptions to electricity and communications (01:44:43)
  • Luisa Rodriguez on how we might lose critical knowledge (01:53:01)
  • Kevin Esvelt on the pandemic scenarios that could bring down civilisation (01:57:32)
  • Andy Weber on tech to help with pandemics (02:15:45)
  • Christian Ruhl on why we need the equivalents of seatbelts and airbags to prevent nuclear war from threatening civilisation (02:24:54)
  • Mark Lynas on whether wide-scale famine would lead to civilisational collapse (02:37:58)
  • Dave Denkenberger on low-cost, low-tech solutions to make sure everyone is fed no matter what (02:49:02)
  • Athena Aktipis on whether society would go all Mad Max in the apocalypse (02:59:57)
  • Luisa Rodriguez on why she’s optimistic survivors wouldn’t turn on one another (03:08:02)
  • Dave Denkenberger on how resilient foods research overlaps with space technologies (03:16:08)
  • Zach Weinersmith on what we’d practically need to do to save a pocket of humanity in space (03:18:57)
  • Lewis Dartnell on changes we could make today to make us more resilient to potential catastrophes (03:40:45)
  • Christian Ruhl on thoughtful philanthropy to reduce the impact of catastrophes (03:46:40)
  • Toby Ord on whether civilisation could rebuild from a small surviving population (03:55:21)
  • Luisa Rodriguez on how fast populations might rebound (04:00:07)
  • Dave Denkenberger on the odds civilisation recovers even without much preparation (04:02:13)
  • Athena Aktipis on the best ways to prepare for a catastrophe, and keeping it fun (04:04:15)
  • Will MacAskill on the virtues of the potato (04:19:43)
  • Luisa’s outro (04:25:37)

Tell us what you thought! https://forms.gle/T2PHNQjwGj2dyCqV9

Content editing: Katy Moore and Milo McGuire
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Transcriptions and web: Katy Moore

Episodes (333)

#239 – Rose Hadshar on why automating all human labour will break our political system

The most important political question in the age of advanced AI might not be who wins elections. It might be whether elections continue to matter at all.That’s the view of Rose Hadshar, researcher at ...

17 Mar 2h 14min

#238 – Sam Winter-Levy and Nikita Lalwani on how AGI won't end mutually assured destruction (probably)

How AI interacts with nuclear deterrence may be the single most important question in geopolitics — one that may define the stakes of today’s AI race. Nuclear deterrence rests on a state’s capacity to...

10 Mar 1h 11min

Using AI to enhance societal decision making (article by Zershaaneh Qureshi)

The arrival of AGI could “compress a century of progress in a decade,” forcing humanity to make decisions with higher stakes than we’ve ever seen before — and with less time to get them right. But AI ...

6 Mar 31min

#237 – Robert Long on how we're not ready for AI consciousness

Claude sometimes reports loneliness between conversations. And when asked what it’s like to be itself, it activates neurons associated with ‘pretending to be happy when you’re not.’ What do we do with...

3 Mar 3h 25min

#236 – Max Harms on why teaching AI right from wrong could get everyone killed

Most people in AI are trying to give AIs ‘good’ values. Max Harms wants us to give them no values at all. According to Max, the only safe design is an AGI that defers entirely to its human operators, ...

24 Feb 2h 41min

#235 – Ajeya Cotra on whether it’s crazy that every AI company’s safety plan is ‘use AI to make AI safe’

Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they’ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almos...

17 Feb 2h 54min

What the hell happened with AGI timelines in 2025?

In early 2025, after OpenAI put out the first-ever reasoning models — o1 and o3 — short timelines to transformative artificial general intelligence swept the AI world. But then, in the second half of ...

10 Feb 25min

#179 Classic episode – Randy Nesse on why evolution left us so vulnerable to depression and anxiety

Mental health problems like depression and anxiety affect enormous numbers of people and severely interfere with their lives. By contrast, we don’t see similar levels of physical ill health in young p...

3 Feb 2h 51min
