2025 Highlight-o-thon: Oops! All Bests

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode we recorded this year, including:

  • Kyle Fish explaining how Anthropic’s AI Claude descends into spiritual woo when left to talk to itself
  • Ian Dunt on why the unelected House of Lords is by far the best part of the British government
  • Sam Bowman’s strategy to get NIMBYs to love it when things get built next to their houses
  • Buck Shlegeris on how to get an AI model that wants to seize control to accidentally help you foil its plans

…as well as 18 other top observations and arguments from the past year of the show.

Links to learn more, video, and full transcript: https://80k.info/best25

It's been another year of living through history, whether we asked for it or not. Luisa and Rob will be back in 2026 to help you make sense of whatever comes next — as Earth continues its indifferent journey through the cosmos, now accompanied by AI systems that can summarise our meetings and generate adequate birthday messages for colleagues we barely know.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:02:35)
  • Helen Toner on whether we're racing China to build AGI (00:03:43)
  • Hugh White on what he'd say to Americans (00:06:09)
  • Buck Shlegeris on convincing AI models they've already escaped (00:12:09)
  • Paul Scharre on a personal experience in Afghanistan that influenced his views on autonomous weapons (00:15:10)
  • Ian Dunt on how unelected septuagenarians are the heroes of UK governance (00:19:06)
  • Beth Barnes on AI companies being locally reasonable, but globally reckless (00:24:27)
  • Tyler Whitmer on one thing the California and Delaware attorneys general forced on the OpenAI for-profit as part of their restructure (00:28:02)
  • Toby Ord on whether rich people will get access to AGI first (00:30:13)
  • Andrew Snyder-Beattie on how the worst biorisks are defence dominant (00:34:24)
  • Eileen Yam on the most eye-watering gaps in opinions about AI between experts and the US public (00:39:41)
  • Will MacAskill on what a century of history crammed into a decade might feel like (00:44:07)
  • Kyle Fish on what happens when two instances of Claude are left to interact with each other (00:49:08)
  • Sam Bowman on where the Not In My Back Yard movement actually has a point (00:56:29)
  • Neel Nanda on how mechanistic interpretability is trying to be the biology of AI (01:03:12)
  • Tom Davidson on the potential to install secret AI loyalties at a very early stage (01:07:19)
  • Luisa and Rob discussing how medicine doesn't take the health burden of pregnancy seriously enough (01:10:53)
  • Marius Hobbhahn on why scheming is a very natural path for AI models — and people (01:16:23)
  • Holden Karnofsky on lessons for AI regulation drawn from successful farm animal welfare advocacy (01:21:29)
  • Allan Dafoe on how AGI is an inescapable idea but one we have to define well (01:26:19)
  • Ryan Greenblatt on the most likely ways for AI to take over (01:29:35)
  • Updates Daniel Kokotajlo has made to his forecasts since writing and publishing the AI 2027 scenario (01:32:47)
  • Dean Ball on why regulation invites path dependency, and that's a major problem (01:37:21)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (333)

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon, and on Audible. If you know someone who might find our career guide helpful, you can get a free copy se...

4 Sep 2023 · 4h 41min

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to...

1 Sep 2023 · 59min

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of th...

23 Aug 2023 · 3h 30min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay a...

14 Aug 2023 · 2h 36min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a ...

7 Aug 2023 · 2h 51min

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren’t necessarily the most important parts of the interview, a...

5 Aug 2023 · 6min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making b...

31 Jul 2023 · 3h 13min

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI t...

24 Jul 2023 · 1h 18min
