AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out

Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI?

With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023.

Check out the full transcript on the 80,000 Hours website.

You can decide whether the views we expressed (and those from guests) then have held up these last two busy years. You’ll hear:

  • Ajeya Cotra on overrated AGI worries
  • Holden Karnofsky on the dangers of aligned AI, why unaligned AI might not kill us, and the power that comes from just making models bigger
  • Ian Morris on why the future must be radically different from the present
  • Nick Joseph on whether his company's internal safety policies are enough
  • Richard Ngo on what everyone gets wrong about how ML models work
  • Tom Davidson on why he believes crazy-sounding explosive growth stories… and Michael Webb on why he doesn’t
  • Carl Shulman on why you’ll prefer robot nannies over human ones
  • Zvi Mowshowitz on why he’s against working at AI companies except in some safety roles
  • Hugo Mercier on why even superhuman AGI won’t be that persuasive
  • Rob Long on the case for and against digital sentience
  • Anil Seth on why he thinks consciousness is probably biological
  • Lewis Bollard on whether AI advances will help or hurt nonhuman animals
  • Rohin Shah on whether humanity’s work ends at the point it creates AGI

And of course, Rob and Luisa also regularly chime in on what they agree and disagree with.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:00:58)
  • Rob & Luisa: Bowerbirds compiling the AI story (00:03:28)
  • Ajeya Cotra on the misalignment stories she doesn’t buy (00:09:16)
  • Rob & Luisa: Agentic AI and designing machine people (00:24:06)
  • Holden Karnofsky on the dangers of even aligned AI, and how we probably won’t all die from misaligned AI (00:39:20)
  • Ian Morris on why we won’t end up living like The Jetsons (00:47:03)
  • Rob & Luisa: It’s not hard for nonexperts to understand we’re playing with fire here (00:52:21)
  • Nick Joseph on whether AI companies’ internal safety policies will be enough (00:55:43)
  • Richard Ngo on the most important misconception in how ML models work (01:03:10)
  • Rob & Luisa: Issues Rob is less worried about now (01:07:22)
  • Tom Davidson on why he buys the explosive economic growth story, despite it sounding totally crazy (01:14:08)
  • Michael Webb on why he’s sceptical about explosive economic growth (01:20:50)
  • Carl Shulman on why people will prefer robot nannies over humans (01:28:25)
  • Rob & Luisa: Should we expect AI-related job loss? (01:36:19)
  • Zvi Mowshowitz on why he thinks it’s a bad idea to work on improving capabilities at cutting-edge AI companies (01:40:06)
  • Holden Karnofsky on the power that comes from just making models bigger (01:45:21)
  • Rob & Luisa: Are risks of AI-related misinformation overblown? (01:49:49)
  • Hugo Mercier on how AI won’t cause misinformation pandemonium (01:58:29)
  • Rob & Luisa: How hard will it actually be to create intelligence? (02:09:08)
  • Robert Long on whether digital sentience is possible (02:15:09)
  • Anil Seth on why he believes in the biological basis of consciousness (02:27:21)
  • Lewis Bollard on whether AI will be good or bad for animal welfare (02:40:52)
  • Rob & Luisa: The most interesting new argument Rob’s heard this year (02:50:37)
  • Rohin Shah on whether AGI will be the last thing humanity ever does (02:57:35)
  • Rob's outro (03:11:02)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Transcriptions and additional content editing: Katy Moore

Episodes (333)

'95% of AI Pilots Fail': The hidden agenda behind the viral stat that misled millions

You might have heard that '95% of corporate AI pilots' are failing. It was one of the most widely cited AI statistics of 2025, parroted by media outlets everywhere. It helped trigger a Nasdaq selloff ...

28 Apr 10min

#242 – Will MacAskill on how we survive the 'intelligence explosion,' AI character, and the case for 'viatopia'

Hundreds of millions already turn to AI on the most personal of topics — therapy, political opinions, and how to treat others. And as AI takes over more of the economy, the character of these systems ...

22 Apr 3h 9min

Risks from power-seeking AI systems (article narration by Zershaaneh Qureshi)

Hundreds of prominent AI scientists and other notable figures signed a statement in 2023 saying that mitigating the risk of extinction from AI should be a global priority. At 80,000 Hours, we’ve consi...

16 Apr 1h 29min

How scary is Claude Mythos? 303 pages in 21 minutes

With Claude Mythos we have an AI that knows when it's being tested, can obscure its reasoning when it wants, and is better at breaking into (and out of) computers than any human alive. Rob Wiblin work...

10 Apr 21min

Village gossip, pesticide bans, and gene drives: 17 experts on the future of global health

What does it really take to lift millions out of poverty and prevent needless deaths? In this special compilation episode, 17 past guests — including economists, nonprofit founders, and policy advisors...

7 Apr 4h 6min

What everyone is missing about Anthropic vs the Pentagon. And: The Meta leaks are worse than you think.

When the Pentagon tried to strong-arm Anthropic into dropping its ban on AI-only kill decisions and mass domestic surveillance, the company refused. Its critics went on the attack: Anthropic and its s...

3 Apr 20min

#241 – Richard Moulange on how AI now codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?

Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable. And despite being entirel...

31 Mar 3h 7min

#240 – Samuel Charap on how a Ukraine ceasefire could accidentally set Europe up for a bigger war

Many people believe a ceasefire in Ukraine will leave Europe safer. But today's guest lays out how a deal could potentially generate insidious new risks — leaving us in a situation that's equally dang...

24 Mar 1h 12min
