#241 – Richard Moulange on how AI now codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?

Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria), then built them in a lab. Many were viable. And despite being entirely novel, some even outperformed existing viruses from the same family.


That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.


But as of 2025, that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of troubleshooting expertise, and finds that modern AI models crush top human virologists even in their self-declared areas of greatest specialisation — scoring 45% to the humans' 22%.

Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

Richard joins host Rob Wiblin to discuss all that plus:

  • What AI biology tools already exist
  • Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
  • The three main categories of defence we can pursue
  • Whether there’s a plausible path to a world where engineered pandemics become a thing of the past

This episode was recorded on January 16, 2026. Since recording, Richard has been seconded to the UK Government — please note that the views expressed here are entirely his own.

Links to learn more, video, and full transcript: https://80k.info/rm

Announcements:

  1. Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good, written by our cofounder Benjamin Todd. It's a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer it in a better direction, as well as how AI automation should affect your career planning and which skills you choose to specialise in. Preorder now: https://geni.us/80000Hours
  2. We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor

Chapters:

  • Cold open (00:00:00)
  • Who's Richard Moulange? (00:00:31)
  • AI can now design novel genomes (00:01:11)
  • The end of the 'tacit knowledge' barrier (00:04:34)
  • Are risks from bioterrorists overstated? (00:18:20)
  • The 3 key disasters AI makes more likely (00:22:41)
  • Which bad actors does AI help the most? (00:30:03)
  • Experts are scarier than amateurs (00:41:17)
  • Barriers to bioterrorists using AI (00:46:43)
  • AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
  • Advanced AI biology tools we already have or will soon (01:04:10)
  • Rob argues that the situation is hopeless (01:09:49)
  • Intervention #1: Limit access (01:18:16)
  • Intervention #2: Get AIs to refuse to help (01:32:58)
  • Intervention #3: Surveillance and attribution (01:42:38)
  • Intervention #4: Universal vaccines and antivirals (01:56:38)
  • Intervention #5: Screen all orders for DNA (02:10:00)
  • AI companies talk about def/acc more than they fund it (02:19:52)
  • Can you build a profitable business solving this problem? (02:26:32)
  • This doesn't have to interfere with useful science (much) (02:30:56)
  • What are the best low-tech interventions? (02:33:01)
  • Richard's top request for AI companies (02:37:59)
  • Grok shows governments lack many legal levers (02:53:17)
  • Best ways listeners can help fix AI-Bio (02:56:24)
  • We might end all contagious disease in 20 years (03:03:37)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jeremy Chevillotte
Transcripts and web: Elizabeth Cox and Katy Moore
