#241 – Richard Moulange on how now AI codes viable genomes from scratch and outperforms virologists at lab work — what could go wrong?

Last September, scientists used an AI model to design genomes for entirely new bacteriophages (viruses that infect bacteria). They then built them in a lab. Many were viable, and despite being entirely novel, some even outperformed existing viruses from the same family.

That alone is remarkable. But as today's guest — Dr Richard Moulange, one of the world's top experts on 'AI–Biosecurity' — explains, it's just one of many data points showing how AI is dissolving the barriers that have historically kept biological weapons out of reach.

For years, experts have reassured us that 'tacit knowledge' — the hands-on, hard-to-Google lab skills needed to work with dangerous pathogens — would prevent bad actors from weaponising biology. So far, they've been right.

But as of 2025 that reassurance is crumbling. The Virology Capabilities Test measures exactly this kind of hands-on troubleshooting expertise, and it finds that modern AI models crush top human virologists even in their self-declared areas of greatest specialisation — scoring 45% to the experts' 22%.

Meanwhile, Anthropic’s research shows PhD-level biologists getting meaningfully better at weapons-relevant tasks with AI assistance — with the effect growing with each new model generation.

Richard joins host Rob Wiblin to discuss all that plus:

  • What AI biology tools already exist
  • Why mid-tier actors (not amateurs) are the ones getting the most dangerous boost
  • The three main categories of defence we can pursue
  • Whether there’s a plausible path to a world where engineered pandemics become a thing of the past

This episode was recorded on January 16, 2026. Since recording, Richard has been seconded to the UK Government — please note that the views expressed here are entirely his own.

Links to learn more, video, and full transcript: https://80k.info/rm

Announcements:

  1. Our new book is available to preorder: 80,000 Hours: How to have a fulfilling career that does good is written by our cofounder Benjamin Todd. It’s a completely revised and updated edition of our existing career guide, with a big new section on AI — covering both the risks and the potential to steer it in a better direction, and how AI automation should affect your career planning and the skills you choose to specialise in. Preorder now: https://geni.us/80000Hours
  2. We're hiring contract video editors for the podcast! For more information, check out the expression of interest page on the 80,000 Hours website: https://80k.info/video-editor

Chapters:

  • Cold open (00:00:00)
  • Who's Richard Moulange? (00:00:31)
  • AI can now design novel genomes (00:01:11)
  • The end of the 'tacit knowledge' barrier (00:04:34)
  • Are risks from bioterrorists overstated? (00:18:20)
  • The 3 key disasters AI makes more likely (00:22:41)
  • Which bad actors does AI help the most? (00:30:03)
  • Experts are more scary than amateurs (00:41:17)
  • Barriers to bioterrorists using AI (00:46:43)
  • AI biorisks are sometimes dismissed (and that's a huge mistake) (00:48:54)
  • Advanced AI biology tools we already have or will soon (01:04:10)
  • Rob argues that the situation is hopeless (01:09:49)
  • Intervention #1: Limit access (01:18:16)
  • Intervention #2: Get AIs to refuse to help (01:32:58)
  • Intervention #3: Surveillance and attribution (01:42:38)
  • Intervention #4: Universal vaccines and antivirals (01:56:38)
  • Intervention #5: Screen all orders for DNA (02:10:00)
  • AI companies talk about def/acc more than they fund it (02:19:52)
  • Can you build a profitable business solving this problem? (02:26:32)
  • This doesn't have to interfere with useful science (much) (02:30:56)
  • What are the best low-tech interventions? (02:33:01)
  • Richard's top request for AI companies (02:37:59)
  • Grok shows governments lack many legal levers (02:53:17)
  • Best ways listeners can help fix AI-Bio (02:56:24)
  • We might end all contagious disease in 20 years (03:03:37)


Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Camera operator: Jeremy Chevillotte
Transcripts and web: Elizabeth Cox and Katy Moore
