GPT-7 may democratise bioweapons. But we can defend ourselves anyway. | Andrew Snyder-Beattie

Conventional wisdom is that safeguarding humanity from the worst biological risks — microbes optimised to kill as many as possible — is difficult bordering on impossible, making bioweapons humanity’s single greatest vulnerability. Andrew Snyder-Beattie thinks conventional wisdom could be wrong.

Andrew’s job at Open Philanthropy is to spend hundreds of millions of dollars to protect as much of humanity as possible in the worst-case scenarios — those with fatality rates near 100% and the collapse of technological civilisation a live possibility.

Video, full transcript, and links to learn more: https://80k.info/asb

As Andrew lays out, there are several ways this could happen, including:

  • A national bioweapons programme gone wrong, in particular Russia’s or North Korea’s
  • AI advances making it easier for terrorists or a rogue AI to release highly engineered pathogens
  • Mirror bacteria that can evade the immune systems of not only humans, but many animals and potentially plants as well

Most efforts to combat these extreme biorisks have focused on either prevention or new high-tech countermeasures. But prevention may well fail, and high-tech approaches can’t scale to protect billions of people when no sane person is willing to leave home and we’re just weeks from economic collapse.

So Andrew and his biosecurity research team at Open Philanthropy have been seeking an alternative approach. They’re proposing a four-stage plan using simple technology that could save most people, and is cheap enough it can be prepared without government support. Andrew is hiring for a range of roles to make it happen — from manufacturing and logistics experts to global health specialists to policymakers and other ambitious entrepreneurs — as well as programme associates to join Open Philanthropy’s biosecurity team (apply by October 20!).

Fundamentally, microbes are so small that they have no way to penetrate physical barriers or shield themselves from UV, heat, or chemical poisons. We now know how to make highly effective ‘elastomeric’ face masks that cost $10, can sit in storage for 20 years, and can be used for six months straight without changing the filter. Any rich country could trivially stockpile enough to cover all essential workers.
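As a rough back-of-the-envelope check on that last claim (a minimal sketch: the unit cost and shelf life are the figures above, while the essential-worker count is an illustrative assumption, not a figure from the episode):

```python
# Back-of-the-envelope cost of a national elastomeric mask stockpile.
# UNIT_COST_USD and SHELF_LIFE_YEARS come from the figures above;
# ESSENTIAL_WORKERS is an illustrative assumption (roughly US scale).
UNIT_COST_USD = 10
SHELF_LIFE_YEARS = 20
ESSENTIAL_WORKERS = 50_000_000  # assumed

one_off_cost = ESSENTIAL_WORKERS * UNIT_COST_USD
annualised_cost = one_off_cost / SHELF_LIFE_YEARS

print(f"One-off stockpile: ${one_off_cost / 1e9:.1f}B")                 # $0.5B
print(f"Annualised over shelf life: ${annualised_cost / 1e6:.0f}M/yr")  # $25M/yr
```

Even if the assumed worker count is off by a factor of a few, the total stays in the low billions of dollars, which is the sense in which stockpiling is ‘trivial’ for a rich country.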

People can’t wear masks 24/7, but fortunately propylene glycol — already found in vapes and smoke machines — is astonishingly good at killing microbes in the air. And because it’s a common chemical input, industry already produces enough of the stuff to treat every indoor space we need, at all times.
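To get a feel for the quantities involved, here’s a minimal sketch. The target vapour concentration is an assumption loosely based on mid-20th-century air disinfection studies, and the ventilation rate is illustrative; neither figure comes from the episode.

```python
# Rough propylene glycol (PG) demand to keep one room's air treated.
# TARGET_MG_PER_M3 is an assumed effective vapour concentration
# (loosely based on 1940s air disinfection work, not from the episode);
# AIR_CHANGES_PER_HOUR is an assumed ventilation rate, since vapour
# vented outdoors has to be replenished.
TARGET_MG_PER_M3 = 0.5
AIR_CHANGES_PER_HOUR = 1.0

def pg_grams_per_day(room_volume_m3: float) -> float:
    """Daily PG mass needed to hold the target concentration in one room."""
    air_replaced_m3 = room_volume_m3 * AIR_CHANGES_PER_HOUR * 24  # m3/day
    return air_replaced_m3 * TARGET_MG_PER_M3 / 1000              # mg -> g

# Example: a 250 m3 space (roughly a large classroom) needs only a few
# grams of PG per day under these assumptions, which is why existing
# industrial production could plausibly cover demand.
print(f"{pg_grams_per_day(250):.1f} g/day")  # ~3 g/day
```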

Add to this the wastewater monitoring and metagenomic sequencing that could detect the most dangerous pathogens before they have a chance to wreak havoc, and we might just buy ourselves enough time to develop the cure we’ll need to come out alive.
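As a toy illustration of why sequencing depth matters for catching a pathogen early (a minimal sketch; the relative abundance, read count, and detection threshold are all illustrative assumptions, not figures from the episode):

```python
import math

# Toy model of metagenomic surveillance: if a novel pathogen contributes
# a given fraction of sequencing reads, how likely is one sample to
# contain enough of its reads to flag it? Uses a Poisson approximation
# to the binomial. All parameter values below are illustrative assumptions.
def detection_probability(relative_abundance: float,
                          total_reads: float,
                          min_reads_to_flag: int = 5) -> float:
    """P(at least min_reads_to_flag pathogen reads in one sample)."""
    expected = relative_abundance * total_reads  # Poisson mean
    p_below = sum(math.exp(-expected) * expected**i / math.factorial(i)
                  for i in range(min_reads_to_flag))
    return 1 - p_below

# Assumed: pathogen at 1 read in 10 million, 100M reads per sample.
print(f"{detection_probability(1e-7, 1e8):.1%}")  # ~97.1%
```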

Has everyone been wrong? Is biology actually defence dominant rather than offence dominant? Is this plan crazy, or so crazy it just might work?

That’s what host Rob Wiblin and Andrew Snyder-Beattie explore in this in-depth conversation.


What did you think of the episode? https://forms.gle/66Hw5spgnV3eVWXa6

Chapters:

  • Cold open (00:00:00)
  • Who's Andrew Snyder-Beattie? (00:01:23)
  • It could get really bad (00:01:57)
  • The worst-case scenario: mirror bacteria (00:08:58)
  • To actually work, a solution has to be low-tech (00:17:40)
  • Why ASB works on biorisks rather than AI (00:20:37)
  • Plan A is prevention. But it might not work. (00:24:48)
  • The “four pillars” plan (00:30:36)
  • ASB is hiring now to make this happen (00:32:22)
  • Everyone was wrong: biorisks are defence dominant in the limit (00:34:22)
  • Pillar 1: A wall between the virus and your lungs (00:39:33)
  • Pillar 2: Biohardening buildings (00:54:57)
  • Pillar 3: Immediately detecting the pandemic (01:13:57)
  • Pillar 4: A cure (01:27:14)
  • The plan's biggest weaknesses (01:38:35)
  • If it's so good, why are you the only group to suggest it? (01:43:04)
  • Would chaos and conflict make this impossible to pull off? (01:45:08)
  • Would rogue AI make bioweapons? Would other AIs save us? (01:50:05)
  • We can feed the world even if all the plants die (01:56:08)
  • Could a bioweapon make the Earth uninhabitable? (02:05:06)
  • Many open roles to solve bio-extinction — and you don’t necessarily need a biology background (02:07:34)
  • Career mistakes ASB thinks are common (02:16:19)
  • How to protect yourself and your family (02:28:21)

This episode was recorded on August 12, 2025

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Camera operator: Jake Morris
Coordination, transcriptions, and web: Katy Moore

Episodes (300)

#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

  • OpenAI’s latest plans and research progress
  • His paper Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend, something OpenAI has to work to avoid
  • How listeners can best go about pursuing a career in machine learning and AI development themselves

Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more.

Chapters:

  • What OpenAI is doing, Dario’s research, and why AI is important (00:01:33)
  • Why OpenAI scaled back its Universe project (00:13:00)
  • Why AI could be dangerous (00:15:50)
  • Would smarter-than-human AI solve most of the world’s problems? (00:24:20)
  • Paper on five concrete problems in AI safety (00:29:00)
  • Has OpenAI made progress? (00:43:48)
  • What this backflipping noodle can teach you about AI safety (00:49:30)
  • How someone can pursue a career in AI safety and get a job at OpenAI (00:55:30)
  • Where and what should people study? (01:02:30)
  • What other paradigms for AI are there? (01:04:15)
  • How do you go from studying to getting a job? What places are there to work? (01:07:55)
  • If there’s a 17-year-old listening, what should they start reading first? (01:13:30)
  • Is this a good way to develop your broader career options? Is it a safe move? (01:19:00)
  • What if you’re older and haven’t studied machine learning? How do you break in? (01:21:10)
  • What about doing this work in academia? (01:24:00)
  • Is the work frustrating because solutions may not exist? (01:26:50)
  • How do we prevent a dangerous arms race? (01:31:35)
  • Final remarks on how to get into doing useful work in machine learning (01:36:30)

21 Jul 2017 · 1h 38min

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.

David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk, especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, helping both ordinary people and politicians focus on the important risks we face and avoid being distracted by flashy risks that don’t actually have much impact.

Summary, full transcript, and extra links to learn more.

To help make sense of the uncertainties we face in life, he has had to invent concepts like the microlife: a 30-minute change in life expectancy (https://en.wikipedia.org/wiki/Microlife).

We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 Jun 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford’s Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies, and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans can do.

This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy.

Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.

5 Jun 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world’s most pressing problems.

This podcast is just one of many things we offer, the others of which you can find at 80000hours.org. Since 2017 this show has been putting out interviews about the world’s most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven’t yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it’s probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we’re making now.

That’s why we’ve made a new feed with ten carefully selected episodes from the show’s archives, called ‘Effective Altruism: An Introduction’. You can find it by searching for ‘Effective Altruism’ in your podcasting app or at 80000hours.org/intro.

Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:

  • #21 – Holden Karnofsky on the world’s most intellectual foundation and how philanthropy can have maximum impact by taking big risks
  • #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
  • #17 – Will MacAskill on why our descendants might view us as moral monsters
  • #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
  • #44 – Paul Christiano on developing real solutions to the ‘AI alignment problem’
  • #60 – What Professor Tetlock learned from 40 years studying how to predict the future
  • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
  • #71 – Benjamin Todd on the key ideas of 80,000 Hours
  • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
  • 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
