#125 – Joan Rohlfing on how to avoid catastrophic nuclear blunders

Since the Soviet Union split into different countries in 1991, the pervasive fear of catastrophe that people lived with for decades has gradually faded from memory, and nuclear warhead stockpiles have declined by 83%. Nuclear brinksmanship, proxy wars, and the game theory of mutually assured destruction (MAD) have come to feel like relics of another era.

Russia's invasion of Ukraine has changed all that.

According to Joan Rohlfing — President of the Nuclear Threat Initiative, a Washington, DC-based nonprofit focused on reducing threats from nuclear and biological weapons — the annual risk of a ‘global catastrophic nuclear event’ never fell as low as people like to think, and for some time has been on its way back up.

Links to learn more, summary and full transcript.

At the same time, civil society funding for research and advocacy around nuclear risks is being cut in half over a period of years — despite the fact that at $60 million a year, it was already just a thousandth as much as the US spends maintaining its nuclear deterrent.

If new funding sources are not identified to replace donors that are withdrawing, the existing pool of talent will have to leave for greener pastures, and most of the next generation will see a career in the field as unviable.

While global poverty is declining and life expectancy is rising, the chance of a catastrophic nuclear event is probably trending in the wrong direction.

Ukraine gave up its nuclear weapons in 1994 in exchange for security guarantees that turned out not to be worth the paper they were written on. States that have nuclear weapons (such as North Korea), states that are pursuing them (such as Iran), and states that have pursued nuclear weapons but since abandoned them (such as Libya, Syria, and South Africa) may take this as a valuable lesson in the importance of military power over promises.

China has been expanding its arsenal and testing hypersonic glide missiles that can evade missile defences. Japan now toys with the idea of nuclear weapons as a way to ensure its security against its much larger neighbour. India and Pakistan both acquired nuclear weapons in the late 1980s and their relationship continues to oscillate from hostile to civil and back.

At the same time, the risk that nuclear weapons could be interfered with due to weaknesses in computer security is far higher than during the Cold War, when systems were simpler and less networked.

In the interview, Joan discusses several steps that can be taken in the near term, such as renewed efforts to extend and expand arms control treaties, changes to nuclear use policy, and the retirement of delivery systems that NTI sees as vulnerable, such as land-based silo missiles.

In the bigger picture, NTI seeks to keep alive the hope that something better than deterrence through mutually assured destruction remains possible. The threat of retaliation does make nuclear war unlikely, but it means that when the system does fail, it fails in an incredibly destructive way: with the deaths of hundreds of millions, if not billions, of people.

In the long run, even a tiny 1 in 500 risk of a nuclear war each year adds up to around an 18% chance of catastrophe over the century.
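The 18% figure follows from compounding: if each year independently carries a 1-in-500 chance of such an event, the probability of getting through 100 years unscathed is (1 - 1/500)^100, or about 82%, leaving roughly an 18% chance of at least one catastrophe. Here is a minimal sketch of that arithmetic (the constant, independent annual risk is a simplifying assumption, used purely for illustration):

```python
# Compounding a small annual risk over a century.
# Assumes, for illustration only, a constant and independent 1-in-500 risk each year.
annual_risk = 1 / 500
years = 100

p_no_catastrophe = (1 - annual_risk) ** years   # ~0.818
p_at_least_one = 1 - p_no_catastrophe           # ~0.182

print(f"Chance of at least one catastrophic nuclear event in {years} years: {p_at_least_one:.1%}")
# -> roughly 18.1%
```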

In this conversation we cover all that, as well as:

• How arms control treaties have evolved over the last few decades
• Whether lobbying by arms manufacturers is an important factor shaping nuclear strategy
• The Biden Nuclear Posture Review
• How easily humanity might recover from a nuclear exchange
• Implications for the use of nuclear energy

Chapters:

  • Rob’s intro (00:00:00)
  • Joan’s EAG presentation (00:01:40)
  • The interview begins (00:27:06)
  • Nuclear security funding situation (00:31:09)
  • Policy solutions for addressing a one-person or one-state risk factor (00:36:46)
  • Key differences in the nuclear security field (00:40:44)
  • Scary scenarios (00:47:02)
  • Why the US shouldn’t expand its nuclear arsenal (00:52:56)
  • The evolution of nuclear risk over the last 10 years (01:03:41)
  • The interaction between nuclear weapons and cybersecurity (01:10:18)
  • The chances of humanity bouncing back after nuclear war (01:13:52)
  • What we should actually do (01:17:57)
  • Could sensors be a game-changer? (01:22:39)
  • Biden Nuclear Posture Review (01:27:50)
  • Influence of lobbying firms (01:33:58)
  • What NTI might do with an additional $20 million (01:36:38)
  • Nuclear energy tradeoffs (01:43:55)
  • Why we can’t rely on Stanislav Petrovs (01:49:49)
  • Preventing war vs. building resilience for recovery (01:52:15)
  • Places to donate other than NTI (01:54:25)
  • Career advice (02:00:15)
  • Why this problem is solvable (02:09:27)


Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Episodes (300)

#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

• OpenAI’s latest plans and research progress.
• His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid.
• How listeners can best go about pursuing a career in machine learning and AI development themselves.

Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more.

1m33s - What OpenAI is doing, Dario’s research and why AI is important
13m - Why OpenAI scaled back its Universe project
15m50s - Why AI could be dangerous
24m20s - Would smarter than human AI solve most of the world’s problems?
29m - Paper on five concrete problems in AI safety
43m48s - Has OpenAI made progress?
49m30s - What this back flipping noodle can teach you about AI safety
55m30s - How someone can pursue a career in AI safety and get a job at OpenAI
1h02m30s - Where and what should people study?
1h4m15s - What other paradigms for AI are there?
1h7m55s - How do you go from studying to getting a job? What places are there to work?
1h13m30s - If there's a 17-year-old listening here what should they start reading first?
1h19m - Is this a good way to develop your broader career options? Is it a safe move?
1h21m10s - What if you’re older and haven’t studied machine learning? How do you break in?
1h24m - What about doing this work in academia?
1h26m50s - Is the work frustrating because solutions may not exist?
1h31m35s - How do we prevent a dangerous arms race?
1h36m30s - Final remarks on how to get into doing useful work in machine learning

21 July 2017 · 1h 38min

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.

David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk - especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, trying to help both ordinary people and politicians focus on the important risks we face, and avoid being distracted by flashy risks that don’t actually have much impact.

Summary, full transcript and extra links to learn more.

To help make sense of the uncertainties we face in life he has had to invent concepts like the microlife, or a 30-minute change in life expectancy (https://en.wikipedia.org/wiki/Microlife).

We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 June 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do.

This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy.

Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.

5 June 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now.

That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:

• #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
• #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
• #17 – Will MacAskill on why our descendants might view us as moral monsters
• #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
• #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
• #60 – What Professor Tetlock learned from 40 years studying how to predict the future
• #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
• #71 – Benjamin Todd on the key ideas of 80,000 Hours
• #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
• 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
