#95 – Kelly Wanser on whether to deliberately intervene in the climate

How long do you think it’ll be before we’re able to bend the weather to our will? A massive rainmaking program in China, efforts to seed new oases in the Arabian Peninsula, or chemically induced snow for skiers in Colorado.

100 years? 50 years? 20?

Those who know how to write a teaser hook for a podcast episode will have correctly guessed that all these things are already happening today. And the techniques being used could be turned to managing climate change as well.

Today’s guest, Kelly Wanser, founded SilverLining — a nonprofit organization that advocates research into climate interventions, such as seeding or brightening clouds, to ensure that we maintain a safe climate.

Links to learn more, summary and full transcript.

Kelly says that current climate projections, even if we do everything right from here on out, imply that two degrees of global warming are now unavoidable. And the same scientists who made those projections fear the flow-through effect that warming could have.

Since our best case scenario may already be too dangerous, SilverLining focuses on ways that we could intervene quickly in the climate if things get especially grim — their research serving as a kind of insurance policy.

After considering everything from mirrors in space, to shiny objects on the ocean, to materials on the Arctic, their scientists concluded that the most promising approach was leveraging one of the ways that the Earth already regulates its temperature — the reflection of sunlight off particles and clouds in the atmosphere.

Cloud brightening is a climate control approach in which a fine mist of sea water is sprayed into clouds to make them 'whiter', so they reflect even more sunlight back into space.

These ‘streaks’ in clouds are already created by ships because the particulates from their diesel engines inadvertently make clouds a bit brighter.

Kelly says that scientists estimate that we're already lowering the global temperature this way by 0.5–1.1°C, without even intending to.

While fossil fuel particulates are terrible for human health, scientists think we could replicate this cooling effect by simply spraying sea water up into clouds. But so far there hasn't been funding to measure how much temperature change you get for a given amount of spray.

And we wouldn't want to dive into these methods head first, because the atmosphere is a complex system we can't yet properly model, and there are many things to check first. For instance, chemicals that reflect light from the upper atmosphere might totally change wind patterns in the stratosphere. Or they might not: for all the discussion of global warming, the climate is surprisingly understudied.

The public tends to be skeptical of climate interventions, otherwise known as geoengineering, so in this episode we cover a range of possible objections, such as:

• It being riskier than doing nothing
• That it will inevitably be dangerously political
• And the risk of the 'double catastrophe', where a pandemic stops our climate interventions and temperatures skyrocket at the worst time.

Kelly and Rob also talk about:

• The many climate interventions that are already happening
• The most promising ideas in the field
• And whether people would be more accepting if we found ways to intervene that had nothing to do with making the world a better place.

Chapters:
• Rob’s intro (00:00:00)
• The interview begins (00:01:37)
• Existing climate interventions (00:06:44)
• Most promising ideas (00:16:23)
• Doing good by accident (00:28:39)
• Objections to this approach (00:31:16)
• How much could countries do individually? (00:47:19)
• Government funding (00:50:08)
• Is global coordination possible? (00:53:01)
• Malicious use (00:57:07)
• Careers and SilverLining (01:04:03)
• Rob’s outro (01:23:34)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Episodes (300)

#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans and have $1b at their disposal.

Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:

• OpenAI’s latest plans and research progress
• His paper 'Concrete Problems in AI Safety', which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid
• How listeners can best go about pursuing a career in machine learning and AI development themselves

Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more.

Chapters:
• What OpenAI is doing, Dario’s research and why AI is important (00:01:33)
• Why OpenAI scaled back its Universe project (00:13:00)
• Why AI could be dangerous (00:15:50)
• Would smarter-than-human AI solve most of the world’s problems? (00:24:20)
• Paper on five concrete problems in AI safety (00:29:00)
• Has OpenAI made progress? (00:43:48)
• What this back-flipping noodle can teach you about AI safety (00:49:30)
• How someone can pursue a career in AI safety and get a job at OpenAI (00:55:30)
• Where and what should people study? (01:02:30)
• What other paradigms for AI are there? (01:04:15)
• How do you go from studying to getting a job? What places are there to work? (01:07:55)
• If there’s a 17-year-old listening, what should they start reading first? (01:13:30)
• Is this a good way to develop your broader career options? Is it a safe move? (01:19:00)
• What if you’re older and haven’t studied machine learning? How do you break in? (01:21:10)
• What about doing this work in academia? (01:24:00)
• Is the work frustrating because solutions may not exist? (01:26:50)
• How do we prevent a dangerous arms race? (01:31:35)
• Final remarks on how to get into doing useful work in machine learning (01:36:30)

21 July 2017 · 1h 38min

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.

David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk - especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, helping both ordinary people and politicians focus on the important risks we face and avoid being distracted by flashy risks that don’t actually have much impact.

Summary, full transcript and extra links to learn more.

To help make sense of the uncertainties we face in life he has had to invent concepts like the microlife: a 30-minute change in life expectancy (https://en.wikipedia.org/wiki/Microlife).

We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 June 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do.

This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy.

Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.

5 June 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now.

That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:

• #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
• #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
• #17 – Will MacAskill on why our descendants might view us as moral monsters
• #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
• #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
• #60 – What Professor Tetlock learned from 40 years studying how to predict the future
• #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
• #71 – Benjamin Todd on the key ideas of 80,000 Hours
• #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
• 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
