#117 – David Denkenberger on using paper mills and seaweed to feed everyone in a catastrophe, ft Sahil Shah

If there's a nuclear war followed by nuclear winter, and the sun is blocked out for years, most of us are going to starve, right? Well, as things stand, we probably would, because humanity hasn't done much to prevent it. But it turns out that an ounce of forethought might be enough for most people to get the calories they need to survive, even in a future as grim as that one.

Today's guest is engineering professor Dave Denkenberger, who co-founded the Alliance to Feed the Earth in Disasters (ALLFED), which has the goal of finding ways humanity might be able to feed itself for years without relying on the sun. Over the last seven years, Dave and his team have turned up options from the mundane, like mushrooms grown on rotting wood, to the bizarre, like bacteria that can eat natural gas or electricity itself.

Links to learn more, summary and full transcript.

One option stands out as potentially able to feed billions: finding a way to eat wood ourselves. Even after a disaster, a huge amount of calories will be lying around, stored in wood and other plant cellulose. The trouble is that, even though cellulose is basically a lot of sugar molecules stuck together, humans can't eat wood.

But we do know how to turn wood into something people can eat. We can grind wood up in already existing paper mills, then mix the pulp with enzymes that break the cellulose into sugar and the hemicellulose into other sugars.

Another option that shows a lot of promise is seaweed. Buffered by the water around them, ocean life wouldn't be as affected by the lower temperatures resulting from the sun being obscured. Sea plants are also used to growing in low light, because the water above them shades them to some extent.

Dave points out that "there are several species of seaweed that can still grow 10% per day, even with the lower light levels in nuclear winter and lower temperatures. ... Not surprisingly, with that 10% growth per day, assuming we can scale up, we could actually get up to 160% of human calories in less than a year."
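A quick sanity check on that arithmetic: at 10% compound growth per day, even a tiny starting base scales up astonishingly fast. The starting share of human calorie needs below is an assumption for illustration, not a figure from the episode:

```python
import math

# Hedged illustration of the quote's claim: with 10% compound growth per
# day, how many days would it take to scale seaweed output from a small
# assumed starting fraction of human calorie needs up to 160% of needs?
daily_growth = 1.10      # 10% growth per day (from the quote)
start_fraction = 0.001   # assumed: seaweed starts at 0.1% of calorie needs
target_fraction = 1.60   # 160% of human calories (from the quote)

# Solve start_fraction * daily_growth**days = target_fraction for days.
days = math.log(target_fraction / start_fraction) / math.log(daily_growth)
print(round(days))  # roughly 77 days — well under a year
```

Even if the assumed starting base were a thousand times smaller, the answer would still come in under a year, which is why the binding constraint is scaling up cultivation infrastructure, not the biology.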

Of course it will be easier to scale up seaweed production if it's already a reasonably sized industry. At the end of the interview, we're joined by Sahil Shah, who is trying to expand seaweed production in the UK with his business Sustainable Seaweed.

While a diet of seaweed and trees turned into sugar might not seem that appealing, the team at ALLFED thinks several perfectly normal crops could also make a big contribution to feeding the world, even in a truly catastrophic scenario. Those crops include potatoes, canola, and sugar beets, which are currently grown in cool low-light environments.

Many of these ideas could turn out to be misguided or impractical in real-world conditions, which is why Dave and ALLFED are raising money to test them out on the ground. They think it's essential to show these techniques can work so that should the worst happen, people turn their attention to producing more food rather than fighting one another over the small amount of food humanity has stockpiled.

In this conversation, Rob, Dave, and Sahil discuss the above, as well as:
• How much one can trust the sort of economic modelling ALLFED does
• Bacteria that turn natural gas or electricity into protein
• How to feed astronauts in space with nuclear power
• What individuals can do to prepare themselves for global catastrophes
• Whether we should worry about humanity running out of natural resources
• How David helped save $10 billion worth of electricity through energy efficiency standards
• And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:36)
  • Resilient foods recap (00:04:27)
  • Cost effectiveness recap (00:08:07)
  • Turning fiber or wood or cellulose into sugar (00:10:30)
  • Redirecting human-edible food away from animals (00:22:46)
  • Seaweed production (00:26:33)
  • Crops that can handle lower temperatures or lower light (00:35:24)
  • Greenhouses (00:40:51)
  • How much to trust this economic modeling (00:43:50)
  • Global cooperation (00:51:16)
  • People feeding themselves using these methods (00:57:15)
  • NASA and ALLFED (01:04:47)
  • Kinds of catastrophes (01:15:16)
  • Is New Zealand overrated? (01:25:35)
  • Should listeners be doing anything to prepare for possible disasters? (01:28:43)
  • Cost effectiveness of work on EMPs (01:30:43)
  • The future of ALLFED (01:33:34)
  • Opportunities at ALLFED (01:40:49)
  • Why Dave is optimistic around bigger-picture scarcity issues (01:46:58)
  • Energy return on energy invested (01:56:36)
  • Nitrogen and phosphorus (02:03:25)
  • Energy and food prices (02:07:18)
  • Sustainable Seaweed with Sahil Shah (02:21:44)
  • Locusts (02:38:33)
  • The effect of COVID on food supplies (02:44:01)
  • How much food prices would spike in a disaster (02:50:46)
  • How Dave helped to save ~$10 billion worth of energy (02:56:33)
  • What it’s like to live in Alaska (03:03:18)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

Episodes (300)

#3 - Dario Amodei on OpenAI and how AI will change the world for good and ill

Just two years ago OpenAI didn’t exist. It’s now among the most elite groups of machine learning researchers. They’re trying to make an AI that’s smarter than humans, and they have $1b at their disposal. Even stranger for a Silicon Valley start-up, it’s not a business, but rather a non-profit founded by Elon Musk and Sam Altman among others, to ensure the benefits of AI are distributed broadly to all of society.

I did a long interview with one of its first machine learning researchers, Dr Dario Amodei, to learn about:
• OpenAI’s latest plans and research progress.
• His paper *Concrete Problems in AI Safety*, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid.
• How listeners can best go about pursuing a career in machine learning and AI development themselves.

Full transcript, apply for personalised coaching to work on AI safety, see what questions are asked when, and read extra resources to learn more.

Chapters:

  • What OpenAI is doing, Dario’s research and why AI is important (00:01:33)
  • Why OpenAI scaled back its Universe project (00:13:00)
  • Why AI could be dangerous (00:15:50)
  • Would smarter than human AI solve most of the world’s problems? (00:24:20)
  • Paper on five concrete problems in AI safety (00:29:00)
  • Has OpenAI made progress? (00:43:48)
  • What this back flipping noodle can teach you about AI safety (00:49:30)
  • How someone can pursue a career in AI safety and get a job at OpenAI (00:55:30)
  • Where and what should people study? (01:02:30)
  • What other paradigms for AI are there? (01:04:15)
  • How do you go from studying to getting a job? What places are there to work? (01:07:55)
  • If there's a 17-year-old listening here, what should they start reading first? (01:13:30)
  • Is this a good way to develop your broader career options? Is it a safe move? (01:19:00)
  • What if you’re older and haven’t studied machine learning? How do you break in? (01:21:10)
  • What about doing this work in academia? (01:24:00)
  • Is the work frustrating because solutions may not exist? (01:26:50)
  • How do we prevent a dangerous arms race? (01:31:35)
  • Final remarks on how to get into doing useful work in machine learning (01:36:30)

21 July 2017 · 1h 38min

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives. David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk - especially everyday risks we face like getting cancer or dying in a car crash. As a result he’s regularly in the media explaining numbers in the news, helping both ordinary people and politicians focus on the important risks we face, and avoid being distracted by flashy risks that don’t actually have much impact. Summary, full transcript and extra links to learn more. To help make sense of the uncertainties we face in life he has had to invent concepts like the microlife, a unit representing a 30-minute change in life expectancy. (https://en.wikipedia.org/wiki/Microlife) We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 June 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies and has a particular interest in artificial general intelligence, that is, an AI system that could do most or all of the tasks humans could do. This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy. Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.

5 June 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you’d rather listen on this feed, here are the ten episodes we recommend you listen to first:
• #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
• #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
• #17 – Will MacAskill on why our descendants might view us as moral monsters
• #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
• #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
• #60 – What Professor Tetlock learned from 40 years studying how to predict the future
• #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
• #71 – Benjamin Todd on the key ideas of 80,000 Hours
• #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
• 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
