#191 (Part 1) – Carl Shulman on the economy and national security after AGI

This is the first part of our marathon interview with Carl Shulman. The second episode is on government and society after AGI. You can listen to them in either order!

The human brain does what it does with a shockingly low energy supply: just 20 watts — a fraction of a cent worth of electricity per hour. What would happen if AI technology merely matched what evolution has already managed, and could accomplish the work of top human professionals given a 20-watt power supply?
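
As a rough sanity check on that figure, here is a minimal back-of-the-envelope sketch. The 20-watt figure comes from the episode; the electricity price of roughly $0.10-0.15 per kWh is an illustrative assumption, not a number from the episode.

```python
# Rough check of the "fraction of a cent per hour" figure for a 20-watt brain.
BRAIN_POWER_W = 20            # approximate power draw of a human brain, in watts
PRICE_PER_KWH = (0.10, 0.15)  # assumed retail electricity price range, USD per kWh

energy_kwh = BRAIN_POWER_W / 1000 * 1  # energy used in one hour: 0.02 kWh
low, high = (energy_kwh * price for price in PRICE_PER_KWH)
print(f"Running a 20 W 'brain' for an hour costs roughly ${low:.4f} to ${high:.4f}")
# -> about $0.0020 to $0.0030, i.e. a fifth to a third of a cent
```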

Many people have entertained that hypothetical, but perhaps nobody has followed through and worked out the implications as thoroughly as Carl Shulman. Behind the scenes, his work has greatly influenced how leaders in artificial general intelligence (AGI) picture the world they're creating.

Links to learn more, highlights, and full transcript.

Carl simply follows the logic to its natural conclusion. This is a world where 1 cent of electricity can be turned into medical advice, company management, or scientific research that would today cost hundreds of dollars, resulting in a scramble to manufacture chips and apply them to the most lucrative forms of intellectual labour.

It's a world where, given their incredible hourly salaries, the supply of outstanding AI researchers quickly goes from 10,000 to 10 million or more, enormously accelerating progress in the field.

It's a world where companies operated entirely by AIs working together are much faster and more cost-effective than those that lean on humans for decision making, and the latter are progressively driven out of business.

It's a world where the technical challenges around control of robots are rapidly overcome, turning robots into strong, fast, precise, and tireless workers able to accomplish any physical work the economy requires, and prompting a rush to build billions of them and cash in.

As the economy grows, each person could afford the practical equivalent of a team of hundreds of machine 'people' to help them with every aspect of their lives.

And with growth rates this high, it doesn't take long to run up against Earth's physical limits — in this case, the toughest limit to engineer around is the Earth's capacity to shed waste heat. If this machine economy, with its insatiable demand for power, generates heat faster than the Earth can radiate it into space, the planet will rapidly heat up and become uninhabitable for humans and other animals.
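
To see why waste heat, rather than energy supply itself, becomes the binding constraint, here is a rough first-order sketch based on the Stefan-Boltzmann law. The constants used (about 120,000 TW of absorbed sunlight, an effective radiating temperature of roughly 255 K, and present-day energy use of about 20 TW) are standard textbook values chosen for illustration, not figures from the episode.

```python
# First-order estimate of how added waste heat warms the Earth.
# The planet radiates P = sigma * A * T^4, so an extra heat flux dP raises the
# effective radiating temperature by roughly dT ~ T * dP / (4 * P).
SOLAR_ABSORBED_TW = 120_000  # sunlight absorbed after albedo, ~120,000 TW (textbook value)
T_EFFECTIVE_K = 255          # Earth's effective radiating temperature, ~255 K
CURRENT_USE_TW = 20          # present-day human primary energy use, ~20 TW

def warming_from_waste_heat(waste_heat_tw: float) -> float:
    """Approximate rise (in kelvin) of the effective radiating temperature."""
    return T_EFFECTIVE_K * waste_heat_tw / (4 * SOLAR_ABSORBED_TW)

for multiple in (1, 100, 1000):  # a machine economy using N times today's energy
    waste_tw = CURRENT_USE_TW * multiple
    print(f"{multiple:>4}x today's energy use -> ~{warming_from_waste_heat(waste_tw):.2f} K of warming")
# Warming only becomes severe once waste heat approaches the same order of
# magnitude as absorbed sunlight, which is the pressure pushing heavy
# computation and industry off-planet.
```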

This creates pressure to move economic activity off-planet. So you could develop effective populations of billions of scientific researchers operating on computer chips orbiting in space, sending the results of their work, such as drug designs, back to Earth for use.

These are just some of the wild implications that could follow naturally from truly embracing the hypothetical: what if we develop AGI that could accomplish everything that the most productive humans can, using the same energy supply?

In today's episode, Carl explains the above, and then host Rob Wiblin pushes back on whether that’s realistic or just a cool story, asking:

  • If we're heading towards the above, how come economic growth is slow now and not really increasing?
  • Why have computers and computer chips had so little effect on economic productivity so far?
  • Are self-replicating biological systems a good comparison for self-replicating machine systems?
  • Isn't this just too crazy and weird to be plausible?
  • What bottlenecks would be encountered in supplying energy and natural resources to this growing economy?
  • Might there not be severely declining returns to bigger brains and more training?
  • Wouldn't humanity get scared and pull the brakes if such a transformation kicked off?
  • If this is right, how come economists don't agree?

Finally, Carl addresses the moral status of machine minds themselves. Would they be conscious, or otherwise have a claim to moral status or rights? And how might humans and machines coexist with neither side dominating or exploiting the other?

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:00)
  • Transitioning to a world where AI systems do almost all the work (00:05:21)
  • Economics after an AI explosion (00:14:25)
  • Objection: Shouldn’t we be seeing economic growth rates increasing today? (00:59:12)
  • Objection: Speed of doubling time (01:07:33)
  • Objection: Declining returns to increases in intelligence? (01:11:59)
  • Objection: Physical transformation of the environment (01:17:39)
  • Objection: Should we expect an increased demand for safety and security? (01:29:14)
  • Objection: “This sounds completely whack” (01:36:10)
  • Income and wealth distribution (01:48:02)
  • Economists and the intelligence explosion (02:13:31)
  • Baumol effect arguments (02:19:12)
  • Denying that robots can exist (02:27:18)
  • Classic economic growth models (02:36:12)
  • Robot nannies (02:48:27)
  • Slow integration of decision-making and authority power (02:57:39)
  • Economists’ mistaken heuristics (03:01:07)
  • Moral status of AIs (03:11:45)
  • Rob’s outro (04:11:47)


Producer and editor: Keiran Harris
Audio engineering lead: Ben Cordell
Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Episodes (299)

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.

David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk, especially everyday risks we face like getting cancer or dying in a car crash. As a result he's regularly in the media explaining numbers in the news, trying to help both ordinary people and politicians focus on the important risks we face, and avoid being distracted by flashy risks that don't actually have much impact.

Summary, full transcript and extra links to learn more.

To help make sense of the uncertainties we face in life he has had to invent concepts like the microlife: a 30-minute change in life expectancy (https://en.wikipedia.org/wiki/Microlife).

We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 June 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies, and has a particular interest in artificial general intelligence: that is, an AI system that could do most or all of the tasks humans could do.

This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy.

Full transcript, apply for personalised coaching to work on AI strategy, see what questions are asked when, and read extra resources to learn more.

5 June 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others are using to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now.

That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you'd rather listen on this feed, here are the ten episodes we recommend you listen to first:

  • #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
  • #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
  • #17 – Will MacAskill on why our descendants might view us as moral monsters
  • #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
  • #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
  • #60 – What Professor Tetlock learned from 40 years studying how to predict the future
  • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
  • #71 – Benjamin Todd on the key ideas of 80,000 Hours
  • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
  • 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
