#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space?

That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions.

My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum Collins

In today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.

Links to learn more, highlights, and full transcript.

They cover:

  • How AI could strengthen government capacity, and how that's a double-edged sword
  • How new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't there
  • To what extent policymakers take different threats from AI seriously
  • Whether the US and China are in an AI arms race or not
  • Whether it's OK to transform the world without much of the world agreeing to it
  • The tyranny of small differences in AI policy
  • Disagreements between different schools of thought in AI policy, and proposals that could unite them
  • How the US AI Bill of Rights could be improved
  • Whether AI will transform the labour market, and whether it will become a partisan political issue
  • The tensions between the cultures of San Francisco and DC, and how to bridge the divide between them
  • What listeners might be able to do to help with this whole mess
  • Panpsychism
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:00)
  • The interview begins (00:04:01)
  • The risk of autocratic lock-in due to AI (00:10:02)
  • The state of play in AI policymaking (00:13:40)
  • China and AI (00:32:12)
  • The most promising regulatory approaches (00:57:51)
  • Transforming the world without the world agreeing (01:04:44)
  • AI Bill of Rights (01:17:32)
  • Who’s ultimately responsible for the consequences of AI? (01:20:39)
  • Policy ideas that could appeal to many different groups (01:29:08)
  • Tension between those focused on x-risk and those focused on AI ethics (01:38:56)
  • Communicating with policymakers (01:54:22)
  • Is AI going to transform the labour market in the next few years? (01:58:51)
  • Is AI policy going to become a partisan political issue? (02:08:10)
  • The value of political philosophy (02:10:53)
  • Tantum’s work at DeepMind (02:21:20)
  • CSET (02:32:48)
  • Career advice (02:35:21)
  • Panpsychism (02:55:24)


Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

Episodes (299)

#2 - David Spiegelhalter on risk, stats and improving understanding of science

Recorded in 2015 by Robert Wiblin with colleague Jess Whittlestone at the Centre for Effective Altruism, and recovered from the dusty 80,000 Hours archives.

David Spiegelhalter is a statistician at the University of Cambridge and something of an academic celebrity in the UK. Part of his role is to improve the public understanding of risk - especially everyday risks we face, like getting cancer or dying in a car crash. As a result he's regularly in the media explaining numbers in the news, helping both ordinary people and politicians focus on the important risks we face and avoid being distracted by flashy risks that don't actually have much impact.

Summary, full transcript and extra links to learn more.

To help make sense of the uncertainties we face in life, he has had to invent concepts like the microlife: a 30-minute change in life expectancy (https://en.wikipedia.org/wiki/Microlife).

We wanted to learn whether he thought a lifetime of work communicating science had actually had much impact on the world, and what advice he might have for people planning their careers today.

21 June 2017 · 33min

#1 - Miles Brundage on the world's desperate need for AI strategists and policy experts

Robert Wiblin, Director of Research at 80,000 Hours, speaks with Miles Brundage, research fellow at the University of Oxford's Future of Humanity Institute. Miles studies the social implications surrounding the development of new technologies, and has a particular interest in artificial general intelligence - that is, an AI system that could do most or all of the tasks humans can do.

This interview complements our profile of the importance of positively shaping artificial intelligence and our guide to careers in AI policy and strategy.

Full transcript, application for personalised coaching to work on AI strategy, a guide to what questions are asked when, and extra resources to learn more.

5 June 2017 · 55min

#0 – Introducing the 80,000 Hours Podcast

80,000 Hours is a non-profit that provides research and other support to help people switch into careers that effectively tackle the world's most pressing problems. This podcast is just one of many things we offer, the others of which you can find at 80000hours.org.

Since 2017 this show has been putting out interviews about the world's most pressing problems and how to solve them — which some people enjoy because they love to learn about important things, and others use to figure out what they want to do with their careers or with their charitable giving.

If you haven't yet spent a lot of time with 80,000 Hours or our general style of thinking, called effective altruism, it's probably really helpful to first go through the episodes that set the scene, explain our overall perspective on things, and generally offer all the background information you need to get the most out of the episodes we're making now. That's why we've made a new feed with ten carefully selected episodes from the show's archives, called 'Effective Altruism: An Introduction'. You can find it by searching for 'Effective Altruism' in your podcasting app or at 80000hours.org/intro.

Or, if you'd rather listen on this feed, here are the ten episodes we recommend you listen to first:

  • #21 – Holden Karnofsky on the world's most intellectual foundation and how philanthropy can have maximum impact by taking big risks
  • #6 – Toby Ord on why the long-term future of humanity matters more than anything else and what we should do about it
  • #17 – Will MacAskill on why our descendants might view us as moral monsters
  • #39 – Spencer Greenberg on the scientific approach to updating your beliefs when you get new evidence
  • #44 – Paul Christiano on developing real solutions to the 'AI alignment problem'
  • #60 – What Professor Tetlock learned from 40 years studying how to predict the future
  • #46 – Hilary Greaves on moral cluelessness, population ethics and tackling global issues in academia
  • #71 – Benjamin Todd on the key ideas of 80,000 Hours
  • #50 – Dave Denkenberger on how we might feed all 8 billion people through a nuclear winter
  • 80,000 Hours Team chat #3 – Koehler and Todd on the core idea of effective altruism and how to argue for it

1 May 2017 · 3min
