#228 – Eileen Yam on how we're completely out of touch with what the public thinks about AI

If you work in AI, you probably think it’s going to boost productivity, create wealth, advance science, and improve your life. If you’re a member of the American public, you probably strongly disagree.

In three major reports released over the last year, the Pew Research Center surveyed over 5,000 US adults and 1,000 AI experts. They found that the general public holds many beliefs about AI that are virtually nonexistent in Silicon Valley, and that the tech industry’s pitch about the likely benefits of their work has thus far failed to convince many people at all. AI is, in fact, a rare topic that mostly unites Americans — regardless of politics, race, age, or gender.

Links to learn more, video, and full transcript: https://80k.info/ey

Today’s guest, Eileen Yam, director of science and society research at Pew, walks us through some of the eye-watering gaps in perception:

  • Jobs: 73% of AI experts see a positive impact on how people do their jobs. Only 23% of the public agrees.
  • Productivity: 74% of experts say AI is very likely to make humans more productive. Just 17% of the public agrees.
  • Personal benefit: 76% of experts expect AI to benefit them personally. Only 24% of the public expects the same (while 43% expect it to harm them).
  • Happiness: 22% of experts think AI is very likely to make humans happier, which is already surprisingly low — but a mere 6% of the public expects the same.

For the experts building these systems, the vision is one of human empowerment and efficiency. But outside the Silicon Valley bubble, the mood is more one of anxiety — not only about Terminator scenarios, but about AI denying their children “curiosity, problem-solving skills, critical thinking skills and creativity,” while they themselves are replaced and devalued:

  • 53% of Americans say AI will worsen people’s ability to think creatively.
  • 50% believe it will hurt our ability to form meaningful relationships.
  • 38% think it will worsen our ability to solve problems.

Open-ended responses to the surveys reveal a poignant fear: that by offloading cognitive work to algorithms, we are reshaping childhood so profoundly that we no longer know what kind of adults it will produce. As one teacher quoted in the study noted, we risk raising a generation that relies on AI so much it never “grows its own curiosity, problem-solving skills, critical thinking skills and creativity.”

If the people building the future are this out of sync with the people living in it, the impending “techlash” might be more severe than industry anticipates.

In this episode, Eileen and host Rob Wiblin break down the data on where these groups disagree, where they actually align (nobody trusts the government or companies to regulate this), and why the “digital natives” might actually be the most worried of all.

This episode was recorded on September 25, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Eileen Yam? (00:01:30)
  • Is it premature to care what the public says about AI? (00:02:26)
  • The top few feelings the US public has about AI (00:06:34)
  • The public and AI insiders disagree enormously on some things (00:16:25)
  • Fear #1: Erosion of human abilities and connections (00:20:03)
  • Fear #2: Loss of control of AI (00:28:50)
  • Americans don't want AI in their personal lives (00:33:13)
  • AI at work and job loss (00:40:56)
  • Does the public always feel this way about new things? (00:44:52)
  • The public doesn't think AI is overhyped (00:51:49)
  • The AI industry seems on a collision course with the public (00:58:16)
  • Is the survey methodology good? (01:05:26)
  • Where people are positive about AI: saving time, policing, and science (01:12:51)
  • Biggest gaps between experts and the general public, and where they agree (01:18:44)
  • Demographic groups agree to a surprising degree (01:28:58)
  • Eileen’s favourite bits of the survey and what Pew will ask next (01:37:29)

Video and audio editing: Dominic Armstrong, Milo McGuire, Luke Monsour, and Simon Monsour
Music: CORBIT
Coordination, transcripts, and web: Katy Moore

Episodes (310)

#92 – Brian Christian on the alignment problem

Brian Christian is a bestselling author with a particular knack for accurately communicating difficult or technical ideas from both mathematics and computer science. Listeners loved our episode about his book Algorithms to Live By — so when the team read his new book, The Alignment Problem, and found it to be an insightful and comprehensive review of the state of the research into making advanced AI useful and reliably safe, getting him back on the show was a no-brainer.

Brian has so much of substance to say that this episode will likely be of interest to people who know a lot about AI as well as those who know a little, and to people who are nervous about where AI is going as well as those who aren’t nervous at all.

Links to learn more, summary and full transcript.

Here’s a tease of 10 Hollywood-worthy stories from the episode:

  • The Riddle of Dopamine: The development of reinforcement learning solves a long-standing mystery of how humans are able to learn from their experience.
  • ALVINN: A student teaches a military vehicle to drive between Pittsburgh and Lake Erie, without intervention, in the early 1990s, using a computer with a tenth the processing capacity of an Apple Watch.
  • Couch Potato: An agent trained to be curious is stopped in its quest to navigate a maze by a paralysing TV screen.
  • Pitts & McCulloch: A homeless teenager and his foster father figure invent the idea of the neural net.
  • Tree Senility: Agents become so good at living in trees to escape predators that they forget how to leave, starve, and die.
  • The Danish Bicycle: A reinforcement learning agent figures out that it can better achieve its goal by riding in circles as quickly as possible rather than reaching its purported destination.
  • Montezuma’s Revenge: By 2015 a reinforcement learner can play 60 different Atari games — the majority impossibly well — but can’t score a single point on one game humans find tediously simple.
  • Curious Pong: Two novelty-seeking agents, forced to play Pong against one another, create increasingly extreme rallies.
  • AlphaGo Zero: A computer program becomes superhuman at chess and Go in under a day by attempting to imitate itself.
  • Robot Gymnasts: Over the course of an hour, humans teach robots to do perfect backflips just by telling them which of two random actions looks more like a backflip.

We also cover:

  • How reinforcement learning actually works, and some of its key achievements and failures
  • How a lack of curiosity can cause AIs to fail to be able to do basic things
  • The pitfalls of getting AI to imitate how we ourselves behave
  • The benefits of getting AI to infer what we must be trying to achieve
  • Why it’s good for agents to be uncertain about what they’re doing
  • Why Brian isn’t that worried about explicit deception
  • The interviewees Brian most agrees with, and most disagrees with
  • Developments since Brian finished the manuscript
  • The effective altruism and AI safety communities
  • And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

5 March 2021 · 2h 55min

#91 – Lewis Bollard on big wins against factory farming and how they happened

I suspect today's guest, Lewis Bollard, might be the single best person in the world to interview to get an overview of all the methods that might be effective for putting an end to factory farming, and what broader lessons we can learn from the experiences of people working to end cruelty in animal agriculture. That's why I interviewed him back in 2017, and it's why I've come back for an updated second dose four years later. That conversation became a touchstone resource for anyone wanting to understand why people might decide to focus their altruism on farmed animal welfare, what those people are up to, and why.

Lewis leads Open Philanthropy’s strategy for farm animal welfare, and since he joined in 2015 they’ve disbursed about $130 million in grants to nonprofits as part of this program.

This episode certainly isn't only for vegetarians or people whose primary focus is animal welfare. The farmed animal welfare movement has had a lot of big wins over the last five years, and many of the lessons animal activists and plant-based meat entrepreneurs have learned are of much broader interest.

Links to learn more, summary and full transcript.

Some of those include:

  • Between 2019 and 2020, Beyond Meat's cost of goods sold fell from about $4.50 a pound to $3.50 a pound. Will plant-based meat or clean meat displace animal meat, and if so, when? How quickly can it reach price parity?
  • One study reported that philosophy students reduced their meat consumption by 13% after going through a course on the ethics of factory farming. But do studies like this replicate? And what happens several months later?
  • One survey showed that 33% of people supported a ban on animal farming. Should we take such findings seriously? Or is it as informative as the study which showed that 38% of Americans believe that Ted Cruz might be the Zodiac killer?
  • Costco, the second largest retailer in the US, is now over 95% cage-free. Why have they done that years before they had to? And can ethical individuals within these companies make a real difference?

We also cover:

  • Switzerland’s ballot measure on eliminating factory farming
  • What a Biden administration could mean for reducing animal suffering
  • How chicken is cheaper than peanuts
  • The biggest recent wins for farmed animals
  • Things that haven’t gone to plan in animal advocacy
  • Political opportunities for farmed animal advocates in Europe
  • How the US is behind Brazil and Israel on animal welfare standards
  • The value of increasing media coverage of factory farming
  • The state of the animal welfare movement
  • And much more

If you’d like an introduction to the nature of the problem and why Lewis is working on it, in addition to our 2017 interview with Lewis, you could check out this 2013 cause report from Open Philanthropy.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:37)
  • Biggest recent wins for farmed animals (00:06:13)
  • How to lower the price of plant-based meat (00:24:57)
  • Documentaries for farmed animals (00:37:05)
  • Political opportunities (00:43:07)
  • Do we know how to get people to reduce their meat consumption? (00:45:03)
  • The fraction of Americans who don’t eat meat (00:52:17)
  • Surprising number of people who support a ban on animal farming (00:57:57)
  • What we’ve learned over the past four years (01:02:48)
  • Things that haven’t gone to plan (01:26:30)
  • Animal advocacy in emerging countries (01:34:44)
  • Fish, crustaceans, and wild animals (01:40:28)
  • Open Philanthropy grants (01:47:43)
  • Audience questions (01:59:29)
  • The elimination of slavery (02:10:03)
  • Careers (02:15:52)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

15 February 2021 · 2h 33min

Rob Wiblin on how he ended up the way he is

This is a crosspost of an episode of the Eureka Podcast. The interviewer is Misha Saul, a childhood friend of Rob's whom he has known for over 20 years. While it's not an episode of our own show, we decided to share it with subscribers because it's fun, and because it touches on personal topics that we don't usually cover on the show.

Rob and Misha cover:

  • How Rob's parents shaped who he is (if indeed they did)
  • Their shared teenage obsession with philosophy, which eventually led to Rob working at 80,000 Hours
  • How their politics were shaped by growing up in the 90s
  • How talking to Rob helped Misha develop his own very different worldview
  • Why The Lord of the Rings movies have held up so well
  • What it was like being an exchange student in Spain, and whether learning Spanish was a mistake
  • Marriage and kids
  • Institutional decline and historical analogies for the US in 2021
  • Making fun of teachers
  • Should we stop eating animals?

Producer: Keiran Harris
Audio mastering: Ben Cordell

3 February 2021 · 1h 57min

#90 – Ajeya Cotra on worldview diversification and how big the future could be

You wake up in a mysterious box, and hear the booming voice of God: “I just flipped a coin. If it came up heads, I made ten boxes, labeled 1 through 10 — each of which has a human in it. If it came up tails, I made ten billion boxes, labeled 1 through 10 billion — also with one human in each box. To get into heaven, you have to answer this correctly: Which way did the coin land?”

You think briefly, and decide you should bet your eternal soul on tails. The fact that you woke up at all seems like pretty good evidence that you’re in the big world — if the coin landed tails, way more people should be having an experience just like yours.

But then you get up, walk outside, and look at the number on your box. ‘3’. Huh. Now you don’t know what to believe. If God made 10 billion boxes, surely it's much more likely that you would have seen a number like 7,346,678,928?

In today's interview, Ajeya Cotra — a senior research analyst at Open Philanthropy — explains why this thought experiment from the niche of philosophy known as 'anthropic reasoning' could be relevant for figuring out where we should direct our charitable giving.

Links to learn more, summary and full transcript.

Some thinkers both inside and outside Open Philanthropy believe that philanthropic giving should be guided by 'longtermism' — the idea that we can do the most good if we focus primarily on the impact our actions will have on the long-term future. Ajeya thinks that for that notion to make sense, there needs to be a good chance we can settle other planets and solar systems and build a society that's both very large relative to what's possible on Earth and, by virtue of being so spread out, able to protect itself from extinction for a very long time.

But imagine that humanity has two possible futures ahead of it: Either we’re going to have a huge future like that, in which trillions of people ultimately exist, or we’re going to wipe ourselves out quite soon, thereby ensuring that only around 100 billion people ever get to live.

If there are eventually going to be 1,000 trillion humans, what should we think of the fact that we seemingly find ourselves so early in history? Being among the first 100 billion humans, as we are, is equivalent to walking outside and seeing a three on your box. Suspicious! If the future will have many trillions of people, the odds of us appearing so strangely early are very low indeed.

If we accept the analogy, maybe we can be confident that humanity is at a high risk of extinction based on this so-called 'doomsday argument' alone. If that’s true, maybe we should put more of our resources into avoiding apparent extinction threats like nuclear war and pandemics. But on the other hand, maybe the argument shows we're incredibly unlikely to achieve a long and stable future no matter what we do, and we should forget the long term and just focus on the here and now instead.

There are many critics of this theoretical ‘doomsday argument’, and it may be the case that it logically doesn't work. This is why Ajeya spent time investigating it, with the goal of ultimately making better philanthropic grants.

In this conversation, Ajeya and Rob discuss both the doomsday argument and the challenge Open Phil faces in striking a balance between taking big ideas seriously, and not going all in on philosophical arguments that may turn out to be barking up the wrong tree entirely.

They also discuss:

  • Which worldviews Open Phil finds most plausible, and how it balances them
  • How hard it is to get to other solar systems
  • The 'simulation argument'
  • When transformative AI might actually arrive
  • And much more

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

21 January 2021 · 2h 59min

Rob Wiblin on self-improvement and research ethics

This is a crosspost of an episode of the Clearer Thinking Podcast: 022: Self-Improvement and Research Ethics with Rob Wiblin. Rob chats with Spencer Greenberg, who has been an audience favourite in episodes 11 and 39 of the 80,000 Hours Podcast, and has now created this show of his own.

Among other things they cover:

  • Is trying to become a better person a good strategy for self-improvement?
  • Why Rob thinks many people could achieve much more by finding themselves a line manager
  • Why interviews on this show are so damn long
  • Is it complicated to figure out what human beings value, or actually simpler than it seems?
  • Why Rob thinks research ethics and institutional review boards are causing immense harm
  • Where prediction markets might be failing today, and how to tell

If you like this, go ahead and subscribe to Spencer's show by searching for Clearer Thinking in your podcasting app. In particular, you might want to check out Spencer’s conversation with another 80,000 Hours researcher: 008: Life Experiments and Philosophical Thinking with Arden Koehler.

The 80,000 Hours Podcast is produced by Keiran Harris.

13 January 2021 · 2h 30min

#73 - Phil Trammell on patient philanthropy and waiting to do good [re-release]

Rebroadcast: this episode was originally released in March 2020.

To do good, most of us look to use our time and money to affect the world around us today. But perhaps that's all wrong.

If you took $1,000 you were going to donate and instead put it in the stock market — where it grew on average 5% a year — in 100 years you'd have $125,000 to give away instead. And in 200 years you'd have $17 million.

This astonishing fact has driven today's guest, economics researcher Philip Trammell at Oxford's Global Priorities Institute, to investigate the case for and against so-called 'patient philanthropy' in depth. If the case for patient philanthropy is as strong as Phil believes, many of us should be trying to improve the world in a very different way than we are now.

He points out that on top of being able to dispense vastly more, whenever your trustees decide to use your gift to improve the world, they'll also be able to rely on the much broader knowledge available to future generations. A donor two hundred years ago couldn't have known that distributing anti-malarial bed nets was a good idea. Not only did bed nets not exist — we didn't even know about germs, and almost nothing in medicine was justified by science.

Does the COVID-19 emergency mean we should actually use resources right now? See Phil's first thoughts on this question here.

  • Links to learn more, summary and full transcript.
  • Latest version of Phil’s paper on the topic.

What similar leaps will our descendants have made in 200 years, allowing your now vast foundation to benefit more people in even greater ways?

And there's a third reason to wait as well. What are the odds that we today live at the most critical point in history, when resources happen to have the greatest ability to do good? It's possible. But the future may be very long, so there has to be a good chance that some moment in the future will be both more pivotal and more malleable than our own.

Of course, there are many objections to this proposal. If you start a foundation you hope will wait around for centuries, might it not be destroyed in a war, revolution, or financial collapse? Or might it not drift from its original goals, eventually just serving the interests of its distant future trustees, rather than the noble pursuits you originally intended? Or perhaps it could fail for the reverse reason, by staying true to your original vision — if that vision turns out to be as deeply morally mistaken as the Rhodes Scholarship's initial charter, which limited it to 'white Christian men'.

Alternatively, maybe the world will change in the meantime, making your gift useless. At one end, humanity might destroy itself before your trust tries to do anything with the money. Or perhaps everyone in the future will be so fabulously wealthy, or the problems of the world already so overcome, that your philanthropy will no longer be able to do much good.

Are these concerns, all of them legitimate, enough to overcome the case in favour of patient philanthropy? In today's conversation with researcher Phil Trammell and my colleague Howie Lempel, we try to answer that, and also discuss:

  • Historical attempts at patient philanthropy
  • Should we have a mixed strategy, where some altruists are patient and others impatient?
  • Which causes most need money now?
  • What is the research frontier here?
  • What does this all mean for what listeners should do differently?

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the transcript linked above.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcripts: Zakee Ulhaq

7 January 2021 · 2h 41min

#75 – Michelle Hutchinson on what people most often ask 80,000 Hours [re-release]

Rebroadcast: this episode was originally released in April 2020.

Since it was founded, 80,000 Hours has done one-on-one calls to supplement our online content and offer more personalised advice. We try to help people get clear on their most plausible paths, the key uncertainties they face in choosing between them, and provide resources, pointers, and introductions to help them in those paths.

I (Michelle Hutchinson) joined the team a couple of years ago after working at Oxford's Global Priorities Institute, and these days I'm 80,000 Hours' Head of Advising. Since then, chatting to hundreds of people about their career plans has given me some idea of the kinds of things it’s useful for people to hear about when thinking through their careers. So we thought it would be useful to discuss some on the show for everyone to hear.

  • Links to learn more, summary and full transcript.
  • See over 500 vacancies on our job board.
  • Apply for one-on-one career advising.

Among other common topics, we cover:

  • Why traditional careers advice involves thinking through what types of roles you enjoy followed by which of those are impactful, while we recommend going the other way: ranking roles on impact, and then going down the list to find the one you think you’d most flourish in.
  • That if you’re pitching your job search at the right level of role, you’ll need to apply to a large number of different jobs. So it's wise to broaden your options, by applying for both stretch and backup roles, and not over-emphasising a small number of organisations.
  • Our suggested process for writing a longer-term career plan: 1. shortlist your best medium- to long-term career options; 2. figure out the key uncertainties in choosing between them; and 3. map out concrete next steps to resolve those uncertainties.
  • Why many listeners aren't spending enough time finding out about what the day-to-day work is like in paths they're considering, or reaching out to people for advice or opportunities.
  • The difficulty of maintaining the ambition to increase your social impact, while also being proud of and motivated by what you're already accomplishing.

I also thought it might be useful to give people a sense of what I do and don’t do in advising calls, to help them figure out if they should sign up for it. If you’re wondering whether you’ll benefit from advising, bear in mind that it tends to be more useful to people:

1. With similar views to 80,000 Hours on what the world’s most pressing problems are, because we’ve done most research on the problems we think it’s most important to address.
2. Who don’t yet have close connections with people working at effective altruist organisations.
3. Who aren’t strongly locationally constrained.

If you’re unsure, it doesn’t take long to apply, and a lot of people say they find the application form itself helps them reflect on their plans. We’re particularly keen to hear from people from under-represented backgrounds.

Also in this episode:

  • I describe mistakes I’ve made in advising, and career changes made by people I’ve spoken with.
  • Rob and I argue about what risks to take with your career, like when it’s sensible to take a study break, or start from the bottom in a new career path.
  • I try to forecast how I’ll change after I have a baby, Rob speculates wildly on what motherhood is like, and Arden and I mercilessly mock Rob.

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

30 December 2020 · 2h 14min

#89 – Owen Cotton-Barratt on epistemic systems and layers of defense against potential global catastrophes

From one point of view academia forms one big 'epistemic' system — a process which directs attention, generates ideas, and judges which are good. Traditional print media is another such system, and we can think of society as a whole as a huge epistemic system, made up of these and many other subsystems. How these systems absorb, process, combine, and organise information will have a big impact on what humanity as a whole ends up doing with itself — in fact, at a broad level it basically entirely determines the direction of the future.

With that in mind, today’s guest Owen Cotton-Barratt has founded the Research Scholars Programme (RSP) at the Future of Humanity Institute at Oxford University, which gives early-stage researchers leeway to try to understand how the world works.

Links to learn more, summary and full transcript.

Instead of you having to pay for a master's degree, the RSP pays *you* to spend significant amounts of time thinking about high-level questions, like "What is important to do?” and “How can I usefully contribute?" Participants get to practice their research skills, while also thinking about research as a process and how research communities can function as epistemic systems that plug into the rest of society as productively as possible.

The programme attracts people with several years of experience who are looking to take their existing knowledge — whether that’s in physics, medicine, policy work, or something else — and apply it to what they determine to be the most important topics. It also attracts people without much experience, but who have a lot of ideas.

If you went directly into a PhD programme, you might have to narrow your focus quickly. But the RSP gives you time to explore the possibilities, and to figure out the answer to the question “What’s the topic that really matters, and that I’d be happy to spend several years of my life on?”

Owen thinks one of the most useful things about the two-year programme is being around other people — other RSP participants, as well as other researchers at the Future of Humanity Institute — who are trying to think seriously about where our civilisation is headed and how to have a positive impact on this trajectory. Instead of being isolated in a PhD, you’re surrounded by folks with similar goals who can push back on your ideas and point out where you’re making mistakes. Saving years not pursuing an unproductive path could mean that you ultimately have a much bigger impact with your career.

RSP applications are set to open in the spring of 2021 — but Owen thinks it’s helpful for people to think about it in advance.

In today’s episode, Arden and Owen mostly talk about Owen’s own research. They cover:

  • Extinction risk classification and reduction strategies
  • Preventing small disasters from becoming large disasters
  • How likely we are to go from being in a collapsed state to going extinct
  • What most people should do if longtermism is true
  • Advice for mathematically-minded people
  • And much more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:22)
  • Extinction risk classification and reduction strategies (00:06:02)
  • Defense layers (00:16:37)
  • Preventing small disasters from becoming large disasters (00:23:31)
  • Risk factors (00:38:57)
  • How likely are we to go from being in a collapsed state to going extinct? (00:48:02)
  • Estimating total levels of existential risk (00:54:35)
  • Everyday longtermism (01:01:35)
  • What should most people do if longtermism is true? (01:12:18)
  • 80,000 Hours’ issue with promoting career paths (01:24:12)
  • The existential risk of making a lot of really bad decisions (01:29:27)
  • What should longtermists do differently today (01:39:08)
  • Biggest concerns with this framework (01:51:28)
  • Research careers (02:04:04)
  • Being a mathematician (02:13:33)
  • Advice for mathematically minded people (02:24:30)
  • Rob’s outro (02:37:32)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript: Zakee Ulhaq

17 December 2020 · 2h 38min
