#88 – Tristan Harris on the need to change the incentives of social media companies

In its first 28 days on Netflix, the documentary The Social Dilemma — about the possible harms being caused by social media and other technology products — was seen by 38 million households in about 190 countries and in 30 languages.

Over the last ten years, the idea that Facebook, Twitter, and YouTube are degrading political discourse and grabbing and monetizing our attention in an alarming way has gone mainstream to such an extent that it's hard to remember how recently it was a fringe view.

It feels intuitively true that our attention spans are shortening, we're spending more time alone, we're less productive, there's more polarization and radicalization, and we trust our fellow citizens less because we have less of a shared basis of reality.

But while it all feels plausible, how strong is the evidence that it's true? In the past, people have worried about every new technological development — often in ways that seem foolish in retrospect. Socrates famously feared that being able to write things down would ruin our memory.

At the same time, historians think that the printing press probably generated religious wars across Europe, and that the radio helped Hitler and Stalin maintain power by giving them and them alone the ability to spread propaganda across the whole of Germany and the USSR. Fears about new technologies aren't always misguided.

Tristan Harris, leader of the Center for Humane Technology, and co-host of the Your Undivided Attention podcast, is arguably the most prominent person working on reducing the harms of social media, and he was happy to engage with Rob’s good-faith critiques.

Links to learn more, summary and full transcript.
• FYI, the 2020 Effective Altruism Survey is closing soon: https://www.surveymonkey.co.uk/r/EAS80K2

Tristan and Rob provide a thorough exploration of the merits of possible concrete solutions – something The Social Dilemma didn’t really address.

Given that these companies are mostly trying to design their products in the way that makes them the most money, how can we get that incentive to align with what's in our interests as users and citizens?

One way is to encourage a shift to a subscription model.

One claim in The Social Dilemma is that the machine learning algorithms on these sites try to shift what you believe and what you enjoy in order to make it easier to predict what content recommendations will keep you on the site.

But if you paid a yearly fee to Facebook in lieu of seeing ads, their incentive would shift towards making you as satisfied as possible with their service — even if that meant using it for five minutes a day rather than 50.

Despite all the negatives, Tristan doesn’t want us to abandon the technologies he's concerned about. He asks us to imagine a social media environment designed to regularly bring our attention back to what each of us can do to improve our lives and the world.

Just as we can focus on the positives of nuclear power while remaining vigilant about the threat of nuclear weapons, we could embrace social media and recommendation algorithms as the largest mass-coordination engine we've ever had — tools that could educate and organise people better than anything that has come before.

The tricky and open question is how to get there.

Rob and Tristan also discuss:

• Justified concerns vs. moral panics
• The effect of social media on politics in the US and developing countries
• Tips for individuals

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:01:36)
  • Center for Humane Technology (00:04:53)
  • Critics (00:08:19)
  • The Social Dilemma (00:13:20)
  • Three categories of harm (00:20:31)
  • Justified concerns vs. moral panics (00:30:23)
  • The messy real world vs. an imagined idealised world (00:38:20)
  • The persuasion apocalypse (00:47:46)
  • Revolt of the Public (00:56:48)
  • Global effects (01:02:44)
  • US politics (01:13:32)
  • Potential solutions (01:20:59)
  • Unintended consequences (01:42:57)
  • Win-win changes (01:50:47)
  • Big wins over the last 5 or 10 years (01:59:10)
  • The subscription model (02:02:28)
  • Tips for individuals (02:14:05)
  • The current state of the research (02:22:37)
  • Careers (02:26:36)


Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Sofia Davis-Fogel.

Episodes (300)

#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian MorrisIn today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence. Links to learn more, summary and full transcript.They cover:Some crazy anomalies in the historical record of civilisational progressWhether we should think about technology from an evolutionary perspectiveWhether we ought to expect war to make a resurgence or continue dying outWhy we can't end up living like The JetsonsWhether stagnation or cyclical recurring futures seem very plausibleWhat it means that the rate of increase in the economy has been increasingWhether violence is likely between humans and powerful AI systemsThe most likely reasons for Rob and Ian to be really wrong about all of thisHow professional historians react to this sort of talkThe future of Ian’s workPlenty moreChapters:Cold open (00:00:00)Rob’s intro (00:01:27)Why we should expect the future to be wild (00:04:08)How historians have reacted to the idea of radically different futures (00:21:20)Why we won’t end up in The Jetsons (00:26:20)The rise of machine intelligence (00:31:28)AI from an evolutionary point of view (00:46:32)Is violence likely between humans and powerful AI systems? (00:59:53)Most troubling objections to this approach in Ian’s view (01:28:20)Confronting anomalies in the historical record (01:33:10)The cyclical view of history (01:56:11)Is stagnation plausible? (02:01:38)The limit on how long this growth trend can continue (02:20:57)The future of Ian’s work (02:37:17)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuireTranscriptions: Katy Moore

23 Oct 2023 · 2h 43min

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren KellLinks to learn more, summary and full transcript.In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.They cover:The basic case for alternative proteins, and why they’re so hard to makeWhy fermentation is a surprisingly promising technology for creating delicious alternative proteins The main scientific challenges that need to be solved to make fermentation even more usefulThe progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordableHow GFI Europe is helping with some of these challengesHow people can use their careers to contribute to replacing factory farming with alternative proteinsThe best part of Seren’s jobPlenty moreChapters:Cold open (00:00:00)Luisa’s intro (00:01:08)The interview begins (00:02:22)Why alternative proteins? (00:02:36)What makes alternative proteins so hard to make? (00:11:30)Why fermentation is so exciting (00:24:23)The technical challenges involved in scaling fermentation (00:44:38)Progress in cultivated meat (01:06:04)GFI Europe’s work (01:32:47)Careers (01:45:10)The best part of Seren’s job (01:50:07)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic Armstrong and Milo McGuireAdditional content editing: Luisa Rodriguez and Katy MooreTranscriptions: Katy Moore

18 Oct 2023 · 1h 54min

#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum CollinsIn today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.Links to learn more, highlights, and full transcript.They cover:How AI could strengthen government capacity, and how that's a double-edged swordHow new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't thereTo what extent policymakers take different threats from AI seriouslyWhether the US and China are in an AI arms race or notWhether it's OK to transform the world without much of the world agreeing to itThe tyranny of small differences in AI policyDisagreements between different schools of thought in AI policy, and proposals that could unite themHow the US AI Bill of Rights could be improvedWhether AI will transform the labour market, and whether it will become a partisan political issueThe tensions between the cultures of San Francisco and DC, and how to bridge the divide between themWhat listeners might be able to do to help with this whole messPanpsychismPlenty moreChapters:Cold open (00:00:00)Rob's intro (00:01:00)The interview begins (00:04:01)The risk of autocratic lock-in due to AI (00:10:02)The state of play in AI policymaking (00:13:40)China and AI (00:32:12)The most promising regulatory approaches (00:57:51)Transforming the world without the world agreeing (01:04:44)AI Bill of Rights (01:17:32)Who’s ultimately responsible for the consequences of AI? (01:20:39)Policy ideas that could appeal to many different groups (01:29:08)Tension between those focused on x-risk and those focused on AI ethics (01:38:56)Communicating with policymakers (01:54:22)Is AI going to transform the labour market in the next few years? (01:58:51)Is AI policy going to become a partisan political issue? (02:08:10)The value of political philosophy (02:10:53)Tantum’s work at DeepMind (02:21:20)CSET (02:32:48)Career advice (02:35:21)Panpsychism (02:55:24)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

12 Oct 2023 · 3h 8min

#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders SandbergIn today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.Links to learn more, summary and full transcript.They cover:The epic new book Anders is working on, and whether he’ll ever finish itWhether there's a best possible world or we can just keep improving foreverWhat wars might look like if the galaxy is mostly settledThe impediments to AI or humans making it to other starsHow the universe will end a million trillion years in the futureWhether it’s useful to wonder about whether we’re living in a simulationThe grabby aliens theoryWhether civilizations get more likely to fail the older they getThe best way to generate energy that could ever existBlack hole bombsWhether superintelligence is necessary to get a lot of valueThe likelihood that life from elsewhere has already visited EarthAnd plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

6 Oct 2023 · 2h 48min

#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin EsveltIn today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.Links to learn more, summary and full transcript.They cover:Why it makes sense to focus on deliberately released pandemicsCase studies of people who actually wanted to kill billions of humansHow many people have the technical ability to produce dangerous virusesThe different threats of stealth and wildfire pandemics that could crash civilisationThe potential for AI models to increase access to dangerous pathogensWhy scientists try to identify new pandemic-capable pathogens, and the case against that researchTechnological solutions, including UV lights and advanced PPEUsing CRISPR-based gene drive to fight diseases and reduce animal sufferingAnd plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon MonsourAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

2 Oct 2023 · 3h 3min

Great power conflict (Article)

Today’s release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

And if you like this article, you might enjoy a couple of related episodes of this podcast:

• #128 – Chris Blattman on the five reasons wars happen
• #140 – Bear Braumoeller on the case that war isn’t in decline

Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris

22 Sep 2023 · 1h 19min

#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: What could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than “doing the most good that we can,” perhaps we should be happy with a more modest and manageable goal: “doing most of the good that we can.”

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going “all-in” on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the “moral trade” argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

• The rise and fall of FTX and some of its impacts
• What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
• What utilitarianism has going for it, and what's wrong with it in Toby's view
• How to mathematically model the importance of personal integrity
• Which AI labs Toby thinks have been acting more responsibly than others
• How having a young child affects Toby’s feelings about AI risk
• Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
• How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore

8 Sep 2023 · 3h 7min

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon, and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.

Chapters:

  • Rob's intro (00:00:00)
  • Introduction (00:04:08)
  • Chapter 1: What Makes for a Dream Job? (00:09:09)
  • Chapter 2: Can One Person Make a Difference? What the Evidence Says. (00:33:02)
  • Chapter 3: Three Ways Anyone Can Make a Difference, No Matter Their Job (00:43:33)
  • Chapter 4: Want to Do Good? Here's How to Choose an Area to Focus on (00:58:50)
  • Chapter 5: The World's Biggest Problems and Why They're Not What First Comes to Mind (01:12:03)
  • Chapter 6: Which Jobs Help People the Most? (01:42:15)
  • Chapter 7: Which Jobs Put You in the Best Long-Term Position? (02:19:11)
  • Chapter 8: How to Find the Right Career for You (02:59:26)
  • Chapter 9: How to Make Your Career Plan (03:32:30)
  • Chapter 10: All the Best Advice We Could Find on How to Get a Job (03:55:34)
  • Chapter 11: One of the Most Powerful Ways to Improve Your Career - Join a Community (04:24:21)
  • The End: The Entire Guide, in One Minute (04:35:49)
  • Rob's outro (04:40:05)

4 Sep 2023 · 4h 41min
