2024 Highlightapalooza! (The best of The 80,000 Hours Podcast this year)

"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

  • How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
  • Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
  • Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
  • Why superforecasters and domain experts seem to disagree so much about AI risk, when zooming in reveals that it’s mostly a disagreement about timing
  • Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
  • How much of the gender pay gap is due to direct pay discrimination vs other factors
  • How cleaner wrasse fish blow the mirror test out of the water
  • Why effective altruism may be too big a tent to work well
  • How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

…as well as 27 other top observations and arguments from the past year of the show.

Check out the full transcript and episode links on the 80,000 Hours website.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

Enjoy, and look forward to speaking with you in 2025!

Chapters:

  • Rob's intro (00:00:00)
  • Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)
  • Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)
  • Meghan Barrett on the likelihood of insect sentience (00:11:26)
  • Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)
  • Sella Nevo on side-channel attacks (00:19:32)
  • Zvi Mowshowitz on AI sleeper agents (00:22:59)
  • Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)
  • Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)
  • Emily Oster on the impact of kids on women's careers (00:40:29)
  • Carl Shulman on robot nannies (00:45:19)
  • Nathan Labenz on kids and artificial friends (00:50:12)
  • Nathan Calvin on why it's not too early for AI policies (00:54:13)
  • Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)
  • Nick Joseph on why he’s a big fan of the responsible scaling policy approach (01:03:11)
  • Sihao Huang on how the US and UK might coordinate with China (01:06:09)
  • Nathan Labenz on better transparency about predicted capabilities (01:10:18)
  • Ezra Karger on what explains forecasters’ disagreements about AI risks (01:15:22)
  • Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)
  • Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)
  • Vitalik Buterin on defensive acceleration (01:29:43)
  • Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)
  • Nate Silver on whether effective altruism is too big to succeed (01:38:42)
  • Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)
  • Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)
  • Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)
  • Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)
  • Anil Seth on how our brain interprets reality (02:01:03)
  • Eric Schwitzgebel on whether consciousness can be nested (02:04:53)
  • Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)
  • Peter Godfrey-Smith on uploads of ourselves (02:14:34)
  • Laura Deming on surprising things that make mice live longer (02:21:17)
  • Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)
  • Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)
  • Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)
  • Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)
  • Cameron Meyer Shorb on vaccines for wild animals (02:42:53)
  • Spencer Greenberg on personal principles (02:46:08)

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Episodes (294)

#79 Classic episode - A.J. Jacobs on radical honesty, following the whole Bible, and reframing global problems as puzzles

Rebroadcast: this episode was originally released in June 2020.

Today’s guest, New York Times bestselling author A.J. Jacobs, always hated Judge Judy. But after he found out that she was his seventh cousin, he thought, "You know what, she's not so bad." Hijacking this bias towards family and trying to broaden it to everyone led to his three-year adventure to help build the biggest family tree in history.

He’s also spent months saying whatever was on his mind, tried to become the healthiest person in the world, read 33,000 pages of facts, spent a year following the Bible literally, thanked everyone involved in making his morning cup of coffee, and tried to figure out how to do the most good. His latest book asks: if we reframe global problems as puzzles, would the world be a better place?

Links to learn more, summary and full transcript.

This is the first time I’ve hosted the podcast, and I’m hoping to convince people to listen with this attempt at clever show notes that change style each paragraph to reference different A.J. experiments. I don’t actually think it’s that clever, but all of my other ideas seemed worse. I really have no idea how people will react to this episode; I loved it, but I definitely think I’m more entertaining than almost anyone else will. (Radical Honesty.)

We do talk about some useful stuff — one of which is the concept of micro goals. When you wake up in the morning, just commit to putting on your workout clothes. Once they’re on, maybe you’ll think that you might as well get on the treadmill — just for a minute. And once you’re on for 1 minute, you’ll often stay on for 20. So I’m not asking you to commit to listening to the whole episode — just to put on your headphones. (Drop Dead Healthy.)

Another reason to listen is for the facts:
  • The Bayer aspirin company invented heroin as a cough suppressant
  • Coriander is just the British way of saying cilantro
  • Dogs have a third eyelid to protect the eyeball from irritants
  • And A.J. read all 44 million words of the Encyclopedia Britannica from A to Z, which drove home the idea that we know so little about the world (although he does now know that opossums have 13 nipples). (The Know-It-All.)

One extra argument for listening: if you interpret the second commandment literally, then it tells you not to make a likeness of anything in heaven, on earth, or underwater — which rules out basically all images. That means no photos, no TV, no movies. So, if you want to respect the Bible, you should definitely consider making podcasts your main source of entertainment (as long as you’re not listening on the Sabbath). (The Year of Living Biblically.)

I’m so thankful to A.J. for doing this. But I also want to thank Julie, Jasper, Zane and Lucas who allowed me to spend the day in their home; the construction worker who told me how to get to my subway platform on the morning of the interview; and Queen Jadwiga for making bagels popular in the 1300s, which kept me going during the recording. (Thanks a Thousand.)

We also discuss:
  • Blackmailing yourself
  • The most extreme ideas A.J.’s ever considered
  • Utilitarian movie reviews
  • Doing good as a writer
  • And much more.

Get this episode by subscribing to our podcast on the world’s most pressing problems: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript for this episode: Zakee Ulhaq

16 Jan 2023 · 2h 35min

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020.

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed-out version of the argument in his book Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few sceptical experts who have actually sat down and fully engaged with the arguments, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, or goals, as well as toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that, because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think there are any slam-dunks for improving the future, so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things:
  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript for this episode: Zakee Ulhaq

9 Jan 2023 · 2h 37min

#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons

Rebroadcast: this episode was originally released in July 2020.

Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three ways to effectively prevent crime that don't require police or prisons and the human toll they bring with them: better street lighting, cognitive behavioral therapy, and lead reduction.

One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double.

Links to sources for the claims in these show notes, other resources to learn more, the full blog post, and a full transcript.

The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead." But it looks like criminals aren’t early risers, and that doesn’t happen.

On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sunlight is human light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone.

The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making and think through your assumptions before acting. There was a randomised controlled trial done in schools, as well as juvenile detention facilities in Chicago, where the kids assigned to get CBT were followed over time and compared with those who were not. They found the CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems, but the gains are especially large for people who've grown up with the trauma of violence in their lives.

Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention. There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes.

In today’s conversation, Rob and Jennifer also cover, among many other things:
  • Misconduct, hiring practices and accountability among US police
  • Procedural justice training
  • Overrated policy ideas
  • Policies to try to reduce racial discrimination
  • The effects of DNA databases
  • Diversity in economics
  • The quality of social science research

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript for this episode: Zakee Ulhaq
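For the methodologically curious: the daylight saving design Jennifer describes above boils down to a fairly simple regression. Here is a minimal sketch in Python, assuming a hypothetical dataset (the file name, column names, and the three-week window are all stand-ins, not Jennifer's actual data or code):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily data. Columns: robberies_5to6pm (count of robberies
# during the 5-6pm hour), dark_evening (1 if that hour fell after sunset),
# days_from_dst_switch, day_of_week.
df = pd.read_csv("robberies.csv")

# Keep a narrow window around the clock change, so weather and seasonal
# trends are roughly constant while ambient light flips abruptly.
window = df[df["days_from_dst_switch"].abs() <= 21]

# Poisson regression of robbery counts on evening darkness: a positive,
# significant coefficient on dark_evening suggests darkness itself
# (not the clock hour) drives the extra crime.
model = smf.poisson(
    "robberies_5to6pm ~ dark_evening + C(day_of_week)", data=window
).fit()
print(model.summary())
```

The point of the narrow window is the same one Jennifer makes: comparing the identical clock hour a few days apart means almost nothing changes except whether the sun has set.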

4 Jan 2023 · 2h 17min

#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of 'mutually assured destruction,' right? Wrong. Or at least... not officially. As today's guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official 'OPLANs' (military operation plans), the US is committed to 'dominating' in a nuclear war with Russia. How would they do that? "That is redacted."

Links to learn more, summary and full transcript.

We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.

As Jeffrey tells it, 'mutually assured destruction' was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn't it still the de facto reality? Yes and no.

Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term presidents get a briefing about the US' plan to prevail in a nuclear war and conclude that "it's freaking madness." They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won't use the weapons.

But Jeffrey thinks that's a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of 'winning' a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It's what the generals and admirals have all prepared for.

What matters is the 'not calm moment': the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There's only minutes to decide.

Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn't want to take because of how information and options were processed and presented to them. In the heat of the moment, it's natural to reach for the plan you've prepared — however mad it might sound.

In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:
  • Why inter-service rivalry is one of the biggest constraints on US nuclear policy
  • Two times the US sabotaged nuclear nonproliferation among great powers
  • How his field uses jargon to exclude outsiders
  • How the US could prevent the revival of mass nuclear testing by the great powers
  • Why nuclear deterrence relies on the possibility that something might go wrong
  • Whether 'salami tactics' render nuclear weapons ineffective
  • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles
  • The problems that arise when you won't talk to people you think are evil
  • Why missile defences are politically popular despite being strategically foolish
  • How open source intelligence can prevent arms races
  • And much more.

Chapters:
  • Rob’s intro (00:00:00)
  • The interview begins (00:02:49)
  • Misconceptions in the effective altruism community (00:05:42)
  • Nuclear deterrence (00:17:36)
  • Dishonest rituals (00:28:17)
  • Downsides of generalist research (00:32:13)
  • “Mutual assured destruction” (00:38:18)
  • Budgetary considerations for competing parts of the US military (00:51:53)
  • Where the effective altruism community can potentially add the most value (01:02:15)
  • Gatekeeping (01:12:04)
  • Strengths of the nuclear security community (01:16:14)
  • Disarmament (01:26:58)
  • Nuclear winter (01:38:53)
  • Attacks against US allies (01:41:46)
  • Most likely weapons to get used (01:45:11)
  • The role of moral arguments (01:46:40)
  • Salami tactics (01:52:01)
  • Jeffrey's disagreements with Thomas Schelling (01:57:00)
  • Why did it take so long to get nuclear arms agreements? (02:01:11)
  • Detecting secret nuclear facilities (02:03:18)
  • Where Jeffrey would give $10M in grants (02:05:46)
  • The importance of archival research (02:11:03)
  • Jeffrey's policy ideas (02:20:03)
  • What should the US do regarding China? (02:27:10)
  • What should the US do regarding Russia? (02:31:42)
  • What should the US do regarding Taiwan? (02:35:27)
  • Advice for people interested in working on nuclear security (02:37:23)
  • Rob’s outro (02:39:13)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

29 Dec 2022 · 2h 40min

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He's also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he's also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

  • Links to learn more, summary, and full transcript
  • Video version of the interview
  • Lecture: Why the world looks the same in any language

Our show is mostly about the world's most pressing problems and what you can do to solve them. But what's the point of hosting a podcast if you can't occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we're sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:
  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter or not?
  • Can a language decay and get worse at communicating what people want to say?
  • If children aren't taught a language, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does language really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can't understand but which better serves the tradeoffs AI models need to make?

We then put some of these questions to ChatGPT itself, asking it to play the role of a linguistics professor at Columbia University.

We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full conversation here.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore
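If you'd like to recreate the ChatGPT roleplay from this episode yourself, here is a minimal sketch using OpenAI's Python client (the model name and the exact prompt wording are our guesses, not what was used in the episode):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: any chat model will do
    messages=[
        # The system message sets up the 'linguistics professor' persona.
        {"role": "system",
         "content": "You are a linguistics professor at Columbia University "
                    "who specialises in creole languages."},
        {"role": "user",
         "content": "Can you communicate faster in some languages than "
                    "others, or is there some constraint that prevents that?"},
    ],
)
print(response.choices[0].message.content)
```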

20 Dec 2022 · 1h 47min

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.

But do they really 'understand' what they're saying, or do they just give the illusion of understanding?

Today's guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from 'acting out' as they become more powerful, are deployed, and are ultimately given power in society.

Links to learn more, summary and full transcript.

One way to think about 'understanding' is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about 'understanding' is as a functional matter. If you really understand an idea, you're able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn't matter.

In today's conversation we discuss the above, as well as:
  • Could speeding up AI development be a bad thing?
  • The balance between excitement and fear when it comes to AI advances
  • Why OpenAI focuses its efforts where it does
  • Common misconceptions about machine learning
  • How many computer chips it might require to be able to do most of the things humans do
  • How Richard understands the 'alignment problem' differently than other people
  • Why 'situational awareness' may be a key concept for understanding the behaviour of AI models
  • What work to positively shape the development of AI Richard is and isn't excited about
  • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore
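For the technically curious: the 'predict the next word' objective described above can be sketched in a few lines of PyTorch. This toy version (random token IDs and a model with no transformer layers, both stand-ins rather than anything resembling OpenAI's actual setup) shows the training signal that produces everything else:

```python
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 50_000, 512, 128

# Toy stand-in for a language model: embed each token, then project back
# to a score (logit) for every word in the vocabulary. Real models put a
# deep transformer between these two layers.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # a fake "passage"

logits = model(tokens)  # shape: (1, seq_len, vocab_size)

# The whole objective: at every position, predict the token one step ahead.
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at positions 0..n-2
    tokens[:, 1:].reshape(-1),               # targets are positions 1..n-1
)
loss.backward()  # an optimiser step would then nudge the weights to predict better
```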

13 Dec 2022 · 2h 44min

My experience with imposter syndrome — and how to (partly) overcome it (Article)

Today’s release is a reading of our article called My experience with imposter syndrome — and how to (partly) overcome it, written and narrated by Luisa Rodriguez.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

And if you like this article, you’ll probably enjoy episode #100 of this show: Having a successful career with depression, anxiety, and imposter syndrome.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Audio mastering and editing for this episode: Milo McGuire

8 Dec 2022 · 44min

Rob's thoughts on the FTX bankruptcy

In this episode, the show's usual host, Rob Wiblin, gives his thoughts on the recent collapse of FTX.

Click here for an official 80,000 Hours statement.

And here are links to some potentially relevant 80,000 Hours pieces:
  • Episode #24 of this show – Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause
  • Is it ever OK to take a harmful job in order to do more good? An in-depth analysis
  • What are the 10 most harmful jobs?
  • Ways people trying to do good accidentally make things worse, and how to avoid them

23 Nov 2022 · 5min
