2024 Highlightapalooza! (The best of The 80,000 Hours Podcast this year)

"A shameless recycling of existing content to drive additional audience engagement on the cheap… or the single best, most valuable, and most insight-dense episode we put out in the entire year, depending on how you want to look at it." — Rob Wiblin

It’s that magical time of year once again — highlightapalooza! Stick around for one top bit from each episode, including:

  • How to use the microphone on someone’s mobile phone to figure out what password they’re typing into their laptop
  • Why mercilessly driving the New World screwworm to extinction could be the most compassionate thing humanity has ever done
  • Why evolutionary psychology doesn’t support a cynical view of human nature but actually explains why so many of us are intensely sensitive to the harms we cause to others
  • How superforecasters and domain experts seem to disagree so much about AI risk, but when you zoom in it’s mostly a disagreement about timing
  • Why the sceptics are wrong and you will want to use robot nannies to take care of your kids — and also why despite having big worries about the development of AGI, Carl Shulman is strongly against efforts to pause AI research today
  • How much of the gender pay gap is due to direct pay discrimination vs other factors
  • How cleaner wrasse fish blow the mirror test out of the water
  • Why effective altruism may be too big a tent to work well
  • How we could best motivate pharma companies to test existing drugs to see if they help cure other diseases — something they currently have no reason to bother with

…as well as 27 other top observations and arguments from the past year of the show.

Check out the full transcript and episode links on the 80,000 Hours website.

Remember that all of these clips come from the 20-minute highlight reels we make for every episode, which are released on our sister feed, 80k After Hours. So if you’re struggling to keep up with our regularly scheduled entertainment, you can still get the best parts of our conversations there.

It has been a hell of a year, and we can only imagine next year is going to be even weirder — but Luisa and Rob will be here to keep you company as Earth hurtles through the galaxy to a fate as yet unknown.

Enjoy, and look forward to speaking with you in 2025!

Chapters:

  • Rob's intro (00:00:00)
  • Randy Nesse on the origins of morality and the problem of simplistic selfish-gene thinking (00:02:11)
  • Hugo Mercier on the evolutionary argument against humans being gullible (00:07:17)
  • Meghan Barrett on the likelihood of insect sentience (00:11:26)
  • Sébastien Moro on the mirror test triumph of cleaner wrasses (00:14:47)
  • Sella Nevo on side-channel attacks (00:19:32)
  • Zvi Mowshowitz on AI sleeper agents (00:22:59)
  • Zach Weinersmith on why space settlement (probably) won't make us rich (00:29:11)
  • Rachel Glennerster on pull mechanisms to incentivise repurposing of generic drugs (00:35:23)
  • Emily Oster on the impact of kids on women's careers (00:40:29)
  • Carl Shulman on robot nannies (00:45:19)
  • Nathan Labenz on kids and artificial friends (00:50:12)
  • Nathan Calvin on why it's not too early for AI policies (00:54:13)
  • Rose Chan Loui on how control of OpenAI is independently incredibly valuable and requires compensation (00:58:08)
  • Nick Joseph on why he’s a big fan of the responsible scaling policy approach (01:03:11)
  • Sihao Huang on how the US and UK might coordinate with China (01:06:09)
  • Nathan Labenz on better transparency about predicted capabilities (01:10:18)
  • Ezra Karger on what explains forecasters’ disagreements about AI risks (01:15:22)
  • Carl Shulman on why he doesn't support enforced pauses on AI research (01:18:58)
  • Matt Clancy on the omnipresent frictions that might prevent explosive economic growth (01:25:24)
  • Vitalik Buterin on defensive acceleration (01:29:43)
  • Annie Jacobsen on the war games that suggest escalation is inevitable (01:34:59)
  • Nate Silver on whether effective altruism is too big to succeed (01:38:42)
  • Kevin Esvelt on why killing every screwworm would be the best thing humanity ever did (01:42:27)
  • Lewis Bollard on how factory farming is philosophically indefensible (01:46:28)
  • Bob Fischer on how to think about moral weights if you're not a hedonist (01:49:27)
  • Elizabeth Cox on the empirical evidence of the impact of storytelling (01:57:43)
  • Anil Seth on how our brain interprets reality (02:01:03)
  • Eric Schwitzgebel on whether consciousness can be nested (02:04:53)
  • Jonathan Birch on our overconfidence around disorders of consciousness (02:10:23)
  • Peter Godfrey-Smith on uploads of ourselves (02:14:34)
  • Laura Deming on surprising things that make mice live longer (02:21:17)
  • Venki Ramakrishnan on freezing cells, organs, and bodies (02:24:46)
  • Ken Goldberg on why low fault tolerance makes some skills extra hard to automate in robots (02:29:12)
  • Sarah Eustis-Guthrie on the ups and downs of founding an organisation (02:34:04)
  • Dean Spears on the cost effectiveness of kangaroo mother care (02:38:26)
  • Cameron Meyer Shorb on vaccines for wild animals (02:42:53)
  • Spencer Greenberg on personal principles (02:46:08)

Producing and editing: Keiran Harris
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Episodes (299)

#161 – Michael Webb on whether AI will soon cause job loss, lower incomes, and higher inequality — or the opposite

"Do you remember seeing these photographs of generally women sitting in front of these huge panels and connecting calls, plugging different calls between different numbers? The automated version of that was invented in 1892. However, the number of human manual operators peaked in 1920 -- 30 years after this. At which point, AT&T is the monopoly provider of this, and they are the largest single employer in America, 30 years after they've invented the complete automation of this thing that they're employing people to do. And the last person who is a manual switcher does not lose their job, as it were: that job doesn't stop existing until I think like 1980.So it takes 90 years from the invention of full automation to the full adoption of it in a single company that's a monopoly provider. It can do what it wants, basically. And so the question perhaps you might have is why?" — Michael WebbIn today’s episode, host Luisa Rodriguez interviews economist Michael Webb of DeepMind, the British Government, and Stanford about how AI progress is going to affect people's jobs and the labour market.Links to learn more, summary and full transcript.They cover:The jobs most and least exposed to AIWhether we’ll we see mass unemployment in the short term How long it took other technologies like electricity and computers to have economy-wide effectsWhether AI will increase or decrease inequalityWhether AI will lead to explosive economic growthWhat we can we learn from history, and reasons to think this time is differentCareer advice for a world of LLMsWhy Michael is starting a new org to relieve talent bottlenecks through accelerated learning, and how you can get involvedMichael's take as a musician on AI-generated musicAnd plenty moreIf you'd like to work with Michael on his new org to radically accelerate how quickly people acquire expertise in critical cause areas, he's now hiring! Check out Quantum Leap's website.Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

23 Aug 2023 · 3h 30min

#160 – Hannah Ritchie on why it makes sense to be optimistic about the environment

"There's no money to invest in education elsewhere, so they almost get trapped in the cycle where they don't get a lot from crop production, but everyone in the family has to work there to just stay afloat. Basically, you get locked in. There's almost no opportunities externally to go elsewhere. So one of my core arguments is that if you're going to address global poverty, you have to increase agricultural productivity in sub-Saharan Africa. There's almost no way of avoiding that." — Hannah RitchieIn today’s episode, host Luisa Rodriguez interviews the head of research at Our World in Data — Hannah Ritchie — on the case for environmental optimism.Links to learn more, summary and full transcript.They cover:Why agricultural productivity in sub-Saharan Africa could be so important, and how much better things could getHer new book about how we could be the first generation to build a sustainable planetWhether climate change is the most worrying environmental issueHow we reduced outdoor air pollutionWhy Hannah is worried about the state of ​​biodiversitySolutions that address multiple environmental issues at onceHow the world coordinated to address the hole in the ozone layerSurprises from Our World in Data’s researchPsychological challenges that come up in Hannah’s workAnd plenty moreGet this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Milo McGuire and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

14 Aug 2023 · 2h 36min

#159 – Jan Leike on OpenAI's massive push to make superintelligence safe in 4 years or less

In July, OpenAI announced a new team and project: Superalignment. The goal is to figure out how to make superintelligent AI systems aligned and safe to use within four years, and the lab is putting a massive 20% of its computational resources behind the effort.

Today's guest, Jan Leike, is Head of Alignment at OpenAI and will be co-leading the project. As OpenAI puts it, "...the vast power of superintelligence could be very dangerous, and lead to the disempowerment of humanity or even human extinction. ... Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue."

Links to learn more, summary and full transcript.

Given that OpenAI is in the business of developing superintelligent AI, it sees that as a scary problem that urgently has to be fixed. So it's not just throwing compute at the problem -- it's also hiring dozens of scientists and engineers to build out the Superalignment team.

Plenty of people are pessimistic that this can be done at all, let alone in four years. But Jan is guardedly optimistic. As he explains: "Honestly, it really feels like we have a real angle of attack on the problem that we can actually iterate on... and I think it's pretty likely going to work, actually. And that's really, really wild, and it's really exciting. It's like we have this hard problem that we've been talking about for years and years and years, and now we have a real shot at actually solving it. And that'd be so good if we did."

Jan thinks that this work is actually the most scientifically interesting part of machine learning. Rather than just throwing more chips and more data at a training run, this work requires actually understanding how these models work and how they think. The answers are likely to be breakthroughs on the level of solving the mysteries of the human brain.

The plan, in a nutshell, is to get AI to help us solve alignment. That might sound a bit crazy -- as one person described it, "like using one fire to put out another fire."

But Jan's thinking is this: the core problem is that AI capabilities will keep getting better and the challenge of monitoring cutting-edge models will keep getting harder, while human intelligence stays more or less the same. To have any hope of ensuring safety, we need our ability to monitor, understand, and design ML models to advance at the same pace as the complexity of the models themselves. And there's an obvious way to do that: get AI to do most of the work, such that the sophistication of the AIs that need aligning, and the sophistication of the AIs doing the aligning, advance in lockstep.

Jan doesn't want to produce machine learning models capable of doing ML research. But such models are coming, whether we like it or not. And at that point Jan wants to make sure we turn them towards useful alignment and safety work, as much or more than we use them to advance AI capabilities.

Jan thinks it's so crazy it just might work. But some critics think it's simply crazy. They ask a wide range of difficult questions, including:

  • If you don't know how to solve alignment, how can you tell that your alignment assistant AIs are actually acting in your interest rather than working against you? Especially as they could just be pretending to care about what you care about.
  • How do you know that these technical problems can be solved at all, even in principle?
  • At the point that models are able to help with alignment, won't they also be so good at improving capabilities that we're in the middle of an explosion in what AI can do?

In today's interview host Rob Wiblin puts these doubts to Jan to hear how he responds to each, and they also cover:

  • OpenAI's current plans to achieve 'superalignment' and the reasoning behind them
  • Why alignment work is the most fundamental and scientifically interesting research in ML
  • The kinds of people he's excited to hire to join his team and maybe save the world
  • What most readers misunderstood about the OpenAI announcement
  • The three ways Jan expects AI to help solve alignment: mechanistic interpretability, generalization, and scalable oversight
  • What the standard should be for confirming whether Jan's team has succeeded
  • Whether OpenAI should (or will) commit to stop training more powerful general models if they don't think the alignment problem has been solved
  • Whether Jan thinks OpenAI has deployed models too quickly or too slowly
  • The many other actors who also have to do their jobs really well if we're going to have a good AI future
  • Plenty more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

7 Aug 2023 · 2h 51min

We now offer shorter 'interview highlights' episodes

Over on our other feed, 80k After Hours, you can now find 20-30 minute highlights episodes of our 80,000 Hours Podcast interviews. These aren't necessarily the most important parts of the interview, and if a topic matters to you we do recommend listening to the full episode — but we think these will be a nice upgrade on skipping episodes entirely.

Get these highlight episodes by subscribing to our more experimental podcast on the world's most pressing problems and how to solve them: type '80k After Hours' into your podcasting app.

Highlights put together by Simon Monsour and Milo McGuire

5 Aug 2023 · 6min

#158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk

Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars' worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well.

In today's conversation, Holden returns to the show to share his overall understanding of the promise and the risks posed by machine intelligence, and what to do about it. That understanding has accumulated over around 14 years, during which he went from being sceptical that AI was important or risky, to making AI risks the focus of his work.

Links to learn more, summary and full transcript.

(As Holden reminds us, his wife is also the president of one of the world's top AI labs, Anthropic, giving him both conflicts of interest and a front-row seat to recent events. For our part, Open Philanthropy is 80,000 Hours' largest financial supporter.)

One point he makes is that people are too narrowly focused on AI becoming 'superintelligent.' While that could happen and would be important, it's not necessary for AI to be transformative or perilous. Rather, machines with human levels of intelligence could end up being enormously influential simply if the amount of computer hardware globally were able to operate tens or hundreds of billions of them, in a sense making machine intelligences a majority of the global population, or at least a majority of global thought.

As Holden explains, he sees four key parts to the playbook humanity should use to guide the transition to very advanced AI in a positive direction: alignment research, standards and monitoring, creating a successful and careful AI lab, and finally, information security.

In today's episode, host Rob Wiblin interviews return guest Holden Karnofsky about that playbook, as well as:

  • Why we can't rely on just gradually solving those problems as they come up, the way we usually do with new technologies
  • What multiple different groups can do to improve our chances of a good outcome — including listeners to this show, governments, computer security experts, and journalists
  • Holden's case against 'hardcore utilitarianism' and what actually motivates him to work hard for a better world
  • What the ML and AI safety communities get wrong in Holden's view
  • Ways we might succeed with AI just by dumb luck
  • The value of laying out imaginable success stories
  • Why information security is so important and underrated
  • Whether it's good to work at an AI lab that you think is particularly careful
  • The track record of futurists' predictions
  • And much more

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

31 July 2023 · 3h 13min

#157 – Ezra Klein on existential risk from AI and what DC could do about it

In Oppenheimer, scientists detonate a nuclear weapon despite thinking there's some 'near zero' chance it would ignite the atmosphere, putting an end to life on Earth. Today, scientists working on AI think the chance their work puts an end to humanity is vastly higher than that.

In response, some have suggested we launch a Manhattan Project to make AI safe via enormous investment in relevant R&D. Others have suggested that we need international organisations modelled on those that slowed the proliferation of nuclear weapons. Others still seek a research slowdown by labs while an auditing and licencing scheme is created.

Today's guest — journalist Ezra Klein of The New York Times — has watched policy discussions and legislative battles play out in DC for 20 years.

Links to learn more, summary and full transcript.

Like many people he has also taken a big interest in AI this year, writing articles such as "This changes everything." In his first interview on the show in 2021, he flagged AI as one topic that DC would regret not having paid more attention to. So we invited him on to get his take on which regulatory proposals have promise, and which seem either unhelpful or politically unviable.

Out of the ideas on the table right now, Ezra favours a focus on direct government funding — both for AI safety research and to develop AI models designed to solve problems other than making money for their operators. He is sympathetic to legislation that would require AI models to be legible in a way that none currently are — and embraces the fact that that will slow down the release of models while businesses figure out how their products actually work.

By contrast, he's pessimistic that it's possible to coordinate countries around the world to agree to prevent or delay the deployment of dangerous AI models — at least not unless there's some spectacular AI-related disaster to create such a consensus. And he fears attempts to require licences to train the most powerful ML models will struggle unless they can find a way to exclude and thereby appease people working on relatively safe consumer technologies rather than cutting-edge research.

From observing how DC works, Ezra expects that even a small community of experts in AI governance can have a large influence on how the US government responds to AI advances. But in Ezra's view, that requires those experts to move to DC and spend years building relationships with people in government, rather than clustering elsewhere in academia and AI labs.

In today's brisk conversation, Ezra and host Rob Wiblin cover the above as well as:

  • Whether it's desirable to slow down AI research
  • The value of engaging with current policy debates even if they don't seem directly important
  • Which AI business models seem more or less dangerous
  • Tensions between people focused on existing vs emergent risks from AI
  • Two major challenges of being a new parent

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

24 July 2023 · 1h 18min

#156 – Markus Anderljung on how to regulate cutting-edge AI models

"At the front of the pack we have these frontier AI developers, and we want them to identify particularly dangerous models ahead of time. Once those mines have been discovered, and the frontier developers keep walking down the minefield, there's going to be all these other people who follow along. And then a really important thing is to make sure that they don't step on the same mines. So you need to put a flag down -- not on the mine, but maybe next to it. And so what that looks like in practice is maybe once we find that if you train a model in such-and-such a way, then it can produce maybe biological weapons is a useful example, or maybe it has very offensive cyber capabilities that are difficult to defend against. In that case, we just need the regulation to be such that you can't develop those kinds of models." — Markus AnderljungIn today’s episode, host Luisa Rodriguez interviews the Head of Policy at the Centre for the Governance of AI — Markus Anderljung — about all aspects of policy and governance of superhuman AI systems.Links to learn more, summary and full transcript.They cover:The need for AI governance, including self-replicating models and ChaosGPTWhether or not AI companies will willingly accept regulationThe key regulatory strategies including licencing, risk assessment, auditing, and post-deployment monitoringWhether we can be confident that people won't train models covertly and ignore the licencing systemThe progress we’ve made so far in AI governanceThe key weaknesses of these approachesThe need for external scrutiny of powerful modelsThe emergent capabilities problemWhy it really matters where regulation happensAdvice for people wanting to pursue a career in this fieldAnd much more.Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.Producer: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

10 July 2023 · 2h 6min

Bonus: The Worst Ideas in the History of the World

Today's bonus release is a pilot for a new podcast called 'The Worst Ideas in the History of the World', created by Keiran Harris — producer of the 80,000 Hours Podcast.

If you have strong opinions about this one way or another, please email us at podcast@80000hours.org to help us figure out whether more of this ought to exist.

Chapters:

  • Rob's intro (00:00:00)
  • The Worst Ideas in the History of the World (00:00:51)
  • My history with longtermism (00:04:01)
  • Outlining the format (00:06:17)
  • Will MacAskill's basic case (00:07:38)
  • 5 reasons for why future people might not matter morally (00:10:26)
  • Whether we can reasonably hope to influence the future (00:15:53)
  • Great power wars (00:18:55)
  • Nuclear weapons (00:22:27)
  • Gain-of-function research (00:28:31)
  • Closer (00:33:02)
  • Rob's outro (00:35:13)

30 June 2023 · 35min
