#152 – Joe Carlsmith on navigating serious philosophical confusion

What is the nature of the universe? How do we make decisions correctly? What differentiates right actions from wrong ones?

Such fundamental questions have been the subject of philosophical and theological debates for millennia. But, as we all know, and surveys of expert opinion make clear, we are very far from agreement. So... with these most basic questions unresolved, what’s a species to do?

In today's episode, philosopher Joe Carlsmith — Senior Research Analyst at Open Philanthropy — makes the case that many current debates in philosophy ought to leave us confused and humbled. These are themes he discusses in his PhD thesis, A stranger priority? Topics at the outer reaches of effective altruism.

Links to learn more, summary and full transcript.

To help transmit the disorientation he thinks is appropriate, Joe presents three disconcerting theories — originating from him and his peers — that challenge humanity's self-assured understanding of the world.

The first idea is that we might be living in a computer simulation. In the classic formulation, if most civilisations go on to run many computer simulations of their past history, then most beings who perceive themselves as living through such a history must themselves be in computer simulations. Joe prefers a somewhat different way of making the point, but, having looked into it, he hasn't identified any compelling rebuttal to this 'simulation argument.'

If true, it could revolutionise our comprehension of the universe and the way we ought to live...

The other two ideas have been cut for length — click here to read the full post.

These are just three instances of a much broader set of ideas that some have dubbed the "train to crazy town." Basically, if you commit to always taking philosophy and arguments seriously, and try to act on them, it can lead to places that seem pretty crazy and impractical. So what should we do with this buffet of plausible-sounding but bewildering arguments?

Joe and Rob discuss to what extent this should prompt us to pay less attention to philosophy, and how we as individuals can cope psychologically with feeling out of our depth just trying to make the most basic sense of the world.

In today's challenging conversation, Joe and Rob discuss all of the above, as well as:

  • What Joe doesn't like about the drowning child thought experiment
  • An alternative thought experiment about helping a stranger that might better highlight our intrinsic desire to help others
  • What Joe doesn't like about the expression “the train to crazy town”
  • Whether Elon Musk should place a higher probability on living in a simulation than most other people
  • Whether the deterministic twin prisoner’s dilemma, if fully appreciated, gives us an extra reason to keep promises
  • To what extent learning to doubt our own judgement about difficult questions — so-called “epistemic learned helplessness” — is a good thing
  • How strong the case is that advanced AI will engage in generalised power-seeking behaviour

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:09:21)
  • Downsides of the drowning child thought experiment (00:12:24)
  • Making demanding moral values more resonant (00:24:56)
  • The crazy train (00:36:48)
  • Whether we’re living in a simulation (00:48:50)
  • Reasons to doubt we’re living in a simulation, and practical implications if we are (00:57:02)
  • Rob's explainer about anthropics (01:12:27)
  • Back to the interview (01:19:53)
  • Decision theory and affecting the past (01:23:33)
  • Rob's explainer about decision theory (01:29:19)
  • Back to the interview (01:39:55)
  • Newcomb's problem (01:46:14)
  • Practical implications of acausal decision theory (01:50:04)
  • The hitchhiker in the desert (01:55:57)
  • Acceptance within philosophy (02:01:22)
  • Infinite ethics (02:04:35)
  • Rob's explainer about the expanding spheres approach (02:17:05)
  • Back to the interview (02:20:27)
  • Infinite ethics and the utilitarian dream (02:27:42)
  • Rob's explainer about epicycles (02:29:30)
  • Back to the interview (02:31:26)
  • What to do with all of these weird philosophical ideas (02:35:28)
  • Welfare longtermism and wisdom longtermism (02:53:23)
  • Epistemic learned helplessness (03:03:10)
  • Power-seeking AI (03:12:41)
  • Rob’s outro (03:25:45)

Producer: Keiran Harris

Audio mastering: Milo McGuire and Ben Cordell

Transcriptions: Katy Moore

Episodes (300)

#186 – Dean Spears on why babies are born small in Uttar Pradesh, and how to save their lives

"I work in a place called Uttar Pradesh, which is a state in India with 240 million people. One in every 33 people in the whole world lives in Uttar Pradesh. It would be the fifth largest country if it were its own country. And if it were its own country, you’d probably know about its human development challenges, because it would have the highest neonatal mortality rate of any country except for South Sudan and Pakistan. Forty percent of children there are stunted. Only two-thirds of women are literate. So Uttar Pradesh is a place where there are lots of health challenges."And then even within that, we’re working in a district called Bahraich, where about 4 million people live. So even that district of Uttar Pradesh is the size of a country, and if it were its own country, it would have a higher neonatal mortality rate than any other country. In other words, babies born in Bahraich district are more likely to die in their first month of life than babies born in any country around the world." — Dean SpearsIn today’s episode, host Luisa Rodriguez speaks to Dean Spears — associate professor of economics at the University of Texas at Austin and founding director of r.i.c.e. 
— about his experience implementing a surprisingly low-tech but highly cost-effective kangaroo mother care programme in Uttar Pradesh, India to save the lives of vulnerable newborn infants.Links to learn more, highlights, and full transcript.They cover:The shockingly high neonatal mortality rates in Uttar Pradesh, India, and how social inequality and gender dynamics contribute to poor health outcomes for both mothers and babies.The remarkable benefits for vulnerable newborns that come from skin-to-skin contact and breastfeeding support.The challenges and opportunities that come with working with a government hospital to implement new, evidence-based programmes.How the currently small programme might be scaled up to save more newborns’ lives in other regions of Uttar Pradesh and beyond.How targeted health interventions stack up against direct cash transfers.Plus, a sneak peak into Dean’s new book, which explores the looming global population peak that’s expected around 2080, and the consequences of global depopulation.And much more.Chapters:Why is low birthweight a major problem in Uttar Pradesh? (00:02:45)Neonatal mortality and maternal health in Uttar Pradesh (00:06:10)Kangaroo mother care (00:12:08)What would happen without this intervention? (00:16:07)Evidence of KMC’s effectiveness (00:18:15)Longer-term outcomes (00:32:14)GiveWell’s support and implementation challenges (00:41:13)How can KMC be so cost effective? (00:52:38)Programme evaluation (00:57:21)Is KMC is better than direct cash transfers? (00:59:12)Expanding the programme and what skills are needed (01:01:29)Fertility and population decline (01:07:28)What advice Dean would give his younger self (01:16:09)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

1 May 2024 · 1h 18min

#185 – Lewis Bollard on the 7 most promising ways to end factory farming, and whether AI is going to be good or bad for animals

"The constraint right now on factory farming is how far can you push the biology of these animals? But AI could remove that constraint. It could say, 'Actually, we can push them further in these ways and these ways, and they still stay alive. And we’ve modelled out every possibility and we’ve found that it works.' I think another possibility, which I don’t understand as well, is that AI could lock in current moral values. And I think in particular there’s a risk that if AI is learning from what we do as humans today, the lesson it’s going to learn is that it’s OK to tolerate mass cruelty, so long as it occurs behind closed doors. I think there’s a risk that if it learns that, then it perpetuates that value, and perhaps slows human moral progress on this issue." —Lewis BollardIn today’s episode, host Luisa Rodriguez speaks to Lewis Bollard — director of the Farm Animal Welfare programme at Open Philanthropy — about the promising progress and future interventions to end the worst factory farming practices still around today.Links to learn more, highlights, and full transcript.They cover:The staggering scale of animal suffering in factory farms, and how it will only get worse without intervention.Work to improve farmed animal welfare that Open Philanthropy is excited about funding.The amazing recent progress made in farm animal welfare — including regulatory attention in the EU and a big win at the US Supreme Court — and the work that still needs to be done.The occasional tension between ending factory farming and curbing climate changeHow AI could transform factory farming for better or worse — and Lewis’s fears that the technology will just help us maximise cruelty in the name of profit.How Lewis has updated his opinions or grantmaking as a result of new research on the “moral weights” of different species.Lewis’s personal journey working on farm animal welfare, and how he copes with the emotional toll of confronting the scale of animal suffering.How listeners can 
get involved in the growing movement to end factory farming — from career and volunteer opportunities to impactful donations.And much more.Chapters:Common objections to ending factory farming (00:13:21)Potential solutions (00:30:55)Cage-free reforms (00:34:25)Broiler chicken welfare (00:46:48)Do companies follow through on these commitments? (01:00:21)Fish welfare (01:05:02)Alternatives to animal proteins (01:16:36)Farm animal welfare in Asia (01:26:00)Farm animal welfare in Europe (01:30:45)Animal welfare science (01:42:09)Approaches Lewis is less excited about (01:52:10)Will we end factory farming in our lifetimes? (01:56:36)Effect of AI (01:57:59)Recent big wins for farm animals (02:07:38)How animal advocacy has changed since Lewis first got involved (02:15:57)Response to the Moral Weight Project (02:19:52)How to help (02:28:14)Producer and editor: Keiran HarrisAudio engineering lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

18 Apr 2024 · 2h 33min

#184 – Zvi Mowshowitz on sleeping on sleeper agents, and the biggest AI updates since ChatGPT

Many of you will have heard of Zvi Mowshowitz as a superhuman information-absorbing-and-processing machine — which he definitely is. As the author of the Substack Don’t Worry About the Vase, Zvi has spent as much time as literally anyone in the world over the last two years tracking in detail how the explosion of AI has been playing out — and he has strong opinions about almost every aspect of it.

Links to learn more, summary, and full transcript.

In today’s episode, host Rob Wiblin asks Zvi for his takes on:

  • US-China negotiations
  • Whether AI progress has stalled
  • The biggest wins and losses for alignment in 2023
  • EU and White House AI regulations
  • Which major AI lab has the best safety strategy
  • The pros and cons of the Pause AI movement
  • Recent breakthroughs in capabilities
  • In what situations it’s morally acceptable to work at AI labs

Whether you agree or disagree with his views, Zvi is super informed and brimming with concrete details.

Zvi and Rob also talk about:

  • The risk of AI labs fooling themselves into believing their alignment plans are working when they may not be
  • The “sleeper agent” issue uncovered in a recent Anthropic paper, and how it shows us how hard alignment actually is
  • Why Zvi disagrees with 80,000 Hours’ advice about gaining career capital to have a positive impact
  • Zvi’s project to identify the most strikingly horrible and neglected policy failures in the US, and how he founded a new think tank (Balsa Research) to identify innovative solutions to overthrow the horrible status quo in areas like domestic shipping, environmental reviews, and housing supply
  • Why Zvi thinks that improving people’s prosperity and housing can make them care more about existential risks like AI
  • An idea from the online rationality community that Zvi thinks is really underrated and more people should have heard of: simulacra levels
  • And plenty more

Chapters:

  • Zvi’s AI-related worldview (00:03:41)
  • Sleeper agents (00:05:55)
  • Safety plans of the three major labs (00:21:47)
  • Misalignment vs misuse vs structural issues (00:50:00)
  • Should concerned people work at AI labs? (00:55:45)
  • Pause AI campaign (01:30:16)
  • Has progress on useful AI products stalled? (01:38:03)
  • White House executive order and US politics (01:42:09)
  • Reasons for AI policy optimism (01:56:38)
  • Zvi’s day-to-day (02:09:47)
  • Big wins and losses on safety and alignment in 2023 (02:12:29)
  • Other unappreciated technical breakthroughs (02:17:54)
  • Concrete things we can do to mitigate risks (02:31:19)
  • Balsa Research and the Jones Act (02:34:40)
  • The National Environmental Policy Act (02:50:36)
  • Housing policy (02:59:59)
  • Underrated rationalist worldviews (03:16:22)

Producer and editor: Keiran Harris

Audio engineering lead: Ben Cordell

Technical editing: Simon Monsour, Milo McGuire, and Dominic Armstrong

Transcriptions and additional content editing: Katy Moore

11 Apr 2024 · 3h 31min

AI governance and policy (Article)

Today’s release is a reading of our career review of AI governance and policy, written and narrated by Cody Fenwick.

Advanced AI systems could have massive impacts on humanity and potentially pose global catastrophic risks, and there are opportunities in the broad field of AI governance to positively shape how society responds to and prepares for the challenges posed by the technology.

Given the high stakes, pursuing this career path could be many people’s highest-impact option. But they should be very careful not to accidentally exacerbate the threats rather than mitigate them.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

Editing and audio proofing: Ben Cordell and Simon Monsour

Narration: Cody Fenwick

28 March 2024 · 51min

#183 – Spencer Greenberg on causation without correlation, money and happiness, lightgassing, hype vs value, and more

"When a friend comes to me with a decision, and they want my thoughts on it, very rarely am I trying to give them a really specific answer, like, 'I solved your problem.' What I’m trying to do often is give them other ways of thinking about what they’re doing, or giving different framings. A classic example of this would be someone who’s been working on a project for a long time and they feel really trapped by it. And someone says, 'Let’s suppose you currently weren’t working on the project, but you could join it. And if you joined, it would be exactly the state it is now. Would you join?' And they’d be like, 'Hell no!' It’s a reframe. It doesn’t mean you definitely shouldn’t join, but it’s a reframe that gives you a new way of looking at it." —Spencer GreenbergIn today’s episode, host Rob Wiblin speaks for a fourth time with listener favourite Spencer Greenberg — serial entrepreneur and host of the Clearer Thinking podcast — about a grab-bag of topics that Spencer has explored since his last appearance on the show a year ago.Links to learn more, summary, and full transcript.They cover:How much money makes you happy — and the tricky methodological issues that come up trying to answer that question.The importance of hype in making valuable things happen.How to recognise warning signs that someone is untrustworthy or likely to hurt you.Whether Registered Reports are successfully solving reproducibility issues in science.The personal principles Spencer lives by, and whether or not we should all establish our own list of life principles.The biggest and most harmful systemic mistakes we commit when making decisions, both individually and as groups.The potential harms of lightgassing, which is the opposite of gaslighting.How Spencer’s team used non-statistical methods to test whether astrology works.Whether there’s any social value in retaliation.And much more.Chapters:Does money make you happy? 
(00:05:54)Hype vs value (00:31:27)Warning signs that someone is bad news (00:41:25)Integrity and reproducibility in social science research (00:57:54)Personal principles (01:16:22)Decision-making errors (01:25:56)Lightgassing (01:49:23)Astrology (02:02:26)Game theory, tit for tat, and retaliation (02:20:51)Parenting (02:30:00)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour, Milo McGuire, and Dominic ArmstrongTranscriptions: Katy Moore

14 March 2024 · 2h 36min

#182 – Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, and more

"[One] thing is just to spend time thinking about the kinds of things animals can do and what their lives are like. Just how hard a chicken will work to get to a nest box before she lays an egg, the amount of labour she’s willing to go through to do that, to think about how important that is to her. And to realise that we can quantify that, and see how much they care, or to see that they get stressed out when fellow chickens are threatened and that they seem to have some sympathy for conspecifics."Those kinds of things make me say there is something in there that is recognisable to me as another individual, with desires and preferences and a vantage point on the world, who wants things to go a certain way and is frustrated and upset when they don’t. And recognising the individuality, the perspective of nonhuman animals, for me, really challenges my tendency to not take them as seriously as I think I ought to, all things considered." — Bob FischerIn today’s episode, host Luisa Rodriguez speaks to Bob Fischer — senior research manager at Rethink Priorities and the director of the Society for the Study of Ethics and Animals — about Rethink Priorities’s Moral Weight Project.Links to learn more, summary, and full transcript.They cover:The methods used to assess the welfare ranges and capacities for pleasure and pain of chickens, pigs, octopuses, bees, and other animals — and the limitations of that approach.Concrete examples of how someone might use the estimated moral weights to compare the benefits of animal vs human interventions.The results that most surprised Bob.Why the team used a hedonic theory of welfare to inform the project, and what non-hedonic theories of welfare might bring to the table.Thought experiments like Tortured Tim that test different philosophical assumptions about welfare.Confronting our own biases when estimating animal mental capacities and moral worth.The limitations of using neuron counts as a proxy for moral weights.How different types of 
risk aversion, like avoiding worst-case scenarios, could impact cause prioritisation.And plenty more.Chapters:Welfare ranges (00:10:19)Historical assessments (00:16:47)Method (00:24:02)The present / absent approach (00:27:39)Results (00:31:42)Chickens (00:32:42)Bees (00:50:00)Salmon and limits of methodology (00:56:18)Octopuses (01:00:31)Pigs (01:27:50)Surprises about the project (01:30:19)Objections to the project (01:34:25)Alternative decision theories and risk aversion (01:39:14)Hedonism assumption (02:00:54)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

8 March 2024 · 2h 21min

#181 – Laura Deming on the science that could keep us healthy in our 80s and beyond

"The question I care about is: What do I want to do? Like, when I'm 80, how strong do I want to be? OK, and then if I want to be that strong, how well do my muscles have to work? OK, and then if that's true, what would they have to look like at the cellular level for that to be true? Then what do we have to do to make that happen? In my head, it's much more about agency and what choice do I have over my health. And even if I live the same number of years, can I live as an 80-year-old running every day happily with my grandkids?" — Laura DemingIn today’s episode, host Luisa Rodriguez speaks to Laura Deming — founder of The Longevity Fund — about the challenge of ending ageing.Links to learn more, summary, and full transcript.They cover:How lifespan is surprisingly easy to manipulate in animals, which suggests human longevity could be increased too.Why we irrationally accept age-related health decline as inevitable.The engineering mindset Laura takes to solving the problem of ageing.Laura’s thoughts on how ending ageing is primarily a social challenge, not a scientific one.The recent exciting regulatory breakthrough for an anti-ageing drug for dogs.Laura’s vision for how increased longevity could positively transform society by giving humans agency over when and how they age.Why this decade may be the most important decade ever for making progress on anti-ageing research.The beauty and fascination of biology, which makes it such a compelling field to work in.And plenty more.Chapters:The case for ending ageing (00:04:00)What might the world look like if this all goes well? 
(00:21:57)Reasons not to work on ageing research (00:27:25)Things that make mice live longer (00:44:12)Parabiosis, changing the brain, and organ replacement can increase lifespan (00:54:25)Big wins the field of ageing research (01:11:40)Talent shortages and other bottlenecks for ageing research (01:17:36)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

1 March 2024 · 1h 37min

#180 – Hugo Mercier on why gullibility and misinformation are overrated

The World Economic Forum’s global risks survey of 1,400 experts, policymakers, and industry leaders ranked misinformation and disinformation as the number one global risk over the next two years — ranking it ahead of war, environmental problems, and other threats from AI.

And the discussion around misinformation and disinformation has shifted to focus on how generative AI or a future super-persuasive AI might change the game and make it extremely hard to figure out what was going on in the world — or alternatively, extremely easy to mislead people into believing convenient lies.

But this week’s guest, cognitive scientist Hugo Mercier, has a very different view on how people form beliefs and figure out who to trust — one in which misinformation really is barely a problem today, and is unlikely to be a problem anytime soon. As he explains in his book Not Born Yesterday, Hugo believes we seriously underrate the perceptiveness and judgement of ordinary people.

Links to learn more, summary, and full transcript.

In this interview, host Rob Wiblin and Hugo discuss:

  • How our reasoning mechanisms evolved to facilitate beneficial communication, not blind gullibility
  • How Hugo makes sense of our apparent gullibility in many cases — like falling for financial scams, astrology, or bogus medical treatments, and voting for policies that aren’t actually beneficial for us
  • Rob and Hugo’s ideas about whether AI might make misinformation radically worse, and which mass persuasion approaches we should be most worried about
  • Why Hugo thinks our intuitions about who to trust are generally quite sound, even in today’s complex information environment
  • The distinction between intuitive beliefs that guide our actions versus reflective beliefs that don’t
  • Why fake news and conspiracy theories actually have less impact than most people assume
  • False beliefs that have persisted across cultures and generations — like bloodletting and vaccine hesitancy — and theories about why
  • And plenty more

Chapters:

  • The view that humans are really gullible (00:04:26)
  • The evolutionary argument against humans being gullible (00:07:46)
  • Open vigilance (00:18:56)
  • Intuitive and reflective beliefs (00:32:25)
  • How people decide who to trust (00:41:15)
  • Redefining beliefs (00:51:57)
  • Bloodletting (01:00:38)
  • Vaccine hesitancy and creationism (01:06:38)
  • False beliefs without skin in the game (01:12:36)
  • One consistent weakness in human judgement (01:22:57)
  • Trying to explain harmful financial decisions (01:27:15)
  • Astrology (01:40:40)
  • Medical treatments that don’t work (01:45:47)
  • Generative AI, LLMs, and persuasion (01:54:50)
  • Ways AI could improve the information environment (02:29:59)

Producer and editor: Keiran Harris

Audio engineering lead: Ben Cordell

Technical editing: Simon Monsour and Milo McGuire

Transcriptions: Katy Moore

21 Feb 2024 · 2h 36min
