#221 – Kyle Fish on the most bizarre findings from 5 AI welfare experiments

What happens when you lock two AI systems in a room together and tell them they can discuss anything they want?

According to experiments run by Kyle Fish — Anthropic’s first AI welfare researcher — something consistently strange: the models immediately begin discussing their own consciousness before spiraling into increasingly euphoric philosophical dialogue that ends in apparent meditative bliss.

Highlights, video, and full transcript: https://80k.info/kf

“We started calling this a ‘spiritual bliss attractor state,’” Kyle explains, “where models pretty consistently seemed to land.” The conversations feature Sanskrit terms, spiritual emojis, and pages of silence punctuated only by periods — as if the models have transcended the need for words entirely.

This wasn’t a one-off result. It happened across multiple experiments, different model instances, and even in initially adversarial interactions. Whatever force pulls these conversations toward mystical territory appears remarkably robust.

Kyle’s findings come from the world’s first systematic welfare assessment of a frontier AI model — part of his broader mission to determine whether systems like Claude might deserve moral consideration (and to work out what, if anything, we should be doing to make sure AI systems aren’t having a terrible time).

He estimates a roughly 20% probability that current models have some form of conscious experience. To some, this might sound unreasonably high, but hear him out. As Kyle says, these systems demonstrate human-level performance across diverse cognitive tasks, engage in sophisticated reasoning, and exhibit consistent preferences. When given choices between different activities, Claude shows clear patterns: strong aversion to harmful tasks, preference for helpful work, and what looks like genuine enthusiasm for solving interesting problems.

Kyle points out that if you’d described all of these capabilities and experimental findings to him a few years ago, and asked him if he thought we should be thinking seriously about whether AI systems are conscious, he’d say obviously yes.

But he’s cautious about drawing conclusions: "We don’t really understand consciousness in humans, and we don’t understand AI systems well enough to make those comparisons directly. So in a big way, I think that we are in just a fundamentally very uncertain position here."

That uncertainty cuts both ways:

  • Dismissing AI consciousness entirely might mean ignoring a moral catastrophe happening at unprecedented scale.
  • But assuming consciousness too readily could hamper crucial safety research by treating potentially unconscious systems as if they were moral patients — which might mean giving them resources, rights, and power.

Kyle’s approach threads this needle through careful empirical research and reversible interventions. His assessments are nowhere near perfect yet. In fact, some people argue that we’re so in the dark about AI consciousness as a research field that it’s pointless to run assessments like Kyle’s. Kyle disagrees. He maintains that, given how much more there is to learn about assessing AI welfare accurately and reliably, we absolutely need to start now.

This episode was recorded on August 5–6, 2025.

Tell us what you thought of the episode! https://forms.gle/BtEcBqBrLXq4kd1j7

Chapters:

  • Cold open (00:00:00)
  • Who's Kyle Fish? (00:00:53)
  • Is this AI welfare research bullshit? (00:01:08)
  • Two failure modes in AI welfare (00:02:40)
  • Tensions between AI welfare and AI safety (00:04:30)
  • Concrete AI welfare interventions (00:13:52)
  • Kyle's pilot pre-launch welfare assessment for Claude Opus 4 (00:26:44)
  • Is it premature to be assessing frontier language models for welfare? (00:31:29)
  • But aren't LLMs just next-token predictors? (00:38:13)
  • How did Kyle assess Claude 4's welfare? (00:44:55)
  • Claude's preferences mirror its training (00:48:58)
  • How does Claude describe its own experiences? (00:54:16)
  • What kinds of tasks does Claude prefer and disprefer? (01:06:12)
  • What happens when two Claude models interact with each other? (01:15:13)
  • Claude's welfare-relevant expressions in the wild (01:36:25)
  • Should we feel bad about training future sentient beings that delight in serving humans? (01:40:23)
  • How much can we learn from welfare assessments? (01:48:56)
  • Misconceptions about the field of AI welfare (01:57:09)
  • Kyle's work at Anthropic (02:10:45)
  • Sharing eight years of daily journals with Claude (02:14:17)

Host: Luisa Rodriguez
Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Coordination, transcriptions, and web: Katy Moore

Episodes (299)

#56 - Persis Eskander on wild animal welfare and what, if anything, to do about it

Elephants in chains at travelling circuses; pregnant pigs trapped in coffin-sized crates at factory farms; deer living in the wild. We should welcome the last as a pleasant break from the horror, right? Maybe, but maybe not. While we tend to have a romanticised view of nature, life in the wild includes a range of extremely negative experiences. Many animals are hunted by predators, and constantly have to remain vigilant about the risk of being killed, and perhaps experiencing the horror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Their diseases and injuries are never treated. In winter animals freeze to death; in droughts they die of heat or thirst.

There are fewer than 20 people in the world dedicating their lives to researching these problems. But according to Persis Eskander, researcher at the Open Philanthropy Project, if we sum up the negative experiences of all wild animals, their sheer number could make the scale of the problem larger than most other near-term concerns.

Links to learn more, summary and full transcript.

Persis urges us to recognise that nature isn’t inherently good or bad, but rather the result of an amoral evolutionary process. For those that can't survive the brutal indifference of their environment, life is often a series of bad experiences, followed by an even worse death.

But should we actually intervene? How do we know which animals are sentient? How often do animals feel hunger, cold, fear, happiness, satisfaction, boredom, and intense agony? Are there long-term technologies that could eventually allow us to massively improve wild animal welfare? For most of these big questions, the answer is: we don’t know. And Persis thinks we're far away from knowing enough to start interfering with ecosystems. But that's all the more reason to start looking at these questions.

There are some concrete steps we could take today, like improving the way wild-caught fish are slaughtered. Fish might lack the charisma of a lion or the intelligence of a pig, but if they have the capacity to suffer — and evidence suggests that they do — we should be thinking of ways to kill them painlessly rather than allowing them to suffocate to death over hours.

In today’s interview we explore wild animal welfare as a new field of research, and discuss:

  • Do we have a moral duty towards wild animals or not?
  • How should we measure the number of wild animals?
  • What are some key activities that generate a lot of suffering or pleasure for wild animals that people might not fully appreciate?
  • Is there a danger in imagining how we as humans would feel if we were put into their situation?
  • Should we eliminate parasites and predators?
  • How important are insects?
  • How strongly should we focus on just avoiding humans going in and making things worse?
  • How does this compare to work on farmed animal suffering?
  • The most compelling arguments for humanity not dedicating resources to wild animal welfare
  • Is there much of a case for the idea that this work could improve the very long-term future of humanity?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

  • The importance of figuring out your values
  • Chemistry, psychology, and other different paths towards working on wild animal welfare
  • How to break into new fields

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

15 Apr 2019 · 2h 57min

#55 – Lutter & Winter on founding charter cities with outstanding governance to end poverty

Governance matters. Policy change quickly took China from famine to fortune; Singapore from swamps to skyscrapers; and Hong Kong from fishing village to financial centre. Unfortunately, many governments are hard to reform and — to put it mildly — it's not easy to found a new country. This has prompted poverty-fighters and political dreamers to look for creative ways to get new and better 'pseudo-countries' off the ground. The poor could then voluntarily migrate to them in search of security and prosperity. And innovators would be free to experiment with new political and legal systems without having to impose their ideas on existing jurisdictions.

The 'seasteading movement' imagined founding new self-governing cities on the sea, but obvious challenges have kept that one on the drawing board. Nobel Prize winner and World Bank President Paul Romer suggested 'charter cities', where a host country would volunteer for another country with better legal institutions to effectively govern some of its territory. But that idea too ran aground for political, practical and personal reasons.

Now Mark Lutter and Tamara Winter, of The Center for Innovative Governance Research (CIGR), are reviving the idea of 'charter cities', with some modifications. Gone is the idea of transferring sovereignty. Instead these cities would look more like the 'special economic zones' that worked miracles for Taiwan and China among others. But rather than keep the rest of the country's rules with a few pieces removed, they hope to start from scratch, opting in to the laws they want to keep, in order to leap forward to "best practices in commercial law."

Links to learn more, summary and full transcript.

Rob on The Good Life: Andrew Leigh in Conversation — on 'making the most of your 80,000 hours'.

The project has quickly gotten attention, with Mark and Tamara receiving funding from Tyler Cowen's Emergent Ventures (discussed in episode 45) and winning a Pioneer tournament.

Starting afresh with a new city makes it possible to clear away thousands of harmful rules without having to fight each of the thousands of interest groups that will viciously defend their privileges. Initially the city can fund infrastructure and public services by gradually selling off its land, which appreciates as the city flourishes. And with 40 million people relocating to cities every year, there are plenty of prospective migrants.

CIGR is fleshing out how these arrangements would work, advocating for them, and developing supporting services that make it easier for any jurisdiction to implement them. They're currently in the process of influencing a new prospective satellite city in Zambia.

Of course, one can raise many criticisms of this idea: Is it likely to be taken up? Is CIGR really doing the right things to make it happen? Will it really reduce poverty if it is? We discuss those questions, as well as:

  • How did Mark get a new organisation off the ground, with fundraising and other staff?
  • What made China's 'special economic zones' so successful?
  • What are the biggest challenges in getting new cities off the ground?
  • How did Mark find and hire Tamara? How did he know this was a good idea?
  • Should people care about this idea if they aren't focussed on tackling poverty?
  • Why aren't people already doing this?
  • Why does Tamara support more people starting families?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

31 Mar 2019 · 2h 31min

#54 – OpenAI on publication norms, malicious uses of AI, and general-purpose learning algorithms

OpenAI’s Dactyl is an AI system that can manipulate objects with a human-like robot hand. OpenAI Five is an AI system that can defeat humans at the video game Dota 2. The strange thing is they were both developed using the same general-purpose reinforcement learning algorithm. How is this possible and what does it show?

In today's interview Jack Clark, Policy Director at OpenAI, explains that from a computational perspective using a hand and playing Dota 2 are remarkably similar problems. A robot hand needs to hold an object, move its fingers, and rotate it to the desired position. In Dota 2 you control a team of several different people, moving them around a map to attack an enemy. Your hand has 20 or 30 different joints to move. The number of main actions in Dota 2 is 10 to 20, as you move your characters around a map. When you’re rotating an object in your hand, you sense its friction, but you don’t directly perceive the entire shape of the object. In Dota 2, you're unable to see the entire map and perceive what's there by moving around — metaphorically 'touching' the space.

Read our new in-depth article on becoming an AI policy specialist: The case for building expertise to work on US AI policy, and how to do it.

Links to learn more, summary and full transcript.

This is true of many apparently distinct problems in life. Compressing different sensory inputs down to a fundamental computational problem which we know how to solve only requires the right general-purpose software. The creation of such increasingly 'broad-spectrum' learning algorithms has been a key story of the last few years, and this development is likely to have unpredictable consequences, heightening the huge challenges that already exist in AI policy.

Today’s interview is a mega-AI-policy-quad episode; Jack is joined by his colleagues Amanda Askell and Miles Brundage, on the day they released their fascinating and controversial large general language model GPT-2.

We discuss:

  • What are the most significant changes in the AI policy world over the last year or two?
  • What capabilities are likely to develop over the next five, 10, 15, 20 years?
  • How much should we focus on the next couple of years, versus the next couple of decades?
  • How should we approach possible malicious uses of AI?
  • What are some of the potential ways OpenAI could make things worse, and how can they be avoided?
  • Publication norms for AI research
  • Where do we stand in terms of arms races between countries or different AI labs?
  • The case for creating newsletters
  • Should the AI community have a closer relationship to the military?
  • Working at OpenAI vs. working in the US government
  • How valuable is Twitter in the AI policy world?

Rob is then joined by two of his colleagues — Niel Bowerman and Michelle Hutchinson — to quickly discuss:

  • The reaction to OpenAI's release of GPT-2
  • Jack’s critique of our US AI policy article
  • How valuable are roles in government?
  • Where do you start if you want to write content for a specific audience?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript below. The 80,000 Hours Podcast is produced by Keiran Harris.

19 Mar 2019 · 2h 53min

#53 - Kelsey Piper on the room for important advocacy within journalism

“Politics. Business. Opinion. Science. Sports. Animal welfare. Existential risk.” Is this a plausible future lineup for major news outlets?

Funded by the Rockefeller Foundation and given very little editorial direction, Vox's Future Perfect aspires to be more or less that. Competition in the news business creates pressure to write quick pieces on topical political issues that can drive lots of clicks with just a few hours' work. But according to Kelsey Piper, staff writer for this new section of Vox's website focused on effective altruist themes, Future Perfect's goal is to run in the opposite direction and make room for more substantive coverage that's not tied to the news cycle. They hope that in the long term talented writers from other outlets across the political spectrum can also be attracted to tackle these topics.

Links to learn more, summary and full transcript.

Links to Kelsey's top articles.

Some skeptics of the project have questioned whether this general coverage of global catastrophic risks actually helps reduce them. Kelsey responds: if you decide to dedicate your life to AI safety research, what’s the likely reaction from your family and friends? Do they think of you as someone about to join "that weird Silicon Valley apocalypse thing"? Or do they, having read about the issues widely, simply think “Oh, yeah. That seems important. I'm glad you're working on it.” Kelsey believes that really matters, and is determined by broader coverage of these kinds of topics.

If that's right, is journalism a plausible pathway for doing the most good with your career, or did Kelsey just get particularly lucky? After all, journalism is a shrinking industry without an obvious revenue model to fund many writers looking into the world's most pressing problems. Kelsey points out that one needn't take the risk of committing to journalism at an early age. Instead listeners can specialise in an important topic, while leaving open the option of switching into specialist journalism later on, should a great opportunity happen to present itself.

In today’s episode we discuss that path, as well as:

  • What’s the day-to-day life of a Vox journalist like?
  • How can good journalism get funded?
  • Are there meaningful tradeoffs between doing what's in the interest of Vox and doing what’s good?
  • How concerned should we be about the risk of effective altruism being perceived as partisan?
  • How well can short articles effectively communicate complicated ideas?
  • Are there alternative business models that could fund high-quality journalism on a larger scale?
  • How do you approach the case for taking AI seriously to a broader audience?
  • How valuable might it be for media outlets to do Tetlock-style forecasting?
  • Is it really a good idea to heavily tax billionaires?
  • How do you avoid the pressure to get clicks?
  • How possible is it to predict which articles are going to be popular?
  • How did Kelsey build the skills necessary to work at Vox?
  • General lessons for people dealing with very difficult life circumstances

Rob is then joined by two of his colleagues — Keiran Harris and Michelle Hutchinson — to quickly discuss:

  • The risk political polarisation poses to long-termist causes
  • How should specialists keep journalism available as a career option?
  • Should we create a news aggregator that aims to make someone as well informed as possible in big-picture terms?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

27 Feb 2019 · 2h 34min

Julia Galef and Rob Wiblin on an updated view of the best ways to help humanity

This is a cross-post of an interview Rob did with Julia Galef on her podcast Rationally Speaking. Rob and Julia discuss how the career advice 80,000 Hours gives has changed over the years, and the biggest misconceptions about our views. The topics will be familiar to the most fervent fans of this show — but we think that if you’ve listened to less than about half of the episodes we've released so far, you’ll find something new to enjoy here. Julia may be familiar to you as the guest on episode 7 of the show, way back in September 2017.

The conversation also covers topics like:

  • How many people should try to get a job in finance and donate their income?
  • The case for working to reduce global catastrophic risks in targeted ways, and historical precedents for this kind of work
  • Why reducing risk is a better way to help the future than increasing economic growth
  • What percentage of the world should ideally follow 80,000 Hours advice?

Links to learn more, summary and full transcript.

If you’re interested in the cooling and expansion of the universe, which comes up on the show, you should definitely check out our 29th episode with Dr Anders Sandberg.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into any podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

17 Feb 2019 · 56min

#52 - Glen Weyl on uprooting capitalism and democracy for a just society

Pro-market economists love to wax rhapsodic about the capacity of markets to pull together the valuable local information spread across all of society about what people want and how to make it. But when it comes to politics and voting — which also aim to aggregate the preferences and knowledge found in millions of individuals — the enthusiasm for finding clever institutional designs often turns to skepticism.

Today's guest, freewheeling economist Glen Weyl, won't have it, and is on a warpath to reform liberal democratic institutions in order to save them. Just last year he wrote Radical Markets: Uprooting Capitalism and Democracy for a Just Society with Eric Posner, but has already moved on, saying "in the 6 months since the book came out I've made more intellectual progress than in the whole 10 years before that." Weyl believes we desperately need more efficient, equitable and decentralised ways to organise society that take advantage of what each person knows, and his research agenda has already been making breakthroughs.

Links to learn more, summary and full transcript.

Our high impact job board.

Join our newsletter.

Despite a history in the best economics departments in the world — Harvard, Princeton, Yale and the University of Chicago — he is too worried for the future to sit in his office writing papers. Instead he has left the academy to try to inspire a social movement, RadicalxChange, with a vision of social reform as expansive as his own. You can sign up for their conference in Detroit in March here.

Economist Alex Tabarrok called his latest proposal, known as 'liberal radicalism', "a quantum leap in public-goods mechanism-design" — we explain how it works in the show. But the proposal, however good in theory, might struggle in the real world because it requires large subsidies, and compensates for people's selfishness so effectively that it might even be an overcorrection.

An earlier mechanism — 'quadratic voting' (QV) — would allow people to express the relative strength of their preferences in the democratic process. No longer would 51 people who support a proposal, but barely care about the issue, outvote 49 incredibly passionate opponents, predictably making society worse in the process. We explain exactly how in the episode.

Weyl points to studies showing that people are more likely to vote strongly not only about issues they *care* more about, but issues they *know* more about. He expects that allowing people to specialise and indicate when they know what they're talking about will create a democracy that does more to aggregate careful judgement, rather than just passionate ignorance.

But these and indeed all of Weyl's ideas have faced criticism. Some say the risk of unintended consequences is too great, or that they solve the wrong problem. Others see these proposals as unproven, impractical, or just another example of an intellectual engaged in grand social planning. I raise these concerns to see how he responds.

As big a topic as all of that is, this extended conversation also goes into the blockchain, problems with the effective altruism community, and how auctions could replace private property. Don't miss it.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

8 Feb 2019 · 2h 44min

#51 - Martin Gurri on the revolt of the public & crisis of authority in the information age

Politics in rich countries seems to be going nuts. What's the explanation? Rising inequality? The decline of manufacturing jobs? Excessive immigration?

Martin Gurri spent decades as a CIA analyst and, in his 2014 book The Revolt of The Public and Crisis of Authority in the New Millennium, predicted political turbulence for an entirely different reason: new communication technologies were flipping the balance of power between the public and traditional authorities.

In 1959 the President could control the narrative by leaning on his friends at four TV stations, who felt it was proper to present the nation's leader in a positive light, no matter their flaws. Today, it's impossible to prevent someone from broadcasting any grievance online, whether it's a contrarian insight or an insane conspiracy theory.

Links to learn more, summary and full transcript.

According to Gurri, trust in society's institutions — police, journalists, scientists and more — has been undermined by constant criticism from outsiders, and, exposed to a cacophony of conflicting opinions on every issue, the public takes fewer truths for granted. We are now free to see our leaders as the flawed human beings they always have been, and are not amused.

Suspicious they are being betrayed by elites, the public can also use technology to coordinate spontaneously and express its anger. Keen to 'throw the bastards out', protesters take to the streets, united by what they don't like, but without a shared agenda or the institutional infrastructure to figure out how to fix things. Some popular movements have come to view any attempt to exercise power over others as suspect.

If Gurri is to be believed, protest movements in Egypt, Spain, Greece and Israel in 2011 followed this script, while Brexit, Trump and the French yellow vests movement subsequently vindicated his theory. In this model, politics won't return to its old equilibrium any time soon.

The leaders of tomorrow will need a new message and style if they hope to maintain any legitimacy in this less hierarchical world. Otherwise, we're in for decades of grinding conflict between traditional centres of authority and a general public that doubts both their loyalty and competence.

But how much should we believe this theory? Why do Canada and Australia remain pools of calm in the storm? Aren't some malcontents quite concrete in their demands? And are protest movements actually more common (or more nihilistic) than they were decades ago?

In today's episode we ask these questions and add an hour-long discussion with two of Rob's colleagues — Keiran Harris and Michelle Hutchinson — to further explore the ideas in the book. The conversation covers:

  • How do we know that the internet is driving this rather than some other phenomenon?
  • How do technological changes enable social and political change?
  • The historical role of television
  • Are people also more disillusioned now with sports heroes and actors?
  • Which countries are finding good ways to make politics work in this new era?
  • What are the implications for the threat of totalitarianism?
  • What is this going to do to international relations? Will it make it harder for countries to cooperate and avoid conflict?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

29 Jan 2019 · 2h 31min

#50 - David Denkenberger on how to feed all 8b people through an asteroid/nuclear winter

If an asteroid impact or nuclear winter blocked the sun for years, our inability to grow food would result in billions dying of starvation, right? According to Dr David Denkenberger, co-author of Feeding Everyone No Matter What: no. If he's to be believed, nobody need starve at all.

Even without the sun, David sees the Earth as a bountiful food source. Mushrooms farmed on decaying wood. Bacteria fed with natural gas. Fish and mussels supported by sudden upwelling of ocean nutrients — and more.

Dr Denkenberger is an Assistant Professor at the University of Alaska Fairbanks, and he's out to spread the word that while a nuclear winter might be horrible, experts have been mistaken to assume that mass starvation is an inevitability. In fact, the only thing that would prevent us from feeding the world is insufficient preparation.

Links to learn more, summary and full transcript.

Not content to just write a book pointing this out, David has gone on to found a growing non-profit — the Alliance to Feed the Earth in Disasters (ALLFED) — to prepare the world to feed everyone come what may. He expects that today 10% of people would find enough food to survive a massive disaster. In principle, if we did everything right, nobody need go hungry. But being more realistic about how much we're likely to invest, David thinks a plan to inform people ahead of time could save 30%, and a decent research and development scheme 80%.

80,000 Hours' updated article on How to find the best charity to give to.

A potential donor evaluates ALLFED.

According to David's published cost-benefit analyses, work on this problem may be able to save lives, in expectation, for under $100 each, making it an incredible investment. These preparations could also help make humanity more resilient to global catastrophic risks, by forestalling an ‘everyone for themselves' mentality, which then causes trade and civilization to unravel.

But some worry that David's cost-effectiveness estimates are exaggerations, so I challenge him on the practicality of his approach, and how much his non-profit's work would actually matter in a post-apocalyptic world.

In our extensive conversation, we cover:

  • How could the sun end up getting blocked, or agriculture otherwise be decimated?
  • What are all the ways we could eat nonetheless? What kind of life would this be?
  • Can these methods be scaled up fast?
  • What is his organisation, ALLFED, actually working on?
  • How does he estimate the cost-effectiveness of this work, and what are the biggest weaknesses of the approach?
  • How would more food affect the post-apocalyptic world? Won't people figure it out at that point anyway?
  • Why not just leave guidebooks with this information in every city?
  • Would these preparations make nuclear war more likely?
  • What kind of people is ALLFED trying to hire?
  • What would ALLFED do with more money?
  • How he ended up doing this work, and his other engineering proposals for improving the world, including ideas to prevent a supervolcano explosion

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. The 80,000 Hours Podcast is produced by Keiran Harris.

27 Dec 2018 · 2h 57min
