Guilt, imposter syndrome & doing good: 16 past guests share their mental health journeys

"We are aiming for a place where we can decouple the scorecard from our worthiness. It’s of course the case that in trying to optimise the good, we will always be falling short. The question is how much, and in what ways are we not there yet? And if we then extrapolate that to how much and in what ways am I not enough, that’s where we run into trouble." —Hannah Boettcher

What happens when your desire to do good starts to undermine your own wellbeing?

Over the years, we’ve heard from therapists, charity directors, researchers, psychologists, and career advisors — all wrestling with how to do good without falling apart. Today’s episode brings together insights from 16 past guests on the emotional and psychological costs of pursuing a high-impact career to improve the world — and how to best navigate the all-too-common guilt, burnout, perfectionism, and imposter syndrome along the way.

Check out the full transcript and links to learn more: https://80k.info/mh

If you’re dealing with your own mental health concerns, here are some resources that might help:

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:32)
  • 80,000 Hours’ former CEO Howie on what his anxiety and self-doubt feels like (00:03:47)
  • Evolutionary psychiatrist Randy Nesse on what emotions are for (00:07:35)
  • Therapist Hannah Boettcher on how striving for impact can affect our self-worth (00:13:45)
  • Luisa Rodriguez on grieving the gap between who you are and who you wish you were (00:16:57)
  • Charity director Cameron Meyer Shorb on managing work-related guilt and shame (00:24:01)
  • Therapist Tim LeBon on aiming for excellence rather than perfection (00:29:18)
  • Author Cal Newport on making time to be alone with our thoughts (00:36:03)
  • 80,000 Hours career advisors Michelle Hutchinson and Habiba Islam on prioritising mental health over career impact (00:40:28)
  • Charity founder Sarah Eustis-Guthrie on the ups and downs of founding an organisation (00:45:52)
  • Our World in Data researcher Hannah Ritchie on feeling like an imposter as a generalist (00:51:28)
  • Moral philosopher Will MacAskill on being proactive about mental health and preventing burnout (01:00:46)
  • Grantmaker Ajeya Cotra on the psychological toll of big open-ended research questions (01:11:00)
  • Researcher and grantmaker Christian Ruhl on how having a stutter affects him personally and professionally (01:19:30)
  • Mercy For Animals’ CEO Leah Garcés on insisting on self-care when doing difficult work (01:32:39)
  • 80,000 Hours’ former CEO Howie on balancing a job and mental illness (01:37:12)
  • Therapist Hannah Boettcher on how self-compassion isn’t self-indulgence (01:40:39)
  • Journalist Kelsey Piper on communicating about mental health in ways that resonate (01:43:32)
  • Luisa's outro (01:46:10)

Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Content editing: Katy Moore and Milo McGuire
Transcriptions and web: Katy Moore

Episodes (293)

#85 - Mark Lynas on climate change, societal collapse & nuclear energy

A golf-ball sized lump of uranium can deliver more than enough power to cover all of your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of black rock — a mass equivalent to 800 adult elephants, which would produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium.

Many people aren’t comfortable with the danger posed by nuclear power. But given the climatic stakes, it’s worth asking: just how much more dangerous is it compared to fossil fuels? According to today’s guest, Mark Lynas — author of Six Degrees: Our Future on a Hotter Planet (winner of the prestigious Royal Society Prize for Science Books) and Nuclear 2.0 — it’s actually much, much safer.

Links to learn more, summary and full transcript.

Climatologists James Hansen and Pushker Kharecha calculated that the use of nuclear power between 1971 and 2009 avoided the premature deaths of 1.84 million people by avoiding air pollution from burning coal.

What about radiation or nuclear disasters? According to Our World in Data, in generating a given amount of electricity, nuclear, wind, and solar all cause about the same number of deaths — and it's a tiny number.

So what’s going on? Why isn’t everyone demanding a massive scale-up of nuclear energy to save lives and stop climate change? Mark and many other activists believe that unchecked climate change will result in the collapse of human civilization, so the stakes could not be higher.

Mark says that many environmentalists — including him — simply grew up with anti-nuclear attitudes all around them (possibly stemming from a conflation of nuclear weapons and nuclear energy) and haven't thought to question them. But he thinks that once you believe in the climate emergency, you have to rethink your opposition to nuclear energy.

At 80,000 Hours we haven’t analysed the merits and flaws of the case for nuclear energy — especially compared to wind and solar paired with gas, hydro, or battery power to handle intermittency — but Mark is convinced. He says it comes down to physics: nuclear power is just so much denser. We need to find an energy source that provides carbon-free power to ~10 billion people, and we need to do it while humanity is doubling or tripling (or more) its energy demand. How do you do that without destroying the world's ecology? Mark thinks that nuclear is the only way. Read a more in-depth version of the case for nuclear energy in the full blog post.

For Mark, the only argument against nuclear power is a political one: that people won't want or accept it. He says that he knows people in all kinds of mainstream environmental groups — such as Greenpeace — who agree that nuclear must be a vital part of any plan to solve climate change. But, because they think they'll be ostracized if they speak up, they keep their mouths shut. Mark thinks this willingness to indulge beliefs that contradict scientific evidence stands in the way of actually fully addressing climate change, and so he’s helping to build a movement of folks who are out and proud about their support for nuclear energy.

This is only one topic of many in today’s interview. Arden, Rob, and Mark also discuss:

  • At what degrees of warming does societal collapse become likely
  • Whether climate change could lead to human extinction
  • What environmentalists are getting wrong about climate change
  • And much more

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

20 Aug 2020 · 2h 8min

#84 – Shruti Rajagopalan on what India did to stop COVID-19 and how well it worked

When COVID-19 struck the US, everyone was told that hand sanitizer needed to be saved for healthcare professionals, so they should just wash their hands instead. But in India, many homes lack reliable piped water, so the country had to do the opposite: distribute hand sanitizer as widely as possible.

American advocates for banning single-use plastic straws might be outraged at the widespread adoption of single-use hand sanitizer sachets in India. But the US and India are very different places, and it might be the only way out when you're facing a pandemic without running water.

According to today’s guest, Shruti Rajagopalan, Senior Research Fellow at the Mercatus Center at George Mason University, that's typical: context is key to policy-making. This prompted Shruti to propose a set of policy responses designed for India specifically back in April. Unfortunately she thinks it's surprisingly hard to know what one should and shouldn't imitate from overseas.

Links to learn more, summary and full transcript.

For instance, some places in India installed shared handwashing stations in bus stops and train stations, which is something no developed country would advise. But in India, you can't necessarily wash your hands at home — so shared faucets might be the lesser of two evils. (Though note scientists have downgraded the importance of hand hygiene lately.)

Stay-at-home orders offer a more serious example. Developing countries find themselves in a serious bind that rich countries do not. With nearly no slack in healthcare capacity, India lacks equipment to treat even a small number of COVID-19 patients. That suggests strict controls on movement and economic activity might be necessary to control the pandemic. But many people in India and elsewhere can't afford to shelter in place for weeks, let alone months. And governments in poorer countries may not be able to afford to send everyone money — even where they have the infrastructure to do so fast enough.

India ultimately did impose strict lockdowns, lasting almost 70 days, but the human toll has been larger than in rich countries, with vast numbers of migrant workers stranded far from home with limited if any income support. There were no trains or buses, and the government made no provision to deal with the situation. Unable to afford rent where they were, many people had to walk hundreds of kilometers to reach home, carrying children and belongings with them.

But in some other ways the context of developing countries is more promising. In the US many people melted down when asked to wear facemasks. But in South Asia, people just wore them. Shruti isn’t sure whether that's because of existing challenges with high pollution, past experiences with pandemics, or because intergenerational living makes the wellbeing of others more salient — but the end result is that masks weren’t politicised in the way they were in the US.

In addition, despite the suffering caused by India's policy response to COVID-19, public support for the measures and the government remains high — and India's population is much younger and so less affected by the virus.

In this episode, Howie and Shruti explore the unique policy challenges facing India in its battle with COVID-19, what they've tried to do, and how it has gone. They also cover:

  • What an economist can bring to the table during a pandemic
  • The mystery of India’s surprisingly low mortality rate
  • Policies that should be implemented today
  • What makes a good constitution

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:27)
  • What an economist can bring to the table for COVID-19 (00:07:54)
  • What India has done about the coronavirus (00:12:24)
  • Why it took so long for India to start seeing a lot of cases (00:25:08)
  • How India is doing at the moment with COVID-19 (00:27:55)
  • Is the mortality rate surprisingly low in India? (00:40:32)
  • Why Southeast Asian countries have done so well so far (00:55:43)
  • Different attitudes to masks globally (00:59:25)
  • Differences in policy approaches for developing countries (01:07:27)
  • India’s strict lockdown (01:25:56)
  • Lockdown for the average rural Indian (01:39:11)
  • Public reaction to the lockdown in India (01:44:39)
  • Policies that should be implemented today (01:50:29)
  • India’s overall reaction to COVID-19 (01:57:23)
  • Constitutional economics (02:03:28)
  • What makes a good constitution (02:11:47)
  • Emergent Ventures (02:27:34)
  • Careers (02:47:57)
  • Rob’s outro (02:57:51)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

13 Aug 2020 · 2h 58min

#83 - Jennifer Doleac on preventing crime without police and prisons

The killing of George Floyd has prompted a great deal of debate over whether the US should reduce the size of its police departments. The research literature suggests that the presence of police officers does reduce crime, though they're expensive and, as is increasingly recognised, impose substantial harms on the populations they are meant to be protecting, especially communities of colour.

So maybe we ought to shift our focus to effective but unconventional approaches to crime prevention — approaches that don't require police or prisons and the human toll they bring with them.

Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law and incarceration. In this extensive interview, she highlights three alternative ways to effectively prevent crime: better street lighting, cognitive behavioral therapy, and lead reduction.

One of Jennifer’s papers used switches into and out of daylight saving time as a 'natural experiment' to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double.

Links to sources for the claims in these show notes, other resources to learn more, and a full transcript.

The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You're just more likely to get caught. You might think: "Well, people will just commit crime in the morning instead." But it looks like criminals aren’t early risers, and that doesn’t happen.

On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sunlight is human light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone.

The second approach is cognitive behavioral therapy (CBT), in which you're taught to slow down your decision-making and think through your assumptions before acting. In a randomised controlled trial run in schools, as well as juvenile detention facilities in Chicago, the kids assigned to get CBT were followed over time and compared with those who were not assigned to receive CBT. The CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says that the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems, but the gains are especially large for people who've grown up with the trauma of violence in their lives.

Finally, Jennifer thinks that lead reduction might be the best buy of all in crime prevention… Blog post truncated due to length limits. Finish reading the full post here.

In today’s conversation, Rob and Jennifer also cover, among many other things:

  • Misconduct, hiring practices and accountability among US police
  • Procedural justice training
  • Overrated policy ideas
  • Policies to try to reduce racial discrimination
  • The effects of DNA databases
  • Diversity in economics
  • The quality of social science research

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

31 Jul 2020 · 2h 23min

#82 – James Forman Jr on reducing the cruelty of the US criminal legal system

No democracy has ever incarcerated as many people as the United States. To get its incarceration rate down to the global average, the US would have to release 3 in 4 people in its prisons today.

The effects on Black Americans have been especially severe — Black people make up 12% of the US population but 33% of its prison population. In the early 2000s, when incarceration reached its peak, the US government estimated that 32% of Black boys would go to prison at some point in their lives, 5.5 times the figure for whites.

Contrary to popular understanding, nonviolent drug offenders make up less than a fifth of the incarcerated population. The only way to get the US incarceration rate near the global average will be to shorten prison sentences for so-called 'violent criminals' — a politically toxic idea. But could we change that?

According to today’s guest, Professor James Forman Jr — a former public defender in Washington DC, Pulitzer Prize-winning author of Locking Up Our Own: Crime and Punishment in Black America, and now a professor at Yale Law School — there are two things we have to do to make that happen.

Links to learn more, summary and full transcript.

First, he thinks we should lose the term 'violent offender', and maybe even 'violent crime'. When you say 'violent crime', most people immediately think of murder and rape — but those are only a small fraction of the crimes that the law deems violent. In reality, the crime that puts the most people in prison in the US is robbery. And the law says that robbery is a violent crime whether a weapon is involved or not. By moving away from the catch-all category of 'violent criminals' we can judge the risk posed by individual people more sensibly.

Second, he thinks we should embrace the restorative justice movement. Instead of asking "What was the law? Who broke it? What should the punishment be?", restorative justice asks "Who was harmed? Who harmed them? And what can we as a society, including the person who committed the harm, do to try to remedy that harm?" Instead of being narrowly focused on how many years people should spend in prison as retribution, it starts a different conversation.

You might think this apparently softer approach would be unsatisfying to victims of crime. But James has discovered that a lot of victims of crime find that the current system doesn't help them in any meaningful way. What they primarily want to know is: why did this happen to me? The best way to find that out is to actually talk to the person who harmed them, and in doing so gain a better understanding of the underlying factors behind the crime. The restorative justice approach facilitates these conversations in a way the current system doesn't allow, and can include restitution, apologies, and face-to-face reconciliation.

That’s just one topic of many covered in today’s episode, with much of the conversation focusing on Professor Forman’s 2018 book Locking Up Our Own — an examination of the historical roots of contemporary criminal justice practices in the US — and his experience setting up a charter school for at-risk youth in DC.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:02)
  • How did we get here? (00:04:07)
  • The role racism plays in policing today (00:14:47)
  • Black American views on policing and criminal justice (00:22:37)
  • Has the core argument of the book been controversial? (00:31:51)
  • The role that class divisions played in forming the current legal system (00:37:33)
  • What are the biggest problems today? (00:40:56)
  • What changes in policy would make the biggest difference? (00:52:41)
  • Shorter sentences for violent crimes (00:58:26)
  • Important recent successes (01:08:21)
  • What can people actually do to help? (01:14:38)

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

27 Jul 2020 · 1h 28min

#81 - Ben Garfinkel on scrutinising classic AI risk arguments

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it’s genuinely quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed-out version of the argument in his book Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there's very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few skeptical experts who have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, or goals, as well as toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it's really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine that can understand how to obey them. But Ben points out that it's also the case in machine learning that we can train lots of systems to engage in behaviours that are actually quite nuanced and that we can't specify precisely. If AI systems can recognise faces from images, and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think there are any slam-dunks for improving the future, so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance — in addition to it still being a very neglected area — puts it head and shoulders above most areas you might choose to work in.

This is the second episode hosted by our Strategy Advisor Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

9 Jul 2020 · 2h 38min

Advice on how to read our advice (Article)

This is the fourth release in our new series of audio articles. If you want to read the original article or check out the links within it, you can find them here.

"We’ve found that readers sometimes interpret or apply our advice in ways we didn’t anticipate and wouldn’t exactly recommend. That’s hard to avoid when you’re writing for a range of people with different personalities and initial views. To help get on the same page, here’s some advice about our advice, for those about to launch into reading our site.

We want our writing to inform people’s views, but only in proportion to the likelihood that we’re actually right. So we need to make sure you have a balanced perspective on how compelling the evidence is for the different claims we make on the site, and how much weight to put on our advice in your situation. This piece includes a list of points to bear in mind when reading our site, and some thoughts on how to avoid the communication problems we face..."

As the title suggests, this was written with our website content in mind, but plenty of it applies to the careers sections of the podcast too — as well as our bonus episodes with members of the 80,000 Hours team, such as Arden and Rob’s episode on demandingness, work-life balance and injustice, which aired on February 25th of this year.

And if you have feedback on these, positive or negative, it’d be great if you could email us at podcast@80000hours.org.

29 Jun 2020 · 15min

#80 – Stuart Russell on why our approach to AI is broken and how to fix it

Stuart Russell, Professor at UC Berkeley and co-author of the most popular AI textbook, thinks the way we approach machine learning today is fundamentally flawed.

In his new book, Human Compatible, he outlines the 'standard model' of AI development, in which intelligence is measured as the ability to achieve some definite, completely-known objective that we've stated explicitly. This is so obvious it almost doesn't even seem like a design choice, but it is.

Unfortunately there's a big problem with this approach: it's incredibly hard to say exactly what you want. AI today lacks common sense, and simply does whatever we've asked it to. That's true even if the goal isn't what we really want, or the methods it's choosing are ones we would never accept.

We already see AIs misbehaving for this reason. Stuart points to the example of YouTube's recommender algorithm, which reportedly nudged users towards extreme political views because that made it easier to keep them on the site. This isn't something we wanted, but it helped achieve the algorithm's objective: maximise viewing time. Like King Midas, who asked to be able to turn everything into gold but ended up unable to eat, we get too much of what we've asked for.

Links to learn more, summary and full transcript.

This 'alignment' problem will get more and more severe as machine learning is embedded in more and more places: recommending us news, operating power grids, deciding prison sentences, doing surgery, and fighting wars. If we're ever to hand over much of the economy to thinking machines, we can't count on ourselves correctly saying exactly what we want the AI to do every time.

Stuart isn't just dissatisfied with the current model though, he has a specific solution. According to him we need to redesign AI around 3 principles:

1. The AI system's objective is to achieve what humans want.
2. But the system isn't sure what we want.
3. And it figures out what we want by observing our behaviour.

Stuart thinks this design architecture, if implemented, would be a big step forward towards reliably beneficial AI. For instance, a machine built on these principles would be happy to be turned off if that's what its owner thought was best, while one built on the standard model should resist being turned off because being deactivated prevents it from achieving its goal. As Stuart says, "you can't fetch the coffee if you're dead."

These principles lend themselves towards machines that are modest and cautious, and check in when they aren't confident they're truly achieving what we want.

We've made progress toward putting these principles into practice, but the remaining engineering problems are substantial. Among other things, the resulting AIs need to be able to interpret what people really mean to say based on the context of a situation. And they need to guess when we've rejected an option because we've considered it and decided it's a bad idea, and when we simply haven't thought about it at all.

Stuart thinks all of these problems are surmountable, if we put in the work. The harder problems may end up being social and political. When each of us can have an AI of our own — one smarter than any person — how do we resolve conflicts between people and their AI agents? And if AIs end up doing most work that people do today, how can humans avoid becoming enfeebled, like lazy children tended to by machines, but not intellectually developed enough to know what they really want?

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:19:06)
  • Human Compatible: Artificial Intelligence and the Problem of Control (00:21:27)
  • Principles for Beneficial Machines (00:29:25)
  • AI moral rights (00:33:05)
  • Humble machines (00:39:35)
  • Learning to predict human preferences (00:45:55)
  • Animals and AI (00:49:33)
  • Enfeeblement problem (00:58:21)
  • Counterarguments (01:07:09)
  • Orthogonality thesis (01:24:25)
  • Intelligence explosion (01:29:15)
  • Policy ideas (01:38:39)
  • What most needs to be done (01:50:14)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Zakee Ulhaq

22 Jun 2020 · 2h 13min

What anonymous contributors think about important life and career questions (Article)

Today we’re launching the final entry of our ‘anonymous answers' series on the website. It features answers to 23 different questions including “How have you seen talented people fail in their work?” and “What’s one way to be successful you don’t think people talk about enough?”, from anonymous people whose work we admire.

We thought a lot of the responses were really interesting; some were provocative, others just surprising. And as intended, they span a very wide range of opinions. So we decided to share some highlights here with you podcast subscribers. This is only a sample though, including a few answers from just 10 of those 23 questions.

You can find the rest of the answers at 80000hours.org/anonymous, or follow a link here to an individual entry:

1. What's good career advice you wouldn’t want to have your name on?
2. How have you seen talented people fail in their work?
3. What’s the thing people most overrate in their career?
4. If you were at the start of your career again, what would you do differently this time?
5. If you're a talented young person how risk averse should you be?
6. Among people trying to improve the world, what are the bad habits you see most often?
7. What mistakes do people most often make when deciding what work to do?
8. What's one way to be successful you don't think people talk about enough?
9. How honest & candid should high-profile people really be?
10. What’s some underrated general life advice?
11. Should the effective altruism community grow faster or slower? And should it be broader, or narrower?
12. What are the biggest flaws of 80,000 Hours?
13. What are the biggest flaws of the effective altruism community?
14. How should the effective altruism community think about diversity?
15. Are there any myths that you feel obligated to support publicly?

And eight other questions.

Finally, if you’d like us to produce more or less content like this, please let us know your opinion at podcast@80000hours.org.

5 Jun 2020 · 37min
