#195 – Sella Nevo on who's trying to steal frontier AI models, and what they could do with them

"Computational systems have literally millions of physical and conceptual components, and around 98% of them are embedded into your infrastructure without you ever having heard of them. And an inordinate amount of them can lead to a catastrophic failure of your security assumptions. And because of this, the Iranian secret nuclear programme failed to prevent a breach, most US agencies failed to prevent multiple breaches, most US national security agencies failed to prevent breaches. So ensuring your system is truly secure against highly resourced and dedicated attackers is really, really hard." —Sella Nevo

In today’s episode, host Luisa Rodriguez speaks to Sella Nevo — director of the Meselson Center at RAND — about his team’s latest report on how to protect the model weights of frontier AI models from actors who might want to steal them.

Links to learn more, highlights, and full transcript.

They cover:

  • Real-world examples of sophisticated security breaches, and what we can learn from them.
  • Why AI model weights might be such a high-value target for adversaries like hackers, rogue states, and other bad actors.
  • The many ways that model weights could be stolen, from using human insiders to sophisticated supply chain hacks.
  • The current best practices in cybersecurity, and why they may not be enough to keep bad actors away.
  • New security measures that Sella hopes can mitigate the growing risks.
  • Sella’s work using machine learning for flood forecasting, which has significantly reduced injuries and costs from floods across Africa and Asia.
  • And plenty more.

Also, RAND is currently hiring for roles in technical and policy information security — check them out if you're interested in this field!

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:00:56)
  • The interview begins (00:02:30)
  • The importance of securing the model weights of frontier AI models (00:03:01)
  • The most sophisticated and surprising security breaches (00:10:22)
  • AI models being leaked (00:25:52)
  • Researching for the RAND report (00:30:11)
  • Who tries to steal model weights? (00:32:21)
  • Malicious code and exploiting zero-days (00:42:06)
  • Human insiders (00:53:20)
  • Side-channel attacks (01:04:11)
  • Getting access to air-gapped networks (01:10:52)
  • Model extraction (01:19:47)
  • Reducing and hardening authorised access (01:38:52)
  • Confidential computing (01:48:05)
  • Red-teaming and security testing (01:53:42)
  • Careers in information security (01:59:54)
  • Sella’s work on flood forecasting systems (02:01:57)
  • Luisa’s outro (02:04:51)


Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

Episodes (293)

#100 Classic episode – Having a successful career with depression, anxiety, and imposter syndrome

Today’s episode is one of the most remarkable and really unique pieces of content we’ve ever produced (and I can say that because I had almost nothing to do with making it!).

The producer of this show, Keiran Harris, interviewed our mutual colleague Howie about the major ways that mental illness has affected his life and career. While depression, anxiety, ADHD and other problems are extremely common, it’s rare for people to offer detailed insight into their thoughts and struggles — and even rarer for someone as perceptive as Howie to do so.

Rebroadcast: this episode was originally released in May 2021.

Links to learn more, summary, and full transcript.

The first half of this conversation is a searingly honest account of Howie’s story, including losing a job he loved due to a depressed episode, what it was like to be basically out of commission for over a year, how he got back on his feet, and the things he still finds difficult today.

The second half covers Howie’s advice. Conventional wisdom on mental health can be really focused on cultivating willpower — telling depressed people that the virtuous thing to do is to start exercising, improve their diet, get their sleep in check, and generally fix all their problems before turning to therapy and medication as some sort of last resort.

Howie tries his best to be a corrective to this misguided attitude and pragmatically focus on what actually matters — doing whatever will help you get better.

Mental illness is one of the things that most often trips up people who could otherwise enjoy flourishing careers and have a large social impact, so we think this could plausibly be one of our more valuable episodes. If you’re in a hurry, we’ve extracted the key advice that Howie has to share in a section below.

Howie and Keiran basically treated it like a private conversation, with the understanding that it may be too sensitive to release. But, after getting some really positive feedback, they’ve decided to share it with the world.

Here are a few quotes from early reviewers:

"I think there’s a big difference between admitting you have depression/seeing a psych and giving a warts-and-all account of a major depressive episode like Howie does in this episode… His description was relatable and really inspiring."

Someone who works on mental health issues said:

"This episode is perhaps the most vivid and tangible example of what it is like to experience psychological distress that I’ve ever encountered. Even though the content of Howie and Keiran’s discussion was serious, I thought they both managed to converse about it in an approachable and not-overly-somber way."

And another reviewer said:

"I found Howie’s reflections on what is actually going on in his head when he engages in negative self-talk to be considerably more illuminating than anything I’ve heard from my therapist."

We also hope that the episode will:

  • Help people realise that they have a shot at making a difference in the future, even if they’re experiencing (or have experienced in the past) mental illness, self doubt, imposter syndrome, or other personal obstacles.
  • Give insight into what it’s like in the head of one person with depression, anxiety, and imposter syndrome, including the specific thought patterns they experience on typical days and more extreme days. In addition to being interesting for its own sake, this might make it easier for people to understand the experiences of family members, friends, and colleagues — and know how to react more helpfully.

Several early listeners have even made specific behavioral changes due to listening to the episode — including people who generally have good mental health but were convinced it’s well worth the low cost of setting up a plan in case they have problems in the future.

So we think this episode will be valuable for:

  • People who have experienced mental health problems or might in future;
  • People who have had troubles with stress, anxiety, low mood, low self esteem, imposter syndrome and similar issues, even if their experience isn’t well described as ‘mental illness’;
  • People who have never experienced these problems but want to learn about what it’s like, so they can better relate to and assist family, friends or colleagues who do.

In other words, we think this episode could be worthwhile for almost everybody.

Just a heads up that this conversation gets pretty intense at times, and includes references to self-harm and suicidal thoughts. If you don’t want to hear or read the most intense section, you can skip the chapter called ‘Disaster’. And if you’d rather avoid almost all of these references, you could skip straight to the chapter called ‘80,000 Hours’.

We’ve collected a large list of high quality resources for overcoming mental health problems in our links section.

If you’re feeling suicidal or have thoughts of harming yourself right now, there are suicide hotlines at National Suicide Prevention Lifeline in the US (800-273-8255) and Samaritans in the UK (116 123). You may also want to find and save a number for a local service where possible.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Sofia Davis-Fogel

27 December 2023 · 2h 51min

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

OpenAI says its mission is to build AGI — an AI system that is better than human beings at everything. Should the world trust them to do that safely?

That’s the central theme of today’s episode with Nathan Labenz — entrepreneur, AI scout, and host of The Cognitive Revolution podcast.

Links to learn more, video, highlights, and full transcript.

Nathan saw the AI revolution coming years ago, and, astonished by the research he was seeing, set aside his role as CEO of Waymark and made it his full-time job to understand AI capabilities across every domain. He has been obsessively tracking the AI world since — including joining OpenAI’s “red team” that probed GPT-4 to find ways it could be abused, long before it was public.

Whether OpenAI was taking AI safety seriously enough became a topic of dinner table conversation around the world after the shocking firing and reinstatement of Sam Altman as CEO last month.

Nathan’s view: it’s complicated. Discussion of this topic has often been heated, polarising, and personal. But Nathan wants to avoid that and simply lay out, in a way that is impartial and fair to everyone involved, what OpenAI has done right and how it could do better in his view.

When he started on the GPT-4 red team, the model would do anything from diagnose a skin condition to plan a terrorist attack without the slightest reservation or objection. When later shown a “Safety” version of GPT-4 that was almost the same, he approached a member of OpenAI’s board to share his concerns and tell them they really needed to try out GPT-4 for themselves and form an opinion.

In today’s episode, we share this story as Nathan told it on his own show, The Cognitive Revolution, which he did in the hope that it would provide useful background to understanding the OpenAI board’s reservations about Sam Altman, which to this day have not been laid out in any detail.

But while he feared throughout 2022 that OpenAI and Sam Altman didn’t understand the power and risk of their own system, he has since been repeatedly impressed, and came to think of OpenAI as among the better companies that could hypothetically be working to build AGI.

Their efforts to make GPT-4 safe turned out to be much larger and more successful than Nathan was seeing. Sam Altman and other leaders at OpenAI seem to sincerely believe they’re playing with fire, and take the threat posed by their work very seriously. With the benefit of hindsight, Nathan suspects OpenAI’s decision to release GPT-4 when it did was for the best.

On top of that, OpenAI has been among the most sane and sophisticated voices advocating for AI regulations that would target just the most powerful AI systems — the type they themselves are building — and that could make a real difference. They’ve also invested major resources into new ‘Superalignment’ and ‘Preparedness’ teams, while avoiding using competition with China as an excuse for recklessness.

At the same time, it’s very hard to know whether it’s all enough. The challenge of making an AGI safe and beneficial may require much more than they hope or have bargained for. Given that, Nathan poses the question of whether it makes sense to try to build a fully general AGI that can outclass humans in every domain at the first opportunity. Maybe in the short term, we should focus on harvesting the enormous possible economic and humanitarian benefits of narrow applied AI models, and wait until we not only have a way to build AGI, but a good way to build AGI — an AGI that we’re confident we want, which we can prove will remain safe as its capabilities get ever greater.

By threatening to follow Sam Altman to Microsoft before his reinstatement as OpenAI CEO, OpenAI’s research team has proven they have enormous influence over the direction of the company. If they put their minds to it, they’re also better placed than maybe anyone in the world to assess if the company’s strategy is on the right track and serving the interests of humanity as a whole. Nathan concludes that this power and insight only adds to the enormous weight of responsibility already resting on their shoulders.

In today’s extensive conversation, Nathan and host Rob Wiblin discuss not only all of the above, but also:

  • Speculation about the OpenAI boardroom drama with Sam Altman, given Nathan’s interactions with the board when he raised concerns from his red teaming efforts.
  • Which AI applications we should be urgently rolling out, with less worry about safety.
  • Whether governance issues at OpenAI demonstrate AI research can only be slowed by governments.
  • Whether AI capabilities are advancing faster than safety efforts and controls.
  • The costs and benefits of releasing powerful models like GPT-4.
  • Nathan’s view on the game theory of AI arms races and China.
  • Whether it’s worth taking some risk with AI for huge potential upside.
  • The need for more “AI scouts” to understand and communicate AI progress.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

22 December 2023 · 3h 46min

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.

This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Links to learn more, summary, and full transcript.

Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even when that’s oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?

With host Robert Wiblin, Lucia answers all those questions and more:

  • Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
  • How bad lead poisoning is in rich countries.
  • Why lead is still in aeroplane fuel.
  • How lead got put straight in food in Bangladesh, and a handful of people got it removed.
  • Why the enormous damage done by lead mostly goes unnoticed.
  • The other major sources of lead exposure aside from paint.
  • Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program.
  • Why Lucia pledges 10% of her income to cost-effective charities.
  • Lucia’s take on why GiveWell didn’t support LEEP earlier on.
  • How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
  • Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

14 December 2023 · 2h 14min

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you.

"So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, 'Oh my god, mind reading is here. Now what?'" — Nita Farahany

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

Links to learn more, summary, and full transcript.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

7 December 2023 · 2h

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

Links to learn more, highlights, and full transcript.

They cover:

  • The non-negligible chance that AI systems will be sentient by 2030.
  • What AI systems might want and need, and how that might affect our moral concepts.
  • What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
  • What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
  • What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
  • The repugnant conclusion and the rebugnant conclusion.
  • The experience of trying to build the field of AI welfare.
  • What improv comedy can teach us about doing good in the world.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:01:00)
  • The interview begins (00:02:45)
  • We should extend moral consideration to some AI systems by 2030 (00:06:41)
  • A one-in-1,000 threshold (00:15:23)
  • What does moral consideration mean? (00:24:36)
  • Hitting the threshold by 2030 (00:27:38)
  • Is the threshold too permissive? (00:38:24)
  • The Rebugnant Conclusion (00:41:00)
  • A world where AI experiences could matter more than human experiences (00:52:33)
  • Should we just accept this argument? (00:55:13)
  • Searching for positive-sum solutions (01:05:41)
  • Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48)
  • Discourse and messaging (01:27:17)
  • What will AI systems want and need? (01:31:17)
  • Copies of digital minds (01:33:20)
  • Connected minds (01:40:26)
  • Psychological connectedness and continuity (01:49:58)
  • Assigning responsibility to connected minds (01:58:41)
  • Counting the wellbeing of connected minds (02:02:36)
  • Legal personhood and political citizenship (02:09:49)
  • Building the field of AI welfare (02:24:03)
  • What we can learn from improv comedy (02:29:29)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

22 November 2023 · 2h 38min

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

  • That it overwhelmingly provides us with information we can't usefully act on.
  • That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
  • That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
  • That it's highly addictive, for many people chewing up 10% or more of their waking hours.
  • That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
  • And plenty more.

Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:

  • Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
  • Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
  • How to allocate resources in space.
  • Bryan's experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

17 November 2023 · 2h 23min

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time is that, really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:

  • The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting.
  • The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years.
  • The time the Soviets had a major anthrax leak, and then hid it for over a decade.
  • The 1977 influenza pandemic caused by a vaccine trial gone wrong in China.
  • The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK.
  • Ways we could get more reliable oversight and accountability for these labs.
  • And the investigative work Alison’s most proud of.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:13)
  • Investigating leaks at the CDC (00:05:16)
  • High-profile CDC accidents (00:16:13)
  • Dugway live anthrax accidents (00:32:08)
  • Soviet anthrax leak (00:44:41)
  • The 1977 influenza pandemic (00:53:43)
  • The last death from smallpox (00:59:27)
  • How common are lab leaks? (01:09:05)
  • Improving the regulation of dangerous biological research (01:18:36)
  • Potential solutions (01:34:55)
  • The investigative work Alison’s most proud of (01:40:33)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

9 November 2023 · 1h 46min

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:

  • How bad air pollution is for our health and life expectancy.
  • The different kinds of harm that particulate pollution causes.
  • The strength of the evidence that it damages our brain function and reduces our productivity.
  • Whether it was a mistake to switch our attention to climate change and away from air pollution.
  • Whether most listeners to this show should have an air purifier running in their house right now.
  • Where air pollution in India is worst and why, and whether it's going up or down.
  • Where most air pollution comes from.
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated.
  • Why indoor air pollution packs an enormous punch.
  • The politics of air pollution in India.
  • How India ended up spending a lot of money on outdoor air purifiers.
  • The challenges faced by foreign philanthropists in India.
  • Why Santosh has made the grants he has so far.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:07)
  • How bad is air pollution? (00:03:41)
  • Quantifying the scale of the damage (00:15:47)
  • Effects on cognitive performance and mood (00:24:19)
  • How do we really know the harms are as big as is claimed? (00:27:05)
  • Misconceptions about air pollution (00:36:56)
  • Why don’t environmental advocacy groups focus on air pollution? (00:42:22)
  • How listeners should approach air pollution in their own lives (00:46:58)
  • How bad is air pollution in India in particular? (00:54:23)
  • The trend in India over the last few decades (01:12:33)
  • Why aren’t people able to fix these problems? (01:24:17)
  • Household waste burning (01:35:06)
  • Vehicle emissions (01:42:10)
  • The role that courts have played in air pollution regulation in India (01:50:09)
  • Industrial emissions (01:57:10)
  • The political economy of air pollution in northern India (02:02:14)
  • Can philanthropists drive policy change? (02:13:42)
  • Santosh’s grants (02:29:45)
  • Examples of other countries that have managed to greatly reduce air pollution (02:45:44)
  • Career advice for listeners in India (02:51:11)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

1 November 2023 · 2h 57min
