
#16 - Michelle Hutchinson on global priorities research & shaping the ideas of intellectuals
In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English-speaking world. How did this happen? In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact that building a strong intellectual base within universities can have.

Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.

Link to the full blog post about this episode including transcript and links to learn more.

Its research agenda includes questions like:

* How do we compare the good done by focussing on really different types of causes?
* How does saving lives actually affect the world relative to other things we could do?
* What are the biggest wins governments should be focussed on getting?

Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health.

We discuss:

* What is global priorities research and why does it matter?
* How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?
* Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?
* How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?
* Is it harder to fundraise for a new institute, or hire the right people?
* Have other social movements benefitted from having a prominent academic arm?
* How can people prepare themselves to get research roles at a place like GPI?
* Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead?
* What are the odds of the Institute’s work having an effect on the real world?

Get free, one-on-one career advice

We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.
22 Dec 2017 · 55min

#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise
Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can’t, and why – and developed methods that allow all of us to be better at predicting the future. After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work – Superforecasting – was a media sensation in 2015.

Full transcript, brief summary, apply for coaching and links to learn more.

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information.

Today he’s working to develop the best forecasting process ever, by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up for and participate in.

We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable:

* Should people who want to be right just adopt the views of experts rather than apply their own judgement?
* Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
* Should I keep my political views secret, so it will be easier to change them later?
* How can listeners contribute to his latest cutting-edge research?
* What do we know about our accuracy at predicting low-probability, high-impact disasters?
* Does his research provide an intellectual basis for populist political movements?
* Was the Iraq War caused by bad politics, or bad intelligence methods?
* What can we learn about forecasting from the 2016 election?
* Can experience help people avoid overconfidence and underconfidence?
* When does an AI easily beat human judgement?
* Could more accurate forecasting methods make the world more dangerous?
* How much does demographic diversity line up with cognitive diversity?
* What are the odds we’ll go to war with China?
* Should we let prediction tournaments run most of the government?

Listen to it.

Get free, one-on-one career advice.

Want to work on important social science research like Tetlock does? We’ve helped hundreds of people compare their options and get introductions. Find out if our coaching can help you.
20 Nov 2017 · 1h 24min

#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse
What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?

That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nuñez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. They’ve been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last 3 consecutive years.

Blog post about the episode, including links and full transcript.

A related previous episode, strongly recommended: Lewis Bollard on how to end factory farming as soon as possible.

In addition to undercover investigations, AE has also designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage – in a room designed to kill them – and can’t just look away. How big an impact is this having on users?

Sharon Nuñez and Jose Valle also tackle:

* How do they track their goals and metrics week to week?
* How much does an undercover investigation cost?
* Why don’t people donate more to factory farmed animals, given that they’re the vast majority of animals harmed directly by humans?
* How risky is it to attempt to build a career in animal advocacy?
* What led to a change in their focus from bullfighting in Spain to animal farming?
* How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?
* Has their very rapid growth been difficult to handle?
* What should our listeners study or do if they want to work in this area?
* How can we get across the message that horrific cases are a feature – not a bug – of factory farming?
* Do the owners or workers of factory farms ever express shame at what they do?
13 Nov 2017 · 1h 25min

#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results
In both rich and poor countries, government policy is often based on no evidence at all, and many programs don’t work. This has particularly harsh effects on the global poor – in some countries governments only spend $100 per citizen per year, so they can’t afford to waste a single dollar.

Enter MIT’s Poverty Action Lab (J-PAL). Since 2003 they’ve conducted experiments to figure out what policies actually help recipients, and then tried to get them implemented by governments and non-profits.

Claire Walsh leads J-PAL’s Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking.

Summary, links to career opportunities and topics discussed in the show.

We discussed (her views only, not J-PAL’s):

* How can they get evidence-backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence?
* Is evidence-based policy an evidence-based strategy itself?
* Which policies does she think would have a particularly large impact on human welfare relative to their cost?
* How did she come to lead one of J-PAL’s departments at 29?
* How do you evaluate the effectiveness of energy and environment programs (Walsh’s area of expertise), and what are the standout approaches in that area?
* 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived, so are we making a mistake?
* Other than J-PAL, what are the best places to work in development? What are the best subjects to study? Where can you go to network to break into the sector?
* Is living in poverty as bad as we think?

And plenty of other things besides.

We haven’t run an RCT to test whether this episode will actually help your career, but I suggest you listen anyway. Trust my intuition on this one.
31 Oct 2017 · 52min

#12 - Beth Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.
“When you’re in the middle of a crisis and you have to ask for money, you’re already too late.”

That’s Dr Beth Cameron, who leads Global Biological Policy and Programs at the Nuclear Threat Initiative.

Beth should know. She has years of experience preparing for and fighting the diseases of our nightmares, on the White House Ebola Taskforce, on the National Security Council staff, and as the Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs.

Summary, list of career opportunities, extra links to learn more and coaching application.

Unfortunately, the countries of the world aren’t prepared for a crisis – and like children crowded into daycare, there’s a good chance something will make us all sick at once. During past pandemics countries have dragged their feet over who will pay to contain them, or struggled to move people and supplies where they needed to be. At the same time, advanced biotechnology threatens to make it possible for terrorists to bring back smallpox – or create something even worse.

In this interview we look at the current state of play in disease control, what needs to change, and how you can build the career capital necessary to make those changes yourself. That includes:

* What and where to study, and where to begin a career in pandemic preparedness. Below you’ll find a lengthy list of people and places mentioned in the interview, and others we’ve had recommended to us.
* How the Nuclear Threat Initiative, with just 50 people, collaborates with governments around the world to reduce the risk of nuclear or biological catastrophes, and whether they might want to hire you.
* The best strategy for containing pandemics.
* Why we lurch from panic, to neglect, to panic again when it comes to protecting ourselves from contagious diseases.
* Current reform efforts within the World Health Organisation, and attempts to prepare partial vaccines ahead of time.
* Which global health security groups most impress Beth, and what they’re doing.
* What new technologies could be invented to make us safer.
* Whether it’s possible to help solve the problem through mass advocacy.
* Much more besides.

Get free, one-on-one career advice to improve biosecurity

Considering a relevant grad program like a biology PhD, medicine, or security studies? Able to apply for a relevant job already? We’ve helped dozens of people plan their careers to work on pandemic preparedness and put them in touch with mentors. If you want to work on the problem discussed in this episode, you should apply for coaching.
25 Oct 2017 · 1h 45min

#11 - Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm
Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research ten-fold? Do most startups improve the world, or make it worse?

If you’re interested in these questions, this interview is for you.

Click for a full transcript, links discussed in the show, etc.

A scientist, entrepreneur, writer and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include:

* Rapid public opinion surveys to find out what most people actually think about animal consciousness, farm animal welfare, the impact of developing world charities and the likelihood of extinction by various different means;
* Tools to enable social science research to be run en masse very cheaply;
* ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making;
* Ways to transform data analysis methods to ensure that papers only show true findings;
* Innovative research methods;
* Ways to decide which research projects are actually worth pursuing.

In this interview, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more!

Get free, one-on-one career advice

We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
17 Oct 2017 · 1h 29min

#10 - Nick Beckstead on how to spend billions of dollars preventing human extinction
What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project – people like Dr Nick Beckstead.

Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

Full transcript, coaching application form, overview of the conversation, and links to resources discussed in the episode.

This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how to best improve the world, including:

* Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
* Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
* What are the greatest risks to human civilisation?
* To stop malaria, is it more cost-effective to use technology to eliminate mosquitos than to distribute bed nets?
* Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?
* What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?
* Should we expect the future to be better if the economy grows more quickly – or more slowly?

Get free, one-on-one career advice

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
11 Oct 2017 · 1h 51min

#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it
Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.

Full transcript, coaching application form, overview of the conversation, and extra resources to learn more.

In this episode of the 80,000 Hours Podcast Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up.

Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.

We dive into:

* Whether the poor security of computer systems poses a catastrophic risk for the world. Could all our essential services be taken down at once? And if so, what can be done about it?
* Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?
* How Christine came up with the term ‘open source software’ (and why someone else had to propose it).
* Will AIs designed for wide-scale automated hacking make computers more or less secure?
* Would it be good to radically extend human lifespan? Is it sensible to cryogenically freeze yourself in the hope of being resurrected in the future?
* Could atomically precise manufacturing (nanotechnology) really work? Why was it initially so controversial and why did people stop worrying about it?
* Should people who try to do good in their careers work long hours and take low salaries? Or should they take care of themselves first of all?
* How she thinks the effective altruism community resembles the scene she was involved with when she was young, and where it might be going wrong.

Get free, one-on-one career advice

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.
4 Oct 2017 · 1h 45min