#219 – Toby Ord on graphs AI companies would prefer you didn't (fully) understand

The era of making AI smarter just by making it bigger is ending. But that doesn’t mean progress is slowing down — far from it. AI models continue to get much more powerful, just using very different methods, and those underlying technical changes force a big rethink of what coming years will look like.

Toby Ord — Oxford philosopher and bestselling author of The Precipice — has been tracking these shifts and mapping out the implications for both governments and our lives.

Links to learn more, video, highlights, and full transcript: https://80k.info/to25

As he explains, until recently anyone could access the best AI in the world “for less than the price of a can of Coke.” But unfortunately, that’s over.

What changed? AI companies first made models smarter by throwing a million times as much computing power at them during training, to make them better at predicting the next word. But with high-quality data drying up, that approach petered out in 2024.

So they pivoted to something radically different: instead of training smarter models, they’re giving existing models dramatically more time to think — leading to the rise of the “reasoning models” that are at the frontier today.

The results are impressive, but this extra computing time comes at a cost: OpenAI’s o3 reasoning model achieved stunning results on a famous AI test by writing an Encyclopedia Britannica’s worth of reasoning to solve individual problems, at a cost of over $1,000 per question.

This isn’t just technical trivia: if this improvement method sticks, it will change much about how the AI revolution plays out, starting with the fact that we can expect the rich and powerful to get access to the best AI models well before the rest of us.

Toby and host Rob discuss the implications of all that, plus the return of reinforcement learning (and resulting increase in deception), and Toby's commitment to clarifying the misleading graphs coming out of AI companies — to separate the snake oil and fads from the reality of what's likely a "transformative moment in human history."

Recorded on May 23, 2025.

Chapters:

  • Cold open (00:00:00)
  • Toby Ord is back — for a 4th time! (00:01:20)
  • Everything has changed (and changed again) since 2020 (00:01:37)
  • Is x-risk up or down? (00:07:47)
  • The new scaling era: compute at inference (00:09:12)
  • Inference scaling means less concentration (00:31:21)
  • Will rich people get access to AGI first? Will the rest of us even know? (00:35:11)
  • The new regime makes 'compute governance' harder (00:41:08)
  • How 'IDA' might let AI blast past human level — or not (00:50:14)
  • Reinforcement learning brings back 'reward hacking' agents (01:04:56)
  • Will we get warning shots? Will they even help? (01:14:41)
  • The scaling paradox (01:22:09)
  • Misleading charts from AI companies (01:30:55)
  • Policy debates should dream much bigger (01:43:04)
  • Scientific moratoriums have worked before (01:56:04)
  • Might AI 'go rogue' early on? (02:13:16)
  • Lamps are regulated much more than AI (02:20:55)
  • Companies made a strategic error shooting down SB 1047 (02:29:57)
  • Companies should build in emergency brakes for their AI (02:35:49)
  • Toby's bottom lines (02:44:32)


Tell us what you thought! https://forms.gle/enUSk8HXiCrqSA9J8

Video editing: Simon Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Transcriptions and web: Katy Moore

Episodes (293)

#68 - Will MacAskill on the paralysis argument, whether we're at the hinge of history, & his new priorities

You’re given a box with a set of dice in it. If you roll an even number, a person's life is saved. If you roll an odd number, someone else will die. Each time you shake the box you get $10. Should you do it?

A committed consequentialist might say, "Sure! Free money!" But most will think it obvious that you should say no. You've only gotten a tiny benefit, in exchange for moral responsibility over whether other people live or die.

And yet, according to today’s return guest, philosophy Prof Will MacAskill, in a real sense we’re shaking this box every time we leave the house, and those who think shaking the box is wrong should probably also be shutting themselves indoors and minimising their interactions with others.

  • Links to learn more, summary and full transcript.
  • Job opportunities at the Global Priorities Institute.

To see this, imagine you’re deciding whether to redeem a coupon for a free movie. If you go, you’ll need to drive to the cinema. By affecting traffic throughout the city, you’ll have slightly impacted the schedules of thousands or tens of thousands of people. The average life is about 30,000 days, and over the course of a life the average person will have about two children. So — if you’ve impacted at least 7,500 days — then, statistically speaking, you've probably influenced the exact timing of a conception event.

With 200 million sperm in the running each time, changing the moment of copulation, even by a fraction of a second, will almost certainly mean you've changed the identity of a future person. That different child will now impact all sorts of things as they go about their life, including future conception events. And then those new people will impact further future conception events, and so on. After 100 or maybe 200 years, basically everybody alive will be a different person because you went to the movies.

As a result, you’ll have changed when many people die. Take car crashes as one example: about 1.3% of people die in car crashes. Over that century, as the identities of everyone change as a result of your action, many of the 'new' people will cause car crashes that wouldn't have occurred in their absence, including crashes that prematurely kill people alive today. Of course, in expectation, exactly the same number of people will have been saved from car crashes, and will die later than they would have otherwise.

So, if you go for this drive, you’ll save hundreds of people from premature death, and cause the early death of an equal number of others. But you’ll get to see a free movie, worth $10. Should you do it?

This setup forms the basis of ‘the paralysis argument’, explored in one of Will’s recent papers. Because most 'non-consequentialists' endorse an act/omission distinction… post truncated due to character limit, finish reading the full explanation here.

So what's the best way to fix this strange conclusion? We discuss a few options, but the most promising might bring people a lot closer to full consequentialism than is immediately apparent.

In this episode Will and I also cover:

  • Are we, or are we not, living in the most influential time in history?
  • The culture of the effective altruism community
  • Will's new lower estimate of the risk of human extinction
  • Why Will is now less focused on AI
  • The differences between Americans and Brits
  • Why feeling guilty about characteristics you were born with is crazy
  • And plenty more.
Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:03)
  • The paralysis argument (00:15:42)
  • The case for strong longtermism (00:55:21)
  • Longtermism for risk-averse altruists (00:58:01)
  • Are we living in the most influential time in history? (01:14:37)
  • The risk of human extinction in the next hundred years (02:15:20)
  • Implications for the effective altruism community (02:50:03)
  • Culture of the effective altruism community (03:06:28)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

24 Jan 2020 · 3h 25min

#44 Classic episode - Paul Christiano on finding real solutions to the AI alignment problem

Rebroadcast: this episode was originally released in October 2018.

Paul Christiano is one of the smartest people I know. After our first session produced such great material, we decided to do a second recording, resulting in our longest interview so far. While challenging at times, I can strongly recommend listening — Paul works on AI himself and has an unusually well-thought-through view of how it will change the world.

This is now the top resource I'm going to refer people to if they're interested in positively shaping the development of AI and want to understand the problem better. Even though I'm familiar with Paul's writing, I felt I was learning a great deal and am now in a better position to make a difference to the world.

A few of the topics we cover are:

  • Why Paul expects AI to transform the world gradually rather than explosively, and what that would look like
  • Several concrete methods OpenAI is trying to develop to ensure AI systems do what we want even if they become more competent than us
  • Why AI systems will probably be granted legal and property rights
  • How an advanced AI that doesn't share human goals could still have moral value
  • Why machine learning might take over science research from humans before it can do most other tasks
  • Which decade we should expect human labour to become obsolete, and how this should affect your savings plan

  • Links to learn more, summary and full transcript.
  • Rohin Shah's AI alignment newsletter.

Here's a situation we all regularly confront: you want to answer a difficult question, but aren't quite smart or informed enough to figure it out for yourself. The good news is you have access to experts who *are* smart enough to figure it out. The bad news is that they disagree.

If given plenty of time — and enough arguments, counterarguments and counter-counter-arguments between all the experts — should you eventually be able to figure out which is correct? What if one expert were deliberately trying to mislead you? And should the expert with the correct view just tell the whole truth, or will competition force them to throw in persuasive lies in order to have a chance of winning you over? In other words: does 'debate', in principle, lead to truth?

According to Paul Christiano — researcher at the machine learning research lab OpenAI and legendary thinker in the effective altruism and rationality communities — this question is of more than mere philosophical interest. That's because 'debate' is a promising method of keeping artificial intelligence aligned with human goals, even if it becomes much more intelligent and sophisticated than we are.

It's a method OpenAI is actively trying to develop, because in the long term it wants to train AI systems to make decisions that are too complex for any human to grasp, but without the risks that arise from a complete loss of human oversight.

If AI-1 is free to choose any line of argument in order to attack the ideas of AI-2, and AI-2 always seems to successfully defend them, it suggests that every possible line of argument would have been unsuccessful. But does that mean that the ideas of AI-2 were actually right? It would be nice if the optimal strategy in debate were to be completely honest, provide good arguments, and respond to counterarguments in a valid way. But we don't know that's the case.

The 80,000 Hours Podcast is produced by Keiran Harris.

15 Jan 2020 · 3h 51min

#33 Classic episode - Anders Sandberg on cryonics, solar flares, and the annual odds of nuclear war

Rebroadcast: this episode was originally released in May 2018.

Joseph Stalin had a life-extension program dedicated to making himself immortal. What if he had succeeded? According to Bryan Caplan in episode #32, there’s an 80% chance that Stalin would still be ruling Russia today. Today’s guest disagrees.

Like Stalin he has eyes for his own immortality - including an insurance plan that will cover the cost of cryogenically freezing himself after he dies - and thinks the technology to achieve it might be around the corner. Fortunately for humanity though, that guest is probably one of the nicest people on the planet: Dr Anders Sandberg of Oxford University.

Full transcript of the conversation, summary, and links to learn more.

The potential availability of technology to delay or even stop ageing means this disagreement matters, so he has been trying to model what would really happen if both the very best and the very worst people in the world could live forever - among many other questions.

Anders, who studies low-probability high-stakes risks and the impact of technological change at the Future of Humanity Institute, is the first guest to appear twice on the 80,000 Hours Podcast and might just be the most interesting academic at Oxford. His research interests include more or less everything, and bucking the academic trend towards intense specialization has earned him a devoted fan base.

Last time we asked him why we don’t see aliens, and how to most efficiently colonise the universe. In today’s episode we ask about Anders’ other recent papers, including:

  • Is it worth the money to freeze your body after death in the hope of future revival, like Anders has done?
  • How much is our perception of the risk of nuclear war biased by the fact that we wouldn’t be alive to think about it had one happened?
  • If biomedical research lets us slow down ageing, would culture stagnate under the crushing weight of centenarians?
  • What long-shot drugs can people take in their 70s to stave off death?
  • Can science extend human (waking) life by cutting our need to sleep?
  • How bad would it be if a solar flare took down the electricity grid? Could it happen?
  • If you’re a scientist and you discover something exciting but dangerous, when should you keep it a secret and when should you share it?
  • Will lifelike robots make us more inclined to dehumanise one another?

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: search for '80,000 Hours' in your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

8 Jan 2020 · 1h 25min

#17 Classic episode - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Rebroadcast: this episode was originally released in January 2018.

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

  • Full transcript, key points & links to articles discussed in the show.

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars, it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism (EA) community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how one should reason and act if you don't know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Get this episode by subscribing: type '80,000 Hours' into your podcasting app.

The 80,000 Hours Podcast is produced by Keiran Harris.

31 Dec 2019 · 1h 52min

#67 – David Chalmers on the nature and ethics of consciousness

What is it like to be you right now? You're seeing this text on the screen, smelling the coffee next to you, and feeling the warmth of the cup. There’s a lot going on in your head — your conscious experience.

Now imagine beings that are identical to humans, but for one thing: they lack this conscious experience. If you spill your coffee on them, they’ll jump like anyone else, but inside they'll feel no pain and have no thoughts: the lights are off.

The concept of these so-called 'philosophical zombies' was popularised by today’s guest — celebrated philosophy professor David Chalmers — in order to explore the nature of consciousness. In a forthcoming book he poses a classic 'trolley problem':

"Suppose you have a conscious human on one train track, and five non-conscious humanoid zombies on another. If you do nothing, a trolley will hit and kill the conscious human. If you flip a switch to redirect the trolley, you can save the conscious human, but in so doing kill the five non-conscious humanoid zombies. What should you do?"

Many people think you should divert the trolley, precisely because the lack of conscious experience means the moral status of the zombies is much reduced or absent entirely.

So, which features of consciousness qualify someone for moral consideration? One view is that the only conscious states that matter are those that have a positive or negative quality, like pleasure and suffering. But Dave’s intuitions are quite different.

  • Links to learn more, summary and full transcript.
  • Advice on how to read our advice.
  • Anonymous answers on: bad habits, risk and failure.

Instead of zombies he asks us to consider 'Vulcans', who can see and hear and reflect on the world around them, but are incapable of experiencing pleasure or pain. Now imagine a further trolley problem: suppose you have a normal human on one track, and five Vulcans on the other. Should you divert the trolley to kill the five Vulcans in order to save the human?

Dave firmly believes the answer is no, and if he's right, pleasure and suffering can’t be the only things required for moral status. The fact that Vulcans are conscious in other ways must matter in itself.

Dave is one of the world's top experts on the philosophy of consciousness. He helped return the question 'what is consciousness?' to the centre stage of philosophy with his 1996 book 'The Conscious Mind', which argued against then-dominant materialist theories of consciousness.

This comprehensive interview, at over four hours long, outlines each contemporary theory of consciousness, what they have going for them, and their likely ethical implications. Those theories span the full range from illusionism, the idea that consciousness is in some sense an 'illusion', to panpsychism, according to which it's a fundamental physical property present in all matter.

These questions are absolutely central for anyone who wants to build a positive future. If insects were conscious, our treatment of them could already be an atrocity. If computer simulations of people will one day be conscious, how will we know, and how should we treat them? And what is it about consciousness that matters, if anything?

Dave Chalmers is probably the best person on the planet to ask these questions, and Rob & Arden cover this and much more over the course of what is both our longest ever episode, and our personal favourite so far.
Chapters:

  • Rob's intro (00:00:00)
  • The interview begins (00:02:11)
  • Philosopher’s survey (00:06:37)
  • Free will (00:13:37)
  • Survey correlations (00:20:06)
  • Progress in philosophy (00:35:01)
  • Simulations (00:51:30)
  • The problem of consciousness (01:13:01)
  • Dualism and panpsychism (01:26:52)
  • Is consciousness an illusion? (01:34:52)
  • Idealism (01:43:13)
  • Integrated information theory (01:51:08)
  • Moral status and consciousness (02:06:10)
  • Higher order views of consciousness (02:11:46)
  • The views of philosophers on eating meat (02:20:23)
  • Artificial consciousness (02:34:25)
  • The zombie and Vulcan trolley problems (02:38:43)
  • Illusionism and moral status (02:56:12)
  • Panpsychism and moral status (03:06:19)
  • Mind uploading (03:15:58)
  • Personal identity (03:22:51)
  • Virtual reality and the experience machine (03:28:56)
  • Singularity (03:42:44)
  • AI alignment (04:07:39)
  • Careers in academia (04:23:37)
  • Having fun disagreements (04:32:54)
  • Rob’s outro (04:42:14)

Producer: Keiran Harris.

16 Dec 2019 · 4h 41min

#66 – Peter Singer on being provocative, effective altruism, & how his moral views have changed

In 1989, the professor of moral philosophy Peter Singer was all over the news for his inflammatory opinions about abortion. But the controversy stemmed from Practical Ethics — a book he’d actually released way back in 1979. It took a German translation ten years on for protests to kick off.

According to Singer, he honestly didn’t expect this view to be as provocative as it became, and he certainly wasn’t aiming to stir up trouble and get attention. But after the protests and the increasing coverage of his work in German media, the previously flat sales of Practical Ethics shot up. And the negative attention he received ultimately led him to a weekly opinion column in The New York Times.

  • Singer's book The Life You Can Save has just been re-released as a 10th anniversary edition, available as a free e-book and audiobook, read by a range of celebrities. Get it here.
  • Links to learn more, summary and full transcript.

Singer points out that as a result of this increased attention, many more people also read the rest of the book — which includes chapters with a real ability to do good, covering global poverty, animal ethics, and other important topics.

So should people actively try to court controversy with one view, in order to gain attention for another more important one? Perhaps sometimes, but controversy can also just have bad consequences. His critics may view him as someone who says whatever he thinks, hang the consequences, but Singer says that he gives public relations considerations plenty of thought.

One example is that Singer opposes efforts to advocate for open borders. Not because he thinks a world with freedom of movement is a bad idea per se, but rather because it may help elect leaders like Mr Trump.

Another is the focus of the effective altruism community. Singer certainly respects those who are focused on improving the long-term future of humanity, and thinks this is important work that should continue. But he’s troubled by the possibility of extinction risks becoming the public face of the movement. He suspects there's a much narrower group of people who are likely to respond to that kind of appeal, compared to those who are drawn to work on global poverty or preventing animal suffering. And that to really transform philanthropy and culture more generally, the effective altruism community needs to focus on smaller donors with more conventional concerns.

Rob is joined in this interview by Arden Koehler, the newest addition to the 80,000 Hours team, both for the interview and a post-episode discussion. They only had an hour with Peter, but also cover:

  • What does he think are the most plausible alternatives to consequentialism?
  • Is it more humane to eat wild-caught animals than farmed animals?
  • The re-release of The Life You Can Save
  • His most and least strategic career decisions
  • Population ethics, and other arguments for and against prioritising the long-term future
  • What led to his changing his mind on significant questions in moral philosophy?
  • And more.

In the post-episode discussion, Rob and Arden continue talking about:

  • The pros and cons of keeping EA as one big movement
  • Singer’s thoughts on immigration
  • And consequentialism with side constraints.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.
Illustration of Singer: Matthias Seifarth.

5 Dec 2019 · 2h 1min

#65 – Ambassador Bonnie Jenkins on 8 years pursuing WMD arms control, & diversity in diplomacy

"…it started when the Soviet Union fell apart and there was a real desire to ensure security of nuclear materials and pathogens, and that scientists with [WMD-related] knowledge could get paid so that they wouldn't go to countries and sell that knowledge."  Ambassador Bonnie Jenkins has had an incredible career in diplomacy and global security.  Today she’s a nonresident senior fellow at the Brookings Institution and president of Global Connections Empowering Global Change, where she works on global health, infectious disease and defence innovation. In 2017 she founded her own nonprofit, the Women of Color Advancing Peace, Security and Conflict Transformation (WCAPS). But in this interview we focus on her time as Ambassador at the U.S. Department of State under the Obama administration, where she worked for eight years as Coordinator for Threat Reduction Programs in the Bureau of International Security and Nonproliferation. In that role, Bonnie coordinated the Department of State’s work to prevent weapons of mass destruction (WMD) terrorism with programmes funded by other U.S. departments and agencies, and as well as other countries. • Links to learn more, summary and full transcript. • Talks from over 100 other speakers at EA Global. • Having trouble with podcast 'chapters' on this episode? Please report any problems to keiran at 80000hours dot org. What was it like to be an ambassador focusing on an issue, rather than an ambassador of a country? Bonnie says the travel was exhausting. She could find herself in Africa one week, and Indonesia the next. She’d meet with folks going to New York for meetings at the UN one day, then hold her own meetings at the White House the next. Each event would have a distinct purpose. For one, she’d travel to Germany as a US Representative, talking about why the two countries should extend their partnership. For another, she could visit the Food and Agriculture Organization to talk about why they need to think more about biosecurity issues. No day was like the previous one. Bonnie was also a leading U.S. official in the launch and implementation of the Global Health Security Agenda discussed at length in episode 27. Before returning to government in 2009, Bonnie served as program officer for U.S. Foreign and Security Policy at the Ford Foundation. She also served as counsel on the 9/11 Commission. Bonnie was the lead staff member conducting research, interviews, and preparing commission reports on counterterrorism policies in the Office of the Secretary of Defense and on U.S. military plans targeting al-Qaeda before 9/11.  And as if that all weren't curious enough four years ago Bonnie decided to go vegan. We talk about her work so far as well as:  • How listeners can start a career like hers  • Mistakes made by Mr Obama and Mr Trump • Networking, the value of attention, and being a vegan in DC  • And 2020 Presidential candidates.Chapters:Rob’s intro (00:00:00)The interview begins (00:01:54)What is Bonnie working on at the moment? (00:02:45)Bonnie’s time at the Department of State (00:04:08)The history of Cooperative Threat Reduction work (00:08:48)Biggest uncontrolled nuclear material threats today (00:11:36)Biggest security issues in the world today (00:13:57)The Biological Weapons Convention (00:17:52)Projects Bonnie worked on that she’s particularly proud of (00:20:55)The day to day life of an Ambassador on an issue (00:23:03)Biggest misunderstandings of the field (00:25:41)How do we get more done in this area? 
(00:29:48)The Global Health Security Agenda (00:32:52)The implications for countries who give up WMDs (00:34:55)The fallout from a change in government (00:38:40)Listener submitted questions (00:39:39)How might listeners be able to contribute to solving these problems with their own careers? (00:54:55)Is Bonnie glad she went into the military early in her career? (01:06:25)Networking in DC (01:12:27)What are the downsides to pursuing a career like Bonnie’s? (01:15:27)Being a vegan in DC (01:16:47)Women of Color Advancing Peace, Security and Conflict Transformation (01:19:15)The value of attention in DC (01:28:25)Any ways WCAPS could accidentally make things worse? (01:30:08)Message for women of colour in the audience (01:33:05)TV shows relevant to Bonnie’s work (01:35:19)Candidates for 2020 (01:36:57) The 80,000 Hours Podcast is produced by Keiran Harris.

19 Nov 2019 · 1h 40min

#64 – Bruce Schneier on how insecure electronic voting could break the United States — and surveillance without tyranny

November 3 2020, 10:32PM: CNN, NBC, and FOX report that Donald Trump has narrowly won Florida, and with it, re-election.

November 3 2020, 11:46PM: The NY Times and Wall Street Journal report that some group has successfully hacked electronic voting systems across the country, including Florida. The malware has spread to tens of thousands of machines and deletes any record of its activity, so the returning officer of Florida concedes they actually have no idea who won the state — and don't see how they can figure it out.

What on Earth happens next?

Today’s guest — world-renowned computer security expert Bruce Schneier — thinks this scenario is plausible, and the ensuing chaos would sow so much distrust that half the country would never accept the election result. Unfortunately the US has no recovery system for a situation like this, unlike parliamentary democracies, which can just rerun the election a few weeks later.

  • Links to learn more, summary and full transcript.
  • Motivating article: Information security careers for global catastrophic risk reduction by Zabel and Muehlhauser.

The Constitution says the state legislature decides, and they can do so however they like; one tied local election in Texas was settled by playing a hand of poker.

Elections serve two purposes. The first is the obvious one: to pick a winner. The second, but equally important, is to convince the loser to go along with it — which is why hacks often focus on convincing the losing side that the election wasn't fair.

Schneier thinks there's a need to agree on how this situation should be handled before something like it happens and America falls into severe infighting, as everyone tries to turn the situation to their political advantage. And to fix our voting systems, we urgently need two things: a voter-verifiable paper ballot and risk-limiting audits.

According to Schneier, computer security experts look at current electronic voting machines and can barely believe their eyes. But voting machine designers never understand the security weaknesses of what they're designing, because they have a bureaucrat's rather than a hacker's mindset. The ideal computer security expert walks into a shop and thinks, "You know, here's how I would shoplift." They automatically see where the cameras are, whether there are alarms, and where the security guards aren't watching.

In this episode we discuss this hacker mindset, and how to use a career in security to protect democracy and guard dangerous secrets from people who shouldn't get access to them.

We also cover:

  • How can we have surveillance of dangerous actors, without falling back into authoritarianism?
  • When, if ever, should information about weaknesses in society's security be kept secret?
  • How secure are nuclear weapons systems around the world?
  • How worried should we be about deepfakes?
  • Schneier’s critiques of blockchain technology
  • How technologists should be vital in shaping policy
  • What are the most consequential computer security problems today?
  • Could a career in information security be very useful for reducing global catastrophic risks?
  • And more.

Chapters:

  • Rob’s intro (00:00:00)
  • Bruce’s Codex talk (00:02:23)
  • The interview begins (00:15:42)
  • What is Bruce working on at the moment? (00:16:35)
  • How technologists could be vital in shaping policy (00:18:52)
  • Most consequential computer security problems today (00:24:12)
  • How secure are nuclear weapons systems around the world? (00:34:41)
  • Stuxnet and NotPetya (00:42:29)
  • Messing with democracy (00:44:44)
  • How worried should we be about deepfakes? (00:50:02)
  • The similarities between hacking computers and potentially hacking biology in the future (00:55:08)
  • Bruce’s critiques of crypto (01:00:05)
  • What are some of the most widely held but incorrect beliefs among computer security people? (01:03:04)
  • The hacking mindset (01:05:35)
  • Voting machines (01:09:22)
  • How secretive should people be about potentially harmful information? (01:16:48)
  • Could a career in information security be very useful for reducing global catastrophic risks? (01:21:46)
  • How to develop the skills needed in computer security (01:33:44)
  • Ubiquitous surveillance (01:52:46)
  • Why is Bruce optimistic? (02:05:28)
  • Rob’s outro (02:06:43)

The 80,000 Hours Podcast is produced by Keiran Harris.

25 Oct 2019 · 2h 11min
