Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they could take it.”

This isn’t paranoid speculation. It’s the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they’re not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.

Full transcript, highlights, and links to learn more: https://80k.info/dk

Daniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today’s AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.

Daniel’s median timeline? 2029. But he’s genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.

When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they’re being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.

But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line. Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we’re probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.
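As a back-of-the-envelope illustration of that doubling trend (the one-hour starting point and steady six-month doubling are simplifying assumptions; METR's actual measurements are more nuanced):

```python
import math

# Assumed, illustrative numbers: task length AI can reliably complete
# starts at 1 hour and doubles every 6 months.
start_hours = 1.0
doubling_months = 6

def task_length_after(months: float) -> float:
    """Task length (in hours) after `months` of steady doubling."""
    return start_hours * 2 ** (months / doubling_months)

# Two more years of the trend: 4 doublings, so 16 hours (roughly two working days).
print(task_length_after(24))

# Months of steady doubling needed to reach a month-long task (~170 working hours):
target_hours = 170.0
months_needed = doubling_months * math.log2(target_hours / start_hours)
print(round(months_needed))  # ~44 months on these assumed numbers
```

On these toy numbers, month-long tasks arrive after roughly three and a half to four years of trend continuation, which is why Daniel treats the straight line on METR's chart as such a load-bearing piece of evidence.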

At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That’s when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.

Daniel paints a vivid picture: imagine transforming all car factories (which share many components with robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine today’s frontier robots hooked up to a data centre, with superintelligences directing the robots’ movements to weld, screw, and build. An intermediate step might even involve unskilled human workers being coached through construction tasks by superintelligences via their phones.

There’s no reason an effort like this couldn’t work in principle. And there would be enormous pressure to go in this direction: whoever builds a superintelligence-powered robot economy first will gain unheard-of economic and military advantages.

From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.

But Daniel has a better future in mind — one he puts roughly 25–30% odds that humanity will achieve. This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.

Right now, nobody knows how to specify what values those minds will have. We haven’t solved alignment. And we might only have a few more years to figure it out.

Daniel and host Luisa Rodriguez dive deep into these stakes in today’s interview.

What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5A

This episode was recorded on September 9, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Daniel Kokotajlo? (00:00:37)
  • Video: We’re Not Ready for Superintelligence (00:01:31)
  • Interview begins: Could China really steal frontier model weights? (00:36:26)
  • Why we might get a robot economy incredibly fast (00:42:34)
  • AI 2027’s alternate ending: The slowdown (01:01:29)
  • How to get to even better outcomes (01:07:18)
  • Updates Daniel’s made since publishing AI 2027 (01:15:13)
  • How plausible are longer timelines? (01:20:22)
  • What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)
  • What post-AGI looks like (01:49:41)
  • Whistleblower protections and Daniel’s unsigned NDA (02:04:28)

Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (301)

#20 - Bruce Friedrich on inventing outstanding meat substitutes to end speciesism & factory farming

Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South, they weren’t profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation.

Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people’s daily lives, it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters.

Full transcript, related links, job opportunities and summary of the interview.

That’s why GFI is “working with scientists, investors, and entrepreneurs” to create plant-based meat, dairy and eggs, as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities.

In this interview I’m joined by my colleague Natalie Cargill, and we ask Bruce about:

  • What’s the best meat replacement product out there right now?
  • How effective is meat substitute research for people who want to reduce animal suffering as much as possible?
  • When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat?
  • What are the challenges of producing something structurally identical to meat?
  • Can clean meat be healthier than conventional meat?
  • Do plant-based alternatives have a better shot at success than clean meat?
  • Is there a concern that, even if the product is perfect, people still won’t eat it? Why might that happen?
  • What’s it like being a vegan in a family made up largely of hunters and meat-eaters?
  • What kind of pushback should be expected from the meat industry?

Keiran Harris helped produce today’s episode.

19 Feb 2018 · 1h 18min

#19 - Samantha Pitts-Kiefer on working next to the White House trying to prevent nuclear war

Rogue elements within a state’s security forces enrich dozens of kilograms of uranium. It’s then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington, D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a spot midway between the White House and the Capitol Building. The driver casually steps out of the vehicle and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them.

Full blog post about this episode, including a transcript, summary and links to resources mentioned in the show.

It’s likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House. Samantha serves as Senior Director of the Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites, some of which lack the necessary security.

When you combine the massive death toll with the accompanying social panic and economic disruption, the consequences of a nuclear 9/11 are almost unthinkable. And yet, Samantha reminds us, we must confront the possibility. Clearly, this is far from the only nuclear nightmare. We also discuss:

  • In the case of nuclear war, what fraction of the world’s population would die?
  • What is the biggest nuclear threat?
  • How concerned should we be about North Korea?
  • How often has the world experienced nuclear near misses?
  • How might a conflict between India and Pakistan escalate to the nuclear level?
  • How quickly must a president make a decision in the event of a suspected first strike?
  • Are global sources of nuclear material safely secured?
  • What role does cyber security have in preventing nuclear disasters?
  • How can we improve relations between nuclear armed states?
  • What do you think about the campaign for complete nuclear disarmament?
  • If you could tell the US government to do three things, what are the key priorities today?
  • Is it practical to get members of Congress to pay attention to nuclear risks?
  • Could modernisation of nuclear weapons actually make the world safer?

14 Feb 2018 · 1h 4min

#18 - Ofir Reich on using data science to end poverty & the spurious action-inaction distinction

Ofir Reich started out doing math in the military, before spending 8 years in tech startups, but then made a sharp turn to become a data scientist focussed on helping the global poor. At UC Berkeley’s Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that’s the hope: he’s also working on ways to test whether those interventions actually work.

Full post about this episode, including a transcript and relevant links to learn more.

Why dedicate his life to helping the global poor? Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer? After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved.

On top of his life philosophy we also discuss:

  • The benefits of working in a top academic environment
  • How best to start a career in global development
  • Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory?
  • How the delivery standards of nonprofits compare to top universities
  • Why he doesn’t enjoy living in the San Francisco Bay Area
  • How can we fix the problem of most published research being false?
  • How good a career path is data science?
  • How important is experience in development versus technical skills?
  • How he learned much of what he needed to know in the army
  • How concerned should effective altruists be about burnout?

Keiran Harris helped produce today’s episode.

31 Jan 2018 · 1h 18min

#17 - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races.

Throughout history we’ve consistently believed, as common sense, truly horrifying things by today’s standards. According to University of Oxford Professor Will MacAskill, it’s extremely likely that we’re in the same boat today. If we accept that we’re probably making major moral errors, how should we proceed?

Full transcript, key points and links to articles and career guides discussed in the show.

If our morality is tied to common sense intuitions, we’re probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars, it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a ‘long reflection’ to fix our moral errors?
  • Will’s forthcoming book on how one should reason and act if you don’t know which moral theory is correct. What are the practical implications of so-called ‘moral uncertainty’?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will’s most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Get free, one-on-one career advice. We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.

19 Jan 2018 · 1h 52min

#16 - Michelle Hutchinson on global priorities research & shaping the ideas of intellectuals

In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English-speaking world. How did this happen? In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact building a strong intellectual base within universities can have.

Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.

Link to the full blog post about this episode, including transcript and links to learn more.

Its research agenda includes questions like:

  • How do we compare the good done by focussing on really different types of causes?
  • How does saving lives actually affect the world relative to other things we could do?
  • What are the biggest wins governments should be focussed on getting?

Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health. We discuss:

  • What is global priorities research and why does it matter?
  • How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?
  • Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?
  • How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?
  • Is it harder to fundraise for a new institute, or hire the right people?
  • Have other social movements benefitted from having a prominent academic arm?
  • How can people prepare themselves to get research roles at a place like GPI?
  • Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren’t do instead?
  • What are the odds of the Institute’s work having an effect on the real world?

Get free, one-on-one career advice. We’ve helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.

22 Dec 2017 · 55min

#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can’t, and why, and developed methods that allow all of us to be better at predicting the future. After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they’d never screw up that badly again. The result of that work, Superforecasting, was a media sensation in 2015.

Full transcript, brief summary, apply for coaching and links to learn more.

It described Tetlock’s Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information. Today he’s working to develop the best forecasting process ever, by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up for and participate in.

We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable:

  • Should people who want to be right just adopt the views of experts rather than apply their own judgement?
  • Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
  • Should I keep my political views secret, so it will be easier to change them later?
  • How can listeners contribute to his latest cutting-edge research?
  • What do we know about our accuracy at predicting low-probability high-impact disasters?
  • Does his research provide an intellectual basis for populist political movements?
  • Was the Iraq War caused by bad politics, or bad intelligence methods?
  • What can we learn about forecasting from the 2016 election?
  • Can experience help people avoid overconfidence and underconfidence?
  • When does an AI easily beat human judgement?
  • Could more accurate forecasting methods make the world more dangerous?
  • How much does demographic diversity line up with cognitive diversity?
  • What are the odds we’ll go to war with China?
  • Should we let prediction tournaments run most of the government?

Listen to it. Get free, one-on-one career advice. Want to work on important social science research like Tetlock? We’ve helped hundreds of people compare their options and get introductions. Find out if our coaching can help you.

20 Nov 2017 · 1h 24min

#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?

That’s a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nunez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. They’ve been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last three consecutive years.

Blog post about the episode, including links and full transcript.

A related previous episode, strongly recommended: Lewis Bollard on how to end factory farming as soon as possible.

In addition to undercover investigations, AE has also designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage, in a room designed to kill them, and can’t just look away. How big an impact is this having on users?

Sharon Nunez and Jose Valle also tackle:

  • How do they track their goals and metrics week to week?
  • How much does an undercover investigation cost?
  • Why don’t people donate more to factory farmed animals, given that they’re the vast majority of animals harmed directly by humans?
  • How risky is it to attempt to build a career in animal advocacy?
  • What led to a change in their focus from bullfighting in Spain to animal farming?
  • How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?
  • Has their very rapid growth been difficult to handle?
  • What should our listeners study or do if they want to work in this area?
  • How can we get across the message that horrific cases are a feature, not a bug, of factory farming?
  • Do the owners or workers of factory farms ever express shame at what they do?

13 Nov 2017 · 1h 25min

#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results

In both rich and poor countries, government policy is often based on no evidence at all, and many programs don’t work. This has particularly harsh effects on the global poor: in some countries governments spend only $100 on each citizen a year, so they can’t afford to waste a single dollar.

Enter MIT’s Poverty Action Lab (J-PAL). Since 2003 they’ve conducted experiments to figure out what policies actually help recipients, and then tried to get them implemented by governments and non-profits. Claire Walsh leads J-PAL’s Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking.

Summary, links to career opportunities and topics discussed in the show.

We discussed (her views only, not J-PAL’s):

  • How can they get evidence-backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence?
  • Is evidence-based policy an evidence-based strategy itself?
  • Which policies does she think would have a particularly large impact on human welfare relative to their cost?
  • How did she come to lead one of J-PAL’s departments at 29?
  • How do you evaluate the effectiveness of energy and environment programs (Walsh’s area of expertise), and what are the standout approaches in that area?
  • 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived, so are we making a mistake?
  • Other than J-PAL, what are the best places to work in development? What are the best subjects to study? Where can you network to break into the sector?
  • Is living in poverty as bad as we think?

And plenty of other things besides. We haven’t run an RCT to test whether this episode will actually help your career, but I suggest you listen anyway. Trust my intuition on this one.

31 Oct 2017 · 52min
