Daniel Kokotajlo on what a hyperspeed robot economy might look like

When Daniel Kokotajlo talks to security experts at major AI labs, they tell him something chilling: “Of course we’re probably penetrated by the CCP already, and if they really wanted something, they could take it.”

This isn’t paranoid speculation. It’s the working assumption of people whose job is to protect frontier AI models worth billions of dollars. And they’re not even trying that hard to stop it — because the security measures that might actually work would slow them down in the race against competitors.

Full transcript, highlights, and links to learn more: https://80k.info/dk

Daniel is the founder of the AI Futures Project and author of AI 2027, a detailed scenario showing how we might get from today’s AI systems to superintelligence by the end of the decade. Over a million people read it in the first few weeks, including US Vice President JD Vance. When Daniel talks to researchers at Anthropic, OpenAI, and DeepMind, they tell him the scenario feels less wild to them than to the general public — because many of them expect something like this to happen.

Daniel’s median timeline? 2029. But he’s genuinely uncertain, putting 10–20% probability on AI progress hitting a long plateau.

When he first published AI 2027, his median forecast for when superintelligence would arrive was 2027, rather than 2029. So what shifted his timelines recently? Partly a fascinating study from METR showing that AI coding assistants might actually be making experienced programmers slower — even though the programmers themselves think they’re being sped up. The study suggests a systematic bias toward overestimating AI effectiveness — which, ironically, is good news for timelines, because it means we have more breathing room than the hype suggests.

But Daniel is also closely tracking another METR result: AI systems can now reliably complete coding tasks that take humans about an hour. That capability has been doubling every six months in a remarkably straight line. Extrapolate a couple more years and you get systems completing month-long tasks. At that point, Daniel thinks we’re probably looking at genuine AI research automation — which could cause the whole process to accelerate dramatically.
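That back-of-the-envelope extrapolation can be sketched in a few lines. The ~1-hour horizon and the six-month doubling time come from the episode; treating a "month-long task" as roughly 160 hours of full-time work is an illustrative assumption, not METR's published figure:

```python
import math

# Illustrative extrapolation of the METR trend discussed in the episode.
# Assumption: the task horizon AI can reliably complete keeps doubling
# every six months from a baseline of ~1 hour today.
current_horizon_hours = 1.0
doubling_time_years = 0.5

def horizon_after(years: float) -> float:
    """Task horizon (in hours) after `years` of continued doubling."""
    return current_horizon_hours * 2 ** (years / doubling_time_years)

# A "month-long task" taken as ~4 weeks x 40 hours = 160 hours (assumption).
month_hours = 160.0
years_to_month = doubling_time_years * math.log2(month_hours / current_horizon_hours)

print(horizon_after(2.0))   # 16-hour tasks after two more years
print(years_to_month)       # ~3.7 years until month-long tasks
```

On these assumptions the trend reaches month-long tasks in under four years, which is what makes "a couple more years" of extrapolation so consequential.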

At some point, superintelligent AI will be limited by its inability to directly affect the physical world. That’s when Daniel thinks superintelligent systems will pour resources into robotics, creating a robot economy in months.

Daniel paints a vivid picture: imagine transforming all car factories (which have similar components to robots) into robot production factories — much like historical wartime efforts to redirect production of domestic goods to military goods. Then imagine the frontier robots of today hooked up to a data centre running superintelligences controlling the robots’ movements to weld, screw, and build. Or an intermediate step might even be unskilled human workers coached through construction tasks by superintelligences via their phones.

There’s no reason an effort like this couldn’t work in principle. And there would be enormous pressure to go this direction: whoever builds a superintelligence-powered robot economy first will get unheard-of economic and military advantages.

From there, Daniel expects the default trajectory to lead to AI takeover and human extinction — not because superintelligent AI will hate humans, but because it can better pursue its goals without us.

But Daniel has a better future in mind — one he gives humanity roughly 25–30% odds of achieving. This future involves international coordination and hardware verification systems to enforce AI development agreements, plus democratic processes for deciding what values superintelligent AIs should have — because in a world with just a handful of superintelligent AI systems, those few minds will effectively control everything: the robot armies, the information people see, the shape of civilisation itself.

Right now, nobody knows how to specify what values those minds will have. We haven’t solved alignment. And we might only have a few more years to figure it out.

Daniel and host Luisa Rodriguez dive deep into these stakes in today’s interview.

What did you think of the episode? https://forms.gle/HRBhjDZ9gfM8woG5A

This episode was recorded on September 9, 2025.

Chapters:

  • Cold open (00:00:00)
  • Who’s Daniel Kokotajlo? (00:00:37)
  • Video: We’re Not Ready for Superintelligence (00:01:31)
  • Interview begins: Could China really steal frontier model weights? (00:36:26)
  • Why we might get a robot economy incredibly fast (00:42:34)
  • AI 2027’s alternate ending: The slowdown (01:01:29)
  • How to get to even better outcomes (01:07:18)
  • Updates Daniel’s made since publishing AI 2027 (01:15:13)
  • How plausible are longer timelines? (01:20:22)
  • What empirical evidence is Daniel looking out for to decide which way things are going? (01:40:27)
  • What post-AGI looks like (01:49:41)
  • Whistleblower protections and Daniel’s unsigned NDA (02:04:28)

Audio engineering: Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: CORBIT
Coordination, transcriptions, and web: Katy Moore

Episodes (301)

#12 - Beth Cameron works to stop you dying in a pandemic. Here’s what keeps her up at night.

“When you're in the middle of a crisis and you have to ask for money, you're already too late.” That’s Dr Beth Cameron, who leads Global Biological Policy and Programs at the Nuclear Threat Initiative.

Beth should know. She has years of experience preparing for and fighting the diseases of our nightmares: on the White House Ebola Taskforce, on the National Security Council staff, and as the Assistant Secretary of Defense for Nuclear, Chemical and Biological Defense Programs.

Summary, list of career opportunities, extra links to learn more, and coaching application.

Unfortunately, the countries of the world aren’t prepared for a crisis, and like children crowded into daycare, there’s a good chance something will make us all sick at once. During past pandemics, countries have dragged their feet over who will pay to contain them, or struggled to move people and supplies where they needed to be. At the same time, advanced biotechnology threatens to make it possible for terrorists to bring back smallpox, or create something even worse.

In this interview we look at the current state of play in disease control, what needs to change, and how you can build the career capital necessary to make those changes yourself. That includes:

* What and where to study, and where to begin a career in pandemic preparedness. Below you’ll find a lengthy list of people and places mentioned in the interview, and others we’ve had recommended to us.
* How the Nuclear Threat Initiative, with just 50 people, collaborates with governments around the world to reduce the risk of nuclear or biological catastrophes, and whether they might want to hire you.
* The best strategy for containing pandemics.
* Why we lurch from panic, to neglect, to panic again when it comes to protecting ourselves from contagious diseases.
* Current reform efforts within the World Health Organisation, and attempts to prepare partial vaccines ahead of time.
* Which global health security groups most impress Beth, and what they’re doing.
* What new technologies could be invented to make us safer.
* Whether it’s possible to help solve the problem through mass advocacy.
* Much more besides.

Get free, one-on-one career advice to improve biosecurity

Considering a relevant grad program like a biology PhD, medicine, or security studies? Able to apply for a relevant job already? We’ve helped dozens of people plan their careers to work on pandemic preparedness and put them in touch with mentors. If you want to work on the problem discussed in this episode, you should apply for coaching.

Read more

25 October 2017 · 1h 45min

#11 - Spencer Greenberg on speeding up social science 10-fold & why plenty of startups cause harm

Do most meat eaters think it’s wrong to hurt animals? Do Americans think climate change is likely to cause human extinction? What is the best, state-of-the-art therapy for depression? How can we make academics more intellectually honest, so we can actually trust their findings? How can we speed up social science research ten-fold? Do most startups improve the world, or make it worse?

If you’re interested in these questions, this interview is for you.

Click for a full transcript, links discussed in the show, etc.

A scientist, entrepreneur, writer, and mathematician, Spencer Greenberg is constantly working to create tools to speed up and improve research and critical thinking. These include:

* Rapid public opinion surveys to find out what most people actually think about animal consciousness, farm animal welfare, the impact of developing world charities, and the likelihood of extinction by various different means;
* Tools to enable social science research to be run en masse very cheaply;
* ClearerThinking.org, a highly popular site for improving people’s judgement and decision-making;
* Ways to transform data analysis methods to ensure that papers only show true findings;
* Innovative research methods;
* Ways to decide which research projects are actually worth pursuing.

In this interview, Spencer discusses all of these and more. If you don’t feel like listening, that just shows that you have poor judgement and need to benefit from his wisdom even more!

Get free, one-on-one career advice

We’ve helped hundreds of people compare their options, get introductions, and find high-impact jobs. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

17 October 2017 · 1h 29min

#10 - Nick Beckstead on how to spend billions of dollars preventing human extinction

What if you were in a position to give away billions of dollars to improve the world? What would you do with it? This is the problem facing Program Officers at the Open Philanthropy Project, people like Dr Nick Beckstead.

Following a PhD in philosophy, Nick works to figure out where money can do the most good. He’s been involved in major grants in a wide range of areas, including ending factory farming through technological innovation, safeguarding the world from advances in biotechnology and artificial intelligence, and spreading rational compassion.

Full transcript, coaching application form, overview of the conversation, and links to resources discussed in the episode:

This episode is a tour through some of the toughest questions ‘effective altruists’ face when figuring out how best to improve the world, including:

* Should we mostly try to help people currently alive, or future generations? Nick studied this question for years in his PhD thesis, On the Overwhelming Importance of Shaping the Far Future. (The first 31 minutes is a snappier version of my conversation with Toby Ord.)
* Is clean meat (aka *in vitro* meat) technologically feasible any time soon, or should we be looking for plant-based alternatives?
* What are the greatest risks to human civilisation?
* To stop malaria, is it more cost effective to use technology to eliminate mosquitoes than to distribute bed nets?
* Should people who want to improve the future work for changes that will be very useful in a specific scenario, or just generally try to improve how well humanity makes decisions?
* What specific jobs should our listeners take in order for Nick to be able to spend more money in useful ways to improve the world?
* Should we expect the future to be better if the economy grows more quickly, or more slowly?

Get free, one-on-one career advice

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

11 October 2017 · 1h 51min

#9 - Christine Peterson on how insecure computers could lead to global disaster, and how to fix it

Take a trip to Silicon Valley in the 70s and 80s, when going to space sounded like a good way to get around environmental limits, people started cryogenically freezing themselves, and nanotechnology looked like it might revolutionise industry – or turn us all into grey goo.

Full transcript, coaching application form, overview of the conversation, and extra resources to learn more:

In this episode of the 80,000 Hours Podcast, Christine Peterson takes us back to her youth in the Bay Area, the ideas she encountered there, and what the dreamers she met did as they grew up. We also discuss how she came up with the term ‘open source software’ (and how she had to get someone else to propose it).

Today Christine helps run the Foresight Institute, which fills a gap left by for-profit technology companies – predicting how new revolutionary technologies could go wrong, and ensuring we steer clear of the downsides.

We dive into:

* Whether the poor security of computer systems poses a catastrophic risk for the world. Could all our essential services be taken down at once? And if so, what can be done about it?
* Can technology ‘move fast and break things’ without eventually breaking the world? Would it be better for technology to advance more quickly, or more slowly?
* Will AIs designed for wide-scale automated hacking make computers more or less secure?
* Would it be good to radically extend human lifespan? Is it sensible to cryogenically freeze yourself in the hope of being resurrected in the future?
* Could atomically precise manufacturing (nanotechnology) really work? Why was it initially so controversial, and why did people stop worrying about it?
* Should people who try to do good in their careers work long hours and take low salaries? Or should they take care of themselves first of all?
* How the effective altruism community resembles the scene Christine was involved with when she was young, and where it might be going wrong.

Get free, one-on-one career advice

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

4 October 2017 · 1h 45min

#8 - Lewis Bollard on how to end factory farming in our lifetimes

Every year tens of billions of animals are raised in terrible conditions in factory farms before being killed for human consumption. Over the last two years Lewis Bollard – Project Officer for Farm Animal Welfare at the Open Philanthropy Project – has conducted extensive research into the best ways to eliminate animal suffering in farms as soon as possible. This has resulted in $30 million in grants to farm animal advocacy.

Full transcript, coaching application form, overview of the conversation, and extra resources to learn more:

We covered almost every approach being taken, which ones work, and how individuals can best contribute through their careers. We also had time to venture into a wide range of issues that are less often discussed, including:

* Why Lewis thinks insect farming would be worse than the status quo, and whether we should look for ‘humane’ insecticides;
* How young people can set themselves up to contribute to scientific research into meat alternatives;
* How genetic manipulation of chickens has caused them to suffer much more than their ancestors, but could also be used to make them better off;
* Why Lewis is skeptical of vegan advocacy;
* Why he doubts that much can be done to tackle factory farming through legal advocacy or electoral politics;
* Which species of farm animals is best to focus on first;
* Whether fish and crustaceans are conscious, and if so what can be done for them;
* Many other issues listed below in the overview of the discussion.

Get free, one-on-one career advice

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, find out if our coaching can help you.

Overview of the discussion

**2m40s** What originally drew you to dedicate your career to helping animals, and why did Open Philanthropy end up focusing on it?
**5m40s** Do you have any concrete way of assessing the severity of animal suffering?
**7m10s** Do you think the environmental gains are large compared to those that we might hope to get from animal welfare improvement?
**7m55s** What grants have you made at Open Phil? How did you go about deciding which groups to fund and which ones not to fund?
**9m50s** Why does Open Phil focus on chickens and fish? Is this the right call?

More...

27 September 2017 · 3h 16min

#7 - Julia Galef on making humanity more rational, what EA does wrong, and why Twitter isn’t all bad

The scientific revolution in the 16th century was one of the biggest societal shifts in human history, driven by the discovery of new and better methods of figuring out who was right and who was wrong.

Julia Galef – a well-known writer and researcher focused on improving human judgment, especially about high-stakes questions – believes that if we could again develop new techniques to predict the future, resolve disagreements, and make sound decisions together, it could dramatically improve the world across the board. We brought her in to talk about her ideas.

This interview complements a new detailed review of whether and how to follow Julia’s career path.

Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

Julia has been host of the Rationally Speaking podcast since 2010, co-founded the Center for Applied Rationality in 2012, and is currently working for the Open Philanthropy Project on an investigation of expert disagreements.

In our conversation we ended up speaking about a wide range of topics, including:

* Her research on how people can have productive intellectual disagreements.
* Why she once planned to become an urban designer.
* Why she doubts people are more rational than 200 years ago.
* What makes her a fan of Twitter (while I think it’s dystopian).
* Whether people should write more books.
* Whether it’s a good idea to run a podcast, and how she grew her audience.
* Why saying you don’t believe X often won’t convince people you don’t.
* Why she started a PhD in economics but then stopped.
* Whether she would recommend an unconventional career like her own.
* Whether the incentives in the intelligence community actually support sound thinking.
* Whether big institutions will actually pick up new tools for improving decision-making if they are developed.
* How to start out pursuing a career in which you enhance human judgement and foresight.

Get free, one-on-one career advice to help you improve judgement and decision-making

We’ve helped dozens of people compare their options, get introductions, and find jobs important for the long-run future. **If you want to work on any of the problems discussed in this episode, find out if our coaching can help you:** APPLY FOR COACHING

Overview of the conversation

**1m30s** So what projects are you working on at the moment?
**3m50s** How are you working on the problem of expert disagreement?
**6m0s** Is this the same method as the double crux process that was developed at the Center for Applied Rationality?
**10m** Why did the Open Philanthropy Project decide this was a very valuable project to fund?
**13m** Is the double crux process actually that effective?
**14m50s** Is Facebook dangerous?
**17m** What makes for a good life? Can you be mistaken about having a good life?
**19m** Should more people write books?

Read more...

13 September 2017 · 1h 14min

#6 - Toby Ord on why the long-term future matters more than anything else & what to do about it

Of all the people whose well-being we should care about, only a small fraction are alive today. The rest are members of future generations who are yet to exist. Whether they’ll be born into a world that is flourishing or disintegrating – and indeed, whether they will ever be born at all – is in large part up to us. As such, the welfare of future generations should be our number one moral concern.

This conclusion holds true regardless of whether your moral framework is based on common sense, consequences, rules of ethical conduct, cooperating with others, virtuousness, keeping options open – or just a sense of wonder about the universe we find ourselves in.

That’s the view of Dr Toby Ord, a philosophy Fellow at the University of Oxford and co-founder of the effective altruism community. In this episode of the 80,000 Hours Podcast, Dr Ord makes the case that aiming for a positive long-term future is likely the best way to improve the world.

Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

We then discuss common objections to long-termism, such as the idea that benefits to future generations are less valuable than those to people alive now, or that we can’t meaningfully benefit future generations beyond taking the usual steps to improve the present.

Later the conversation turns to how individuals can and have changed the course of history, what could go wrong and why, and whether plans to colonise Mars would actually put humanity in a safer position than it is today.

This episode goes deep into the most distinctive features of our advice. It’s likely the most in-depth discussion of how 80,000 Hours and the effective altruism community think about the long-term future, and why we so often give it top priority.

It’s best to subscribe, so you can listen at leisure on your phone, speed up the conversation if you like, and get notified about future episodes. You can do so by searching ‘80,000 Hours’ wherever you get your podcasts.

Want to help ensure humanity has a positive future instead of destroying itself? We want to help. We’ve helped hundreds of people compare their options, get introductions, and find jobs important for the long-run future. If you want to work on any of the problems discussed in this episode, such as artificial intelligence or biosecurity, find out if our coaching can help you.

Overview of the discussion

3m30s - Why is the long-term future of humanity such a big deal, and perhaps the most important issue for us to be thinking about?
9m05s - Five arguments that future generations matter
21m50s - How bad would it be if humanity went extinct or civilisation collapsed?
26m40s - Why do people start saying such strange things when this topic comes up?
30m30s - Are there any other reasons to prioritise thinking about the long-term future of humanity that you wanted to raise before we move to objections?
36m10s - What is this school of thought called?

Read more...

6 September 2017 · 2h 8min

#5 - Alex Gordon-Brown on how to donate millions in your 20s working in quantitative trading

Quantitative financial trading is one of the highest-paying parts of the world’s highest-paying industry. 25- to 30-year-olds with outstanding maths skills can earn millions a year in an obscure set of ‘quant trading’ firms, where they program computers with predefined algorithms to allow them to trade very quickly and effectively.

Update: we’re headhunting people for quant trading roles. Want to be kept up to date about particularly promising roles we’re aware of for earning to give in quantitative finance? Get notified by letting us know here.

This makes it an attractive place to work for people who want to ‘earn to give’, and we know several people who are able to donate over a million dollars a year to effective charities by working in quant trading. Who are these people? What is the job like? And is there a risk that their work harms the world in other ways?

Apply for personalised coaching, see what questions are asked when, and read extra resources to learn more.

I spoke at length with Alexander Gordon-Brown, who has worked as a quant trader in London for the last three and a half years and donated hundreds of thousands of pounds. We covered:

* What quant traders do and how much they earn.
* Whether their work is beneficial or harmful for the world.
* How to figure out if you’re a good personal fit for quant trading, and if so how to break into the industry.
* Whether he enjoys the work and finds it motivating, and what other careers he considered.
* What variety of positions are on offer, and what the culture is like in different firms.
* How he decides where to donate, and whether he has persuaded his colleagues to join him.

Want to earn to give for effective charities in quantitative trading? We want to help. We’ve helped dozens of people plan their earning-to-give careers and put them in touch with mentors. If you want to work in quant trading, apply for our free coaching service.

APPLY FOR COACHING

What questions are asked when?

1m30s - What is quant trading, and how much do traders earn?
4m45s - How do quant trading firms manage the risks they face and avoid bankruptcy?
7m05s - Do other traders also donate to charity, and has Alex convinced them?
9m45s - How do they track the performance of each trader?
13m00s - What does the daily schedule of a quant trader look like? What do you do in the morning, afternoon, etc?

More...

28 August 2017 · 1h 45min
