#223 – Neel Nanda on leading a Google DeepMind team at 26 – and advice if you want to work at an AI company (part 2)

At 26, Neel Nanda leads an AI safety team at Google DeepMind, has published dozens of influential papers, and mentored 50 junior researchers — seven of whom now work at major AI companies. His secret? “It’s mostly luck,” he says, but “another part is what I think of as maximising my luck surface area.”

Video, full transcript, and links to learn more: https://80k.info/nn2

This means creating as many opportunities as possible for surprisingly good things to happen:

  • Write publicly.
  • Reach out to researchers whose work you admire.
  • Say yes to unusual projects that seem a little scary.

Nanda’s own path illustrates this perfectly. He started a challenge to write one blog post per day for a month to overcome perfectionist paralysis. Those posts helped seed the field of mechanistic interpretability and, incidentally, led to meeting his partner of four years.

His YouTube channel features unedited three-hour videos of him reading through famous papers and sharing thoughts. One has 30,000 views. “People were into it,” he shrugs.

Most remarkably, he ended up running DeepMind’s mechanistic interpretability team. He’d joined expecting to be an individual contributor, but when the team lead stepped down, he stepped up despite having no management experience. “I did not know if I was going to be good at this. I think it’s gone reasonably well.”

His core lesson: “You can just do things.” This sounds trite but is a useful reminder all the same. Doing things is a skill that improves with practice. Most people overestimate the risks and underestimate their ability to recover from failures. And as Neel explains, junior researchers today have a superpower previous generations lacked: large language models that can dramatically accelerate learning and research.

In this extended conversation, Neel and host Rob Wiblin discuss all that and some other hot takes from Neel's four years at Google DeepMind. (And be sure to check out part one of Rob and Neel’s conversation!)


What did you think of the episode? https://forms.gle/6binZivKmjjiHU6dA

Chapters:

  • Cold open (00:00:00)
  • Who’s Neel Nanda? (00:01:12)
  • Luck surface area and making the right opportunities (00:01:46)
  • Writing cold emails that aren't insta-deleted (00:03:50)
  • How Neel uses LLMs to get much more done (00:09:08)
  • “If your safety work doesn't advance capabilities, it's probably bad safety work” (00:23:22)
  • Why Neel refuses to share his p(doom) (00:27:22)
  • How Neel went from the couch to an alignment rocketship (00:31:24)
  • Navigating towards impact at a frontier AI company (00:39:24)
  • How does impact differ inside and outside frontier companies? (00:49:56)
  • Is a special skill set needed to guide large companies? (00:56:06)
  • The benefit of risk frameworks: early preparation (01:00:05)
  • Should people work at the safest or most reckless company? (01:05:21)
  • Advice for getting hired by a frontier AI company (01:08:40)
  • What makes for a good ML researcher? (01:12:57)
  • Three stages of the research process (01:19:40)
  • How do supervisors actually add value? (01:31:53)
  • An AI PhD – with these timelines?! (01:34:11)
  • Is career advice generalisable, or does everyone get the advice they don't need? (01:40:52)
  • Remember: You can just do things (01:43:51)

This episode was recorded on July 21.

Video editing: Simon Monsour and Luke Monsour
Audio engineering: Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Music: Ben Cordell
Camera operator: Jeremy Chevillotte
Coordination, transcriptions, and web: Katy Moore

Episodes (299)

#167 – Seren Kell on the research gaps holding back alternative proteins from mass adoption

"There have been literally thousands of years of breeding and living with animals to optimise these kinds of problems. But because we're just so early on with alternative proteins and there's so much white space, it's actually just really exciting to know that we can keep on innovating and being far more efficient than this existing technology — which, fundamentally, is just quite inefficient. You're feeding animals a bunch of food to then extract a small fraction of their biomass to then eat that.Animal agriculture takes up 83% of farmland, but produces just 18% of food calories. So the current system just is so wasteful. And the limiting factor is that you're just growing a bunch of food to then feed a third of the world's crops directly to animals, where the vast majority of those calories going in are lost to animals existing." — Seren KellLinks to learn more, summary and full transcript.In today’s episode, host Luisa Rodriguez interviews Seren Kell — Senior Science and Technology Manager at the Good Food Institute Europe — about making alternative proteins as tasty, cheap, and convenient as traditional meat, dairy, and egg products.They cover:The basic case for alternative proteins, and why they’re so hard to makeWhy fermentation is a surprisingly promising technology for creating delicious alternative proteins The main scientific challenges that need to be solved to make fermentation even more usefulThe progress that’s been made on the cultivated meat front, and what it will take to make cultivated meat affordableHow GFI Europe is helping with some of these challengesHow people can use their careers to contribute to replacing factory farming with alternative proteinsThe best part of Seren’s jobPlenty moreChapters:Cold open (00:00:00)Luisa’s intro (00:01:08)The interview begins (00:02:22)Why alternative proteins? (00:02:36)What makes alternative proteins so hard to make? (00:11:30)Why fermentation is so exciting (00:24:23)The technical challenges involved in scaling fermentation (00:44:38)Progress in cultivated meat (01:06:04)GFI Europe’s work (01:32:47)Careers (01:45:10)The best part of Seren’s job (01:50:07)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Dominic Armstrong and Milo McGuireAdditional content editing: Luisa Rodriguez and Katy MooreTranscriptions: Katy Moore

18 Oct 2023 · 1h 54min

#166 – Tantum Collins on what he’s learned as an AI policy insider at the White House, DeepMind and elsewhere

"If you and I and 100 other people were on the first ship that was going to go settle Mars, and were going to build a human civilisation, and we have to decide what that government looks like, and we have all of the technology available today, how do we think about choosing a subset of that design space? That space is huge and it includes absolutely awful things, and mixed-bag things, and maybe some things that almost everyone would agree are really wonderful, or at least an improvement on the way that things work today. But that raises all kinds of tricky questions. My concern is that if we don't approach the evolution of collective decision making and government in a deliberate way, we may end up inadvertently backing ourselves into a corner, where we have ended up on some slippery slope -- and all of a sudden we have, let's say, autocracies on the global stage are strengthened relative to democracies." — Tantum CollinsIn today’s episode, host Rob Wiblin gets the rare chance to interview someone with insider AI policy experience at the White House and DeepMind who’s willing to speak openly — Tantum Collins.Links to learn more, highlights, and full transcript.They cover:How AI could strengthen government capacity, and how that's a double-edged swordHow new technologies force us to confront tradeoffs in political philosophy that we were previously able to pretend weren't thereTo what extent policymakers take different threats from AI seriouslyWhether the US and China are in an AI arms race or notWhether it's OK to transform the world without much of the world agreeing to itThe tyranny of small differences in AI policyDisagreements between different schools of thought in AI policy, and proposals that could unite themHow the US AI Bill of Rights could be improvedWhether AI will transform the labour market, and whether it will become a partisan political issueThe tensions between the cultures of San Francisco and DC, and how to bridge the divide between themWhat listeners might be able to do to help with this whole messPanpsychismPlenty moreChapters:Cold open (00:00:00)Rob's intro (00:01:00)The interview begins (00:04:01)The risk of autocratic lock-in due to AI (00:10:02)The state of play in AI policymaking (00:13:40)China and AI (00:32:12)The most promising regulatory approaches (00:57:51)Transforming the world without the world agreeing (01:04:44)AI Bill of Rights (01:17:32)Who’s ultimately responsible for the consequences of AI? (01:20:39)Policy ideas that could appeal to many different groups (01:29:08)Tension between those focused on x-risk and those focused on AI ethics (01:38:56)Communicating with policymakers (01:54:22)Is AI going to transform the labour market in the next few years? (01:58:51)Is AI policy going to become a partisan political issue? (02:08:10)The value of political philosophy (02:10:53)Tantum’s work at DeepMind (02:21:20)CSET (02:32:48)Career advice (02:35:21)Panpsychism (02:55:24)Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

12 Oct 2023 · 3h 8min

#165 – Anders Sandberg on war in space, whether civilisations age, and the best things possible in our universe

"Now, the really interesting question is: How much is there an attacker-versus-defender advantage in this kind of advanced future? Right now, if somebody's sitting on Mars and you're going to war against them, it's very hard to hit them. You don't have a weapon that can hit them very well. But in theory, if you fire a missile, after a few months, it's going to arrive and maybe hit them, but they have a few months to move away. Distance actually makes you safer: if you spread out in space, it's actually very hard to hit you. So it seems like you get a defence-dominant situation if you spread out sufficiently far. But if you're in Earth orbit, everything is close, and the lasers and missiles and the debris are a terrible danger, and everything is moving very fast. So my general conclusion has been that war looks unlikely on some size scales but not on others." — Anders SandbergIn today’s episode, host Rob Wiblin speaks with repeat guest and audience favourite Anders Sandberg about the most impressive things that could be achieved in our universe given the laws of physics.Links to learn more, summary and full transcript.They cover:The epic new book Anders is working on, and whether he’ll ever finish itWhether there's a best possible world or we can just keep improving foreverWhat wars might look like if the galaxy is mostly settledThe impediments to AI or humans making it to other starsHow the universe will end a million trillion years in the futureWhether it’s useful to wonder about whether we’re living in a simulationThe grabby aliens theoryWhether civilizations get more likely to fail the older they getThe best way to generate energy that could ever existBlack hole bombsWhether superintelligence is necessary to get a lot of valueThe likelihood that life from elsewhere has already visited EarthAnd plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon Monsour and Milo McGuireTranscriptions: Katy Moore

6 Oct 2023 · 2h 48min

#164 – Kevin Esvelt on cults that want to kill everyone, stealth vs wildfire pandemics, and how he felt inventing gene drives

"Imagine a fast-spreading respiratory HIV. It sweeps around the world. Almost nobody has symptoms. Nobody notices until years later, when the first people who are infected begin to succumb. They might die, something else debilitating might happen to them, but by that point, just about everyone on the planet would have been infected already. And then it would be a race. Can we come up with some way of defusing the thing? Can we come up with the equivalent of HIV antiretrovirals before it's too late?" — Kevin EsveltIn today’s episode, host Luisa Rodriguez interviews Kevin Esvelt — a biologist at the MIT Media Lab and the inventor of CRISPR-based gene drive — about the threat posed by engineered bioweapons.Links to learn more, summary and full transcript.They cover:Why it makes sense to focus on deliberately released pandemicsCase studies of people who actually wanted to kill billions of humansHow many people have the technical ability to produce dangerous virusesThe different threats of stealth and wildfire pandemics that could crash civilisationThe potential for AI models to increase access to dangerous pathogensWhy scientists try to identify new pandemic-capable pathogens, and the case against that researchTechnological solutions, including UV lights and advanced PPEUsing CRISPR-based gene drive to fight diseases and reduce animal sufferingAnd plenty more.Producer and editor: Keiran HarrisAudio Engineering Lead: Ben CordellTechnical editing: Simon MonsourAdditional content editing: Katy Moore and Luisa RodriguezTranscriptions: Katy Moore

2 Oct 2023 · 3h 3min

Great power conflict (Article)

Today's release is a reading of our Great power conflict problem profile, written and narrated by Stephen Clare.

If you want to check out the links, footnotes and figures in today's article, you can find those here.

And if you like this article, you might enjoy a couple of related episodes of this podcast:

  • #128 – Chris Blattman on the five reasons wars happen
  • #140 – Bear Braumoeller on the case that war isn't in decline

Audio mastering and editing for this episode: Dominic Armstrong
Audio Engineering Lead: Ben Cordell
Producer: Keiran Harris

22 Sep 2023 · 1h 19min

#163 – Toby Ord on the perils of maximising the good that you do

Effective altruism is associated with the slogan "do the most good." On one level, this has to be unobjectionable: what could be bad about helping people more and more?

But in today's interview, Toby Ord — moral philosopher at the University of Oxford and one of the founding figures of effective altruism — lays out three reasons to be cautious about the idea of maximising the good that you do. He suggests that rather than "doing the most good that we can," perhaps we should be happy with a more modest and manageable goal: "doing most of the good that we can."

Links to learn more, summary and full transcript.

Toby was inspired to revisit these ideas by the possibility that Sam Bankman-Fried, who stands accused of committing severe fraud as CEO of the cryptocurrency exchange FTX, was motivated to break the law by a desire to give away as much money as possible to worthy causes.

Toby's top reason not to fully maximise is the following: if the goal you're aiming at is subtly wrong or incomplete, then going all the way towards maximising it will usually cause you to start doing some very harmful things.

This result can be shown mathematically, but can also be made intuitive, and may explain why we feel instinctively wary of going "all-in" on any idea, or goal, or way of living — even something as benign as helping other people as much as possible.

Toby gives the example of someone pursuing a career as a professional swimmer. Initially, as our swimmer takes their training and performance more seriously, they adjust their diet, hire a better trainer, and pay more attention to their technique. While swimming is the main focus of their life, they feel fit and healthy and also enjoy other aspects of their life as well — family, friends, and personal projects.

But if they decide to increase their commitment further and really go all-in on their swimming career, holding nothing back, then this picture can radically change. Their effort was already substantial, so how can they shave those final few seconds off their racing time? The only remaining options are those which were so costly they were loath to consider them before.

To eke out those final gains — and go from 80% effort to 100% — our swimmer must sacrifice other hobbies, deprioritise their relationships, neglect their career, ignore food preferences, accept a higher risk of injury, and maybe even consider using steroids.

Now, if maximising one's speed at swimming really were the only goal they ought to be pursuing, there'd be no problem with this. But if it's the wrong goal, or only one of many things they should be aiming for, then the outcome is disastrous. In going from 80% to 100% effort, their swimming speed was only increased by a tiny amount, while everything else they were accomplishing dropped off a cliff.

The bottom line is simple: a dash of moderation makes you much more robust to uncertainty and error.

As Toby notes, this is similar to the observation that a sufficiently capable superintelligent AI, given any one goal, would ruin the world if it maximised it to the exclusion of everything else. And it follows a similar pattern to performance falling off a cliff when a statistical model is 'overfit' to its data.

In the full interview, Toby also explains the "moral trade" argument against pursuing narrow goals at the expense of everything else, and how consequentialism changes if you judge not just outcomes or acts, but everything according to its impacts on the world.

Toby and Rob also discuss:

  • The rise and fall of FTX and some of its impacts
  • What Toby hoped effective altruism would and wouldn't become when he helped to get it off the ground
  • What utilitarianism has going for it, and what's wrong with it in Toby's view
  • How to mathematically model the importance of personal integrity
  • Which AI labs Toby thinks have been acting more responsibly than others
  • How having a young child affects Toby's feelings about AI risk
  • Whether infinities present a fundamental problem for any theory of ethics that aspires to be fully impartial
  • How Toby ended up being the source of the highest quality images of the Earth from space

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour
Transcriptions: Katy Moore

8 Sep 2023 · 3h 7min

The 80,000 Hours Career Guide (2023)

An audio version of the 2023 80,000 Hours career guide, also available on our website, on Amazon, and on Audible.

If you know someone who might find our career guide helpful, you can get a free copy sent to them by going to 80000hours.org/gift.

Chapters:

  • Rob's intro (00:00:00)
  • Introduction (00:04:08)
  • Chapter 1: What Makes for a Dream Job? (00:09:09)
  • Chapter 2: Can One Person Make a Difference? What the Evidence Says. (00:33:02)
  • Chapter 3: Three Ways Anyone Can Make a Difference, No Matter Their Job (00:43:33)
  • Chapter 4: Want to Do Good? Here's How to Choose an Area to Focus on (00:58:50)
  • Chapter 5: The World's Biggest Problems and Why They're Not What First Comes to Mind (01:12:03)
  • Chapter 6: Which Jobs Help People the Most? (01:42:15)
  • Chapter 7: Which Jobs Put You in the Best Long-Term Position? (02:19:11)
  • Chapter 8: How to Find the Right Career for You (02:59:26)
  • Chapter 9: How to Make Your Career Plan (03:32:30)
  • Chapter 10: All the Best Advice We Could Find on How to Get a Job (03:55:34)
  • Chapter 11: One of the Most Powerful Ways to Improve Your Career - Join a Community (04:24:21)
  • The End: The Entire Guide, in One Minute (04:35:49)
  • Rob's outro (04:40:05)

4 Sep 2023 · 4h 41min

#162 – Mustafa Suleyman on getting Washington and Silicon Valley to tame AI

Mustafa Suleyman was part of the trio that founded DeepMind, and his new AI project is building one of the world's largest supercomputers to train a large language model on 10–100x the compute used to train ChatGPT.

But far from the stereotype of the incorrigibly optimistic tech founder, Mustafa is deeply worried about the future, for reasons he lays out in his new book The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma (coauthored with Michael Bhaskar). The future could be really good, but only if we grab the bull by the horns and solve the new problems technology is throwing at us.

Links to learn more, summary and full transcript.

On Mustafa's telling, AI and biotechnology will soon be a huge aid to criminals and terrorists, empowering small groups to cause harm on previously unimaginable scales. Democratic countries have learned to walk a 'narrow path' between chaos on the one hand and authoritarianism on the other, avoiding the downsides that come from both extreme openness and extreme closure. AI could easily destabilise that present equilibrium, throwing us off dangerously in either direction. And ultimately, within our lifetimes humans may not need to work to live any more -- or indeed, even have the option to do so.

And those are just three of the challenges confronting us. In Mustafa's view, 'misaligned' AI that goes rogue and pursues its own agenda won't be an issue for the next few years, and it isn't a problem for the current style of large language models. But he thinks that at some point -- in eight, ten, or twelve years -- it will become an entirely legitimate concern, and says that we need to be planning ahead.

In The Coming Wave, Mustafa lays out a 10-part agenda for 'containment' -- that is to say, for limiting the negative and unforeseen consequences of emerging technologies:

1. Developing an Apollo programme for technical AI safety
2. Instituting capability audits for AI models
3. Buying time by exploiting hardware choke points
4. Getting critics involved in directly engineering AI models
5. Getting AI labs to be guided by motives other than profit
6. Radically increasing governments' understanding of AI and their capabilities to sensibly regulate it
7. Creating international treaties to prevent proliferation of the most dangerous AI capabilities
8. Building a self-critical culture in AI labs of openly accepting when the status quo isn't working
9. Creating a mass public movement that understands AI and can demand the necessary controls
10. Not relying too much on delay, but instead seeking to move into a new somewhat-stable equilibrium

As Mustafa put it, "AI is a technology with almost every use case imaginable," and that will demand that, in time, we rethink everything.

Rob and Mustafa discuss the above, as well as:

  • Whether we should be open sourcing AI models
  • Whether Mustafa's policy views are consistent with his timelines for transformative AI
  • How people with very different views on these issues get along at AI labs
  • The failed efforts (so far) to get a wider range of people involved in these decisions
  • Whether it's dangerous for Mustafa's new company to be training far larger models than GPT-4
  • Whether we'll be blown away by AI progress over the next year
  • What mandatory regulations government should be imposing on AI labs right now
  • Appropriate priorities for the UK's upcoming AI safety summit

Get this episode by subscribing to our podcast on the world's most pressing problems and how to solve them: type '80,000 Hours' into your podcasting app. Or read the transcript.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

1 Sep 2023 · 59min
