#194 – Vitalik Buterin on defensive acceleration and how to regulate AI when you fear government
80,000 Hours Podcast · 26 July 2024

"If you’re a power that is an island and that goes by sea, then you’re more likely to do things like valuing freedom, being democratic, being pro-foreigner, being open-minded, being interested in trade. If you are on the Mongolian steppes, then your entire mindset is kill or be killed, conquer or be conquered … the breeding ground for basically everything that all of us consider to be dystopian governance. If you want more utopian governance and less dystopian governance, then find ways to basically change the landscape, to try to make the world look more like mountains and rivers and less like the Mongolian steppes." —Vitalik Buterin

Can ‘effective accelerationists’ and AI ‘doomers’ agree on a common philosophy of technology? Common sense says no. But programmer and Ethereum cofounder Vitalik Buterin showed otherwise with his essay “My techno-optimism,” which both camps agreed was basically reasonable.

Links to learn more, highlights, video, and full transcript.

Seeing his social circle divided and fighting, Vitalik hoped to write a careful synthesis of the best ideas from both the optimists and the apprehensive.

Accelerationists are right: most technologies leave us better off, the human cost of delaying further advances can be dreadful, and centralising control in government hands often ends disastrously.

But the fearful are also right: some technologies are important exceptions, AGI has an unusually high chance of being one of those, and there are options to advance AI in safer directions.

The upshot? Defensive acceleration: humanity should run boldly but also intelligently into the future — speeding up technology to get its benefits, but preferentially developing ‘defensive’ technologies that lower systemic risks, permit safe decentralisation of power, and help both individuals and countries defend themselves against aggression and domination.

Entrepreneur First is running a defensive acceleration incubation programme with $250,000 of investment. If these ideas resonate with you, learn about the programme and apply by August 2, 2024. You don’t need a business idea yet — just the hustle to start a technology company.

In addition to all of that, host Rob Wiblin and Vitalik discuss:

  • AI regulation disagreements being less about AI in particular, and more about whether you’re typically more scared of anarchy or totalitarianism.
  • Vitalik’s updated p(doom).
  • Whether the social impact of blockchain and crypto has been a disappointment.
  • Whether humans can merge with AI, and if that’s even desirable.
  • The most valuable defensive technologies to accelerate.
  • How to trustlessly identify what everyone will agree is misinformation.
  • Whether AGI is offence-dominant or defence-dominant.
  • Vitalik’s updated take on effective altruism.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:00:56)
  • The interview begins (00:04:47)
  • Three different views on technology (00:05:46)
  • Vitalik’s updated probability of doom (00:09:25)
  • Technology is amazing, and AI is fundamentally different from other tech (00:15:55)
  • Fear of totalitarianism and finding middle ground (00:22:44)
  • Should AI be more centralised or more decentralised? (00:42:20)
  • Humans merging with AIs to remain relevant (01:06:59)
  • Vitalik’s “d/acc” alternative (01:18:48)
  • Biodefence (01:24:01)
  • Pushback on Vitalik’s vision (01:37:09)
  • How much do people actually disagree? (01:42:14)
  • Cybersecurity (01:47:28)
  • Information defence (02:01:44)
  • Is AI more offence-dominant or defence-dominant? (02:21:00)
  • How Vitalik communicates among different camps (02:25:44)
  • Blockchain applications with social impact (02:34:37)
  • Rob’s outro (03:01:00)

Producer and editor: Keiran Harris
Audio engineering team: Ben Cordell, Simon Monsour, Milo McGuire, and Dominic Armstrong
Transcriptions: Katy Moore

Episodes (293)

#81 Classic episode - Ben Garfinkel on scrutinising classic AI risk arguments

Rebroadcast: this episode was originally released in July 2020.

80,000 Hours, along with many other members of the effective altruism movement, has argued that helping to positively shape the development of artificial intelligence may be one of the best ways to have a lasting, positive impact on the long-term future. Millions of dollars in philanthropic spending, as well as lots of career changes, have been motivated by these arguments.

Today’s guest, Ben Garfinkel, Research Fellow at Oxford’s Future of Humanity Institute, supports the continued expansion of AI safety as a field and believes working on AI is among the very best ways to have a positive impact on the long-term future. But he also believes the classic AI risk arguments have been subject to insufficient scrutiny given this level of investment.

In particular, the case for working on AI if you care about the long-term future has often been made on the basis of concern about AI accidents: it’s actually quite difficult to design systems that you can feel confident will behave the way you want them to in all circumstances. Nick Bostrom wrote the most fleshed-out version of the argument in his book Superintelligence. But Ben reminds us that, apart from Bostrom’s book and essays by Eliezer Yudkowsky, there’s very little existing writing on existential accidents.

Links to learn more, summary and full transcript.

There have also been very few skeptical experts who have actually sat down and fully engaged with it, writing down point by point where they disagree or where they think the mistakes are. This means that Ben has probably scrutinised classic AI risk arguments as carefully as almost anyone else in the world.

He thinks the arguments for existential accidents often rely on fuzzy, abstract concepts like optimisation power, general intelligence, or goals, and on toy thought experiments. And he doesn’t think it’s clear we should take these as a strong source of evidence.

Ben’s also concerned that these scenarios often involve massive jumps in the capabilities of a single system, but it’s really not clear that we should expect such jumps or find them plausible.

These toy examples also focus on the idea that, because human preferences are so nuanced and so hard to state precisely, it should be quite difficult to get a machine to understand how to obey them. But Ben points out that in machine learning we can already train systems to engage in behaviours that are quite nuanced, even though we can’t specify them precisely. If AI systems can recognise faces from images and fly helicopters, why don’t we think they’ll be able to understand human preferences?

Despite these concerns, Ben is still fairly optimistic about the value of working on AI safety or governance. He doesn’t think there are any slam dunks for improving the future, so the fact that there are at least plausible pathways for impact by working on AI safety and AI governance, in addition to it still being a very neglected area, puts it head and shoulders above most areas you might choose to work in.
This is the second episode hosted by Howie Lempel, and he and Ben cover, among many other things:

  • The threat of AI systems increasing the risk of permanently damaging conflict or collapse
  • The possibility of permanently locking in a positive or negative future
  • Contenders for types of advanced systems
  • What role AI should play in the effective altruism portfolio

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript for this episode: Zakee Ulhaq

9 January 2023 · 2h 37min

#83 Classic episode - Jennifer Doleac on preventing crime without police and prisons

Rebroadcast: this episode was originally released in July 2020.

Today’s guest, Jennifer Doleac — Associate Professor of Economics at Texas A&M University, and Director of the Justice Tech Lab — is an expert on empirical research into policing, law, and incarceration. In this extensive interview, she highlights three ways to effectively prevent crime that don’t require police or prisons and the human toll they bring with them: better street lighting, cognitive behavioral therapy, and lead reduction.

One of Jennifer’s papers used switches into and out of daylight saving time as a ‘natural experiment’ to measure the effect of light levels on crime. One day the sun sets at 5pm; the next day it sets at 6pm. When that evening hour is dark instead of light, robberies during it roughly double.

Links to sources for the claims in these show notes, other resources to learn more, the full blog post, and a full transcript.

The idea here is that if you try to rob someone in broad daylight, they might see you coming, and witnesses might later be able to identify you. You’re just more likely to get caught. You might think: “Well, people will just commit crime in the morning instead.” But it looks like criminals aren’t early risers, and that doesn’t happen.

On her unusually rigorous podcast Probable Causation, Jennifer spoke to one of the authors of a related study, in which very bright streetlights were randomly added to some public housing complexes but not others. They found the lights reduced outdoor night-time crime by 36%, at little cost. The next best thing to sunlight is human light, so just installing more streetlights might be one of the easiest ways to cut crime, without having to hassle or punish anyone.

The second approach is cognitive behavioral therapy (CBT), in which you’re taught to slow down your decision-making and think through your assumptions before acting. In a randomised controlled trial run in schools and juvenile detention facilities in Chicago, the kids assigned to get CBT were followed over time and compared with those who were not. The CBT course reduced rearrest rates by a third, and lowered the likelihood of a child returning to a juvenile detention facility by 20%. Jennifer says the program isn’t that expensive, and the benefits are massive. Everyone would probably benefit from being able to talk through their problems, but the gains are especially large for people who’ve grown up with the trauma of violence in their lives.

Finally, Jennifer thinks that reducing lead levels might be the best buy of all in crime prevention. There is really compelling evidence that lead not only increases crime, but also dramatically reduces educational outcomes.

In today’s conversation, Rob and Jennifer also cover, among many other things:

  • Misconduct, hiring practices, and accountability among US police
  • Procedural justice training
  • Overrated policy ideas
  • Policies to try to reduce racial discrimination
  • The effects of DNA databases
  • Diversity in economics
  • The quality of social science research

Get this episode by subscribing: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcript for this episode: Zakee Ulhaq

4 January 2023 · 2h 17min

#143 – Jeffrey Lewis on the most common misconceptions about nuclear weapons

America aims to avoid nuclear war by relying on the principle of ‘mutually assured destruction,’ right? Wrong. Or at least... not officially. As today’s guest — Jeffrey Lewis, founder of Arms Control Wonk and professor at the Middlebury Institute of International Studies — explains, in its official ‘OPLANs’ (military operation plans), the US is committed to ‘dominating’ in a nuclear war with Russia. How would they do that? “That is redacted.”

Links to learn more, summary and full transcript.

We invited Jeffrey to come on the show to lay out what we and our listeners are most likely to be misunderstanding about nuclear weapons, the nuclear posture of major powers, and his field as a whole, and he did not disappoint.

As Jeffrey tells it, ‘mutually assured destruction’ was a slur used to criticise those who wanted to limit the 1960s arms buildup, and was never accepted as a matter of policy in any US administration. But isn’t it still the de facto reality? Yes and no.

Jeffrey is a specialist on the nuts and bolts of bureaucratic and military decision-making in real-life situations. He suspects that at the start of their term, presidents get a briefing about the US’ plan to prevail in a nuclear war and conclude that “it’s freaking madness.” They say to themselves that whatever these silly plans may say, they know a nuclear war cannot be won, so they just won’t use the weapons.

But Jeffrey thinks that’s a big mistake. Yes, in a calm moment presidents can resist pressure from advisors and generals. But that idea of ‘winning’ a nuclear war is in all the plans. Staff have been hired because they believe in those plans. It’s what the generals and admirals have all prepared for.

What matters is the ‘not calm moment’: the 3AM phone call to tell the president that ICBMs might hit the US in eight minutes — the same week Russia invades a neighbour or China invades Taiwan. Is it a false alarm? Should they retaliate before their land-based missile silos are hit? There are only minutes to decide.

Jeffrey points out that in emergencies, presidents have repeatedly found themselves railroaded into actions they didn’t want to take because of how information and options were processed and presented to them. In the heat of the moment, it’s natural to reach for the plan you’ve prepared — however mad it might sound.
In this spicy conversation, Jeffrey fields the most burning questions from Rob and the audience, in the process explaining:

  • Why inter-service rivalry is one of the biggest constraints on US nuclear policy
  • Two times the US sabotaged nuclear nonproliferation among great powers
  • How his field uses jargon to exclude outsiders
  • How the US could prevent the revival of mass nuclear testing by the great powers
  • Why nuclear deterrence relies on the possibility that something might go wrong
  • Whether ‘salami tactics’ render nuclear weapons ineffective
  • The time the Navy and Air Force switched views on how to wage a nuclear war, just when it would allow *them* to have the most missiles
  • The problems that arise when you won’t talk to people you think are evil
  • Why missile defences are politically popular despite being strategically foolish
  • How open source intelligence can prevent arms races
  • And much more.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:49)
  • Misconceptions in the effective altruism community (00:05:42)
  • Nuclear deterrence (00:17:36)
  • Dishonest rituals (00:28:17)
  • Downsides of generalist research (00:32:13)
  • “Mutual assured destruction” (00:38:18)
  • Budgetary considerations for competing parts of the US military (00:51:53)
  • Where the effective altruism community can potentially add the most value (01:02:15)
  • Gatekeeping (01:12:04)
  • Strengths of the nuclear security community (01:16:14)
  • Disarmament (01:26:58)
  • Nuclear winter (01:38:53)
  • Attacks against US allies (01:41:46)
  • Most likely weapons to get used (01:45:11)
  • The role of moral arguments (01:46:40)
  • Salami tactics (01:52:01)
  • Jeffrey's disagreements with Thomas Schelling (01:57:00)
  • Why did it take so long to get nuclear arms agreements? (02:01:11)
  • Detecting secret nuclear facilities (02:03:18)
  • Where Jeffrey would give $10M in grants (02:05:46)
  • The importance of archival research (02:11:03)
  • Jeffrey's policy ideas (02:20:03)
  • What should the US do regarding China? (02:27:10)
  • What should the US do regarding Russia? (02:31:42)
  • What should the US do regarding Taiwan? (02:35:27)
  • Advice for people interested in working on nuclear security (02:37:23)
  • Rob’s outro (02:39:13)

Producer: Keiran Harris
Audio mastering: Ben Cordell
Transcriptions: Katy Moore

29 December 2022 · 2h 40min

#142 – John McWhorter on key lessons from linguistics, the virtue of creoles, and language extinction

John McWhorter is a linguistics professor at Columbia University specialising in research on creole languages. He’s also a content-producing machine, never afraid to give his frank opinion on anything and everything. On top of his academic work he’s also written 22 books, produced five online university courses, hosts one and a half podcasts, and now writes a regular New York Times op-ed column.

  • Links to learn more, summary, and full transcript
  • Video version of the interview
  • Lecture: Why the world looks the same in any language

Our show is mostly about the world’s most pressing problems and what you can do to solve them. But what’s the point of hosting a podcast if you can’t occasionally just talk about something fascinating with someone whose work you appreciate? So today, just before the holidays, we’re sharing this interview with John about language and linguistics — including what we think are some of the most important things everyone ought to know about those topics. We ask him:

  • Can you communicate faster in some languages than others, or is there some constraint that prevents that?
  • Does learning a second or third language make you smarter or not?
  • Can a language decay and get worse at communicating what people want to say?
  • If children aren’t taught a language, how many generations does it take them to invent a fully fledged one of their own?
  • Did Shakespeare write in a foreign language, and if so, should we translate his plays?
  • How much does language really shape the way we think?
  • Are creoles the best languages in the world — languages that ideally we would all speak?
  • What would be the optimal number of languages globally?
  • Does trying to save dying languages do their speakers a favour, or is it more of an imposition?
  • Should we bother to teach foreign languages in UK and US schools?
  • Is it possible to save the important cultural aspects embedded in a dying language without saving the language itself?
  • Will AI models speak a language of their own in the future, one that humans can’t understand but which better serves the tradeoffs AI models need to make?

We then put some of these questions to ChatGPT itself, asking it to play the role of a linguistics professor at Columbia University.

We’ve also added John’s talk “Why the World Looks the Same in Any Language” to the end of this episode. So stick around after the credits!

And if you’d rather see Rob and John’s facial expressions or beautiful high cheekbones while listening to this conversation, you can watch the video of the full conversation here.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Ben Cordell
Video editing: Ryan Kessler
Transcriptions: Katy Moore

20 December 2022 · 1h 47min

#141 – Richard Ngo on large language models, OpenAI, and striving to make the future go well

Large language models like GPT-3, and now ChatGPT, are neural networks trained on a large fraction of all text available on the internet to do one thing: predict the next word in a passage. This simple technique has led to something extraordinary — black boxes able to write TV scripts, explain jokes, produce satirical poetry, answer common factual questions, argue sensibly for political positions, and more. Every month their capabilities grow.

But do they really ‘understand’ what they’re saying, or do they just give the illusion of understanding?

Today’s guest, Richard Ngo, thinks that in the most important sense they understand many things. Richard is a researcher at OpenAI — the company that created ChatGPT — who works to foresee where AI advances are going and develop strategies that will keep these models from ‘acting out’ as they become more powerful, are deployed, and are ultimately given power in society.

Links to learn more, summary and full transcript.

One way to think about ‘understanding’ is as a subjective experience. Whether it feels like something to be a large language model is an important question, but one we currently have no way to answer. However, as Richard explains, another way to think about ‘understanding’ is as a functional matter. If you really understand an idea, you’re able to use it to reason and draw inferences in new situations. And that kind of understanding is observable and testable.

Richard argues that language models are developing sophisticated representations of the world which can be manipulated to draw sensible conclusions — maybe not so different from what happens in the human mind. And experiments have found that, as models get more parameters and are trained on more data, these types of capabilities consistently improve.

We might feel reluctant to say a computer understands something the way that we do. But if it walks like a duck and it quacks like a duck, we should consider that maybe we have a duck, or at least something sufficiently close to a duck that it doesn’t matter.

In today’s conversation we discuss the above, as well as:

  • Could speeding up AI development be a bad thing?
  • The balance between excitement and fear when it comes to AI advances
  • Why OpenAI focuses its efforts where it does
  • Common misconceptions about machine learning
  • How many computer chips it might require to be able to do most of the things humans do
  • How Richard understands the ‘alignment problem’ differently than other people
  • Why ‘situational awareness’ may be a key concept for understanding the behaviour of AI models
  • What work to positively shape the development of AI Richard is and isn’t excited about
  • The AGI Safety Fundamentals course that Richard developed to help people learn more about this field

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type 80,000 Hours into your podcasting app.

Producer: Keiran Harris
Audio mastering: Milo McGuire and Ben Cordell
Transcriptions: Katy Moore

13 December 2022 · 2h 44min

My experience with imposter syndrome — and how to (partly) overcome it (Article)

Today’s release is a reading of our article My experience with imposter syndrome — and how to (partly) overcome it, written and narrated by Luisa Rodriguez. If you want to check out the links, footnotes, and figures in today’s article, you can find those here.

And if you like this article, you’ll probably enjoy episode #100 of this show: Having a successful career with depression, anxiety, and imposter syndrome.

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app.

Producer: Keiran Harris
Audio mastering and editing for this episode: Milo McGuire

8 December 2022 · 44min

Rob's thoughts on the FTX bankruptcy

In this episode, usual host of the show Rob Wiblin gives his thoughts on the recent collapse of FTX. Click here for an official 80,000 Hours statement.

And here are links to some potentially relevant 80,000 Hours pieces:

  • Episode #24 of this show – Stefan Schubert on why it’s a bad idea to break the rules, even if it’s for a good cause
  • Is it ever OK to take a harmful job in order to do more good? An in-depth analysis
  • What are the 10 most harmful jobs?
  • Ways people trying to do good accidentally make things worse, and how to avoid them

23 November 2022 · 5min

#140 – Bear Braumoeller on the case that war isn't in decline

Is war in long-term decline? Steven Pinker’s The Better Angels of Our Nature brought this previously obscure academic question to the centre of public debate, and pointed to rates of death in war to argue energetically that war is on the way out.

But that idea divides war scholars and statisticians, and so Better Angels has prompted a spirited debate, with datasets and statistical analyses exchanged back and forth year after year. The lack of consensus has left a somewhat bewildered public (including host Rob Wiblin) unsure quite what to believe.

Today’s guest, professor of political science Bear Braumoeller, is one of the scholars who believes we lack convincing evidence that warlikeness is in long-term decline. He collected the analysis that led him to that conclusion in his 2019 book, Only the Dead: The Persistence of War in the Modern Age.

Links to learn more, summary and full transcript.

The question is of great practical importance. The US and PRC are entering a period of renewed great power competition, with Taiwan as a potential trigger for war, and Russia is once more invading and attempting to annex the territory of its neighbours.

If war has been going out of fashion since the start of the Enlightenment, we might console ourselves that, however nerve-wracking these present circumstances may feel, modern culture will throw up powerful barriers to another world war. But if we’re as war-prone as we ever have been, one need only inspect the record of the 20th century to recoil in horror at what might await us in the 21st.

Bear argues that the second reaction is the appropriate one. The world has gone up in flames many times through history, with roughly 0.5% of the population dying in the Napoleonic Wars, 1% in World War I, 3% in World War II, and perhaps 10% during the Mongol conquests. And with no reason to think similar catastrophes are any less likely today, complacency could lead us to sleepwalk into disaster.

He gets to this conclusion primarily by analysing the datasets of the decades-old Correlates of War project, which aspires to track all interstate conflicts and battlefield deaths since 1815. In Only the Dead, he chops up and inspects this data dozens of different ways, to test whether there are any shifts over time which seem larger than what could be explained by chance variation alone.

In a nutshell, Bear simply finds no general trend in either direction from 1815 through today. It seems like, as philosopher George Santayana lamented in 1922, “only the dead have seen the end of war.”

In today’s conversation, Bear and Rob discuss all of the above in more detail than even a usual 80,000 Hours podcast episode, as well as:

  • Why haven’t modern ideas about the immorality of violence led to the decline of war, when it’s such a natural thing to expect?
  • What would Bear’s critics say in response to all this?
  • What do the optimists get right?
  • How does one do proper statistical tests for events that are clumped together, like war deaths?
  • Why are deaths in war so concentrated in a handful of the most extreme events?
  • Did the ideas of the Enlightenment promote nonviolence, on balance?
  • Were early states more or less violent than groups of hunter-gatherers?
  • If Bear is right, what can be done?
  • How did the ‘Concert of Europe’ or ‘Bismarckian system’ maintain peace in the 19th century?
  • Which wars are remarkable but largely unknown?
Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:03:32)
  • Only the Dead (00:06:28)
  • The Enlightenment (00:16:47)
  • Democratic peace theory (00:26:22)
  • Is religion a key driver of war? (00:29:27)
  • International orders (00:33:07)
  • The Concert of Europe (00:42:15)
  • The Bismarckian system (00:53:43)
  • The current international order (00:58:16)
  • The Better Angels of Our Nature (01:17:30)
  • War datasets (01:32:03)
  • Seeing patterns in data where none exist (01:45:32)
  • Change-point analysis (01:49:33)
  • Rates of violent death throughout history (01:54:32)
  • War initiation (02:02:55)
  • Escalation (02:17:57)
  • Getting massively different results from the same data (02:28:38)
  • How worried we should be (02:34:07)
  • Most likely ways Only the Dead is wrong (02:36:25)
  • Astonishing smaller wars (02:40:39)

Producer: Keiran Harris
Audio mastering: Ryan Kessler
Transcriptions: Katy Moore

8 November 2022 · 2h 47min
