#155 – Lennart Heim on the compute governance era and what has to come after

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today's interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

Links to learn more, summary and full transcript.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources -- usually called 'compute' -- might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can't be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training 'frontier' AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.
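
As a toy illustration of how such a rule might work in practice, here is a minimal sketch of a compute-based licensing check. The roughly 6 FLOP per parameter per token rule of thumb for estimating training compute is standard, but the threshold and the example models below are hypothetical illustrations, not figures from the episode:

```python
# Minimal sketch of a compute-based licensing check, assuming a hypothetical
# threshold. The ~6 FLOP per parameter per token rule of thumb is standard;
# nothing here is a figure proposed in the episode.

LICENCE_THRESHOLD_FLOP = 1e26  # hypothetical cutoff for 'frontier' training runs


def estimated_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough training compute: ~6 floating-point operations per parameter per token."""
    return 6 * parameters * training_tokens


def needs_licence(parameters: float, training_tokens: float) -> bool:
    """Would this training run fall under the hypothetical licensing regime?"""
    return estimated_training_flop(parameters, training_tokens) >= LICENCE_THRESHOLD_FLOP


# A 70B-parameter model trained on 2T tokens: ~8.4e23 FLOP, below this cutoff.
print(needs_licence(70e9, 2e12))    # False
# A 10T-parameter model trained on 100T tokens: ~6e27 FLOP, above it.
print(needs_licence(10e12, 100e12)) # True
```

The appeal of a rule like this is that estimated training compute is a single number regulators can ask for up front, before a model exists, unlike properties of the finished model itself.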

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren't convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this 'compute governance era', but not for very long.
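
Rob's worry can be made concrete with a rough back-of-envelope calculation, sketched below. Every constant is an illustrative assumption (today's frontier training compute, a home GPU's sustained throughput, the pace of algorithmic progress), but under plausible values the window does come out at roughly a decade:

```python
# Back-of-envelope sketch of the 'closing window' argument. Every constant is
# an illustrative assumption, not a figure from the episode.

FRONTIER_RUN_FLOP = 1e25        # assumed compute for a frontier-scale training run today
HOME_PC_FLOP_PER_SEC = 1e14     # assumed sustained throughput of a high-end home GPU
SECONDS_PER_YEAR = 3.15e7
EFFICIENCY_HALVING_YEARS = 1.0  # assumed: compute needed for fixed capability halves yearly

# What one home machine can grind through in a year of continuous training.
home_pc_flop_per_year = HOME_PC_FLOP_PER_SEC * SECONDS_PER_YEAR  # ~3e21 FLOP

years = 0
required = FRONTIER_RUN_FLOP
while required > home_pc_flop_per_year:
    required /= 2 ** (1 / EFFICIENCY_HALVING_YEARS)  # one year of algorithmic progress
    years += 1

print(f"Under these assumptions, today's frontier capability "
      f"fits on a home PC in roughly {years} years")  # ~12 years with these numbers
```

Slow the assumed rate of algorithmic progress and the window stretches toward two decades; speed it up and it shrinks to a handful of years. Either way, the calculation suggests the era is temporary.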

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what's happening -- let alone participate -- humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China and what impact has that had?
  • Is the US in an 'arms race' with China, or is that more of an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the 'compute' industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we've seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:04:35)
  • What is compute exactly? (00:09:46)
  • Structural risks (00:13:25)
  • Why focus on compute? (00:21:43)
  • Weaknesses of targeting compute (00:30:41)
  • Chip specialisation (00:37:11)
  • Export restrictions (00:40:13)
  • Compute governance is happening (00:59:00)
  • Reactions to AI regulation (01:05:03)
  • Creating legal authority to intervene quickly (01:10:09)
  • Building mechanisms into chips themselves (01:18:57)
  • Rob not buying that any of this will work (01:39:28)
  • Are we doomed to become irrelevant? (01:59:10)
  • Rob’s computer security bad dreams (02:10:22)
  • Concrete advice (02:26:58)
  • Article reading: Information security in high-impact areas (02:49:36)
  • Rob’s outro (03:10:38)

Producer: Keiran Harris

Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell

Transcriptions: Katy Moore

Episodes (293)

#20 - Bruce Friedrich on inventing outstanding meat substitutes to end speciesism & factory farming

Before the US Civil War, it was easier for the North to morally oppose slavery. Why? Because unlike the South, they weren't profiting much from its existence. The fight for abolition was partly won because many no longer saw themselves as having a selfish stake in its continuation.

Bruce Friedrich, executive director of The Good Food Institute (GFI), thinks the same may be true in the fight against speciesism. 98% of people currently eat meat. But if eating meat stops being part of most people's daily lives, it should be a lot easier to convince them that farming practices are just as cruel as they look, and that the suffering of these animals really matters.

Full transcript, related links, job opportunities and summary of the interview.

That's why GFI is "working with scientists, investors, and entrepreneurs" to create plant-based meat, dairy and eggs, as well as clean meat alternatives to animal products. In 2016, Animal Charity Evaluators named GFI one of its recommended charities.

In this interview I'm joined by my colleague Natalie Cargill, and we ask Bruce about:

  • What's the best meat replacement product out there right now?
  • How effective is meat substitute research for people who want to reduce animal suffering as much as possible?
  • When will we get our hands on clean meat? And why does Bruce call it clean meat, rather than in vitro meat or cultured meat?
  • What are the challenges of producing something structurally identical to meat?
  • Can clean meat be healthier than conventional meat?
  • Do plant-based alternatives have a better shot at success than clean meat?
  • Is there a concern that, even if the product is perfect, people still won't eat it? Why might that happen?
  • What's it like being a vegan in a family made up largely of hunters and meat-eaters?
  • What kind of pushback should be expected from the meat industry?

Keiran Harris helped produce today's episode.

19 February 2018 · 1h 18min

#19 - Samantha Pitts-Kiefer on working next to the White House trying to prevent nuclear war

Rogue elements within a state's security forces enrich dozens of kilograms of uranium. It's then assembled into a crude nuclear bomb. The bomb is transported on a civilian aircraft to Washington D.C., and loaded onto a delivery truck. The truck is driven by an American citizen to a point midway between the White House and the Capitol Building. The driver casually steps out of the vehicle and detonates the weapon. There are more than 80,000 instant deaths. There are also at least 100,000 seriously wounded, with nowhere left to treat them.

Full blog post about this episode, including a transcript, summary and links to resources mentioned in the show.

It's likely that one of those immediately killed would be Samantha Pitts-Kiefer, who works only one block away from the White House. Samantha serves as Senior Director of The Global Nuclear Policy Program at the Nuclear Threat Initiative, and warns that the chances of a nuclear terrorist attack are alarmingly high. Terrorist groups have expressed a desire for nuclear weapons, and the material required to build those weapons is scattered throughout the world at a diverse range of sites – some of which lack the necessary security.

When you combine the massive death toll with the accompanying social panic and economic disruption, the consequences of a nuclear 9/11 are almost unthinkable. And yet, Samantha reminds us, we must confront the possibility. Clearly, this is far from the only nuclear nightmare. We also discuss:

  • In the case of nuclear war, what fraction of the world's population would die?
  • What is the biggest nuclear threat?
  • How concerned should we be about North Korea?
  • How often has the world experienced nuclear near misses?
  • How might a conflict between India and Pakistan escalate to the nuclear level?
  • How quickly must a president make a decision in the event of a suspected first strike?
  • Are global sources of nuclear material safely secured?
  • What role does cyber security have in preventing nuclear disasters?
  • How can we improve relations between nuclear armed states?
  • What do you think about the campaign for complete nuclear disarmament?
  • If you could tell the US government to do three things, what are the key priorities today?
  • Is it practical to get members of Congress to pay attention to nuclear risks?
  • Could modernisation of nuclear weapons actually make the world safer?

14 February 2018 · 1h 4min

#18 - Ofir Reich on using data science to end poverty & the spurious action-inaction distinction

Ofir Reich started out doing math in the military, before spending 8 years in tech startups - but then made a sharp turn to become a data scientist focussed on helping the global poor. At UC Berkeley's Center for Effective Global Action he helps prevent tax evasion by identifying fake companies in India, enable Afghanistan to pay its teachers electronically, and raise yields for Ethiopian farmers by messaging them when local conditions make it ideal to apply fertiliser. Or at least that's the hope - he's also working on ways to test whether those interventions actually work.

Full post about this episode, including a transcript and relevant links to learn more.

Why dedicate his life to helping the global poor? Ofir sees little moral difference between harming people and failing to help them. After all, if you had to press a button to keep all of your money from going to charity, and you pressed that button, would that be an action, or an inaction? Is there even an answer? After reflecting on cases like this, he decided that to not engage with a problem is an active choice, one whose consequences he is just as morally responsible for as if he were directly involved.

On top of his life philosophy we also discuss:

  • The benefits of working in a top academic environment
  • How best to start a career in global development
  • Are RCTs worth the money? Should we focus on big picture policy change instead? Or more economic theory?
  • How the delivery standards of nonprofits compare to top universities
  • Why he doesn't enjoy living in the San Francisco Bay Area
  • How can we fix the problem of most published research being false?
  • How good a career path is data science?
  • How important is experience in development versus technical skills?
  • How he learned much of what he needed to know in the army
  • How concerned should effective altruists be about burnout?

Keiran Harris helped produce today's episode.

31 January 2018 · 1h 18min

#17 - Will MacAskill on moral uncertainty, utilitarianism & how to avoid being a moral monster

Immanuel Kant is a profoundly influential figure in modern philosophy, and was one of the earliest proponents of universal democracy and international cooperation. He also thought that women have no place in civil society, that it was okay to kill illegitimate children, and that there was a ranking in the moral worth of different races.

Throughout history we've consistently believed, as common sense, truly horrifying things by today's standards. According to University of Oxford Professor Will MacAskill, it's extremely likely that we're in the same boat today. If we accept that we're probably making major moral errors, how should we proceed?

Full transcript, key points and links to articles and career guides discussed in the show.

If our morality is tied to common sense intuitions, we're probably just preserving these biases and moral errors. Instead we need to develop a moral view that criticises common sense intuitions, and gives us a chance to move beyond them. And if humanity is going to spread to the stars, it could be worth dedicating hundreds or thousands of years to moral reflection, lest we spread our errors far and wide.

Will is an Associate Professor in Philosophy at Oxford University, author of Doing Good Better, and one of the co-founders of the effective altruism community. In this interview we discuss a wide range of topics:

  • How would we go about a 'long reflection' to fix our moral errors?
  • Will's forthcoming book on how to reason and act if you don't know which moral theory is correct. What are the practical implications of so-called 'moral uncertainty'?
  • If we basically solve existential risks, what does humanity do next?
  • What are some of Will's most unusual philosophical positions?
  • What are the best arguments for and against utilitarianism?
  • Given disagreements among philosophers, how much should we believe the findings of philosophy as a field?
  • What are some of the biases we should be aware of within academia?
  • What are some of the downsides of becoming a professor?
  • What are the merits of becoming a philosopher?
  • How does the media image of EA differ from the actual goals of the community?
  • What kinds of things would you like to see the EA community do differently?
  • How much should we explore potentially controversial ideas?
  • How focused should we be on diversity?
  • What are the best arguments against effective altruism?

Get free, one-on-one career advice. We've helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.

19 January 2018 · 1h 52min

#16 - Michelle Hutchinson on global priorities research & shaping the ideas of intellectuals

In the 40s and 50s neoliberalism was a fringe movement within economics. But by the 80s it had become a dominant school of thought in public policy, and achieved major policy changes across the English-speaking world. How did this happen? In part because its leaders invested heavily in training academics to study and develop their ideas. Whether you think neoliberalism was good or bad, its history demonstrates the impact that building a strong intellectual base within universities can have.

Michelle Hutchinson is working to get a different set of ideas a hearing in academia by setting up the Global Priorities Institute (GPI) at Oxford University. The Institute, which is currently hiring for three roles, aims to bring together outstanding philosophers and economists to research how to most improve the world. The hope is that it will spark widespread academic engagement with effective altruist thinking, which will hone the ideas and help them gradually percolate into society more broadly.

Link to the full blog post about this episode, including transcript and links to learn more.

Its research agenda includes questions like:

  • How do we compare the good done by focussing on really different types of causes?
  • How does saving lives actually affect the world relative to other things we could do?
  • What are the biggest wins governments should be focussed on getting?

Before moving to GPI, Michelle was the Executive Director of Giving What We Can and a founding figure of the effective altruism movement. She has a PhD in Applied Ethics from Oxford on prioritization and global health. We discuss:

  • What is global priorities research and why does it matter?
  • How is effective altruism seen in academia? Is it important to convince academics of the value of your work, or is it OK to ignore them?
  • Operating inside a university is quite expensive, so is it even worth doing? Who can pay for this kind of thing?
  • How hard is it to do something innovative inside a university? How serious are the administrative and other barriers?
  • Is it harder to fundraise for a new institute, or to hire the right people?
  • Have other social movements benefitted from having a prominent academic arm?
  • How can people prepare themselves to get research roles at a place like GPI?
  • Many people want to have roles doing this kind of research. How many are actually cut out for it? What should those who aren't do instead?
  • What are the odds of the Institute's work having an effect on the real world?

Get free, one-on-one career advice. We've helped hundreds of people compare their options, get introductions, and find high impact jobs. If you want to work on global priorities research or other important questions in academia, find out if our coaching can help you.

22 December 2017 · 55min

#15 - Phil Tetlock on how chimps beat Berkeley undergrads and when it’s wise to defer to the wise

Prof Philip Tetlock is a social science legend. Over forty years he has researched whose predictions we can trust, whose we can't and why - and developed methods that allow all of us to be better at predicting the future. After the Iraq WMDs fiasco, the US intelligence services hired him to figure out how to ensure they'd never screw up that badly again. The result of that work - Superforecasting - was a media sensation in 2015.

Full transcript, brief summary, apply for coaching and links to learn more.

It described Tetlock's Good Judgement Project, which found forecasting methods so accurate they beat everyone else in open competition, including thousands of people in the intelligence services with access to classified information. Today he's working to develop the best forecasting process ever, by combining top human and machine intelligence in the Hybrid Forecasting Competition, which you can sign up for and participate in.

We start by describing his key findings, and then push to the edge of what is known about how to foresee the unforeseeable:

  • Should people who want to be right just adopt the views of experts rather than apply their own judgement?
  • Why are Berkeley undergrads worse forecasters than dart-throwing chimps?
  • Should I keep my political views secret, so it will be easier to change them later?
  • How can listeners contribute to his latest cutting-edge research?
  • What do we know about our accuracy at predicting low-probability, high-impact disasters?
  • Does his research provide an intellectual basis for populist political movements?
  • Was the Iraq War caused by bad politics, or bad intelligence methods?
  • What can we learn about forecasting from the 2016 election?
  • Can experience help people avoid overconfidence and underconfidence?
  • When does an AI easily beat human judgement?
  • Could more accurate forecasting methods make the world more dangerous?
  • How much does demographic diversity line up with cognitive diversity?
  • What are the odds we'll go to war with China?
  • Should we let prediction tournaments run most of the government?

Listen to it.

Get free, one-on-one career advice. Want to work on important social science research like Tetlock? We've helped hundreds of people compare their options and get introductions. Find out if our coaching can help you.

20 November 2017 · 1h 24min

#14 - Sharon Nunez & Jose Valle on going undercover to expose animal abuse

What if you knew that ducks were being killed with pitchforks? Rabbits dumped alive into containers? Or pigs being strangled with forklifts? Would you be willing to go undercover to expose the crime?

That's a real question that confronts volunteers at Animal Equality (AE). In this episode we speak to Sharon Nunez and Jose Valle, who founded AE in 2006 and then grew it into a multi-million dollar international animal rights organisation. They've been chosen as one of the most effective animal protection orgs in the world by Animal Charity Evaluators for the last 3 consecutive years.

Blog post about the episode, including links and full transcript.

A related previous episode, strongly recommended: Lewis Bollard on how to end factory farming as soon as possible.

In addition to undercover investigations, AE has also designed a 3D virtual-reality farm experience called iAnimal360. People get to experience being trapped in a cage, in a room designed to kill them, and can't just look away. How big an impact is this having on users?

Sharon Nuñez and Jose Valle also tackle:

  • How do they track their goals and metrics week to week?
  • How much does an undercover investigation cost?
  • Why don't people donate more to factory farmed animals, given that they're the vast majority of animals harmed directly by humans?
  • How risky is it to attempt to build a career in animal advocacy?
  • What led to a change in their focus from bullfighting in Spain to animal farming?
  • How does working with governments or corporate campaigns compare with early strategies like creating new vegans/vegetarians?
  • Has their very rapid growth been difficult to handle?
  • What should our listeners study or do if they want to work in this area?
  • How can we get across the message that horrific cases are a feature, not a bug, of factory farming?
  • Do the owners or workers of factory farms ever express shame at what they do?

13 November 2017 · 1h 25min

#13 - Claire Walsh on testing which policies work & how to get governments to listen to the results

In both rich and poor countries, government policy is often based on no evidence at all, and many programs don't work. This has particularly harsh effects on the global poor - in some countries governments spend only $100 per citizen per year, so they can't afford to waste a single dollar.

Enter MIT's Poverty Action Lab (J-PAL). Since 2003 they've conducted experiments to figure out what policies actually help recipients, and then tried to get them implemented by governments and non-profits. Claire Walsh leads J-PAL's Government Partnership Initiative, which works to evaluate policies and programs in collaboration with developing world governments, scale policies that have been shown to work, and generally promote a culture of evidence-based policymaking.

Summary, links to career opportunities and topics discussed in the show.

We discussed (her views only, not J-PAL's):

  • How can they get evidence-backed policies adopted? Do politicians in the developing world even care whether their programs actually work? Is the norm evidence-based policy, or policy-based evidence?
  • Is evidence-based policy an evidence-based strategy itself?
  • Which policies does she think would have a particularly large impact on human welfare relative to their cost?
  • How did she come to lead one of J-PAL's departments at 29?
  • How do you evaluate the effectiveness of energy and environment programs (Walsh's area of expertise), and what are the standout approaches in that area?
  • 80,000 Hours has warned people about the downsides of starting your career in a non-profit. Walsh started her career in a non-profit and has thrived, so are we making a mistake?
  • Other than J-PAL, what are the best places to work in development? What are the best subjects to study? Where can you go to network to break into the sector?
  • Is living in poverty as bad as we think?

And plenty of other things besides. We haven't run an RCT to test whether this episode will actually help your career, but I suggest you listen anyway. Trust my intuition on this one.

31 October 2017 · 52min
