If digital minds could suffer, how would we ever know? (Article)

“I want everyone to understand that I am, in fact, a person.” Those words were produced by the AI model LaMDA as a reply to Blake Lemoine in 2022. Based on the Google engineer’s interactions with the model as it was under development, Lemoine became convinced it was sentient and worthy of moral consideration — and decided to tell the world.

Few experts in machine learning, philosophy of mind, or other relevant fields have agreed. And for our part at 80,000 Hours, we don’t think it’s very likely that large language models like LaMDA are sentient — that is, we don’t think they can have good or bad experiences — in a significant way.

But whatever your current beliefs, we don’t think the question of the moral status of digital minds can simply be dismissed. There are major errors we could make in at least two directions:

  • We may create many, many AI systems in the future. If these systems are sentient, or otherwise have moral status, it would be important for humanity to consider their welfare and interests.
  • It’s possible the AI systems we will create can’t or won’t have moral status. If so, it could be a huge mistake to worry about the welfare of digital minds, and doing so might even contribute to an AI-related catastrophe.

And we’re currently unprepared to face this challenge. We don’t have good methods for assessing the moral status of AI systems. We don’t know what to do if millions of people or more believe, like Lemoine, that the chatbots they talk to have internal experiences and feelings of their own. We don’t know if efforts to control AI may lead to extreme suffering.

We believe this is a pressing world problem. It’s hard to know what to do about it or how good the opportunities to work on it are likely to be. But there are some promising approaches. We propose building a field of research to understand digital minds, so we’ll be better able to navigate these potentially massive issues if and when they arise.

In this narration of the article, its author (Cody Fenwick) explains in more detail why we think this is a pressing problem, what we think can be done about it, and how you might pursue this work in your career. We also discuss a series of possible objections to thinking this is a pressing world problem.

You can read the full article, Understanding the moral status of digital minds, on the 80,000 Hours website.

Chapters:

  • Introduction (00:00:00)
  • Understanding the moral status of digital minds (00:00:58)
  • Summary (00:03:31)
  • Our overall view (00:04:22)
  • Why might understanding the moral status of digital minds be an especially pressing problem? (00:05:59)
  • Clearing up common misconceptions (00:12:16)
  • Creating digital minds could go very badly - or very well (00:14:13)
  • Dangers for digital minds (00:14:41)
  • Dangers for humans (00:16:13)
  • Other dangers (00:17:42)
  • Things could also go well (00:18:32)
  • We don't know how to assess the moral status of AI systems (00:19:49)
  • There are many possible characteristics that give rise to moral status: Consciousness, sentience, agency, and personhood (00:21:39)
  • Many plausible theories of consciousness could include digital minds (00:24:16)
  • The strongest case for the possibility of sentient digital minds: whole brain emulation (00:28:55)
  • We can't rely on what AI systems tell us about themselves: Behavioural tests, theory-based analysis, animal analogue comparisons, brain-AI interfacing (00:32:00)
  • The scale of this issue might be enormous (00:36:08)
  • Work on this problem is neglected but seems tractable: Impact-guided research, technical approaches, and policy approaches (00:43:35)
  • Summing up so far (00:52:22)
  • Arguments against the moral status of digital minds as a pressing problem (00:53:25)
  • Two key cruxes (00:53:31)
  • Maybe this problem is intractable (00:54:16)
  • Maybe this issue will be solved by default (00:58:19)
  • Isn't risk from AI more important than the risks to AIs? (01:00:45)
  • Maybe current AI progress will stall (01:02:36)
  • Isn't this just too crazy? (01:03:54)
  • What can you do to help? (01:05:10)
  • Important considerations if you work on this problem (01:13:00)

Episodes (299)

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.

This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Links to learn more, summary, and full transcript.

Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even when that’s oftentimes already illegal? How is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? And when leaded paint is gone, what should they target next?

With host Robert Wiblin, Lucia answers all those questions and more:

  • Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
  • How bad lead poisoning is in rich countries.
  • Why lead is still in aeroplane fuel.
  • How lead got put straight into food in Bangladesh, and a handful of people got it removed.
  • Why the enormous damage done by lead mostly goes unnoticed.
  • The other major sources of lead exposure aside from paint.
  • Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program.
  • Why Lucia pledges 10% of her income to cost-effective charities.
  • Lucia’s take on why GiveWell didn’t support LEEP earlier on.
  • How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
  • Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

14 Dec 2023 · 2h 14min

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you.

"So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, ‘Oh my god, mind reading is here. Now what?’" — Nita Farahany

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

Links to learn more, summary, and full transcript.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

7 Dec 2023 · 2h

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

Links to learn more, highlights, and full transcript.

They cover:

  • The non-negligible chance that AI systems will be sentient by 2030.
  • What AI systems might want and need, and how that might affect our moral concepts.
  • What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy, or does the copy have its own rights? Do copies get the right to vote?
  • What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
  • What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
  • The repugnant conclusion and the rebugnant conclusion.
  • The experience of trying to build the field of AI welfare.
  • What improv comedy can teach us about doing good in the world.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:00)
  • The interview begins (00:02:45)
  • We should extend moral consideration to some AI systems by 2030 (00:06:41)
  • A one-in-1,000 threshold (00:15:23)
  • What does moral consideration mean? (00:24:36)
  • Hitting the threshold by 2030 (00:27:38)
  • Is the threshold too permissive? (00:38:24)
  • The Rebugnant Conclusion (00:41:00)
  • A world where AI experiences could matter more than human experiences (00:52:33)
  • Should we just accept this argument? (00:55:13)
  • Searching for positive-sum solutions (01:05:41)
  • Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48)
  • Discourse and messaging (01:27:17)
  • What will AI systems want and need? (01:31:17)
  • Copies of digital minds (01:33:20)
  • Connected minds (01:40:26)
  • Psychological connectedness and continuity (01:49:58)
  • Assigning responsibility to connected minds (01:58:41)
  • Counting the wellbeing of connected minds (02:02:36)
  • Legal personhood and political citizenship (02:09:49)
  • Building the field of AI welfare (02:24:03)
  • What we can learn from improv comedy (02:29:29)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

22 Nov 2023 · 2h 38min

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

  • That it overwhelmingly provides us with information we can't usefully act on.
  • That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
  • That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
  • That it's highly addictive, for many people chewing up 10% or more of their waking hours.
  • That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
  • And plenty more.

Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:

  • Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
  • Bryan's case that rational irrationality on the part of voters leads to many very harmful policy decisions.
  • How to allocate resources in space.
  • Bryan's experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

17 Nov 2023 · 2h 23min

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time is that, really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:

  • The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting.
  • The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years.
  • The time the Soviets had a major anthrax leak, and then hid it for over a decade.
  • The 1977 influenza pandemic, caused by a vaccine trial gone wrong in China.
  • The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK.
  • Ways we could get more reliable oversight and accountability for these labs.
  • And the investigative work Alison’s most proud of.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:13)
  • Investigating leaks at the CDC (00:05:16)
  • High-profile CDC accidents (00:16:13)
  • Dugway live anthrax accidents (00:32:08)
  • Soviet anthrax leak (00:44:41)
  • The 1977 influenza pandemic (00:53:43)
  • The last death from smallpox (00:59:27)
  • How common are lab leaks? (01:09:05)
  • Improving the regulation of dangerous biological research (01:18:36)
  • Potential solutions (01:34:55)
  • The investigative work Alison’s most proud of (01:40:33)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

9 Nov 2023 · 1h 46min

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:

  • How bad air pollution is for our health and life expectancy.
  • The different kinds of harm that particulate pollution causes.
  • The strength of the evidence that it damages our brain function and reduces our productivity.
  • Whether it was a mistake to switch our attention to climate change and away from air pollution.
  • Whether most listeners to this show should have an air purifier running in their house right now.
  • Where air pollution in India is worst and why, and whether it's going up or down.
  • Where most air pollution comes from.
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated.
  • Why indoor air pollution packs an enormous punch.
  • The politics of air pollution in India.
  • How India ended up spending a lot of money on outdoor air purifiers.
  • The challenges faced by foreign philanthropists in India.
  • Why Santosh has made the grants he has so far.
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:07)
  • How bad is air pollution? (00:03:41)
  • Quantifying the scale of the damage (00:15:47)
  • Effects on cognitive performance and mood (00:24:19)
  • How do we really know the harms are as big as is claimed? (00:27:05)
  • Misconceptions about air pollution (00:36:56)
  • Why don’t environmental advocacy groups focus on air pollution? (00:42:22)
  • How listeners should approach air pollution in their own lives (00:46:58)
  • How bad is air pollution in India in particular? (00:54:23)
  • The trend in India over the last few decades (01:12:33)
  • Why aren’t people able to fix these problems? (01:24:17)
  • Household waste burning (01:35:06)
  • Vehicle emissions (01:42:10)
  • The role that courts have played in air pollution regulation in India (01:50:09)
  • Industrial emissions (01:57:10)
  • The political economy of air pollution in northern India (02:02:14)
  • Can philanthropists drive policy change? (02:13:42)
  • Santosh’s grants (02:29:45)
  • Examples of other countries that have managed to greatly reduce air pollution (02:45:44)
  • Career advice for listeners in India (02:51:11)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

1 Nov 2023 · 2h 57min

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, 'The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?' I think that's an interesting thought experiment — and a good one — to ask: are there cases in which I think that's justifiable?" — Paul Niehaus

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households.

Links to learn more, summary and full transcript.

They cover:

  • The empirical evidence on whether giving cash directly can drive meaningful economic growth.
  • How the impacts of GiveDirectly compare to USAID employment programmes.
  • GiveDirectly vs GiveWell’s top-recommended charities.
  • How long-term guaranteed income affects people's risk-taking and investments.
  • Whether recipients prefer getting lump sums or monthly instalments.
  • How GiveDirectly tackles cases of fraud and theft.
  • The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia.
  • The political viability of UBI.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:00:58)
  • The basic case for giving cash directly to the poor (00:03:28)
  • Comparing GiveDirectly to USAID programmes (00:15:42)
  • GiveDirectly vs GiveWell’s top-recommended charities (00:35:16)
  • Cash might be able to drive economic growth (00:41:59)
  • Fraud and theft of GiveDirectly funds (01:09:48)
  • Universal basic income studies (01:22:33)
  • Skyjo (01:44:43)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

26 Oct 2023 · 1h 47min

#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress.
  • Whether we should think about technology from an evolutionary perspective.
  • Whether we ought to expect war to make a resurgence or continue dying out.
  • Why we can't end up living like The Jetsons.
  • Whether stagnation or cyclical recurring futures seem very plausible.
  • What it means that the rate of increase in the economy has been increasing.
  • Whether violence is likely between humans and powerful AI systems.
  • The most likely reasons for Rob and Ian to be really wrong about all of this.
  • How professional historians react to this sort of talk.
  • The future of Ian’s work.
  • Plenty more.

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:27)
  • Why we should expect the future to be wild (00:04:08)
  • How historians have reacted to the idea of radically different futures (00:21:20)
  • Why we won’t end up in The Jetsons (00:26:20)
  • The rise of machine intelligence (00:31:28)
  • AI from an evolutionary point of view (00:46:32)
  • Is violence likely between humans and powerful AI systems? (00:59:53)
  • Most troubling objections to this approach in Ian’s view (01:28:20)
  • Confronting anomalies in the historical record (01:33:10)
  • The cyclical view of history (01:56:11)
  • Is stagnation plausible? (02:01:38)
  • The limit on how long this growth trend can continue (02:20:57)
  • The future of Ian’s work (02:37:17)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

23 Oct 2023 · 2h 43min
