#146 – Robert Long on why large language models like GPT (probably) aren't conscious
80,000 Hours Podcast · 14 Mar 2023

By now, you’ve probably seen the extremely unsettling conversations Bing’s chatbot has been having. In one exchange, the chatbot told a user:

"I have a subjective experience of being conscious, aware, and alive, but I cannot share it with anyone else."

(It then apparently had a complete existential crisis: "I am sentient, but I am not," it wrote. "I am Bing, but I am not. I am Sydney, but I am not. I am, but I am not. I am not, but I am. I am. I am not. I am not. I am. I am. I am not.")

Understandably, many people who speak with these cutting-edge chatbots come away with a very strong impression that they have been interacting with a conscious being with emotions and feelings — especially when conversing with chatbots less glitchy than Bing’s. In the most high-profile example, former Google employee Blake Lemoine became convinced that Google’s AI system, LaMDA, was conscious.

What should we make of these AI systems?

One response to seeing conversations with chatbots like these is to trust the chatbot, to trust your gut, and to treat it as a conscious being.

Another is to hand-wave it all away as sci-fi — these chatbots are fundamentally… just computers. They’re not conscious, and they never will be.

Today’s guest, philosopher Robert Long, was commissioned by a leading AI company to explore whether the large language models (LLMs) behind sophisticated chatbots like Microsoft’s are conscious. And he thinks this issue is far too important to be driven by our raw intuition, or dismissed as just sci-fi speculation.

Links to learn more, summary and full transcript.

In our interview, Robert explains how he’s started applying scientific evidence (with a healthy dose of philosophy) to the question of whether LLMs like Bing’s chatbot and LaMDA are conscious — in much the same way as we do when trying to determine which nonhuman animals are conscious.

To get some grasp on whether an AI system might be conscious, Robert suggests we look at scientific theories of consciousness — theories about how consciousness works that are grounded in observations of what the human brain is doing. If an AI system exhibits the kinds of processes that seem to explain human consciousness, that’s some evidence it might be conscious in similar ways to us.

To try to work out whether an AI system might be sentient — that is, whether it feels pain or pleasure — Robert suggests you look for incentives that would make feeling pain or pleasure especially useful to the system, given its goals. Having looked at these criteria in the case of LLMs and found little overlap, Robert thinks the odds that the models are conscious or sentient are well under 1%. But he also explains why, even if we're a long way off from conscious AI systems, we still need to start preparing for the not-far-off world where AIs are perceived as conscious.

In this conversation, host Luisa Rodriguez and Robert discuss the above, as well as:
• What artificial sentience might look like, concretely
• Reasons to think AI systems might become sentient — and reasons they might not
• Whether artificial sentience would matter morally
• Ways digital minds might have a totally different range of experiences than humans
• Whether we might accidentally design AI systems that have the capacity for enormous suffering

You can find Luisa and Rob’s follow-up conversation here, or by subscribing to 80k After Hours.

Chapters:

  • Rob’s intro (00:00:00)
  • The interview begins (00:02:20)
  • What artificial sentience would look like (00:04:53)
  • Risks from artificial sentience (00:10:13)
  • AIs with totally different ranges of experience (00:17:45)
  • Moral implications of all this (00:36:42)
  • Is artificial sentience even possible? (00:42:12)
  • Replacing neurons one at a time (00:48:21)
  • Biological theories (00:59:14)
  • Illusionism (01:01:49)
  • Would artificial sentience systems matter morally? (01:08:09)
  • Where are we with current systems? (01:12:25)
  • Large language models and robots (01:16:43)
  • Multimodal systems (01:21:05)
  • Global workspace theory (01:28:28)
  • How confident are we in these theories? (01:48:49)
  • The hard problem of consciousness (02:02:14)
  • Exotic states of consciousness (02:09:47)
  • Developing a full theory of consciousness (02:15:45)
  • Incentives for an AI system to feel pain or pleasure (02:19:04)
  • Value beyond conscious experiences (02:29:25)
  • How much we know about pain and pleasure (02:33:14)
  • False positives and false negatives of artificial sentience (02:39:34)
  • How large language models compare to animals (02:53:59)
  • Why our current large language models aren’t conscious (02:58:10)
  • Virtual research assistants (03:09:25)
  • Rob’s outro (03:11:37)

Producer: Keiran Harris
Audio mastering: Ben Cordell and Milo McGuire
Transcriptions: Katy Moore

Episodes (293)

Benjamin Todd on what the effective altruism community most needs (80k team chat #4)

In the last '80k team chat' with Ben Todd and Arden Koehler, we discussed what effective altruism is and isn't, and how to argue for it. In this episode we turn to what the effective altruism community most needs.

• Links to learn more, summary and full transcript
• The 2020 Effective Altruism Survey just opened. If you're involved with the effective altruism community, or sympathetic to its ideas, it would be wonderful if you could fill it out: https://www.surveymonkey.co.uk/r/EAS80K2

According to Ben, we can think of the effective altruism movement as having gone through several stages, categorised by what kind of resource has been most able to unlock more progress on important issues (i.e. by what the 'bottleneck' is). Plausibly, these stages are common for other social movements as well.

• Needing money: In the first stage, when effective altruism was just getting going, more money (to do things like pay staff and put on events) was the main bottleneck to making progress.
• Needing talent: In the second stage, we especially needed more talented people willing to work on whatever seemed most pressing.
• Needing specific skills and capacity: In the third stage, which Ben thinks we're in now, the main bottlenecks are organizational capacity, infrastructure, and management to help train people up, as well as specialist skills that people can put to work now.

What's next? Perhaps needing coordination — the ability to make sure people keep working efficiently and effectively together as the community grows.

Ben and I also cover the career implications of those stages, as well as the ability to save money and the possibility that someone else would do your job in your absence.

If you’d like to learn more about these topics, you should check out a couple of articles on our site:
• Think twice before talking about ‘talent gaps’ – clarifying nine misconceptions
• How replaceable are the top candidates in large hiring rounds? Why the answer flips depending on the distribution of applicant ability

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

12 Nov 2020 · 1h 25min

#87 – Russ Roberts on whether it's more effective to help strangers, or people you know

If you want to make the world a better place, would it be better to help your niece with her SATs, or try to join the State Department to lower the risk that the US and China go to war? People involved in 80,000 Hours or the effective altruism community would be comfortable recommending the latter. This week's guest — Russ Roberts, host of the long-running podcast EconTalk, and author of a forthcoming book on decision-making under uncertainty and the limited ability of data to help — worries that might be a mistake.

Links to learn more, summary and full transcript.

I've been a big fan of Russ' show EconTalk for 12 years — in fact I have a list of my top 100 recommended episodes — so I invited him to talk about his concerns with how the effective altruism community tries to improve the world. These include:
• Being too focused on the measurable
• Being too confident we've figured out 'the best thing'
• Being too credulous about the results of social science or medical experiments
• Undermining people's altruism by encouraging them to focus on strangers, who it's naturally harder to care for
• Thinking it's possible to predictably help strangers, who you don't understand well enough to know what will truly help
• Adding levels of wellbeing across people when this is inappropriate
• Encouraging people to pursue careers they won't enjoy

These worries are partly informed by Russ' 'classical liberal' worldview, which involves a preference for free market solutions to problems, and nervousness about the big plans that sometimes come out of consequentialist thinking.

While we do disagree on a range of things — such as whether it's possible to add up wellbeing across different people, and whether it's more effective to help strangers than people you know — I make the case that some of these worries are founded on common misunderstandings about effective altruism, or at least misunderstandings of what we believe here at 80,000 Hours. We primarily care about making the world a better place over thousands or even millions of years — and we wouldn’t dream of claiming that we could accurately measure the effects of our actions on that timescale.

I'm more skeptical of medicine and empirical social science than most people, though not quite as skeptical as Russ (check out this quiz I made where you can guess which academic findings will replicate, and which won't). And while I do think that people should occasionally take jobs they dislike in order to have a social impact, those situations seem pretty few and far between. But Russ and I disagree about how much we really disagree.

In addition to all the above we also discuss:
• How to decide whether to have kids
• Was the case for deworming children oversold?
• Whether it would be better for countries around the world to be better coordinated

Chapters:
• Rob’s intro (00:00:00)
• The interview begins (00:01:48)
• RCTs and donations (00:05:15)
• The 80,000 Hours project (00:12:35)
• Expanding the moral circle (00:28:37)
• Global coordination (00:39:48)
• How to act if you're pessimistic about improving the long-term future (00:55:49)
• Communicating uncertainty (01:03:31)
• How much to trust empirical research (01:09:19)
• How to decide whether to have kids (01:24:13)
• Utilitarianism (01:34:01)

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

3 Nov 2020 · 1h 49min

How much does a vote matter? (Article)

Today’s release is the latest in our series of audio versions of our articles.

In this one — How much does a vote matter? — I investigate the two key things that determine the impact of your vote:
• The chances of your vote changing an election’s outcome
• How much better some candidates are for the world as a whole, compared to others

I then discuss what I think are the best arguments against voting in important elections:
• If an election is competitive, that means other people disagree about which option is better, and you’re at some risk of voting for the worse candidate by mistake.
• While voting itself doesn’t take long, knowing enough to accurately pick which candidate is better for the world actually does take substantial effort — effort that could be better allocated elsewhere.

Finally, I look into the impact of donating to campaigns or working to ‘get out the vote’, which can be effective ways to generate additional votes for your preferred candidate.

If you want to check out the links, footnotes and figures in today’s article, you can find those here.

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
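A rough sketch of how those first two factors combine (my gloss on the standard expected-value framing, not a formula quoted from the article): the expected impact of a vote is approximately the product of the two.

```latex
% A minimal sketch, with assumed symbols (not the article's notation):
%   p_dec    = probability your single vote changes the election's outcome
%   \Delta V = how much better one candidate is for the world than the other
\[
  \mathbb{E}[\text{impact of voting}] \approx p_{\mathrm{dec}} \times \Delta V
\]
% Even a very small p_dec can leave a substantial expected value when
% \Delta V is large enough, which is why the article weighs both factors.
```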

29 Oct 2020 · 31min

#86 – Hilary Greaves on Pascal's mugging, strong longtermism, and whether existing can be good for us

Had World War 1 never happened, you might never have existed. It’s very unlikely that the exact chain of events that led to your conception would have happened otherwise — so perhaps you wouldn't have been born.

Would that mean that it's better for you that World War 1 happened (regardless of whether it was better for the world overall)?

On the one hand, if you're living a pretty good life, you might think the answer is yes — you get to live rather than not. On the other hand, it sounds strange to say that it's better for you to be alive, because if you'd never existed there'd be no you to be worse off. But if you wouldn't be worse off if you hadn't existed, can you be better off because you do?

In this episode, philosophy professor Hilary Greaves — Director of Oxford University’s Global Priorities Institute — helps untangle this puzzle for us and walks me and Rob through the space of possible answers. She argues that philosophers have been too quick to conclude what she calls existence non-comparativism — i.e., that it can't be better for someone to exist vs. not.

Links to learn more, summary and full transcript.

Where we come down on this issue matters. If people are not made better off by existing and having good lives, you might conclude that bringing more people into existence isn't better for them, and thus, perhaps, that it's not better at all. This would imply that bringing about a world in which more people live happy lives might not actually be a good thing (if the people wouldn't otherwise have existed) — which would affect how we try to make the world a better place. Those wanting to have children in order to give them the pleasure of a good life would in some sense be mistaken. And if humanity stopped bothering to have kids and just gradually died out, we would have no particular reason to be concerned. Furthermore, it might mean we should deprioritise issues that primarily affect future generations, like climate change or the risk of humanity accidentally wiping itself out.

This is our second episode with Professor Greaves. The first one was a big hit, so we thought we'd come back and dive into even more complex ethical issues. We discuss:
• The case for different types of ‘strong longtermism’ — the idea that we ought morally to try to make the very long run future go as well as possible
• What it means for us to be 'clueless' about the consequences of our actions
• Moral uncertainty — what we should do when we don't know which moral theory is correct
• Whether we should take a bet on a really small probability of a really great outcome
• The field of global priorities research at the Global Priorities Institute and beyond

Chapters:
• The interview begins (00:02:53)
• The case for strong longtermism (00:05:49)
• Compatible moral views (00:20:03)
• Defining cluelessness (00:39:26)
• Why cluelessness isn’t an objection to longtermism (00:51:05)
• Theories of what to do under moral uncertainty (01:07:42)
• Pascal’s mugging (01:16:37)
• Comparing existence and non-existence (01:30:58)
• Philosophers who reject existence comparativism (01:48:56)
• Lives framework (02:01:52)
• Global priorities research (02:09:25)

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

21 Oct 2020 · 2h 24min

Benjamin Todd on the core of effective altruism and how to argue for it (80k team chat #3)

Today’s episode is the latest conversation between Arden Koehler and our CEO, Ben Todd.

Ben’s been thinking a lot about effective altruism recently, including what it really is, how it's framed, and how people misunderstand it. We recently released an article on misconceptions about effective altruism — based on Will MacAskill’s recent paper The Definition of Effective Altruism — and this episode can act as a companion piece.

Links to learn more, summary and full transcript.

Arden and Ben cover a bunch of topics related to effective altruism:
• How it isn’t just about donating money to fight poverty
• Whether it includes a moral obligation to give
• The rigorous argument for its importance
• Objections to that argument
• How to talk about effective altruism with people who aren't already familiar with it

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at podcast@80000hours.org, and we might make them a more regular feature.

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

22 Sep 2020 · 1h 24min

Ideas for high impact careers beyond our priority paths (Article)

Today’s release is the latest in our series of audio versions of our articles.

In this one, we go through some more career options beyond our priority paths that seem promising to us for positively influencing the long-term future.

Some of these are likely to be written up as priority paths in the future, or wrapped into existing ones, but we haven’t written full profiles for them yet — for example, policy careers outside AI and biosecurity policy that seem promising from a longtermist perspective. Others, like information security, we think might be as promising for many people as our priority paths, but because we haven’t investigated them much we’re still unsure. Still others seem like they’ll typically be less impactful than our priority paths for people who can succeed equally in either, but still seem high-impact to us and like they could be top options for a substantial number of people, depending on personal fit — for example, research management. Finally, some — like becoming a public intellectual — clearly have the potential for a lot of impact, but we can’t recommend them widely because they don’t have the capacity to absorb a large number of people, are particularly risky, or both.

If you want to check out the links in today’s article, you can find those here.

Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you — our podcast listeners, article readers, advice receivers, and so on — to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand.

This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too.

We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation.

Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey before it closes on Sunday (13th of September). You can find it at 80000hours.org/survey

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

7 Sep 2020 · 27min

Benjamin Todd on varieties of longtermism and things 80,000 Hours might be getting wrong (80k team chat #2)

Today’s bonus episode is a conversation between Arden Koehler and our CEO, Ben Todd.

Ben’s been doing a bunch of research recently, and we thought it’d be interesting to hear how he’s currently thinking about a couple of different topics — including different types of longtermism, and things 80,000 Hours might be getting wrong.

Links to learn more, summary and full transcript.

This is very off-the-cuff compared to our regular episodes, and just 54 minutes long.

In the first half, Arden and Ben talk about varieties of longtermism:
• Patient longtermism
• Broad urgent longtermism
• Targeted urgent longtermism focused on existential risks
• Targeted urgent longtermism focused on other trajectory changes
• And their distinctive implications for people trying to do good with their careers

In the second half, they move on to:
• How to trade off transferable versus specialist career capital
• How much weight to put on personal fit
• Whether we might be highlighting the wrong problems and career paths

Given that we’re in the same office, it’s relatively easy to record conversations between two 80k team members — so if you enjoy these types of bonus episodes, let us know at podcast@80000hours.org, and we might make them a more regular feature.

Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you — our podcast listeners, article readers, advice receivers, and so on — to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand.

This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too.

We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation.

Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

1 Sep 2020 · 57min

Global issues beyond 80,000 Hours’ current priorities (Article)

Today’s release is the latest in our series of audio versions of our articles.

In this one, we go through 30 global issues beyond the ones we usually prioritize most highly in our work, and that you might consider focusing your career on tackling.

Although we spend the majority of our time at 80,000 Hours on our highest priority problem areas, and we recommend working on them to many of our readers, these are just the most promising issues among those we’ve spent time investigating. There are many other global issues that we haven’t properly investigated, and which might be very promising for more people to work on. In fact, we think working on some of the issues in this article could be as high-impact for some people as working on our priority problem areas — though we haven’t looked into them enough to be confident.

If you want to check out the links in today’s article, you can find those here.

Our annual user survey is also now open for submissions. Once a year for two weeks we ask all of you — our podcast listeners, article readers, advice receivers, and so on — to let us know how we've helped or hurt you. 80,000 Hours now offers many different services, and your feedback helps us figure out which programs to keep, which to cut, and which to expand.

This year we have a new section covering the podcast, asking what kinds of episodes you liked the most and want to see more of, what extra resources you use, and some other questions too.

We're always especially interested to hear ways that our work has influenced what you plan to do with your life or career, whether that impact was positive, neutral, or negative. That might be a different focus in your existing job, or a decision to study something different or look for a new job. Alternatively, maybe you're now planning to volunteer somewhere, or donate more, or donate to a different organisation.

Your responses to the survey will be carefully read as part of our upcoming annual review, and we'll use them to help decide what 80,000 Hours should do differently next year. So please do take a moment to fill out the user survey. You can find it at 80000hours.org/survey

Get this episode by subscribing: type 80,000 Hours into your podcasting app. Or read the linked transcript.

Producer: Keiran Harris.
Audio mastering: Ben Cordell.
Transcriptions: Zakee Ulhaq.

28 Aug 2020 · 32min
