#209 – Rose Chan Loui on OpenAI’s gambit to ditch its nonprofit

One OpenAI critic calls it “the theft of at least the millennium and quite possibly all of human history.” Are they right?

Back in 2015 OpenAI was but a humble nonprofit. That nonprofit started a for-profit, OpenAI LLC, but made sure to retain ownership and control. But that for-profit, having become a tech giant with vast staffing and investment, has grown tired of its shackles and wants to change the deal.

Facing off against it stand eight out-gunned and out-numbered part-time volunteers. Can they hope to defend the nonprofit’s interests against the overwhelming profit motives arrayed against them?

That’s the question host Rob Wiblin puts to nonprofit legal expert Rose Chan Loui of UCLA, who concludes that with a “heroic effort” and a little help from some friendly state attorneys general, they might just stand a chance.

Links to learn more, highlights, video, and full transcript.

As Rose lays out, on paper OpenAI is controlled by a nonprofit board that:

  • Can fire the CEO.
  • Would receive all profits beyond the point at which early investors have made a 100x return on their investment.
  • Is legally bound to do whatever it can to pursue its charitable purpose: “to build artificial general intelligence that benefits humanity.”

But that control is a problem for OpenAI the for-profit and its CEO Sam Altman — all the more so after the board concluded back in November 2023 that it couldn’t trust Altman and attempted to fire him (although those board members were ultimately ousted themselves after failing to adequately explain their rationale).

Nonprofit control makes it harder to attract investors, who don’t want a board stepping in just because they think what the company is doing is bad for humanity. And OpenAI the business is thirsty for as many investors as possible, because it wants to beat competitors and train the first truly general AI — able to do every job humans currently do — which is expected to cost hundreds of billions of dollars.

So, Rose explains, they plan to buy the nonprofit out. In exchange for giving up its windfall profits and the ability to fire the CEO or direct the company’s actions, the nonprofit will become a minority shareholder with reduced voting rights, and presumably transform into a normal grantmaking foundation instead.

Is this a massive bait-and-switch? A case of the tail not only wagging the dog, but grabbing a scalpel and neutering it?

OpenAI repeatedly committed to California, Delaware, the US federal government, founding staff, and the general public that its resources would be used for its charitable mission and it could be trusted because of nonprofit control. Meanwhile, the divergence in interests couldn’t be more stark: every dollar the for-profit keeps from its nonprofit parent is another dollar it could invest in AGI and ultimately return to investors and staff.

Chapters:

  • Cold open (00:00:00)
  • What's coming up (00:00:50)
  • Who is Rose Chan Loui? (00:03:11)
  • How OpenAI carefully chose a complex nonprofit structure (00:04:17)
  • OpenAI's new plan to become a for-profit (00:11:47)
  • The nonprofit board is out-resourced and in a tough spot (00:14:38)
  • Who could be cheated in a bad conversion to a for-profit? (00:17:11)
  • Is this a unique case? (00:27:24)
  • Is control of OpenAI 'priceless' to the nonprofit in pursuit of its mission? (00:28:58)
  • The crazy difficulty of valuing the profits OpenAI might make (00:35:21)
  • Control of OpenAI is independently incredibly valuable and requires compensation (00:41:22)
  • It's very important the nonprofit get cash and not just equity (and few are talking about it) (00:51:37)
  • Is it a farce to call this an "arm's-length transaction"? (01:03:50)
  • How the nonprofit board can best play their hand (01:09:04)
  • Who can mount a court challenge and how that would work (01:15:41)
  • Rob's outro (01:21:25)

Producer: Keiran Harris
Audio engineering by Ben Cordell, Milo McGuire, Simon Monsour, and Dominic Armstrong
Video editing: Simon Monsour
Transcriptions: Katy Moore

Episodes (299)

#175 – Lucia Coulter on preventing lead poisoning for $1.66 per child

Lead is one of the most poisonous things going. A single sugar sachet of lead, spread over a park the size of an American football field, is enough to give a child that regularly plays there lead poisoning. For life they’ll be condemned to a ~3-point-lower IQ; a 50% higher risk of heart attacks; and elevated risk of kidney disease, anaemia, and ADHD, among other effects.

We’ve known lead is a health nightmare for at least 50 years, and that got lead out of car fuel everywhere. So is the situation under control? Not even close.

Around half the kids in poor and middle-income countries have blood lead levels above 5 micrograms per decilitre; the US declared a national emergency when just 5% of the children in Flint, Michigan exceeded that level. The collective damage this is doing to children’s intellectual potential, health, and life expectancy is vast — the health damage involved is around that caused by malaria, tuberculosis, and HIV combined.

This week’s guest, Lucia Coulter — cofounder of the incredibly successful Lead Exposure Elimination Project (LEEP) — speaks about how LEEP has been reducing childhood lead exposure in poor countries by getting bans on lead in paint enforced.

Links to learn more, summary, and full transcript.

Various estimates suggest the work is absurdly cost effective. LEEP is in expectation preventing kids from getting lead poisoning for under $2 per child (explore the analysis here). Or, looking at it differently, LEEP is saving a year of healthy life for $14, and in the long run is increasing people’s lifetime income anywhere from $300–1,200 for each $1 it spends, by preventing intellectual stunting.

Which raises the question: why hasn’t this happened already? How is lead still in paint in most poor countries, even when that’s oftentimes already illegal? And how is LEEP able to get bans on leaded paint enforced in a country while spending barely tens of thousands of dollars? When leaded paint is gone, what should they target next?

With host Robert Wiblin, Lucia answers all those questions and more:

  • Why LEEP isn’t fully funded, and what it would do with extra money (you can donate here).
  • How bad lead poisoning is in rich countries.
  • Why lead is still in aeroplane fuel.
  • How lead got put straight in food in Bangladesh, and a handful of people got it removed.
  • Why the enormous damage done by lead mostly goes unnoticed.
  • The other major sources of lead exposure aside from paint.
  • Lucia’s story of founding a highly effective nonprofit, despite having no prior entrepreneurship experience, through Charity Entrepreneurship’s Incubation Program.
  • Why Lucia pledges 10% of her income to cost-effective charities.
  • Lucia’s take on why GiveWell didn’t support LEEP earlier on.
  • How the invention of cheap, accessible lead testing for blood and consumer products would be a game changer.
  • Generalisable lessons LEEP has learned from coordinating with governments in poor countries.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire and Dominic Armstrong
Transcriptions: Katy Moore

14 Dec 2023 · 2h 14min

#174 – Nita Farahany on the neurotechnology already being used to convict criminals and manipulate workers

"It will change everything: it will change our workplaces, it will change our interactions with the government, it will change our interactions with each other. It will make all of us unwitting neuromarketing subjects at all times, because at every moment in time, when you’re interacting on any platform that also has issued you a multifunctional device where they’re looking at your brainwave activity, they are marketing to you, they’re cognitively shaping you.

"So I wrote the book as both a wake-up call, but also as an agenda-setting: to say, what do we need to do, given that this is coming? And there’s a lot of hope, and we should be able to reap the benefits of the technology, but how do we do that without actually ending up in this world of like, 'Oh my god, mind reading is here. Now what?'" — Nita Farahany

In today’s episode, host Luisa Rodriguez speaks to Nita Farahany — professor of law and philosophy at Duke Law School — about applications of cutting-edge neurotechnology.

Links to learn more, summary, and full transcript.

They cover:

  • How close we are to actual mind reading.
  • How hacking neural interfaces could cure depression.
  • How companies might use neural data in the workplace — like tracking how productive you are, or using your emotional states against you in negotiations.
  • How close we are to being able to unlock our phones by singing a song in our heads.
  • How neurodata has been used for interrogations, and even criminal prosecutions.
  • The possibility of linking brains to the point where you could experience exactly the same thing as another person.
  • Military applications of this tech, including the possibility of one soldier controlling swarms of drones with their mind.
  • And plenty more.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

7 Dec 2023 · 2h

#173 – Jeff Sebo on digital minds, and how to avoid sleepwalking into a major moral catastrophe

"We do have a tendency to anthropomorphise nonhumans — which means attributing human characteristics to them, even when they lack those characteristics. But we also have a tendency towards anthropodenial — which involves denying that nonhumans have human characteristics, even when they have them. And those tendencies are both strong, and they can both be triggered by different types of systems. So which one is stronger, which one is more probable, is again going to be contextual.

"But when we then consider that we, right now, are building societies and governments and economies that depend on the objectification, exploitation, and extermination of nonhumans, that — plus our speciesism, plus a lot of other biases and forms of ignorance that we have — gives us a strong incentive to err on the side of anthropodenial instead of anthropomorphism." — Jeff Sebo

In today’s episode, host Luisa Rodriguez interviews Jeff Sebo — director of the Mind, Ethics, and Policy Program at NYU — about preparing for a world with digital minds.

Links to learn more, highlights, and full transcript.

They cover:

  • The non-negligible chance that AI systems will be sentient by 2030
  • What AI systems might want and need, and how that might affect our moral concepts
  • What happens when beings can copy themselves? Are they one person or multiple people? Does the original own the copy or does the copy have its own rights? Do copies get the right to vote?
  • What kind of legal and political status should AI systems have? Legal personhood? Political citizenship?
  • What happens when minds can be connected? If two minds are connected, and one does something illegal, is it possible to punish one but not the other?
  • The repugnant conclusion and the rebugnant conclusion
  • The experience of trying to build the field of AI welfare
  • What improv comedy can teach us about doing good in the world
  • And plenty more.

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:00)
  • The interview begins (00:02:45)
  • We should extend moral consideration to some AI systems by 2030 (00:06:41)
  • A one-in-1,000 threshold (00:15:23)
  • What does moral consideration mean? (00:24:36)
  • Hitting the threshold by 2030 (00:27:38)
  • Is the threshold too permissive? (00:38:24)
  • The Rebugnant Conclusion (00:41:00)
  • A world where AI experiences could matter more than human experiences (00:52:33)
  • Should we just accept this argument? (00:55:13)
  • Searching for positive-sum solutions (01:05:41)
  • Are we going to sleepwalk into causing massive amounts of harm to AI systems? (01:13:48)
  • Discourse and messaging (01:27:17)
  • What will AI systems want and need? (01:31:17)
  • Copies of digital minds (01:33:20)
  • Connected minds (01:40:26)
  • Psychological connectedness and continuity (01:49:58)
  • Assigning responsibility to connected minds (01:58:41)
  • Counting the wellbeing of connected minds (02:02:36)
  • Legal personhood and political citizenship (02:09:49)
  • Building the field of AI welfare (02:24:03)
  • What we can learn from improv comedy (02:29:29)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

22 Nov 2023 · 2h 38min

#172 – Bryan Caplan on why you should stop reading the news

Is following important political and international news a civic duty — or is it our civic duty to avoid it?

It's common to think that 'staying informed' and checking the headlines every day is just what responsible adults do. But in today's episode, host Rob Wiblin is joined by economist Bryan Caplan to discuss the book Stop Reading the News: A Manifesto for a Happier, Calmer and Wiser Life — which argues that reading the news both makes us miserable and distorts our understanding of the world. Far from informing us and enabling us to improve the world, consuming the news distracts us, confuses us, and leaves us feeling powerless.

Links to learn more, summary, and full transcript.

In the first half of the episode, Bryan and Rob discuss various alleged problems with the news, including:

  • That it overwhelmingly provides us with information we can't usefully act on.
  • That it's very non-representative in what it covers, in particular favouring the negative over the positive and the new over the significant.
  • That it obscures the big picture, falling into the trap of thinking 'something important happens every day.'
  • That it's highly addictive, for many people chewing up 10% or more of their waking hours.
  • That regularly checking the news leaves us in a state of constant distraction and less able to engage in deep thought.
  • And plenty more.

Bryan and Rob conclude that if you want to understand the world, you're better off blocking news websites and spending your time on Wikipedia, Our World in Data, or reading a textbook. And if you want to generate political change, stop reading about problems you already know exist and instead write your political representative a physical letter — or better yet, go meet them in person.

In the second half of the episode, Bryan and Rob cover:

  • Why Bryan is pretty sceptical that AI is going to lead to extreme, rapid changes, or that there's a meaningful chance of it going terribly.
  • Bryan’s case that rational irrationality on the part of voters leads to many very harmful policy decisions.
  • How to allocate resources in space.
  • Bryan's experience homeschooling his kids.

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

17 Nov 2023 · 2h 23min

#171 – Alison Young on how top labs have jeopardised public health with repeated biosafety failures

"Rare events can still cause catastrophic accidents. The concern that has been raised by experts going back over time, is that really, the more of these experiments, the more labs, the more opportunities there are for a rare event to occur — that the right pathogen is involved and infects somebody in one of these labs, or is released in some way from these labs. And what I chronicle in Pandora's Gamble is that there have been these previous outbreaks that have been associated with various kinds of lab accidents. So this is not a theoretical thing that can happen: it has happened in the past." — Alison Young

In today’s episode, host Luisa Rodriguez interviews award-winning investigative journalist Alison Young on the surprising frequency of lab leaks and what needs to be done to prevent them in the future.

Links to learn more, summary, and full transcript.

They cover:

  • The most egregious biosafety mistakes made by the CDC, and how Alison uncovered them through her investigative reporting
  • The Dugway life science test facility case, where live anthrax was accidentally sent to labs across the US and several other countries over a period of many years
  • The time the Soviets had a major anthrax leak, and then hid it for over a decade
  • The 1977 influenza pandemic caused by a vaccine trial gone wrong in China
  • The last death from smallpox, caused not by the virus spreading in the wild, but by a lab leak in the UK
  • Ways we could get more reliable oversight and accountability for these labs
  • And the investigative work Alison’s most proud of

Chapters:

  • Cold open (00:00:00)
  • Luisa's intro (00:01:13)
  • Investigating leaks at the CDC (00:05:16)
  • High-profile CDC accidents (00:16:13)
  • Dugway live anthrax accidents (00:32:08)
  • Soviet anthrax leak (00:44:41)
  • The 1977 influenza pandemic (00:53:43)
  • The last death from smallpox (00:59:27)
  • How common are lab leaks? (01:09:05)
  • Improving the regulation of dangerous biological research (01:18:36)
  • Potential solutions (01:34:55)
  • The investigative work Alison’s most proud of (01:40:33)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Additional content editing: Katy Moore and Luisa Rodriguez
Transcriptions: Katy Moore

9 Nov 2023 · 1h 46min

#170 – Santosh Harish on how air pollution is responsible for ~12% of global deaths — and how to get that number down

"One [outrageous example of air pollution] is municipal waste burning that happens in many cities in the Global South. Basically, this is waste that gets collected from people's homes, and instead of being transported to a waste management facility or a landfill or something, gets burned at some point, because that's the fastest way to dispose of it — which really points to poor delivery of public services. But this is ubiquitous in virtually every small- or even medium-sized city. It happens in larger cities too, in this part of the world.

"That's something that truly annoys me, because it feels like the kind of thing that ought to be fairly easily managed, but it happens a lot. It happens because people presumably don't think that it's particularly harmful. I don't think it saves a tonne of money for the municipal corporations and other local government that are meant to manage it. I find it particularly annoying simply because it happens so often; it's something that you're able to smell in so many different parts of these cities." — Santosh Harish

In today’s episode, host Rob Wiblin interviews Santosh Harish — leader of Open Philanthropy’s grantmaking in South Asian air quality — about the scale of the harm caused by air pollution.

Links to learn more, summary, and full transcript.

They cover:

  • How bad air pollution is for our health and life expectancy
  • The different kinds of harm that particulate pollution causes
  • The strength of the evidence that it damages our brain function and reduces our productivity
  • Whether it was a mistake to switch our attention to climate change and away from air pollution
  • Whether most listeners to this show should have an air purifier running in their house right now
  • Where air pollution in India is worst and why, and whether it's going up or down
  • Where most air pollution comes from
  • The policy blunders that led to many sources of air pollution in India being effectively unregulated
  • Why indoor air pollution packs an enormous punch
  • The politics of air pollution in India
  • How India ended up spending a lot of money on outdoor air purifiers
  • The challenges faced by foreign philanthropists in India
  • Why Santosh has made the grants he has so far
  • And plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob's intro (00:01:07)
  • How bad is air pollution? (00:03:41)
  • Quantifying the scale of the damage (00:15:47)
  • Effects on cognitive performance and mood (00:24:19)
  • How do we really know the harms are as big as is claimed? (00:27:05)
  • Misconceptions about air pollution (00:36:56)
  • Why don’t environmental advocacy groups focus on air pollution? (00:42:22)
  • How listeners should approach air pollution in their own lives (00:46:58)
  • How bad is air pollution in India in particular? (00:54:23)
  • The trend in India over the last few decades (01:12:33)
  • Why aren’t people able to fix these problems? (01:24:17)
  • Household waste burning (01:35:06)
  • Vehicle emissions (01:42:10)
  • The role that courts have played in air pollution regulation in India (01:50:09)
  • Industrial emissions (01:57:10)
  • The political economy of air pollution in northern India (02:02:14)
  • Can philanthropists drive policy change? (02:13:42)
  • Santosh’s grants (02:29:45)
  • Examples of other countries that have managed to greatly reduce air pollution (02:45:44)
  • Career advice for listeners in India (02:51:11)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Simon Monsour and Milo McGuire
Transcriptions: Katy Moore

1 Nov 2023 · 2h 57min

#169 – Paul Niehaus on whether cash transfers cause economic growth, and keeping theft to acceptable levels

"One of our earliest supporters and a dear friend of mine, Mark Lampert, once said to me, ‘The way I think about it is, imagine that this money were already in the hands of people living in poverty. If I could, would I want to tax it and then use it to finance other projects that I think would benefit them?’ I think that's an interesting thought experiment -- and a good one -- to say, ‘Are there cases in which I think that's justifiable?’" — Paul Niehaus

In today’s episode, host Luisa Rodriguez interviews Paul Niehaus — co-founder of GiveDirectly — on the case for giving unconditional cash to the world's poorest households.

Links to learn more, summary and full transcript.

They cover:

  • The empirical evidence on whether giving cash directly can drive meaningful economic growth
  • How the impacts of GiveDirectly compare to USAID employment programmes
  • GiveDirectly vs GiveWell’s top-recommended charities
  • How long-term guaranteed income affects people's risk-taking and investments
  • Whether recipients prefer getting lump sums or monthly instalments
  • How GiveDirectly tackles cases of fraud and theft
  • The case for universal basic income, and GiveDirectly’s UBI studies in Kenya, Malawi, and Liberia
  • The political viability of UBI
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Luisa’s intro (00:00:58)
  • The basic case for giving cash directly to the poor (00:03:28)
  • Comparing GiveDirectly to USAID programmes (00:15:42)
  • GiveDirectly vs GiveWell’s top-recommended charities (00:35:16)
  • Cash might be able to drive economic growth (00:41:59)
  • Fraud and theft of GiveDirectly funds (01:09:48)
  • Universal basic income studies (01:22:33)
  • Skyjo (01:44:43)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Dominic Armstrong and Milo McGuire
Additional content editing: Luisa Rodriguez and Katy Moore
Transcriptions: Katy Moore

26 Oct 2023 · 1h 47min

#168 – Ian Morris on whether deep history says we're heading for an intelligence explosion

"If we carry on looking at these industrialised economies, not thinking about what it is they're actually doing and what the potential of this is, you can make an argument that, yes, rates of growth are slowing, the rate of innovation is slowing. But it isn't. What we're doing is creating wildly new technologies: basically producing what is nothing less than an evolutionary change in what it means to be a human being. But this has not yet spilled over into the kind of growth that we have accustomed ourselves to in the fossil-fuel industrial era. That is about to hit us in a big way." — Ian Morris

In today’s episode, host Rob Wiblin speaks with repeat guest Ian Morris about what big-picture history says about the likely impact of machine intelligence.

Links to learn more, summary and full transcript.

They cover:

  • Some crazy anomalies in the historical record of civilisational progress
  • Whether we should think about technology from an evolutionary perspective
  • Whether we ought to expect war to make a resurgence or continue dying out
  • Why we can't end up living like The Jetsons
  • Whether stagnation or cyclical recurring futures seem very plausible
  • What it means that the rate of increase in the economy has been increasing
  • Whether violence is likely between humans and powerful AI systems
  • The most likely reasons for Rob and Ian to be really wrong about all of this
  • How professional historians react to this sort of talk
  • The future of Ian’s work
  • Plenty more

Chapters:

  • Cold open (00:00:00)
  • Rob’s intro (00:01:27)
  • Why we should expect the future to be wild (00:04:08)
  • How historians have reacted to the idea of radically different futures (00:21:20)
  • Why we won’t end up in The Jetsons (00:26:20)
  • The rise of machine intelligence (00:31:28)
  • AI from an evolutionary point of view (00:46:32)
  • Is violence likely between humans and powerful AI systems? (00:59:53)
  • Most troubling objections to this approach in Ian’s view (01:28:20)
  • Confronting anomalies in the historical record (01:33:10)
  • The cyclical view of history (01:56:11)
  • Is stagnation plausible? (02:01:38)
  • The limit on how long this growth trend can continue (02:20:57)
  • The future of Ian’s work (02:37:17)

Producer and editor: Keiran Harris
Audio Engineering Lead: Ben Cordell
Technical editing: Milo McGuire
Transcriptions: Katy Moore

23 Oct 2023 · 2h 43min
