[00:00:01]

From The New York Times, I'm Michael Barbaro. This is The Daily. Today: the surprise ouster of Sam Altman as CEO of OpenAI, the fallout that it triggered, and what it all means for the future of the transformational technology at the center of the boardroom drama. I speak with my colleague, technology reporter Cade Metz. It's Wednesday, November 22.

[00:00:52]

Well, Cade, good morning. Thank you for making time for us. We are now several days into a corporate drama that has consumed the world that you cover, Silicon Valley, and that remains pretty head-spinning and a very fluid situation.

[00:01:10]

Absolutely. I mean, corporate dramas can be interesting, but never this interesting.

[00:01:17]

They're not supposed to be this interesting. They're supposed to be a bit more boring.

[00:01:20]

That's right.

[00:01:21]

So, in the simplest possible terms, what happened?

[00:01:24]

Sam Altman, the chief executive of OpenAI, the company that wowed millions of people late last year with the debut of ChatGPT, an online chatbot that can answer questions, generate term papers, write poetry, even write its own computer code, has been ousted from that company by the company's own board.

[00:01:52]

A boardroom coup is how we used to refer to that when I was a business reporter.

[00:01:58]

Yes, and the stakes are artificial intelligence of a level that most people, even in the industry, did not expect to arrive this soon. This is powerful technology that is already changing the way disinformation is spread across the Internet. It's starting to take away jobs, and it is being improved at an incredibly fast rate that has caused great optimism in parts of Silicon Valley and great concern in other parts. And those two things are clashing. And the clash is encapsulated in this very human story that is ultimately about ego and power and money.

[00:02:45]

Cade.

[00:02:45]

Sam Altman. That name is not a name we've ever really examined in great detail. We've talked a lot about this technology, we've talked a lot about this company, OpenAI, but not a ton about Altman himself. So help us understand, really, who Sam Altman is, why his firing is, as you just said, such a titanic moment for this technology, and how he fits into this large clash that you just outlined.

[00:03:16]

In many ways, Sam Altman is the classic Silicon Valley archetype. He drops out of Stanford as a sophomore, he founds his own company, and then he winds up as the president of a well-known startup incubator called Y Combinator, an organization that helps other startups get off the ground.

[00:03:31]

One of the most important things that we try to teach at Y Combinator is that it's more important to build something that a few users love than something that a lot of users like.

[00:03:47]

If you can build a product that is so good, people spontaneously tell their friends about it, you have done 80% of the work that you need to be a really successful startup.

[00:03:55]

And through that job, he becomes one of the most well connected people in Silicon Valley.

[00:04:01]

Welcome to How to Build the Future. Today, our guest is Mark Zuckerberg. Today we have Elon Musk. Elon, thank you for joining us. Yeah, thanks for having me.

[00:04:09]

And in 2015, he and others, including Elon Musk, found this company, OpenAI, to do artificial intelligence. But he is not the main name in the headline. Musk is the main name in the headline. And Sam, he's not an AI researcher. He will tell you that he studied it briefly when he was at Stanford. But he is ambitious. And as he sees this technology rising, he is among those who go after it. Google was in the lead, so to speak, at that point in this push toward artificial intelligence. They were building increasingly powerful technologies that could recognize objects the way a driverless car does, that could recognize your voice the way Siri does, that could translate between languages instantly, as Google Translate does. And Sam and Elon Musk and a group of researchers got together and decided they needed to challenge Google. There was a concern that Google was going to build this technology on its own, it would become increasingly powerful, and Google would control the universe. And so they felt they would be a counterweight to Google, and they would develop this technology in the open. They wouldn't bottle this thing up and keep it to themselves.

[00:05:35]

They would develop it in a way that would benefit humanity.

[00:05:39]

So from the start, you're saying Altman approached artificial intelligence with a sense of altruism and a little bit of worry that, in the wrong hands, this could be dangerous.

[00:05:54]

That may or may not have been his main motivation, okay? But it was part of how those around him thought. And when they debuted in 2015, that was part of the ethos. To that end, they created OpenAI not as a company, but as a nonprofit. The idea was that they would be free of the corporate pressures that were dangerous at places like Google. They would not be beholden to the stock market. They would be beholden to the people. But a couple of years later, Musk, in a huff, leaves OpenAI because he feels like they're not pushing forward fast enough. And Elon is not only gone, Elon's money is gone. Elon was the main donor to this operation. And Sam realizes if this operation is going to survive, he needs money. And I need to underscore just how much money goes into the creation of this technology. You need billions of dollars to build this stuff, billions of dollars of computing power needed to train these AI systems. Sam recognizes this. He creates a new company, a for-profit company, because he needs to give investors a reason to invest. He needs to give them profits. So he creates this for-profit and bolts it onto the nonprofit.

[00:07:35]

He does that and immediately raises $1 billion from one of the tech giants here in the U.S.: Microsoft.

[00:07:45]

Okay. Well, we're going to return to that structure you just explained, of a corporation basically being bolted to a nonprofit, because that becomes important to this drama later on. But what does this company that Altman bolts onto this nonprofit end up doing with this big infusion of cash?

[00:08:03]

It ends up building an increasingly powerful technology called GPT, a technology that can take in vast amounts of text from across the Internet: Wikipedia articles, digital books, chat logs, even computer programs. And it can learn to generate text on its own. And Sam Altman is the person who is helping to push this forward, because he continues to raise vast sums of money from Microsoft. The initial $1 billion that Microsoft invested grew to $3 billion. All of that goes into the creation of this technology. And for people following the AI field, this technology was increasingly impressive. But the general public, for the most part, did not wake up to what was happening until the end of last year, when OpenAI and Sam Altman released ChatGPT.

[00:09:11]

A new artificial intelligence tool is going viral for cranking out entire essays in a matter of seconds.

[00:09:17]

It literally shows you how to do it, says how to do the math, and at the end, the answer is B.

[00:09:26]

What the heck?

[00:09:27]

This is insane. ChatGPT is poised to change the way we interact with computers and AI. In fact, ChatGPT wrote everything I just said when we asked it to write an introduction to this piece. This is when everyone, The Daily included, starts covering the heck out of this new technology, because it is so insanely powerful and compelling and, let's be honest, a little bit scary.

[00:09:57]

Yes. And as all this attention goes to ChatGPT, so much attention also goes to Sam Altman. He suddenly rises from a well-known figure in Silicon Valley to a well-known figure across the world. He is the face of this technology which so many millions of people have been wowed by.

[00:10:29]

So what have you done? Like, ever?

[00:10:30]

No, I mean, what have you done with AI?

[00:10:31]

Sam's on every podcast.

[00:10:34]

I think it's going to be a great thing, but I think it's not going to be all a great thing.

[00:10:39]

Then he's in front of Congress.

[00:10:40]

OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives.

[00:10:47]

And then Sam Altman, co-founder and CEO of OpenAI, visited this hall last week.

[00:10:53]

As he's on a global tour.

[00:10:56]

I would be surprised if Australia does not build great AI companies.

[00:11:00]

Talking with world leaders in Europe and in Asia.

[00:11:04]

The thoughtfulness, the focus, the urgency on figuring out how we mitigate these very huge risks that are coming, so that we get to enjoy the benefits of this technology.

[00:11:14]

Promoting this technology, but also acknowledging that it could be dangerous. There's always this nod to the dangers and the concerns, and that's part of who Sam is.

[00:11:31]

We've got to be cautious here. And also, I think it doesn't work to do all this in a lab. You've got to get these products out into the world.

[00:11:40]

He will tell you one thing and then immediately nod to the opposite thing. And this happens throughout a conversation. He is optimistic this is going to be a good thing, while always saying, yeah, but there are concerns.

[00:11:56]

People should be happy that we're a little bit scared of this. I think people should be.

[00:12:00]

You're a little bit scared?

[00:12:01]

A little bit, yeah. You personally? I think if I said I were not, you should either not trust me or be very unhappy I'm in this job.

[00:12:14]

So in some ways, it sounds like he is embodying the tensions all around this new technology.

[00:12:21]

He absolutely is, and those tensions are real. And in some ways, this is a new thing for Silicon Valley. The thing you have to realize about Silicon Valley is that it's very much about, or has been about, optimism: people believing that things were possible that most people didn't. They, as Silicon Valley entrepreneurs, were going to make the world better. That's been the trope. What happened with artificial intelligence in particular over the last decade is that you also have a group of people who are looking into the future, and they don't necessarily see an optimistic picture. They see a concerning picture. And as this technology that Mr. Altman and his lab are building gets more powerful, that becomes part of the Silicon Valley ethos. You have your optimists and you have your pessimists, and then you've got Sam Altman, who's so good at balancing things, embodying both of those two sides of the equation.

[00:13:33]

Can you just define these two camps? I think the optimist case is pretty straightforward, right? That something like ChatGPT is going to enhance our lives, and there can be safeguards that protect us. What exactly are the pessimists making the case for?

[00:13:47]

Well, the pessimists will acknowledge that this could be a powerful technology. They'll even say this could be used to cure cancer or solve climate change. But they are also fundamentally worried that if the right safeguards are not put in place, this could destroy humanity. It is a deeply held and, to outsiders, sometimes strange belief, but it is a real force in the Valley. People are worried about this. And some of those people were sitting on the board of that nonprofit that Sam Altman created back in 2015, when he initially put together OpenAI.

[00:14:33]

So there are pessimists about this technology who sit on the board of the nonprofit that basically employs Altman.

[00:14:42]

Right. And by November of 2023, that board is just six people. And because of the unusual structure of OpenAI, they have an incredible amount of power. Those six people alone control the for-profit company. They are not beholden to shareholders, the investors who have put money into the company.

[00:15:11]

Even Microsoft with its billions in the company.

[00:15:14]

Even Microsoft, which has by this time put $13 billion into the company, does not have control over the situation. Six people control whether Mr. Altman is at the head of that company or not. And a few days ago, Friday morning, while I was on the phone with another OpenAI employee, I was told I'd better look out for an email around noon. And then just after noon, that tiny board announces to the world that Sam Altman is no longer the CEO of OpenAI. And no one can believe it. No one at OpenAI, no one at Microsoft, none of the investors in OpenAI, nobody saw this coming. And no one understands how all this is going to play out over the next 72 hours.

[00:16:25]

We'll be right back.

[00:16:35]

So, Cade, why did this board fire Sam Altman? How does that firing fit into this divide we've been discussing, of the AI optimists and the AI pessimists? And what exactly has been the fallout?

[00:16:48]

One of the many remarkable things about this whole soap opera is that we don't really know why they ousted Sam Altman. What the board said was that they could no longer trust him to run this company and build AI for the benefit of humanity. But we still don't know why, ultimately, he was removed.

[00:17:15]

I mean, is it safe to assume that it does fit into the schism that you mentioned, between those who think that ChatGPT is ultimately very good or very scary, and that, in trying to straddle those two universes, Altman somehow ran afoul of a board that seems pretty full of pessimists?

[00:17:40]

That tension is fundamentally part of this situation. You have essentially a board that is split in half. You have three founders of OpenAI, including Sam Altman, on the board, and you have three other people, some of whom are very concerned about the future of AI. And Sam and his leadership team thought that that balance would work out. But one of the co-founders, Ilya Sutskever, a very important AI researcher over the past decade, has grown increasingly concerned about the dangers of the technology. And he is among those who ousted Mr. Altman.

[00:18:27]

He kind of broke the tie, as it were.

[00:18:29]

Ilya broke the tie.

[00:18:31]

Okay, so let's turn to the fallout of this board doing what it just did in getting rid of Altman. We have major breaking news related to OpenAI. Sam Altman is out as CEO of OpenAI.

[00:18:45]

The fallout is immediate.

[00:18:49]

The minute this thing started to filter out, Microsoft shares started to fall. They're down nearly 2%.

[00:18:53]

Investors across the world, the people and companies who have put billions of dollars into this AI company, they don't understand the decision. They are blindsided by it.

[00:19:08]

Even key investors were supposedly notified only a few minutes before that missive went out.

[00:19:14]

And on top of it all, they're powerless to do anything.

[00:19:18]

Obviously, this just puts a spotlight on this nonprofit board.

[00:19:23]

They have no say in what this tiny little board does. And that includes Satya Nadella, the CEO of Microsoft.

[00:19:30]

We really want to partner with OpenAI and we want to partner with Sam. And so irrespective of where Sam is.

[00:19:37]

He and other investors start to put pressure on this board to take Altman back.

[00:19:45]

We're very confident in Sam and his leadership team. I've not been told about anything they published internally.

[00:19:51]

And the walls start to cave in, so to speak.

[00:19:56]

The company's president and co-founder, Greg Brockman, has quit after the CEO and fellow co-founder.

[00:20:01]

Especially when other talent from inside OpenAI starts to leave. There's this support spilling out across the Internet. Everyone is trying to put pressure on these four people on the board. And pretty soon Altman is inside the offices of OpenAI in San Francisco, trying to convince them to take him back. And the board, late on Sunday, they put out a note that says: we are standing firm by our decision. Sam Altman is out. And out of the blue.

[00:20:44]

Microsoft announcing that it has hired Sam Altman to lead its artificial intelligence group. That came just days.

[00:20:50]

Microsoft and Satya Nadella say, okay, we're essentially going to rebuild what you were doing at OpenAI and we're creating a competitor. And when that happens, other employees at OpenAI start to sign a letter threatening to leave OpenAI and join this new venture.

[00:21:15]

Latest number we have right now: at least 700 OpenAI employees threatening to leave the startup for Microsoft. By the way, that's out of about 770 employees total.

[00:21:24]

And still the board stands firm. And if that wasn't enough for you, Ilya Sutskever, the guy who switched sides and joined the board in ousting Mr. Altman, he switches sides again and he goes back to team Sam. And he says, I deeply regret what I have done. And he puts his name on the letter that the 700 employees have sent out threatening to join the Microsoft venture.

[00:22:06]

So one of the board members who is responsible for Altman's ouster, once the ouster's repercussions become clear, says, I really regret that, and signs a letter saying he will leave the company unless Altman returns as CEO. That is pretty head-spinning.

[00:22:21]

We're still trying to figure out what was inside his head and what is inside his head now.

[00:22:28]

How do you understand the depth of loyalty to Altman? Why does everyone decide that if Altman leaves, they're going to leave too?

[00:22:37]

Well, there are many reasons here. For one, they were on top of the world. They had a good thing going. They have a stake in the company; they can make money. They're also, for the most part, on the optimistic side. Many people who are concerned about dangers have left the company over the years because of this type of disagreement, just on a smaller scale. So you've got people who are more aligned with Altman at the company, for the most part.

[00:23:05]

So, in that sense, this board misunderstood where most people inside OpenAI were when it came to this pessimistic versus optimistic approach to AI. But if the board's goal was to constrain this technology and constrain someone like Altman, with whom they disagree about the future of this technology, haven't they just failed to do that? Because that technology has just up and gone over to Microsoft, which would seem to have even fewer safeguards in place.

[00:23:38]

What you have to realize, ultimately, is that this technology is going to happen one way or another. This is a story that shows that the genie is out of the bottle in many ways, and the technology is going to push forward. Bottling the technology up in the way that the board seems to want to do may not be the best way forward. There are a lot of arguments that say, we don't want to bottle it up inside these companies. We want more people to be aware of what is being built so we can understand it, so we can find the flaws, so we can find where the dangers might be. And there are a lot of arguments about the future of this technology, but you can be sure this technology has a future.

[00:24:30]

Does that mean that in this case, and when we think about the future of this powerful technology, that the optimists have prevailed and that the safeguards have kind of lost?

[00:24:42]

Not necessarily. This is an argument that will continue across Silicon Valley. There are optimists and there are pessimists. This battle has only just begun.

[00:25:01]

Well, Cade, thank you very much. We appreciate it.

[00:25:04]

Thank you.

[00:25:11]

On Wednesday morning, OpenAI announced that Sam Altman would be reinstated as CEO. The reversal reflected the enormous pressure placed on the company by allies of Altman, including investors and employees, nearly all 770 of whom had threatened to leave OpenAI by Tuesday night. As part of Altman's return, OpenAI's board of directors will be overhauled, and several members who voted to fire Altman will be forced out. Just before all of this unfolded, my colleague, Times technology columnist Kevin Roose, interviewed Altman for his podcast, Hard Fork. If you're curious to hear that conversation with the man at the center of all of this, search for Hard Fork wherever you listen. We'll be right back. Here's what else you need to know today. On Wednesday morning, the Israeli government approved a hostage deal with Hamas that could produce the longest pause in fighting since the war began 46 days ago. Under the terms of the deal, Hamas will release at least 50 hostages held captive in Gaza, while Israel will release 150 Palestinian prisoners from Israeli jails. Those exchanges, which could start as early as tomorrow, are expected to occur over four days, during which fighting will pause. And in the latest blow to the beleaguered cryptocurrency market, the founder of Binance, the world's largest crypto exchange, has pleaded guilty to violating U.S. money laundering laws, and Binance itself said it would pay a $4.3 billion fine.

[00:27:22]

U.S. prosecutors had accused Binance and its founder, Changpeng Zhao, of engaging in outlawed financial transactions, including with customers in countries under U.S. sanctions. The guilty plea comes shortly after the conviction of Sam Bankman-Fried, the founder of another crypto exchange, FTX. Today's episode was produced by Olivia Natt and Will Reid. It was edited by Lisa Chow and Brendan Klinkenberg, contains original music by Marion Lozano, Rowan Niemisto and Dan Powell, and was engineered by Chris Wood. Our theme music is by Jim Brunberg and Ben Landsverk of Wonderly. That's it for The Daily. I'm Michael Barbaro. See you tomorrow.