[00:00:01]

Casey, how was your weekend?

[00:00:02]

What? There was no weekend. There was only work. That was a trick question. And there will only ever be work.

[00:00:07]

What.

[00:00:09]

Is happening, Kevin?

[00:00:11]

I don't know, man. I am on two hours of sleep. I've been working all weekend, and I'm increasingly certain that we are, in fact, living in a simulation.

[00:00:22]

I mean, it would be nice if we were living in a simulation because that would suggest that there is at least some plan for what might be about to happen next. But I think recent events would suggest that actually there is not.

[00:00:32]

Yeah. I had a moment this morning where I woke up and I looked at my phone from my two-hour nap, and I was like, I'm huffing fumes. This can't be real.

[00:00:40]

Let's just say over the course of a weekend, OpenAI, as we know it, ceased to exist. And by the time this podcast gets put on the air, I would believe anything you told me about the future of OpenAI, up to and including it had been purchased by Etsy and it was becoming a maker of handcrafted coffee mugs.

[00:01:02]

Honestly, it would not be the strangest thing that's happened this weekend.

[00:01:07]

Not remotely. It wouldn't even be in the top five.

[00:01:15]

I'm Kevin Roose, a tech columnist for The New York Times.

[00:01:17]

I'm Casey Newton.

[00:01:18]

From Platformer, and this is Hard Fork.

[00:01:20]

This week on the show, one of the wildest weekends in recent memory: we'll tell you everything that happened at OpenAI and what's going on with Sam Altman. And then later in the show, we will present to you our interview with Sam Altman from last week. So before he was fired, we asked him about the future of AI, and we're going to share that conversation with you.

[00:01:52]

So this episode is going to have two parts. The first part, we're going to talk about the news that happened at OpenAI over the weekend and run down all of the latest drama and talk about where we think things are headed from here. And then we are going to play that Sam Altman interview that we discussed on our last emergency podcast, the one that we conducted last week and planned to run this week, but that has since become fascinating for very different reasons. So let's just run down what has happened so far, because there's been so much. It's like enough to fill one of those epic Russian novels or something. So on Friday, when we-

[00:02:28]

And the good news, by the way, is that the events are all very easy to understand. There's no way you'll mess up while trying to describe what happened over the past three days.

[00:02:36]

Yeah. Let's try this on a couple of hours of sleep. Okay. Friday, when we recorded our last emergency podcast episode, Sam Altman had just been fired by the board of OpenAI. He was fired for what were essentially vague and unspecified reasons. The board put out a statement saying that he had not been candid with them, but they didn't say more about what exactly had led them to decide that he was no longer fit to run the company. So he's fired. It's this huge deal, huge shock to all of the people at OpenAI and in the tech industry, and then it just keeps getting weirder. So Greg Brockman, the President and co-founder of OpenAI, announced that he, too, is quitting. Some other senior researchers resign as well. And then Saturday rolls around, and we still don't really know what happened. Brad Lightcap, who is OpenAI's COO, sent out a memo to employees letting them know that Sam was not fired for any malfeasance. This wasn't like a financial crime or anything related to a big data leak or anything. He says, quote, This was a breakdown in communication between Sam and the board.

[00:03:48]

And we should say that by the time Brad put that letter out, there had been reporting that at an all-hands, Ilya Sutskever, the chief scientist at the company and a member of the board, had told employees that getting rid of Sam was the only way to ensure that OpenAI could safely build AI, which led to a lot of speculation and commentary that this was an AI safety issue driven by effective altruists on the board. So it was very significant when we then got a letter from Brad saying explicitly this was not an AI safety issue. And of course, that only served to make us even more confused. But lucky for us, further confusion would then follow.

[00:04:30]

This was actually the clearest that things would be for the rest of the next 48 hours. So OpenAI, its executives are saying this isn't about safety or anything related to our practices. But what we know from reporting that I and my colleagues did over the weekend is that this actually was at least partially about AI safety, and that one of the big fault lines between Sam Altman and the board was over the safety issue, was over whether he was moving too aggressively without taking the proper precautions. Yeah. After this memo from the COO went out, there were reports that investors in OpenAI, including Sequoia Capital, Thrive Capital, and also Microsoft, which is the biggest investor in OpenAI, were exerting pressure on the board to reverse their decision and to reinstate Sam as CEO and then for the entire board to resign. They had a deadline for figuring some of this stuff out, which was 5:00 PM on Saturday. That came and went with no resolution. And then Sunday, a bunch of senior OpenAI people, including Sam Altman, who is, by the way, now no longer the CEO of this company officially, gather at the offices of OpenAI in San Francisco to try to work through all of this.

[00:05:54]

That's right. There is some reporting that all of a sudden, at least some people on the board are open to the idea of Sam returning, which was one of those moments that was both shocking and not at all surprising. Shocking because they had just gotten rid of him, not at all surprising because I think by that point, it had started to dawn on the world, and on OpenAI in particular, what it would mean for Altman to no longer be associated with this company where he had recruited most of the star talent.

[00:06:24]

Totally. And the employees of OpenAI were making their feelings known as well. They did this campaign on X on Saturday where they were posting heart emojis in quote posts of Sam, indicating that they stood by him and that they would follow him if he decided to leave and start another company or something like that.

[00:06:49]

Yeah. It was something to behold. It was essentially a labor action aimed at the board. And what I will say is, in this moment, you realized the degree to which the odds were weirdly stacked against the board, because on one hand, the board has all of the power when it came to firing Sam. But beyond that, there is still a company to run. There is still technology to build. And so now you had many employees of the company being very public in saying, Hey, we do not have your back. We did not sign up for this, and you're in trouble.

[00:07:22]

Yeah. And so on Sunday, there was a moment where it looked like Sam Altman was going to return and take his spot back as the CEO of this company. He posted a photo of himself in the OpenAI office wearing a guest badge, like one that you would give to a visitor to your office.

[00:07:39]

I will say I have worn that exact badge at OpenAI headquarters before.

[00:07:43]

Yeah. And so the caption on the photo was something like, This is the first and last time I'll ever wear one of these. So it sounded like he was setting the scene for a return as well. And I would say there was just a feeling among, especially the company's investors, but also a lot of employees and just people who work in the industry, that this wasn't going to stand, that there were too many mad employees, that the stakes of blowing this company up over this disagreement were too high. And if there really wasn't a smoking gun, if there was really nothing concrete that the board was going to hold up and say, This is why we fired Sam Altman, there was this sense that that just wasn't going to work, that there was no way that the board could actually go through with this firing.

[00:08:28]

Yeah. And I think one way that the employees and the former executives were very effective was in using social media to create this picture of the overwhelming support that was behind them. So if you were an observer to this situation, you're only seeing one side of the story, because the board is not out there posting. They have yet to issue a statement that lays out any details about what Sam allegedly did. And so instead, you just have a bunch of people saying like, Hey, Sam was a great CEO. I love working for the guy. OpenAI is nothing without him. All these posts are getting massively reshared. It's easy to look at that and think, Oh, yeah, he's probably going to be back in power by the end of the day.

[00:09:06]

Totally. So that was the scene as of Sunday afternoon. But then Sunday evening, this new deadline, 5:00 PM Pacific Time, that had been given for some resolution, that also comes and goes. And there is no word from OpenAI's headquarters about what the heck is going on. It feels like there's a papal conclave, and everyone is waiting for the white smoke to emerge from the chimney. And then we get word that the board of directors of OpenAI has sent a note to employees announcing that Sam Altman will not return as CEO after all, and standing by its decision. They still didn't give a firm reason or a specific reason why they pushed him out, but they said that, quote, Put simply, Sam's behavior and lack of transparency in his interactions with the board undermined the board's ability to effectively supervise the company in the manner it was mandated to do. And they announced that they have appointed a new interim CEO. Now, remember, this company already had an interim CEO, Mira Murati, the former Chief Technology Officer of OpenAI who had been appointed on Friday. She also signaled her support for Sam and Greg, and reporting suggests that she actually tried to have them brought back.

[00:10:27]

And because of that, the board decided to replace her as well. So Mira Murati's reign as the temporary CEO of OpenAI lasted about 48 hours before she was replaced by Emmett Shear, who is the former CEO of Twitch and who was the board's choice to take over on an interim basis.

[00:10:49]

The board found an alternative man, or alt-man, to lead the company.

[00:10:53]

So that was already mind-blowing. This happened at night on Sunday, and I thought, well, clearly things cannot get any crazier than this.

[00:11:05]

That's when I went to bed, by the way. I was like, whatever is happening with these people can wait till the morning. And then, of course, I wake up and an additional four years worth of news has happened.

[00:11:16]

Yes. So after this announcement about Sam Altman not returning and Emmett Shear being appointed as the interim CEO, there is a full-on staff revolt at OpenAI. The employees are outraged. They start threatening to quit. And then just a couple of hours after this note from the board of directors comes out, Microsoft announces that it is hiring Sam Altman and Greg Brockman to lead an advanced research lab at the company.

[00:11:46]

An advanced research lab, I assume, means that Satya has just given those two a fiefdom, and they will be given an unlimited budget to do whatever the heck they want. But of course, because Microsoft owns 49% of OpenAI, at this advanced research unit Sam and Greg and all their old friends from OpenAI will have access to all of the APIs, everything that they were doing before. They will just get to pick up where they left off and build everything that they were going to do, but now firmly under the auspices of a for-profit corporation, and by the way, one of the very biggest giants in the world.

[00:12:19]

Yeah. So I think it's worth just pausing a beat on this move because it is truly a wild twist in this saga. So just to explain why this is so crazy. So Microsoft is the biggest investor in OpenAI. They've put $13 billion into the company. They're also highly dependent on OpenAI because they've now built OpenAI's models into a bunch of their products that they are betting the future of Microsoft on in some sense. And this was a big bet for them that over the course of a weekend was threatening to fall apart. Sam Altman and Greg Brockman were the leaders of OpenAI. They were the people that Microsoft was most interested in having run the company. Microsoft did not like this new plan to have Emmett Shear take over as CEO.

[00:13:07]

They said, It's Shear madness, Kevin.

[00:13:10]

And so they did the next best thing, which was to poach the leaders of OpenAI, the deposed leaders, and bring them into Microsoft along with presumably many of their colleagues who will be leaving OpenAI in protest if the board sticks to this decision.

[00:13:29]

Yeah, man. So this one threw me for a loop, because if you have spoken with Sam, or Greg, or many of the people who work at OpenAI, you got the strong impression that these people like working at a startup. Okay, working at OpenAI is, in many ways, the opposite of working at a company like Microsoft, which is this massive bureaucracy with so much process for doing anything. I think they really liked working at this nimble thing, at a new thing, being able to chart their own destiny. Keep in mind, OpenAI was about to become the only big new consumer technology company that we have seen in a long time in Silicon Valley. And so initially it's like, okay, they're going to work at Microsoft. What the heck? Because, Kevin, one thing you didn't mention, which is fine because we didn't have to get through that whole timeline, but it's like the instant that Sam was fired, reporting started leaking out that he was starting a new company with Greg. So my assumption had been these guys are going to go off back into startup land. They're going to raise an unlimited amount of money and do whatever they want.

[00:14:36]

At the same time, you think about where they were in their mission when all of this happened on Friday, and they had a very clear roadmap, I think, for the next year. And if they had to go out, raise money, build a new team, train a large language model, think about how much time it would take them just to get back to where they were before. They would probably lose a year, if not more, of development. And this is pure speculation, but my guess is that part of their calculus was, look, if we deal with the devil we know and we go to Microsoft, we get to play with all of our old toys, we will have an unlimited budget, and we can skip the fundraising and team building stage and just get back to work. So I have to believe that that was the calculus. But that said, it still is a very unexpected outcome, at least to me.

[00:15:25]

It's a crazy outcome. And it means that Microsoft now has a hand in two essentially warring AI companies. They have this new lab led by Sam and Greg, they have what remains of OpenAI and this long-term deal with OpenAI, and they also control, by the way, the computing power that OpenAI uses to run its models, which gives them some leverage there. So it is a fascinating position that Microsoft is now in and really makes them look even more dominant in AI than they already did.

[00:15:54]

That's right. But listen, all of that said, everything that you just said is true as we record. However, Kevin, by the end of the day, I would believe any of the following scenarios: Greg and Sam have quit Microsoft. Greg and Sam are starting their own company. Greg and Sam have returned to OpenAI. Greg and Sam have retired from public life. Greg and Sam have opened an Etsy store. This is all within the realm of possibilities to me. So if we're back doing another one of these emergency pods tomorrow, I just want to say that while I accept that everything that Kevin just said is true, I'm only 5% confident that any of it lasts until the end of the week.

[00:16:30]

Yes, we are still in the zone where anything can happen. In fact, there have been some things that have happened even since the Microsoft announcement. So super early on Monday morning, like 1:00 AM Pacific time, when I was still up, but I guess you were asleep because some of us are built for the grindset, Emmett Shear, the new interim CEO of OpenAI, put out a statement saying that he would basically be digging into what happened over the past weekend, speaking to employees and customers, and then trying to restore stability at the company. And my read of this letter was that he was basically telling OpenAI employees, Please don't quit. I am not the doomer that you think I am, and you can continue to work here. Because one other thing that we should say about Emmett Shear is that while we don't know a ton about his views on AI and AI progress, he has done some interviews where he's indicated that he is something of an AI pessimist, that he doesn't think that AI should be moving ahead so quickly, and that he wants to actually slow it down, which is a position that is at odds with what we know Sam Altman believes.

[00:17:43]

Yeah. As soon as he was named, people found a recent interview he gave where he said that his P(doom), his probability that AI will cause doom, was between 5 and 50%. But if you listen to that interview, it sure sounds like the P(doom) is closer to 50 than it is to 5, I would say. The other interesting thing in that statement is that Emmett said before he took the job, he checked on the reasoning behind firing Sam, and he said, quote, The board did not remove Sam over any specific disagreement on safety. Their reasoning was completely different from that. So once again, we have someone talking about the firing without telling us anything and making it even more confusing.

[00:18:22]

Totally. But that is not even the end of the timeline. We are still going, because after this 1:00 AM memo from Emmett Shear, OpenAI employees start collecting signatures on what amounts to an ultimatum. This letter starts going around OpenAI and eventually collects the signatures of the vast majority of the company's roughly 700 employees, almost all of its senior leadership and the rank and file, saying that if the board does not resign and bring back Sam Altman as CEO, they will go work for Microsoft or just leave OpenAI.

[00:19:09]

And do you know how much you have to hate your job to go work for Microsoft? These people are pissed, Kevin.

[00:19:14]

And then, as if it couldn't get any crazier, just Monday morning, Ilya Sutskever, the OpenAI co-founder and chief scientist and board member who started all this, who led the coup against Sam Altman and rallied the board to force him out, posted on X saying that he, quote, deeply regrets his participation in the board's actions. He said, quote, I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company. So that is it. That is the entire timeline of the weekend up to the point that we are recording this episode. Casey, are you okay? Do you need to lie down?

[00:19:56]

Well, I do need to lie down. But sometimes, Kevin, when you're watching a TV show or a movie and the central antagonist has a sudden change of heart that's completely unexplained, there's no obvious motivation, I always feel like, Wow, the writers really copped out on this one. At least give us some arc. That was the moment Ilya Sutskever had here, where, as you say, after leading the charge to get rid of Sam for reasons that the board did not specify, but that Ilya strongly hinted had something to do with AI safety, he now spins around and says, Hey, time to get the band back together. I mean, just a tremendously undermining moment for the board, generally, and for Ilya in particular.

[00:20:38]

Totally. So right now, as things stand, there are a lot of different factions who have different feelings and emotions about what's going on. There are the people at OpenAI, the vast majority of whom are opposed to the board's actions here and are threatening to walk out if they're not reversed. There are the investors in OpenAI, who are furious about how this is playing out. So a lot of people with a lot of high emotions and a lot of uncertainty, yelling at what used to be four and are now three OpenAI board members, who have decided to just stand their ground and stick it out.

[00:21:14]

So let's pause there, because I think that while all of us agree that the board badly mishandled this situation, it is worth taking a beat on what this board's role is. When I listen back to the episode that we did on Friday, this is a place where I wish I had drilled down a little bit deeper. The mission of this board is to safely develop a superintelligence absent any commercial motive. That is the goal of this board. This board was put together with the idea that if you have a big company like, let's say, a Microsoft that is in charge of a superintelligence, something that is smarter than us, something that will outcompete us in natural selection, they didn't want that to be owned by a for-profit corporation. And something happened where at one point, at least four of the people on this board, and now it's down to three, but those people thought, We are not achieving this mission. Sam did something or he didn't do something, or he behaved in some way that made us feel like we cannot safely build a superintelligence. And so we need to find somebody else to run that company.

[00:22:26]

And until we know why they felt that way, there is part of me that just feels like we can't fully process our feelings on this. I think it was actually really depressing to see how quickly polarizing this became on social media as it turned into Team Sam versus Team Safety. That's actually a really bad outcome for society. Because I think we do want... If we're going to build a superintelligence, I would like to see it built safely. I'm not sure that it is a for-profit corporation that is going to do the best job of that, having watched for-profit corporations create a lot of social harm in my lifetime. I just want to say that I'm sure before the end of this podcast, we will continue to criticize the board for the way that it handled this. But at the same time, it's important to remember what their mission was and to assume that they had at least some reasons for doing what they did.

[00:23:29]

Yeah. I was talking to people all day yesterday who thought that the money would win here, basically, that these investors and Microsoft, they were powerful enough, and they had enough stake in the outcome here, that they would, by any means necessary, get Sam Altman and Greg Brockman back to OpenAI. And I was very surprised when that didn't happen. But maybe I shouldn't have been, because as someone who was involved with the situation pointed out to me when I talked to them yesterday, the board has the ultimate leverage here. This structure, this convoluted governance structure where there's a nonprofit that controls a for-profit and the nonprofit can vote to fire the CEO at any time, it was set up for this purpose. You can argue with how they executed it, and I would say they executed it very badly, but it was meant to give the board the power to shut this all down if they determined that what was happening at OpenAI was not going to lead to broadly beneficial AGI. And it sounds like that's what happened.

[00:24:35]

That's right. Another piece that I would point to, my friend Eric Newcomer wrote a good column about this, just pointing out that Sam has had abrupt breaks with folks in the past. He had an abrupt break with Y Combinator, the startup incubator that he used to lead. He had an abrupt break with Elon Musk, who co-founded OpenAI with him. He had an abrupt break with the folks who left OpenAI to go start Anthropic for what they described as AI safety reasons. So there is a history there that suggests that... Right now, a lot of people think that the board is crazy, but these are not the first people to say, Sam Altman is not building AI safely.

[00:25:12]

Right. Here's the thing. I still think there has to have been some inciting incident. This does not feel to me like it was a slow accumulation of worry by Ilya Sutskever and the more safety-minded board members, who just woke up one day and said, you know what? It's just gotten a little too aggressive over there, so let's shut this thing down. I still think there had to have been some incident, something that Ilya Sutskever and the board saw that made them think that they had to act now. So much is changing. We have to keep going back to this caveat of we still don't know what is going to happen in the next hour, to say nothing of the next day or week or month. But that is the state of play right now. And I think this is, I mean, Casey, I don't know how you feel about this, but I would say this is the most fascinating and crazy story that I have ever covered in my career as a tech journalist. I cannot remember anything that made my head spin as much as this.

[00:26:12]

Yeah, certainly in terms of the number of unexplained and unexpected twists, it's hard for me to think of another story that comes close. But I think we should look forward a little bit and talk about what this might mean for OpenAI in particular. OpenAI was described to me over the weekend by a former employee as a money incinerator. ChatGPT does not make money. Training these models is incredibly expensive. The whole reason OpenAI became a for-profit company was because it costs so much money to build and maintain and run these models. When Sam was fired, it has been reported that he was out there raising money to put back into the incinerator. So think about the position that that leaves the OpenAI board in. Let's say that they're able to staunch the bleeding and retain a couple hundred people who are closely associated with the mission, and the board thinks that these are the right people. Who is going to give them the money to continue their work after what has just happened? Now look, Emmett Shear is very well regarded in Silicon Valley. I was texting with sources last night who were very excited that he was the guy that they chose.

[00:27:19]

And so no disrespect to him. But this board has shown that it is serious when it says it does not have a profit incentive. So unless it's going to go out there and start raising money from foundations and philanthropists and kindly billionaires, I do not see how they get the money to keep maintaining the status quo. And so in a very real sense, over the weekend, OpenAI truly may have died.

[00:27:45]

It truly may have. I mean, you're right. "We are going to take a bunch of money and incinerate it, and by the way, we're also going to move very slowly and not accelerate AI progress" is not a compelling pitch to investors. And so I don't think that the new OpenAI is going to have a good time when it goes out to raise its next round of funding. Or, by the way, and this is another factor that we haven't talked about, to close this tender offer, this round of secondary investment that was going to give OpenAI employees a chance to cash out some of their shares. That, I would say, is doomed.

[00:28:18]

Yeah. And I'm sure that motivated a lot of the signatures on the letter demanding that Sam and Greg come back, right? Because those people were about to get paid, and now they're not.

[00:28:27]

Totally. So that's some of what lies ahead for Microsoft and OpenAI, although anything could change. And that brings us to the interview that we had with Sam Altman last week. So last week, on Wednesday, before any of this happened, which was two days before he was fired-

[00:28:45]

It was a simpler, more innocent time.

[00:28:48]

That's true. I actually do feel like that was about a year and a half ago. So we sat down with Sam Altman, and we asked him all kinds of questions, both about the year since ChatGPT was launched and what had happened since then, and also about the future and his thoughts about where AI was headed and where OpenAI was headed. So then all this news broke, and we thought, well, what do we do with this interview now? And we thought about, should we even run it? Should we chop it up and just play the most relevant bits? But we ultimately decided we should just put the whole thing out.

[00:29:20]

Put it out there.

[00:29:21]

Yeah. So I would just say to listeners, as you listen to this interview, you may be thinking, why are these guys asking about ChatGPT? Who cares about ChatGPT? We've got bigger fish to fry here, people. But just keep in mind that when we recorded this, none of this drama had happened yet. The biggest news in Sam Altman's world was that the one-year anniversary of ChatGPT was coming up, and we wanted to ask him to reflect on that. So just keep in mind these are questions from Earth 1, and we are now on Earth 2. And just bear that in mind as you listen. But I would say that the issues that we talked about with Sam, some of the things around the acceleration of progress at OpenAI and his view of the future and his optimism about what building powerful AI could do, those are some of the key issues that seem to have motivated this coup by the board. So I think it's still very relevant, even though the specific facts on the ground have changed so much since we recorded with him.

[00:30:22]

So in this interview, you'll hear us talk about existential risk, AI safety. If that's a subject you haven't been paying much attention to, the fear here is that as these systems grow more powerful, and they are already growing exponentially more powerful year by year, at some point, they may become smarter than us. Their goals may not align with ours. And so for folks who follow this stuff closely, there's a big debate on how seriously we should take that risk.

[00:30:49]

Right. And there's also a big debate in the tech world more broadly about whether AI and technology, in general, should be progressing faster or whether things are already going too fast and they should be slowed down. And so when we ask him about being an accelerationist, that's what we're talking about.

[00:31:07]

And I should say I texted Sam this morning to see if there was anything that he wanted to say or add. And as we record, I have not heard back from him yet.

[00:31:19]

So here is our interview from last week with Sam Altman. Sam Altman, welcome back to Hard Fork. Thank you.

[00:31:46]

Sam, it has been just about a year since ChatGPT was released. I wonder if you have been doing some reflecting over the past year and where it has brought us in the development of AI.

[00:31:58]

Frankly, it has been such a busy year that there has not been a ton of time for reflection.

[00:32:03]

Well, that's why we brought you in. We want you to reflect here.

[00:32:05]

Great. I can do it now. I definitely think this was the year, so far - there will maybe be more in the future, but the year so far - where the general average tech person went from taking AI not that seriously to taking it pretty seriously, and the recompiling of expectations given that. I don't know. I think in some sense, that's the most significant update of the year.

[00:32:32]

I would imagine that for you, a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way?

[00:32:41]

Yeah, it does. We always thought on the inside of OpenAI that it was strange that the rest of the world didn't take this more seriously, wasn't more excited about it.

[00:32:52]

I mean, I think if five years ago you had explained what ChatGPT was going to be, I would have thought, wow, that sounds pretty cool. Presumably, I could have just looked into it more and I would have smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was.

[00:33:07]

Yeah, I actually think we could have explained it and it wouldn't have made that much of a difference. We tried. People are busy with their lives. They don't have a lot of time to sit there and listen to some tech people prognosticate about something that may or may not happen. But when you have a product that people use, get real value out of, then it's different.

[00:33:26]

Yeah. I remember reading about the early days of the run up to the launch of ChatGPT. And I think you all have said that you did not expect it to be a hit when it launched.

[00:33:37]

No, we thought it would be a hit. We did it because we thought it was going to be a hit. We didn't think it was going to be this big of a hit.

[00:33:42]

Right. As we're sitting here today, I believe it's the case that you can't actually sign up for ChatGPT Plus right now. Is that right? Correct. Yeah. So what's that all about?

[00:33:50]

We never have enough capacity, but at some point, it gets really bad. So over the last 10 days or so, we have done everything we can. We've rolled out new optimizations. We've disabled some features, and then people just keep signing up. It keeps getting slower and slower. And there's a limit at some point to what you can do there. We just don't want to offer a bad quality of service. And so it gets slow enough that we just say, you know what? Until we can make more progress, either with more GPUs or more optimizations, we're going to put this on hold. Not a great place to be in, to be honest, but it's like the least of several bad options.

[00:34:30]

Sure. And I feel like in the history of tech development, there often is a moment with really popular products where you just have to close signups for a little while, right?

[00:34:38]

The thing that's different about this than others is it's so much more compute intensive than the world is used to for internet services. So you don't usually have to do this. Usually, by the time you're at this scale, you've solved your scaling bottlenecks.

[00:34:50]

One of the interesting things for me about covering all the AI changes over the past year is that it often feels like journalists and researchers and companies are discovering properties of these systems at the same time, all together. I mean, I remember when we had you and Kevin Scott from Microsoft on the show earlier this year around the Bing relaunch. And you both said something to the effect of, well, to discover what these models are or what they're capable of, you have to put them out into the world and have millions of people using them. Then we saw all kinds of crazy, but also inspiring things. You had Bing's Sydney, but you also had people starting to use these things in their lives. I guess I'm curious what you feel like you have learned about language models, and your language models specifically, from putting them out into the world.

[00:35:39]

What we don't want to be surprised by is the capabilities of the model. That would be bad. We're not. With GPT-4, for example, we took a long time between finishing the model and releasing it. Red teamed it heavily, really studied it, did all of the work internally, externally. I'd say, at least so far, and maybe now it's been long enough that we would have, we have not been surprised by any capabilities the model had that we just didn't know about at all, in a way that we were for GPT-3. Frankly, sometimes people found stuff. But what I think you can't do in the lab is understand how technology and society are going to co-evolve. You can say here's what the model can do and not do, but you can't say, and here's exactly how society is going to progress given that. That's where you just have to see what people are doing, how they're using it. Well, one thing is they use it a lot. That's one takeaway that clearly we did not appropriately plan for. But more interesting than that is the way in which this is transforming people's productivity, personal lives, how they're learning.

[00:36:51]

One example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT and education. Days - at most weeks, but I think days - after the release of ChatGPT, school districts were falling all over themselves to ban ChatGPT. That didn't really surprise us. That we could have predicted and did predict. The thing that happened quickly after that, within weeks to months, was school districts and teachers saying, Hey, actually, we made a mistake, and this is a really important part of the future of education, and the benefits far outweigh the downsides. Not only are we unbanning it, we're encouraging our teachers to make use of it in the classroom. We're encouraging our students to get really good at this tool because it's going to be part of the way people live. And then there was a big discussion about what the path forward should be. And that is just not something that could have happened without releasing. Yeah. Can I say one more thing? Yeah. Part of the decision that we made with the ChatGPT release, the original plan had been to do the chat interface and GPT-4 together in March. And we really believe in this idea of iterative deployment.

[00:38:16]

And we had realized that the Chat interface plus GPT-4 was a lot. I don't think we realized quite how much it was.

[00:38:23]

Because we had- Like, too much for society to take in?

[00:38:25]

We split it and we put it out with GPT-3.5 first, which we thought was a much weaker model. It turned out to still be powerful enough for a lot of use cases. But I think that, in retrospect, was a really good decision and helped with that process of gradual adaptation for society.

[00:38:44]

Looking back, do you wish that you had done more to, I don't know, give people some kind of a manual to say, Here's how you can use this at school or at work?

[00:38:51]

Two things. One, I wish we had done something intermediate between the release of 3.5 in the API and ChatGPT. Now, I don't know how well that would have worked, because I think there was just going to be some moment where it went viral in the mind of society, and I don't know how incremental that could have been. It's either it goes like this, or it doesn't do anything. I have reflected on this question a lot. I think the world was going to have to have that moment. It was better sooner than later. It was good we did it when we did. Maybe we should have tried to push it even a little earlier, but it's a little chancy about when it hits. I think only a consumer product could have done what happened there. Now, the second thing is, should we have released more of a how-to manual? I honestly don't know. I think we could have done some things there that would have been helpful. But I really believe that it's not optimal for tech companies to tell people like, Here is how to use this technology and here's how to do whatever. And the organic thing that happened there actually was pretty good.

[00:40:06]

I'm curious about the thing that you just said about we thought it was important to get this stuff into folks' hands sooner rather than later. Say more about what that is.

[00:40:14]

Well, more time for our institutions and leaders to adapt and understand, for people to think about what the next version of the model should do, what they'd like, what would be useful, what would not be useful, what would be really bad, how society and the economy need to co-evolve. The thing that many people in the field or adjacent to the field have advocated or used to advocate for, which I always thought was super bad, was, This is so disruptive, such a big deal. It's got to be done in secret by the small group of us that can understand it. Then we will fully build the AGI and push a button all at once when it's ready. I think that'd be quite bad.

[00:40:55]

Yeah, because it would just be way too much change too fast.

[00:40:58]

Yeah, again, society and technology actually have to co-evolve, and people have to decide what's going to work for them and not and how they want to use it. And we're... You can criticize OpenAI about many, many things, but we do try to really listen to people and adapt it in ways that make it better or more useful, and I think we're able to do that, but we wouldn't get it right without that feedback.

[00:41:15]

I want to talk about AGI and the path to AGI later on. But first, I want to just define AGI and have you talk about where we are on the continuum.

[00:41:27]

I think it's a ridiculous and meaningless term. Yeah. I'm sorry. I apologize that I keep using it.

[00:41:32]

It's like- I mean, I just never know what people are talking about. No one else does either.

[00:41:36]

They mean really smart AI.

[00:41:38]

Yeah. So it stands for Artificial General Intelligence. And you could probably ask a hundred different AI researchers and they would give you a hundred different definitions of what AGI is. Researchers at Google DeepMind just released a paper this month that offers a framework. They have five levels. I guess they have levels ranging from level zero, which is no AI, all the way up to level five, which is superhuman. And they suggest that currently ChatGPT, Bard, Llama 2 are all at level one, which is equal to or slightly better than an unskilled human. Would you agree with that? Where are we? If you'd say this is a term that means something and you define it that way, how close are we?

[00:42:24]

I think the thing that matters is the curve and the rate of progress, and there's not going to be some milestone that we all agree like, okay, we've passed it and now it's called AGI. What I would say is we currently have systems that are... There will be researchers who will write papers like that, and academics will debate it, and people in the industry will debate it. I think most of the world just cares like, Is this thing useful to me or not? We currently have systems that are somewhat useful, clearly. Whether we want to say it's a level one or two, I don't know, but people use it a lot and they really love it. There's huge weaknesses in the current systems, but it doesn't mean that... I'm a little embarrassed by GPTs, but people still like them. That's good. It's nice to do useful stuff for people. So yeah, call it a level one. It doesn't bother me at all. I am embarrassed by it. We will make them much better. But at their current state, they're still delighting people and being useful to people.

[00:43:27]

Yeah. I also think it undersells them slightly to say that they're just better than unskilled humans. When I use ChatGPT, it is better than skilled humans for some things.

[00:43:34]

At some things. And worse than unskilled - worse than any human - at many other things.

[00:43:37]

But I guess this is one of the questions that people ask me the most, and I imagine ask you, which is, what are today's AI systems useful and not useful for doing?

[00:43:49]

I would say the main thing that they're bad at, well, many things, but one that is on my mind a lot is they're bad at reasoning.

[00:43:58]

And a lot of the valuable human things require some degree of complex reasoning. But they're good at a lot of other things. GPT-4 is vastly superhuman in terms of its world knowledge. It knows there's a lot of things in there. And it's very different than how we think about evaluating human intelligence. So it can't do these basic reasoning tasks. On the other hand, it knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not. But if you are using it to be a coder, for example, it can hugely increase your productivity. There's value there, even though it has all of these other weak points. If you were a student, you can learn a lot more than you could without using this tool in some ways. Value there, too.

[00:44:52]

Let's talk about GPTs, which you announced at your recent developer conference. For those who haven't had a chance to use one yet, Sam, what's a GPT?

[00:44:59]

It's like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited ability to do actions. You can give it knowledge to refer to. You can say, Act this way. But it's super easy to make, and it's a first step towards more powerful AI systems and agents.

[00:45:17]

We've had some fun with them on the show. There's a Hard Fork bot that you can ask about anything that's happened on any episode of the show. It works pretty well, we found, when we did some testing. But I want to talk about where this is going. What are the GPTs that you've released a first step toward?

[00:45:35]

AIs that can accomplish useful tasks. I think we need to move towards this with great care. I think it would be a bad idea to turn powerful agents free on the internet. But AIs that can act on your behalf, to do something with a company, that can access your data, that can help you be good at a task. I think that's going to be an exciting way we use computers. We have this belief that we're heading towards a vision where there are new interfaces, new user experiences possible, because finally the computer can understand you and think. And so the sci-fi vision of a computer that you just tell what you want and it figures out how to do it, this is a step towards that.

[00:46:25]

Right now, I think what's holding a lot of people back in... A lot of companies and organizations back in using this AI in their work is that it can be unreliable. It can make up things. It can give wrong answers, which is fine if you're doing creative writing assignments, but not if you're a hospital or a law firm or something else with big stakes. How do we solve this problem of reliability? And do you think we'll ever get to the low fault tolerance that is needed for these really high-stakes applications? Yeah.

[00:47:00]

First of all, I think this is a great example of people understanding the technology, making smart decisions with it, society and the technology co-evolving together. What you see is that people are using it where appropriate and where it's helpful, and not using it where you shouldn't. For all of the fear that people have had, both users and companies seem to really understand the limitations and are making appropriate decisions about where to roll it out. The controllability, reliability, whatever you want to call it, that is going to get much better. I think we'll see a big step forward there over the coming years. That's not the end of the story. And I think that there will be a time. I don't know if it's like 2026, 2028, 2030, whatever, but there will be a time where we just don't talk about this anymore. Yeah.

[00:47:58]

It seems to me, though, that that is something that becomes very important to get right as you build these more powerful GPTs. Once I tell it - I would love to have a GPT be my assistant, go through my emails, say, hey, don't forget to respond to this before the end of the day.

[00:48:12]

The reliability has got to be way up before that happens.

[00:48:15]

Yeah, that makes sense. You mentioned as we started to talk about GPTs that you have to do this carefully. For folks who haven't spent as much time reading about this, explain what are some things that could go wrong. You guys are obviously going to be very careful with this. Other people are going to build GPT-like things, might not put the same controls in place. So what can you imagine other people doing that you, as the CEO, would say to your folks, hey, it's not going to be able to do that?

[00:48:42]

Well, that example that you just gave, if you let it act as your assistant and go send emails, do financial transfers for you, it's very easy to imagine how that could go wrong. But I think most people who would use this don't want that to happen on their behalf either. And so there's more resilience to this stuff than people think.

[00:49:04]

I think those are, I mean, for what it's worth, on the hallucination thing, which it does feel like has maybe been the longest conversation that we've had about ChatGPT in general since it launched. I just always think about Wikipedia as a resource I use all the time. And I don't want Wikipedia to be wrong, but 100% of the time, it doesn't matter if it is. I am not relying on it for life-saving information. ChatGPT, for me, is the same. It's like, hey, it's great. It's just bar trivia. Hey, you know what? What's the history of this conflict in the world?

[00:49:32]

Yeah, I mean, we want to get it a lot better, and we will. I think the next model will just hallucinate much less.

[00:49:39]

Is there an optimal level of hallucination in an AI model? Because I've heard researchers say, well, you actually don't want it to never hallucinate, because that would mean making it not creative. That new ideas come from making stuff up that's not necessarily tethered to-

[00:49:54]

This is why I tend to use the word controllability and not reliability. You want it to be reliable when you want. You want it to either - you instruct it or it just knows based off of the context - that you are asking a factual query and you want the 100% black and white answer. But you also want it to know when you want it to hallucinate, or you want it to make stuff up. As you just said, new discovery happens because you come up with new ideas, most of which are wrong, and you discard those and keep the good ones and add those to your understanding of reality. Or if you're telling a creative story, you want that. If these models didn't hallucinate at all, ever, they wouldn't be so exciting. They wouldn't do a lot of the things that they can do. But you only want them to do that when you want them to do that. The way I think about it is like model capability, personalization, and controllability. Those are the three axes we have to push on. Controllability means no hallucinations when you don't want them, lots of them when you're trying to invent something new.

[00:50:56]

Let's maybe start moving into some of the debates that we've been having about AI over the past year. I actually want to start with something that I haven't heard as much, but that I do bump into when I use your products, which is they can be quite restrictive in how you use them, I think mostly for great reasons. I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I've tried to ask ChatGPT a question about sexual health. I feel like it's going to call the police on me, right? So I'm just curious how you've approached that.

[00:51:26]

Yeah, look, one thing, no one wants to be scolded by a computer, ever. That is not a good feeling. And so you should never feel like you're going to have the police called on you. It's more like, horrible, horrible, horrible. We have started very conservative, which I think is a defensible choice. Other people may have made a different one. But again, that principle of controllability, what we'd like to get to is a world where if you want some of the guardrails relaxed a lot and you're not a child or something, then fine, we'll relax the guardrails. It should be up to you. But I think starting super conservative here, although annoying, is a defensible decision, and I wouldn't have gone back and made it differently. We have relaxed it already. We will relax it much more, but we want to do it in a way where it's user controlled.

[00:52:19]

Yeah. Are there certain red lines you won't cross, things that you will never let your models be used for, other than things that are obviously illegal or dangerous?

[00:52:29]

Yeah, certainly things that are illegal and dangerous we want to stop. There's a lot of other things that I could say, but where those red lines will be depends so much on how the technology evolves that it's hard to say right now, Here's the exhaustive set. We really try to just study the models and predict capabilities as we go. But if we learn something new, we change our plans.

[00:52:54]

Yeah. One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you'd asked the average congressperson, What do you think of AI? They would have said, What's that? Get out of my office.

[00:53:07]

Right. We just recently saw the Biden White House put out an executive order about AI. You have obviously been meeting a lot with lawmakers and regulators, not just in the US, but around the world. What's your view of how AI regulation is shaping up?

[00:53:22]

It's a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation. I'm not there. But heading into overreach and regulatory capture would be really bad. And there's a lot of amazing work that's going to happen with smaller models, smaller companies, open source efforts. And it's really important that regulation not strangle that... So it's like, I've become a villain for this, but I think there was-.

[00:53:50]

You have? Yeah. How do you feel about this?

[00:53:53]

Like, annoyed, but have bigger problems in my life right now. But this message of regulate us, regulate the really capable models that can have significant consequences but leave the rest of the industry alone. It's a hard message to get across.

[00:54:12]

Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there are essentially no harms that these models can have that the internet itself doesn't enable, right? And that to do anything like what is proposed in this executive order - to have to inform the Biden administration - is just essentially pulling up the ladder behind you and ensuring that the folks who've already raised the money can reap all of the profits of this new world and will leave the little people behind. So I'm curious what you make of that argument.

[00:54:51]

I disagree with it on a bunch of levels. First of all, I wish the threshold for when you have to report was set differently and based off of evals and capability thresholds.

[00:55:05]

Not flops?

[00:55:06]

Not flops. Okay.

[00:55:07]

But there's no small company training with that many flops anyway. So that's like a little bit...

[00:55:11]

For the listener who maybe didn't listen to our last episode about this-

[00:55:14]

Listen to our flops episode.

[00:55:15]

The flops are the measure of the amount of computing that is used to train these models. The executive order says if you're above a certain computing threshold, you have to tell the government that you're training a model that big.

[00:55:25]

But no small effort is training at 10^26 flops. Currently, no big effort is either. So that's a dishonest comment. Second of all, the burden of just saying, Here's what we're doing, is not that great. But third of all, the underlying thing there, there's nothing you can do here that you couldn't already do on the internet - that's the real either dishonesty or lack of understanding. You could maybe say with GPT-4, you can't do anything you can't do on the internet, but I don't think that's really true even at GPT-4. There are some new things. And with GPT-5 and 6, there will be very new things. Saying that we're going to be cautious and responsible and have some testing around that, I think that's going to look more prudent in retrospect than it maybe sounds right now.
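
[For a rough sense of the scale being discussed here, a common back-of-the-envelope approximation - our assumption, not a figure from the conversation - estimates training compute as about six floating-point operations per model parameter per training token. Under that assumption, a hypothetical model with 10^12 parameters trained on 1.5 × 10^13 tokens would land just under the executive order's reporting threshold:

C ≈ 6 × N × D = 6 × 10^12 × (1.5 × 10^13) ≈ 9 × 10^25 FLOPs < 10^26 FLOPs]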

[00:56:22]

I'd say for me, these seem like the absolute gentlest regulations you could imagine. It's like, tell the government and report on any safety testing you did.

[00:56:28]

Seems reasonable. Yeah.

[00:56:30]

People are not just saying that these fears of AI and existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said that OpenAI, that you, are specifically lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng, who was, I think, one of your professors at Stanford, recently said something to this effect. What's your response to that? I'm curious if you have thoughts about that.

[00:57:09]

Yeah, I actually don't think we're all going to go extinct. I think it's going to be great. I think we're heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. That's a pretty consensus thing. I don't think that's a radical position. I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. Andrew also said not that long ago that he thought it was totally irresponsible to talk about AGI because it was just never happening.

[00:58:00]

I think he compared it to worrying about overpopulation on Mars.

[00:58:03]

And I think now he might say something different. So, it's... Humans are very bad at having intuition for exponentials. Again, I think it's going to be great. I wouldn't work on this if I didn't think it was going to be great. People love it already and I think they're going to love it a lot more. But that doesn't mean we don't need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad. And that doesn't go well either.

[00:58:40]

The exponential thing is real. I have dealt with this. I've talked about the fact that I was only using GPT-3.5 until a few months ago and finally at the urging of a friend, upgraded. And I thought I.

[00:58:51]

Would have given you a free account. Sorry. I should have asked. But it's a real improvement.

[00:58:56]

It is a real improvement. And not just in the sense of, Oh, the copy that it generates is better. It actually transformed my sense of how quickly the industry was moving. It made me think, oh, the next generation of this is going to be radically better. And so I think that part of what we're dealing with is just that it has not been widely distributed enough to get people to reckon with the implications.

[00:59:19]

I disagree with that. I mean, I think that maybe the tech experts say, oh, this is not a big deal, whatever. But most of the world, anyone who has used even the free version, is like, oh, man, they got real AI. Yeah.

[00:59:35]

And you went around the world this year talking to people in a lot of different countries. I'd be curious to what extent that informed what you just said.

[00:59:43]

Significantly. I had a little bit of a sample bias because the people that wanted to meet me were probably pretty excited. But you do get a sense and there's quite a lot of excitement, maybe more excitement in the rest of the world than the US.

[00:59:55]

Sam, I want to ask you about something else that people are not happy about when it comes to these language and image models, which is this issue of copyright. I think a lot of people view what OpenAI and other companies did, which is hoovering up work from across the internet, using it to train these models that can, in some cases, output things that are similar to the work of living authors or writers or artists. And they just think this is the original sin of the AI industry, and we are never going to forgive them for doing this.

[01:00:29]

What do you.

[01:00:30]

Think about that? And what would you say to artists or writers who just think that this was a moral lapse? Forget about the legal question of whether you're allowed to do it or not; they think it was just unethical for you and other companies to do that in the.

[01:00:43]

First place. Well, we blocked that stuff. Like, you can't go to DALL-E and generate something. I mean, speaking of being annoyed, we may be too aggressive on that. But I think it's the right thing to do until we figure out some economic model that works for people. And we're doing some things there now, but we've got more to do. Other people in the industry do allow quite a lot of that. And I get why artists are annoyed.

[01:01:08]

I guess I'm talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who created it, and using it to train these models. What would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we will never.

[01:01:28]

Forgive you for it? Well, first of all, I always have empathy for people who are like, Hey, you did this thing, and it's affecting me, and we didn't talk about it first, or it was just a new thing. I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. It shouldn't be regurgitating, shouldn't be violating any copyright laws. But if we're really going to say that AI doesn't get to read the internet and learn, then it's like saying if you read a physics textbook and learn how to do a physics calculation, every time you do that for the rest of your life, you've got to figure out how to, like... That seems like not a good solution to me. But on individuals' private work, we try not to train on that stuff. We really don't want to be here upsetting people. Again, I think other people in the industry have taken different approaches. And we've also done some things that, now that we understand more, I think we'll do differently in the future.

[01:02:42]

What?

[01:02:44]

What would we do differently? We want to figure out new economic models so that, say, if you're an artist, we don't just totally block you. We don't just not train on your data, which a lot of artists also say, No, I want this in here, I want whatever. But we have a way to help share revenue with you. GPTs are maybe going to be an interesting first example of this, because people will be able to put private data in there and say, Hey, use this version, and there could be a revenue share around it.

[01:03:09]

I feel like that might be a good place to take a break and then come back and talk about.

[01:03:14]

The future. Yes. Let's take a break.

[01:03:29]

Well, I had one question about the future that came out of what we were talking about before the break, which is, and it's so big, but I truly need to hear your thoughts on this: what is the future of the internet as ChatGPT rises? And the reason I ask is I now have a hotkey on my computer that I type when I want to know something, and it accesses ChatGPT directly through a piece of software called Raycast. And because of this, I am not using Google Search nearly as much. I am not visiting websites nearly as much. That has implications for all the publishers. And for, frankly, just the model itself, because presumably if the economics change, there will be fewer web pages created and less data for ChatGPT to access. So I'm just curious how you have thought about the internet in a world where your product succeeds in the way you want it to.

[01:04:28]

I do think if this all works, it should really change how we use the internet. There are a lot of things the current interface is perfect for. If you want to mindlessly watch TikTok videos, perfect. But if you're trying to get information or get a task accomplished, it's actually quite bad relative to what we should all aspire to. You can totally imagine a world where you have a task that right now takes hours of clicking around the internet and bringing stuff together, and you just ask ChatGPT to do one thing and it goes off and computes and you get the answer back. I'll be disappointed if we don't use the internet differently.

[01:05:15]

Yeah. Do you think that the economics of the internet as it is today are robust enough to withstand the challenge that AI poses? Probably. Okay. What do you think? Well, I worry in particular about the publishers. The publishers have been having a hard time already for a million other reasons, but to the extent that they're driven by advertising and visits to web pages, and to the extent that those visits are driven by Google search in particular, a world where web search is just no longer the front page to most of the internet, I think, does require a different web economics.

[01:05:47]

I think it does require a shift, but I think the value is... What I thought you were asking about was, is there going to be enough value there for some economic model to work? And I think that's definitely going to be the case. Yeah, the model may have to shift. I would love it if ads became less a part of the internet. I was thinking the other day, for whatever reason, I just had this thought in my head as I was browsing around the internet: there are more ads than content everywhere.

[01:06:14]

I was reading a story today, scrolling on my phone, and I managed to get it to a point where between all of the ads on my relatively large phone screen, there was one line of text from the article visible.

[01:06:24]

One of the reasons I think people like ChatGPT, even if they can't articulate it, is we don't do ads. Yes. That's an intentional choice, because there are plenty of ways you could imagine us putting in ads. Totally. But we made the choice that ads plus AI can get a little dystopic. We're not saying never. We do want to offer a free service, but a big part of our mission fulfillment, I think, is if we can continue to offer ChatGPT for free at a high quality of service to anybody who wants it and just say, Hey, here's free AI, and good free AI, and no ads. Because I think that really does, especially as the AI gets really smart, that really does get a little strange. Yeah.

[01:07:08]

I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as a benchmark or a milestone or something that they're aiming for. And I'm curious what you think the barriers between here and AGI are. Maybe let's define AGI as a computer that can do any cognitive task that a human can.

[01:07:32]

Let's say we make an AI that is really good, but it can't go discover novel physics. Would you call that AGI?

[01:07:38]

I probably would.

[01:07:40]

Would, okay.

[01:07:41]

Would you?

[01:07:43]

Well, again, I don't like the term, but I wouldn't say that means we're done with the mission. I'd say we still have a lot more work to do.

[01:07:48]

The vision is to create something that is better than humans at doing original science, that can invent, can discover.

[01:07:55]

Well, I am a believer that all real, sustainable human progress comes from scientific and technological progress. If we can have a lot more of that, I think it's great. If the system can do things that we, unaided, on our own can't, even just as a tool that helps us go do that, then I will consider that a massive triumph and I can happily retire at that point. But before that, I can imagine that we do something that creates incredible economic value but is not the AGI, superintelligence, whatever you want to call it, thing that we should aspire to.

[01:08:29]

Right. Right. What are some of the barriers to getting to that place where we're doing novel physics research?

[01:08:35]

And.

[01:08:37]

Keep in mind, Kevin and I don't know anything about technology.

[01:08:41]

That seems unlikely to be true.

[01:08:43]

Well, if you start talking about retrieval-augmented generation or anything, you might lose me even. I'll follow, but you'll lose Casey. He'll follow, yeah.

[01:08:53]

We talked earlier about just the model's limited ability to reason. And I think that's one thing that needs to be better. The model needs to be better at reasoning. Like, with GPT-4, an example of this that my co-founder, Ilya, uses sometimes, that's really stuck in my mind, is there was a time in Newton's life where the right thing for him.

[01:09:16]

To do- You're talking, of course, about Isaac Newton, not my life. Isaac Newton. Okay.

[01:09:19]

Well, maybe you too. But maybe my life.

[01:09:20]

We'll find out.

[01:09:21]

Stay tuned. Where the right thing for him to do was to read every math textbook he could get his hands on. He should talk to every smart professor, talk to his peers, do problem sets, whatever. That's what our models do today. At some point, though, he was never going to invent calculus doing that; it didn't exist in any textbook. At some point, he had to go think of new ideas and then test them out and build on them, whatever else. That phase, that second phase, we don't do yet. And I think you need that before it's something we want to call an AGI.

[01:09:55]

One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, let's call it five years, in this type of AI has been just the result of things getting bigger. Bigger models, more compute. Obviously, there's work around the edges in how you build these things that makes them more useful. But there hasn't really been a shift on the architectural level of the systems that these models are built on. Do you think that that is going to remain true? Or do you think that we need to invent some new process or new mode or new technique to get through some of these barriers?

[01:10:38]

We will need new research ideas, and we have needed them. I don't think it's fair to say there haven't been any here. I think a lot of the people who say that are not the people building GPT-4, but the people opining from the sidelines. But there is some kernel of truth to it. And the answer is... OpenAI has a philosophy of: we will just do whatever works. If it's time to scale the models and work on the engineering challenges, we'll go do that. If now we need a new algorithm to break through, we'll go work on that. If now we need a different data mix, we'll go work on that. We just do the thing in front of us and then the next one and then the next one and the next one. There are a lot of other people who want to write papers about level one, two, three and whatever. There are a lot of other people who want to say, Well, it's not real progress. They just made this incredible thing that people are using and loving, and it's not real science. But our belief is, we will just do whatever we can to usefully drive the progress forward.

[01:11:42]

And we're open minded about how we do that.

[01:11:45]

What is super alignment? You all just recently announced that you are devoting a lot of resources and time and computing power to super alignment, and I don't know what it is. So can you help me understand?

[01:11:58]

It's alignment that comes with sour cream and guacamole. There you go. San Francisco taco shop. That's a very San Francisco-specific joke, but it's pretty good. I'm sorry. Go ahead, Sam. Can I.

[01:12:08]

Leave it at that? I don't really want to follow that. I mean, that was such a good answer. Alignment is how you get these models to behave in accordance with what the human who's using them wants. And super alignment is how you do that for super capable systems. We know how to align GPT-4 pretty well, better than people thought we were going to be able to do. When we put out GPT-2 and 3, people were like, Oh, this is irresponsible research because it's always going to just spew toxic shit. You're never going to get it. And it actually turns out we're able to align GPT-4 reasonably well, maybe too well.

[01:12:47]

Yeah. I mean, good luck getting it to talk about sex is my official comment.

[01:12:52]

About GPT-4. But in some sense, that's an alignment failure because it's not doing what you want there. But now we have that. Now we have the social part of the problem. We can technically do it. But we don't yet know what the new challenges will be for much more capable systems. And so that's what that team researches.

[01:13:10]

What kinds of questions are they investigating or what research are they doing? Because I confess I lose my grounding in reality when you start talking about super capable systems and the problems that can emerge with them. Is this a theoretical future.

[01:13:26]

Forecasting team? Well, they try to do work that is useful today, but for the theoretical systems of the future. So they'll have their first result coming out, I think, pretty soon. But yeah, they're interested in these questions of, as the systems get more capable than humans, what is it going to take to reliably solve the alignment challenge?

[01:13:53]

Yeah. This is the stuff where my brain does feel like it starts to melt as I ponder the implications because you've made something that is smarter than every human. But you, the human, have to be smart enough to ensure that it always acts in your interest, even though, by definition, it is way.

[01:14:07]

Smarter than you. Yeah, we need some help there.

[01:14:09]

Yeah. I do want to stick on this issue of alignment or super alignment because I think there's an unspoken assumption in there that... Well, you just put it as alignment is what the user wants it to behave like. And obviously, there are a lot of users with good intentions.

[01:14:27]

No, it has to be like what society and the user can intersect on. There are going to have to be some rules here.

[01:14:36]

And I guess where do you derive those rules? Because if you're Anthropic, you use the UN Declaration of Human Rights and the Apple terms of service, and that becomes the- The two.

[01:14:47]

Most important documents in rights governance.

[01:14:50]

If you're not just going to borrow someone else's rules, how do you decide which values these things should.

[01:14:57]

Align themselves to? So we're doing this thing. We've been doing this thing. We've been doing these democratic input governance grants where we're giving different research teams money to go off and study different proposals. There's some very interesting ideas in there about how to fairly decide that. The naive approach to this that I have always been interested in, maybe we'll try at some point, is what if you had hundreds of millions of ChatGPT users spend a few hours a year answering questions about what they thought the default settings should be, what the wide bounds should be. Eventually, you need more than ChatGPT users. You need the whole world represented in some way because even if you're not using it, you're still impacted by it. But to start, what if you literally just had ChatGPT chat with its users? I think it's very important. It would be very important in this case to let the users make final decisions, of course. But you could imagine it saying like, Hey, you answered this question this way. Here's how this would impact other users in a way you might not have thought of. If you want to stick with your answer, that's totally up to you.

[01:16:00]

But are you sure, given this new data? And then you could imagine GPT-5 or whatever just learning that collective preference set. And I think that's interesting to consider, better than the Apple terms.

[01:16:16]

Of.

[01:16:16]

Service, let's say.

[01:16:18]

I want to ask you about this feeling. Kevin might call it AI vertigo. Is this a widespread term that people use? No, I think you invented this. It's just us. So there is this moment when you contemplate even just the medium AI future, and you start to think about what it might mean for the job market, your own job, your daily life, for society. And there is this dizziness that I find sets in. This year, I actually had a nightmare about AGI, and then I asked around, and I feel like among people who work on this stuff, that's not uncommon. I wonder for you, if you have had these moments of AI vertigo, if you continue to have them, or is there at some point where you think about it long enough that you feel like you get your legs underneath you?

[01:17:00]

I used to have... I mean, there are still some of these moments, but there were some very strange, extreme vertigo moments, particularly around the launch of GPT-3. But you do get your legs under you.

[01:17:16]

Yeah.

[01:17:16]

I think the future will somehow be less different than we think. It's this amazing thing to say, right? We invent AGI and it matters less than we think. It doesn't sound like a sentence that parses. And yet it's what I expect to happen.

[01:17:29]

Why is that? What?

[01:17:34]

There's a lot of inertia in society, and humans are remarkably adaptable to any amount of change.

[01:17:40]

One question I get a lot that I imagine you do, too, is from people who want to know what they can do. You mentioned adaptation as being necessary on the societal level. I think for many years, the conventional wisdom was that if you wanted to adapt to a changing world, you should learn how to code. That was the classic.

[01:17:58]

Advice that people- May not be such.

[01:17:59]

Good advice anymore. Exactly. So now AI systems can code pretty well. For a long time, the conventional wisdom was that creative work was untouchable by machines. If you were a factory worker, you might get automated out of your job. But if you were an artist or a writer, that was impossible for computers to do. Now we see that's no longer safe. So where is the high ground here? Where can people focus their energy if they want skills and abilities that AI is not going to be able to replace?

[01:18:27]

My meta answer is that it's always the right bet to just get good at the most powerful new tools, the most capable new tools. And so when computer programming was that, you did want to become a programmer. And now that AI tools totally change what one person can do, you want to get really good at using AI tools. Having a sense for how to work with ChatGPT and other things, that is the high ground. We're not going back. That's going to be part of the world. You can use it in all sorts of ways, but getting fluent at it, I think, is really important. I want to.

[01:19:08]

Challenge that, because I think you're partially right in that I think there is an opportunity for people to embrace AI and become more resilient to disruption that way. But I also think if you look back through history, it's not like we learn how to do something new and then the old way just goes away, right? We still make things by hand. There's still an artisanal market. So do you think there are going to be people who just decide, you know what? I don't want to use this stuff. Totally. And there's going to be something valuable in their, I don't know, non-AI-assisted work.

[01:19:42]

I expect, like, I expect that if we look forward to the future, things that we want to be cheap can get much cheaper, and things that we want to be expensive are going to be astronomically expensive. Like what? Real estate, handmade goods, art. And so totally, there will be a huge premium on things like that. And there will be many people who really... There's always been like a... Even when machine-made products have been much better, there has always been a premium on handmade products, and I'd expect that to intensify.

[01:20:17]

This is also a bit of a curveball. Very curious to get your thoughts. Where do you come down on the idea of AI romances? Are these net good for society? I don't want one personally. You don't want one. Okay. But it's clear that there's a huge demand for this, right? Yeah. I think that, I mean, Replika is building these. They seem like they're doing very well. I would be shocked if this is not a multibillion-dollar company, right?

[01:20:40]

Someone will make a multibillion.

[01:20:41]

Dollar company. That's what I'm saying. Yeah, somebody will. Yeah, for sure. I just personally think we're going to have a big culture war. I think Fox News is going to be doing segments about the generation lost to AI girlfriends or boyfriends at some point within the next few years. But at the same time, you look at all the data on loneliness, and it seems like, well, if we can give people companions that make them happy during the day, could it be a net good thing?

[01:21:01]

It's complicated. I have misgivings, but this is not a place where I think I get to impose what I think is good on.

[01:21:09]

Other people. Totally. Okay, but it sounds like this is not at the top of your product roadmap, building the boyfriend API. No. All right.

[01:21:15]

You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes. Can you expand on that? What are some things that AI might become very good at persuading us to do? And what are some of those strange outcomes you're worried about?

[01:21:34]

The thing I was thinking about at that moment was the upcoming election. There's a huge focus on the US 2024 election. There's a huge focus on deepfakes and the impact of AI there. And I think that's reasonable to worry about, good to worry about. But we already have some societal antibodies towards people seeing, like, doctored photos or whatever. And yeah, they're going to get more compelling. There's going to be more of it, but we know those are there. There's a lot of discussion about that. There's almost no discussion about what are the new things AI can do to influence an election, that AI tools can do to influence an election. One of those is to carefully, one-on-one, persuade individual people.

[01:22:15]

Tailored messages. Tailored messages. That's a new thing that the content farms couldn't quite do.

[01:22:20]

Right. And that's not AGI, but that could still be pretty harmful.

[01:22:23]

I think so, yeah.

[01:22:25]

I know we are running out of time, but I do want to push us a little bit further into the future than the, I don't know, maybe five year horizon we've been talking about. If you can imagine a good post AGI world, a world in which we have reached this threshold, whatever it is, what does that world look like? Does it have a government? Does it have companies? What do people do all day?

[01:22:50]

Like a lot of material abundance. People continue to be very busy, but the way we define work always moves. Our jobs would not have seemed like real jobs to people several hundred years ago. This would have seemed like incredibly silly entertainment. It's important to me, it's important to you. Hopefully, it has some value to other people as well. The jobs of the future may seem, I hope they seem, even sillier to us, but I hope that people get even more fulfillment, and I hope society gets even more fulfillment, out of them. But everybody can have a really great quality of life to a degree that I think we probably just can't imagine now. Of course, we'll still have governments. Of course, people will still squabble over whatever they squabble over. Less different in all of these ways than someone would think, and then unbelievably different in terms of what you can get a computer to do for you.

[01:23:42]

One fun thing about becoming a very prominent person in the tech industry as you are is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured.

[01:23:58]

And careful. I don't anymore. I did for a while. I decided I just couldn't keep up with the opsec.

[01:24:03]

It's so hard to lead a double life.

[01:24:05]

What was your secret.

[01:24:06]

Twitter account? Obviously, I can't. I had a good alt. A lot of people have good alts, but...

[01:24:12]

Your name is literally Sam Altman. I mean, it would have been weird if you didn't have one.

[01:24:15]

But I think I just got too well known or something to be doing that.

[01:24:20]

Yeah. And the theory that I heard attached to this was that you are secretly an accelerationist, a person who wants AI to go as fast as possible. And then all this careful diplomacy that you're doing and asking for regulation, this is really just the polite face that you put on for society. But deep down, you just think we should go all gas, no brakes toward the future.

[01:24:41]

No, I certainly don't think all gas, no brakes toward the future, but I do think we should go to the future. And that probably is what differentiates me from most of the AI companies: I think AI is good. I don't secretly hate what I do all day. I think it's going to be awesome. I want to see this get built. I want people to benefit from this. So all gas, no brakes? Certainly not. I don't even think most people who say it mean it. But I am a believer that this is a tremendously beneficial technology and that we have got to find a way to safely and responsibly get it into the hands of people, to confront the risks so that we get to enjoy the huge rewards. And maybe relative to the prior of most people who work on AI, that does make me an accelerationist. But compared to those accelerationist people, I'm clearly not them. So I'm like somewhere... I think you want the CEO of this company to be somewhere in the middle, which I think I am. Your accelerationism is... Somewhere in the middle space, I.

[01:25:36]

Think I am. Your gas and brakes.

[01:25:38]

I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we're not careful about it, it can be quite disastrous. And so we have to navigate it carefully.

[01:25:53]

Yeah. Yeah.

[01:25:54]

Sam, thanks so much for coming on Hard Fork.

[01:25:56]

Thank you, guys.

[01:25:58]

When we come back, we'll have some notes on that interview now with the benefit of hindsight. Casey, now with five days of hindsight on this interview and after everything that has transpired between the time that we originally recorded it and now, are there any things that Sam said that stuck out to you as being particularly relevant to understanding this conflict?

[01:26:38]

I keep coming back to the question that you asked him about whether he was a closet accelerationist. Is he somebody who is telling the world, Hey, I'm trying to do this in a very gradual iterative way, but behind the scenes is working to hit the accelerator? And during the interview, he gave a very diplomatic answer, as you might expect, to that question. But learning what we have learned over the past few days, I do feel like he is on the more accelerationist side of things. And certainly all of the people rallying to his defense on social media over the weekend, a good number of them were rallying because they think he is the one who is pushing AI forward. How about you? What do you think?

[01:27:19]

Totally. I thought that was very interesting. And now, with the additional context of the last three days, it explains a lot about the conflict between Sam Altman and the board. We still don't know exactly what happened, obviously, but I can imagine that Sam going around saying things like, I think the future is going to be amazing, and I think everything's going to be great with AI, I can see why that would land poorly with board members who, from the looks of things, are much more concerned about how the future is going to look. So it seems like he's an optimist who is running a company where the board of that company is less optimistic about AI, and that just seems like a fundamental tension that it sounds like they were not able to get past. I was also struck by something else that he said. It was interesting when we talked about GPTs, these build-your-own chatbots that OpenAI released at Dev Day a few weeks ago; he said that he was embarrassed because they were so simple and not all that functional and pretty prosaic. And that's just such a striking contrast, because some of the reporting that came out over the weekend suggested that the GPTs were actually one of the things that scared Ilya Sutskever and the board: that giving these AIs more agency and more autonomy and allowing them to do things on the internet was, at least if you believe the reporting, part of what made the board so anxious.

[01:28:48]

Yes. And at the same time, if it is true that the board and Ilya found out about GPTs at Developer Day, that speaks to some fundamental problems in how this company was being run. And I don't know if that is a Sam thing or a board thing or what, but you would think that by the time the keynote was being delivered, all of those stakeholders would have been looped in.

[01:29:11]

Totally. And I guess my other reflection on that interview is that it just sounded like Sam had no idea that any of this was brewing. This did not sound like someone who was trying to carefully walk the line between being optimistic and being scared of existential risk. This did not sound like someone who thought that he was on thin ice with his board. This sounded like someone who was very confidently charging ahead with his vision for the future of AI. That's right. I really hope we are not doing more emergency podcasts on this. Could the news just give us a little break for a minute?

[01:29:46]

Well, if I were you, Kevin, I would clear your Tuesday morning. Oh, God.

[01:29:51]

Happy.

[01:29:52]

Thanksgiving. Happy Thanksgiving.

[01:29:55]

Hard Fork is produced by Rachel Cohn and Davis Land. We're edited by Jen Poyant. Today's show was engineered by Rowan Niemisto. Original music by Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.