[00:00:00]

Hey, it's Michael. I hope you're having a wonderful Thanksgiving holiday. If you didn't catch us yesterday, let me just say, in addition to you and everyone who listens to The Daily, one of the things we are so grateful for here at the show is our amazing colleagues, reporters and editors throughout the newsroom and also throughout the Times audio department. So yesterday and today, we're doing something a little bit different. We're turning the stage over to those colleagues in the audio department to showcase their terrific work. Today, it's our friends at Hard Fork. If you're not familiar, Hard Fork is a weekly tech conversation hosted by Kevin Roose and Casey Newton. It's excellent. Case in point: for today's show, we're going to play you an interview that Kevin and Casey did with Sam Altman, the CEO of OpenAI, just two days before Altman was abruptly ousted and later reinstated by his board. If you're a Daily listener, earlier this week we covered the entire saga in our episode Inside the Coup at OpenAI. Anyway, here's Kevin and here's Casey, who are going to say a little bit more about their interview with Altman and about their show, Hard Fork.

[00:01:15]

Take a listen.

[00:01:16]

Hello, Daily listeners. Hope you had a good Thanksgiving. I'm Kevin Roose, a tech columnist for The New York Times.

[00:01:21]

I'm Casey Newton from Platformer.

[00:01:23]

And as you just heard, we are the hosts of Hard Fork. It's a weekly podcast from The New York Times about technology, Silicon Valley, AI, the future, all that stuff. Casey, how would you describe our show, Hard Fork, to the Daily audience?

[00:01:38]

I would say if you're somebody who is curious about the future, but you're also the sort of person who likes to get a drink at the end of the week with your friends, we are the show for you.

[00:01:46]

Okay?

[00:01:46]

We're going to tell you what is going on in this crazy world, but we're also going to try to make sure you have a good time while we do it.

[00:01:51]

Yeah.

[00:01:51]

So this week, for example, we have been talking about the never-ending saga at OpenAI that Michael just mentioned. If you haven't been following this news, let's just summarize what's going on in the quickest way possible. So, last Friday, Sam Altman, the CEO of OpenAI and arguably one of the most important people in the tech industry, was fired by his board. This firing shocked everyone: investors, employees, seemingly Sam Altman himself, who seemed not to know what was coming. Then over the next few days, there was a wild campaign by investors, employees, and eventually some of the board members to bring back Sam Altman as CEO. And late on Tuesday night, that campaign was successful. The company announced that Sam was coming back, that the board was going to be reconstituted, and that things were basically back to normal.

[00:02:44]

Yeah, on one hand, Kevin, a shocking turn of events. And on the other, by the time we got here, basically the only turn of events possible, I think.

[00:02:53]

Yeah. So it's been a totally insane few days. We've done several emergency podcasts about this, and today we are going to bring you something that I think is really important, which is an interview with Sam Altman. Now, this interview predates Sam Altman's firing. We recorded it on Wednesday of last week, just two days before he was fired. We obviously weren't able to ask him about the firing or the events that followed, but I think this conversation lays out Sam's worldview and is really important to understanding why he's been such a controversial figure inside OpenAI and how he's thinking about the way that AI is developing and how it's going to influence the future.

[00:03:36]

Yeah.

[00:03:37]

In a way, Kevin, it's almost as if the firing never happened because what we were curious about was how are you going to be leading OpenAI into the future? And as of Tuesday evening, he now will once again be leading OpenAI into the future.

[00:03:50]

Totally. So when we come back, our conversation with Sam Altman, the CEO of OpenAI, recorded just two days before all of this drama started.

[00:04:01]

Sam Altman.

[00:04:01]

Welcome back to Hard Fork.

[00:04:02]

Thank you.

Sam, it has been just about a year since ChatGPT was released, and I wonder if you have been doing some reflecting over the past year and kind of where it has brought us in the development of AI.

[00:04:16]

Frankly, it has been such a busy year, there has not been a ton of time for reflection.

[00:04:20]

That's why we brought you in. We want you to reflect here.

[00:04:23]

Great, I can do it now. I definitely think this was the year so far, there will maybe be more in the future, but the year so far where the general average tech person went from taking AI not that seriously to taking it pretty seriously.

[00:04:39]

Yeah.

[00:04:40]

And the sort of recompiling of expectations given that. So I think in some sense that's like the most significant update of the year.

[00:04:50]

I would imagine that for you, a lot of the past year has been watching the world catch up to things that you have been thinking about for some time. Does it feel that way?

[00:04:59]

Yeah, it does. We kind of always thought on the inside of OpenAI that it was strange that the rest of the world didn't take this more seriously, wasn't more excited about it.

[00:05:09]

I mean, I think if five years ago you had explained what ChatGPT was going to be, I would have thought, wow, that sounds pretty cool. And presumably I could have just looked into it more and smartened myself up. But I think until I actually used it, as is often the case, it was just hard to know what it was.

[00:05:25]

Yeah, I actually think we could have explained it and it wouldn't have made that much of a difference. We tried. People are busy with their lives. They don't have a lot of time to sit there and listen to some tech people prognosticate about something that may or may not happen, but you ship a product that people use, like get real value out of and then it's different.

[00:05:43]

Yeah. I remember reading about the early days of the run-up to the launch of ChatGPT, and I think you all have said that you did not expect it to be a hit when it launched.

[00:05:54]

No, we thought it would be a hit. We did it because we thought it was going to be a hit. We didn't think it was going to be like this big of a hit.

[00:06:00]

Right. As we're sitting here today, I believe it's the case that you can't actually sign up for ChatGPT Plus right now, is that right?

[00:06:05]

Correct. Yeah.

[00:06:06]

So what's that all about?

[00:06:08]

We never have enough capacity, but at some point it gets really bad. So over the last ten days or so, we've done everything we can. We've rolled out new optimizations, we've disabled some features, and then people just keep signing up. It keeps getting slower and slower, and there's a limit at some point to what you can do there, and we just don't want to offer a bad quality of service. And so it gets slow enough that we just say, you know what, until we can make more progress, either with more GPUs or more optimizations, we're going to put this on hold. Not a great place to be in, to be honest, but it's the least of several bad options.

[00:06:47]

Sure. And I feel like in the history of tech development, there often is a moment with really popular products where you just have to close sign ups for a little while. Right.

[00:06:56]

The thing that's different about this than others is it's so much more compute intensive than the world is used to for Internet services. So you don't usually have to do this. Usually by the time you're at this scale, you've solved your scaling bottlenecks.

[00:07:08]

Yeah. One of the interesting things for me about covering all the AI changes over the past year is that it often feels like journalists and researchers and companies are discovering properties of these systems at the same time, all together. I mean, I remember when we had you and Kevin Scott from Microsoft on the show earlier this year around the Bing relaunch, and you both said something to the effect of, well, to discover what these models are or what they're capable of, you kind of have to put them out into the world and have millions of people using them. Then we saw all kinds of crazy but also inspiring things. You had Bing's Sydney, but you also had people starting to use these things in their lives. So I guess I'm curious what you feel like you have learned about language models, and your language models specifically, from putting them out into the world.

[00:07:56]

What we don't want to be surprised by is the capabilities of the model. That would be bad. And we were not with GPT-4, for example. We took a long time between finishing that model and releasing it, red-teamed it heavily, really studied it, did all of the work internally and externally. And I'd say, at least so far, and maybe now it's been long enough that we would have been, we have not been surprised by any capabilities the model had that we just didn't know about at all, in a way that we were for GPT-3. Frankly, sometimes people found stuff. But what I think you can't do in the lab is understand how technology and society are going to coevolve. So you can say here's what the model can do and not do, but you can't say, and here's exactly how society is going to progress given that. And that's where you just have to see what people are doing, how they're using it. Well, one thing is they use it a lot. That's one takeaway that we clearly did not appropriately plan for. But more interesting than that is the way in which this is transforming people's productivity, personal lives, how they're learning. One example that I think is instructive, because it was the first and the loudest, is what happened with ChatGPT in education.

[00:09:22]

Days, or at most weeks, after the release of ChatGPT, school districts were falling all over themselves to ban it. And that didn't really surprise us. We could have predicted it, and did predict it. The thing that happened after that quickly, like weeks to months, was school districts and teachers saying, hey, actually, we made a mistake. This is a really important part of the future of education, and the benefits far outweigh the downsides. Not only are we unbanning it, we're encouraging our teachers to make use of it in the classroom, and we're encouraging our students to get really good at this tool because it's going to be part of the way people live. And then there was a big discussion about what the path forward should be, and that is just not something that could have happened without releasing it. Can I say one more thing? Yeah. Part of the decision that we made with the ChatGPT release: the original plan had been to do the chat interface and GPT-4 together in March. And we really believe in this idea of iterative deployment. And we had realized that the chat interface plus GPT-4 was a lot.

[00:10:39]

I don't think we realized quite how much it was, like, too much for

[00:10:42]

society to take in, so we split it, and

[00:10:45]

we put it out with GPT-3.5 first, which we thought was a much weaker model. It turned out to still be powerful enough for a lot of use cases. But I think that in retrospect was a really good decision and helped with that process of gradual adaptation for society.

[00:11:02]

Looking back, do you wish that you had done more to sort of, I don't know, give people some sort of a manual to say, here's how you can use this at school or at work?

[00:11:09]

Two things. One, I wish we had done something intermediate between the release of GPT-3.5 in the API and ChatGPT. Now, I don't know how well that would have worked, because I think there was just going to be some moment where it went viral in the mind of society, and I don't know how incremental that could have been. It's sort of an either-it-goes-like-this-or-it-doesn't kind of thing. And I have reflected on this question a lot. I think the world was going to have to have that moment. It was better sooner than later. It was good we did it when we did. Maybe we should have tried to push it even a little earlier, but it's a little chancy about when it hits, and I think only a consumer product could have done what happened there. Now, the second thing is, should we have released more of a how-to manual? And I honestly don't know. I think we could have done some things there that would have been helpful. But I really believe that it's not optimal for tech companies to tell people, here is how to use this technology and here's how to do whatever.

[00:12:20]

And the organic thing that happened there actually was pretty good. Yeah.

[00:12:24]

I'm curious about the thing that you just said, about how you thought it was important to get this stuff into folks' hands sooner rather than later.

[00:12:31]

Say more about why that is.

More time to adapt, for our institutions and leaders to understand, for people to think about what the next version of the model should do, what they'd like, what would be useful, what would not be useful, what would be really bad, how society and the economy need to coevolve. The thing that many people in the field or adjacent to the field have advocated, or used to advocate for, which I always thought was super bad, was: this is so disruptive, such a big deal, it's got to be done in secret by the small group of us that can understand it. And then we will fully build the AGI and push a button all at once when it's ready. And I think that'd be quite bad. Yeah.

[00:13:13]

Because it would just be way too much change too fast.

[00:13:16]

Yeah. Again, society and technology have to coevolve, and people have to decide what's going to work for them and not, and how they want to use it. And you can criticize OpenAI about many, many things, but we do try to really listen to people and adapt it in ways that make it better or more useful. And I think we're able to do that, but we wouldn't get it right without that feedback.

[00:13:33]

Yeah, I want to talk about AGI and the path to AGI later on, but first I want to just define AGI and have you talk about sort of where we are.

[00:13:45]

So, I think it's a ridiculous and meaningless term.

Yeah. So I apologize that I keep using it.

[00:13:52]

I just never know what people are talking about.

[00:13:55]

They mean, like, really smart AI. Yeah.

[00:13:57]

So it stands for artificial general intelligence, and you could probably ask 100 different AI researchers and they would give you 100 different definitions of what AGI is. Researchers at Google DeepMind just released a paper this month that sort of offers a framework. They have, I guess, levels ranging from level zero, which is no AI, all the way up to level five, which is superhuman. And they suggest that currently ChatGPT, Bard, and Llama 2 are all at level one, which is sort of equal to or slightly better than an unskilled human. Would you agree with that? Where are we? If you say this is a term that means something and you sort of define it that way, how close are we?

[00:14:42]

I think the thing that matters is the curve and the rate of progress, and there's not going to be some milestone that we all agree, okay, we've passed it and now it's called AGI. What I would say is, there will be researchers who will write papers like that, and academics will debate it, and people in the industry will debate it. And I think most of the world just cares, is this thing useful to me or not? And we currently have systems that are somewhat useful, clearly, and whether we want to say it's a level one or two, I don't know, but people use it a lot and they really love it. There are huge weaknesses in the current systems. I'm a little embarrassed by GPTs, but people still like them, and that's good. It's nice to do useful stuff for people. So, yeah, call it a level one, doesn't bother me at all. I am embarrassed by it. We will make them much better, but at their current state, they are still delighting people and being useful to people.

[00:15:45]

Yeah. I also think it underrates them slightly to say that they're just better than unskilled humans. When I use ChatGPT, it is better than skilled humans for some things.

[00:15:52]

And worse than any human at many other things.

[00:15:56]

But I guess this is one of the questions that people ask me the most, and I imagine ask you is like, what are today's AI systems useful and not useful for doing?

[00:16:07]

I would say the main thing that they're bad at, well, many things, but one that is on my mind a lot is they're bad at reasoning. And a lot of the valuable human things require some degree of complex reasoning. But they're good at a lot of other things. GPT-4 is vastly superhuman in terms of its world knowledge. There are a lot of things in there, and it's very different from how we think about evaluating human intelligence. It can't do these basic reasoning tasks. On the other hand, it knows more than any human has ever known. On the other hand, again, sometimes it totally makes stuff up in a way that a human would not. But if you are using it to be a coder, for example, it can hugely increase your productivity, and there's value there, even though it has all of these other weak points. If you are a student, you can learn a lot more than you could without using this tool. In some ways, there's value there too.

[00:17:10]

Let's talk about GPTs, which you announced at your recent developer conference. For those who haven't had a chance to use one yet, Sam, what's a GPT?

[00:17:17]

It's like a custom version of ChatGPT that you can get to behave in a certain way. You can give it limited ability to do actions, you can give it knowledge to refer to, you can say, act this way. It's super easy to make. And it's a first step towards more powerful AI systems and agents.

[00:17:34]

We've had some fun with them on the show. There's a Hard Fork bot that you can ask about anything that's happened on any episode of the show. It works pretty well, we found, when we did some testing. But I want to talk about where this is going. The GPTs that you've released, what are they a first step toward?

[00:17:48]

A first step toward AIs that can accomplish useful tasks. I think we need to move towards this with great care. I think it would be a bad idea to turn powerful agents free on the internet, but AIs that can act on your behalf, to do something with a company, that can access your data, that can help you be good at a task, I think that's going to be an exciting way we use computers. We have this belief that we're heading towards a vision where there are new interfaces, new user experiences possible, because finally the computer can understand you and think. And so the sci-fi vision of a computer that you just tell what you want and it figures out how to do it, this is a step towards that.

[00:18:44]

I think what's holding a lot of people and a lot of companies and organizations back from using this kind of AI in their work is that it can be unreliable. It can make up things, it can give wrong answers, which is fine if you're doing creative writing assignments, but not if you're a hospital or a law firm or something else with big stakes. How do we solve this problem of reliability? And do you think we'll ever get to the sort of low fault tolerance that is needed for these really high-stakes applications?

[00:19:17]

So first of all, I think this is a great example of people understanding the technology and making smart decisions with it, society and the technology coevolving together. What you see is that people are using it where appropriate and where it's helpful, and not using it where you shouldn't. And for all of the sort of fear that people have had, both users and companies seem to really understand the limitations and are making appropriate decisions about where to roll it out. The kind of controllability, reliability, whatever you want to call it, that is going to get much better. I think we'll see a big step forward there over the coming years. And I think there will be a time, I don't know if it's 2026, 2028, 2030, whatever, but there will be a time where we just don't talk about this anymore. Yeah.

[00:20:16]

It seems to me, though, that that is something that becomes very important to get right as you build these more powerful GPTs. Right? Like, I would love to have a GPT be my assistant, go through my emails: hey, don't forget to respond to this before the end of the day.

[00:20:30]

The reliability has got to be way up before that happens.

[00:20:32]

Yeah, that makes sense. You mentioned, as we started to talk about GPTs, that you have to do this carefully. For folks who haven't spent as much time reading about this, explain what are some things that could go wrong. You guys are obviously going to be very careful with this, but other people are going to build GPT-like things and might not put the same kind of controls in place. So what can you imagine other people doing that you, as the CEO, would say to your folks, hey, it's not going to be able to do that?

[00:20:59]

Well, that example that you just gave, if you let it act as your assistant and go send emails, do financial transfers for you, it's very easy to imagine how that could go wrong. But I think most people who would use this don't want that to happen on their behalf either. And so there's more resilience to this sort of stuff than people think.

[00:21:22]

I think that's right. I mean, for what it's worth, on the hallucination thing, which does feel like it has maybe been the longest conversation that we've had about ChatGPT in general since it launched, I just always think about Wikipedia as a resource I use all the time. I don't want Wikipedia to be wrong, but 100% of the time it doesn't matter if it is. I am not relying on it for life-saving information. Right? ChatGPT for me is the same. Right? It's like, hey, it's great, just kind of bar trivia: hey, what's the history of this conflict in the world?

[00:21:49]

Yeah, I mean, we want to get it a lot better and we will. I think the next model will just hallucinate much less.

Is there an optimal

[00:21:58]

level of hallucination in an AI model? Because I've heard researchers say, well, you actually don't want it to never hallucinate, because that would mean making it not creative. New ideas come from making stuff up.

[00:22:11]

That's not necessarily tethered to the truth? I tend to use the word controllability and not reliability. You want it to be reliable when you want it to be: either you instruct it, or it just knows based off of the context that you are asking a factual query and you want the 100% black-and-white answer. But you also want it to know when you want it to hallucinate, when you want it to make stuff up. As you just said, new discovery happens because you come up with new ideas, most of which are wrong, and you discard those and keep the good ones and sort of add those to your understanding of reality. Or if you're telling a creative story, you want that. If these models didn't hallucinate at all, ever, they wouldn't be so exciting, they wouldn't do a lot of the things that they can do, but you only want them to do that when you want them to do that. The way I think about it is model capability, personalization, and controllability; those are the three axes we have to push on. And controllability means no hallucinations when you don't want them, and lots of it when you're trying to invent something new.

[00:23:15]

Let's maybe start moving into some of the debates that we've been having about AI over the past year. And actually, I want to start with something that I haven't heard as much but that I do bump into when I use your products, which is that they can be quite restrictive in how you use them, I think mostly for great reasons. Right? I think you guys have learned a lot of lessons from the past era of tech development. At the same time, I feel like if I try to ask ChatGPT a question about sexual health, I feel like it's going to call the police on me. Right? So I'm just curious how you've approached that.

[00:23:44]

Look, one thing, no one wants to be scolded by a computer ever. That is not a good feeling. And so you should never feel like you're going to have the police called. Horrible, horrible, horrible. We have started very conservative, which I think is a defensible choice. Other people may have made a different one, but again, that principle of controllability, what we'd like to get to is a world where if you want some of the guardrails relaxed a lot and you're not like a child or something, then fine, we'll relax the guardrails, it should be up to you. But I think starting super conservative here, although annoying, is a defensible decision and I wouldn't have gone back and made it differently. We have relaxed it already. We will relax it much more, but we want to do it in a way where it's user controlled. Yeah.

[00:24:37]

Are there certain red lines you won't cross, things that you will never let your models be used for other than things that are obviously illegal or dangerous?

[00:24:47]

Yeah. Certainly things that are illegal and dangerous we won't allow. There are a lot of other things that I could say, but where those red lines will be depends so much on how the technology evolves that it's hard to say right now, here's the exhaustive set. We really try to just study the models and predict capabilities as we go, but if we learn something new, we change our plans. Yeah.

[00:25:12]

One other area where things have been shifting a lot over the past year is in AI regulation and governance. I think a year ago, if you'd asked the average Congressperson, what do you think of AI? They would have said, what's that?

[00:25:24]

Get out of my office.

[00:25:27]

We just recently saw the Biden White House put out an executive order about AI. You have obviously been meeting a lot with lawmakers and regulators, not just in the US but around the world. What's your view of how AI regulation is shaping up?

[00:25:40]

It's a really tricky point to get across. What we believe is that on the frontier systems, there does need to be proactive regulation, but heading into overreach and regulatory capture would be really bad. And there's a lot of amazing work that's going to happen with smaller models, smaller companies, open-source efforts, and it's really important that regulation not strangle that. So I've sort of become a villain for this, but I think...

[00:26:07]

You have, yeah.

[00:26:08]

How do you feel about this?

[00:26:10]

Like, annoyed, but I have bigger problems in my life right now. But this message of, regulate us, regulate the really capable models that can have significant consequences, but leave the rest of the industry alone, it's a hard message to get across.

[00:26:30]

Sure.

[00:26:30]

Here is an argument that was made to me by a high-ranking executive at a major tech company as some of this debate was playing out. This person said to me that there are essentially no harms that these models can have that the internet itself doesn't enable. Right? And that to do any sort of work like what is proposed in this executive order, to have to inform the Biden administration, is just essentially pulling up the ladder behind you and ensuring that the folks who've already raised the money can sort of reap all of the profits of this new world and will leave the little people behind. So I'm curious what you make of that argument.

[00:27:10]

I disagree with it on a bunch of levels. First of all, I wish the threshold for when you have to draft a report was set differently, and based off of evals and capability thresholds.

[00:27:23]

Not flops.

[00:27:24]

Not flops. Okay. But there's no small company training with that many flops anyway, so that's a little bit, yeah.

[00:27:29]

For the listener who maybe didn't listen to our last episode, our flops episode: flops are the sort of measure of the amount of computing that is used to train these models. The executive order says if you're above a certain computing threshold, you have to tell the government that you're training a model that big.

[00:27:44]

But no small effort is training at ten to the 26 flops. Currently, no big effort is either. So that's a dishonest comment. Second of all, the burden of just saying, here's what we're doing, is not that great. But third of all, the underlying claim there, that there's nothing you can do here that you couldn't already do on the internet, that's the real either dishonesty or lack of understanding. You could maybe say with GPT-4 you can't do anything you can't do on the internet, but I don't think that's really true. Even at GPT-4 there are some new things, and with GPT-5 and 6 there will be very new things. And saying that we're going to be cautious and responsible and have some testing around that, I think that's going to look more prudent in retrospect than it maybe sounds right now.

[00:28:40]

I have to say, for me, these seem like the absolute gentlest regulations you could imagine. Telling the government and reporting on any safety testing you did seems reasonable.

[00:28:47]

Yeah. People are not just saying that these fears of AI and sort of existential risk are unjustified. Some people, some of the more vocal critics of OpenAI, have said that you are specifically lying about the risks of human extinction from AI, creating fear so that regulators will come in and make laws or give executive orders that prevent smaller competitors from being able to compete with you. Andrew Ng, who was, I think, one of your professors at Stanford, recently said something to this effect. What's your response to that? I'm curious if you have thoughts.

[00:29:25]

Yeah, like, I actually don't think we're all going to go extinct. I think it's going to be great. I think we're heading towards the best world ever. But when we deal with a dangerous technology as a society, we often say that we have to confront and successfully navigate the risks to get to enjoy the benefits. And that's a pretty consensus thing. I don't think that's a radical position. I can imagine that if this technology stays on the same curve, there are systems that are capable of significant harm in the future. And Andrew also said, not that long ago, that he thought it was totally irresponsible to talk about AGI because it was just never happening.

[00:30:18]

I think he compared it to worrying about overpopulation on Mars.

[00:30:22]

And I think now he might say something different. Humans are very bad at having intuition for exponentials. Again, I think it's going to be great. I wouldn't work on this if I didn't think it was going to be great. People love it already, and I think they're going to love it a lot more. But that doesn't mean we don't need to be responsible and accountable and thoughtful about what the downsides could be. And in fact, I think the tech industry often has only talked about the good and not the bad, and that doesn't go well either.

[00:30:58]

The exponential thing is real. I have dealt with this. I've talked about the fact that I was only using GPT-3.5 until a few months ago and finally, at the urging of a friend, upgraded.

[00:31:09]

I would have given you a free account.

[00:31:10]

Sorry, I should have asked.

[00:31:12]

But it's a real improvement.

[00:31:14]

It is a real improvement and not just in the sense of, oh, the copy that it generates is better. It actually transformed my sense of how quickly the industry was moving. It made me think, oh, the next generation update is going to be sort of radically better. And so I think that part of what we're dealing with is just that it has not been widely distributed enough to get people to reckon with the implications.

[00:31:38]

I disagree with that. I mean, I think that maybe the tech experts say like, oh, this is not a big deal, whatever. Most of the world who has used even the free version is like, oh man, they got real AI.

[00:31:52]

Yeah, and you went around the world this year talking to people in a lot of different countries. I'd be curious to what extent that informed what you just said.

[00:32:00]

Significantly. I mean, I had a little bit of a sample bias, right, because the people that wanted to meet me were probably pretty excited. But you do get a sense, and there's quite a lot of excitement, maybe more excitement in the rest of the world than in the US.

[00:32:12]

Sam, I want to ask you about something else that people are not happy about when it comes to these language and image models, which is this issue of copyright. I think a lot of people view what OpenAI and other companies did, which is sort of hoovering up work from across the internet and using it to train these models that can in some cases output things that are similar to the work of living authors or writers or artists, and they just think, this is the original sin of the AI industry and we are never going to forgive them for doing this. What do you think about that? And what would you say to artists or writers who just think that this was a moral lapse, forget about the legal question of whether you're allowed to do it or not, that it was just unethical for you and other companies to do that?

[00:33:01]

Well, we block that stuff. Like, you can't go to DALL-E and generate something like that. I mean, speaking of being annoyed, we may be too aggressive on that, but I think it's the right thing to do until we figure out some sort of economic model that works for people. And we're doing some things there now, but we've got more to do. Other people in the industry do allow quite a lot of that. And I get why artists are annoyed.

[00:33:26]

I guess I'm talking less about the output question than just the act of taking all of this work, much of it copyrighted, without the explicit permission of the people who created it and using it to train these models. What would you say to the people who just say, Sam, that was the wrong move, you should have asked, and we will never forgive you for it?

[00:33:49]

Well, first of all, I always have empathy for people who are like, hey, you did this thing and it's affecting me, and we didn't talk about it first, or it was just a new thing. I do think that in the same way humans can read the internet and learn, AI should be allowed to read the internet and learn. It shouldn't be regurgitating, shouldn't be violating any copyright laws. But if we're really going to say that AI doesn't get to read the internet and learn, well, if you read a physics textbook and learn how to do a physics calculation, does that mean every time you do that for the rest of your life, you've got to figure out how to compensate the author? That seems not a good solution to me. But on individuals' private work, we try not to train on that stuff. We really don't want to be here upsetting people. Again, I think other people in the industry have taken different approaches, and we've also done some things that, now that we understand more, we will do differently in the future.

[00:35:00]

Like what?

[00:35:01]

What we'll do differently: we want to figure out new economic models so that, say, if you're an artist, we don't just totally block you, we don't just not train on your data, which a lot of artists also say, no, I want this in here, but we have a way to help share revenue with you. GPTs are maybe going to be an interesting first example of this, because people will be able to put private data in there and say, hey, use this version, and there can be a revenue share around it.

[00:35:27]

Well, I had one question about the future that kind of came out of what we were talking about, which is: what is the future of the internet as ChatGPT rises? And the reason I ask is, I now have a hotkey on my computer that I type when I want to know something, and it accesses ChatGPT directly through software called Raycast. And because of this, I am using Google search not nearly as much. I am visiting websites not nearly as much. That has implications for all the publishers and, frankly, for the model itself, because presumably, if the economics change, there will be fewer web pages created and there's less data for ChatGPT to access. So I'm just curious what you have thought about the internet in a world where your product succeeds in the way you want it to.

[00:36:06]

I do think if this all works, it should really change how we use the internet. There are a lot of things that the current interface is perfect for. Like, if you want to mindlessly watch TikTok videos, perfect. But if you're trying to get information or get a task accomplished, it's actually quite bad relative to what we should all aspire to. And you can totally imagine a world where you have a task that right now takes hours of clicking around the internet and bringing stuff together, and you just ask ChatGPT to do one thing and it goes off and computes and you get the answer back. And I'll be disappointed if we don't use the internet differently.

[00:36:59]

Yeah. Do you think that the economics of the Internet as it is today are robust enough to withstand the challenge that AI poses?

[00:37:08]

Probably.

[00:37:09]

Okay, well, I worry in particular about the publishers. The publishers have been having a hard time already for a million other reasons. But to the extent that they're driven by advertising and visits to web pages, and to the extent that the visits to the web pages are driven by Google search in particular, a world where web search is just no longer the front page to most of the Internet, I think does require a different kind of web economics.

[00:37:32]

I think it does require a shift. But what I thought you were asking about was, is there going to be enough value there for some economic model to work? And I think that's definitely going to be the case. Yeah, the model may have to shift. I would love it if ads became less a part of the internet. I was thinking the other day, for whatever reason I had this thought in my head as I was browsing around the internet: there's more ads than content everywhere.

[00:37:59]

I was reading a story today, scrolling on my phone, and I managed to get it to a point where between all of the ads on my relatively large phone screen, there was one line of text from the article visible.

[00:38:09]

You know, one of the reasons I think people like ChatGPT, even if they can't articulate it, is we don't do ads. That's an intentional choice, because there are plenty of ways you could imagine us putting in ads, but we made the choice that ads plus AI can get a little dystopic. We're not saying never. We do want to offer a free service. But a big part of our mission fulfillment, I think, is if we can continue to offer ChatGPT for free at a high quality of service to anybody who wants it and just say, hey, here's free AI, and good free AI, and no ads. Because especially as the AI gets really smart, that really does get a little strange.

[00:38:53]

Yeah, I know we talked about AGI and it not being your favorite term, but it is a term that people in the industry use as sort of a benchmark or a milestone or something that they're aiming for. And I'm curious what you think the barriers between here and AGI are. Maybe let's define AGI as sort of a computer that can do any cognitive task that a human can.

[00:39:17]

Let's say we make an AI that is really good, but it can't go discover novel physics. Would you call that AGI?

[00:39:23]

I probably would, yeah.

[00:39:25]

Okay.

[00:39:25]

Would you?

[00:39:27]

Well, again, I don't like the term, but I wouldn't call that being done with the mission. We still have a lot more work to do.

[00:39:32]

The vision is to create something that is better than humans at doing original science, that can invent, can discover?

[00:39:40]

Well, I am a believer that all real, sustainable human progress comes from scientific and technological progress. And if we can have a lot more of that, I think it's great. And if the system can do things that we, unaided, on our own can't, even just as a tool that helps us go do that, then I will consider that a massive triumph and, you know, I can happily retire at that point. But before that, I can imagine that we do something that creates incredible economic value but is not the kind of AGI, superintelligence, whatever you want to call it, thing that we should aspire to. Right.

[00:40:14]

What are some of the barriers to getting to that place where we're doing novel physics research? And keep in mind, Kevin and I don't know anything about technology, unlikely as that is to be true. If you start talking about, like, retrieval-augmented generation or anything, you might lose us. Well, you'll lose Casey.

[00:40:37]

We talked earlier about just the model's limited ability to reason, and I think that's one thing that needs to be better. The model needs to be better at reasoning. Like GPT-4. An example of this that my co-founder Ilya uses sometimes, that has really stuck in my mind, is that there was a time in Newton's life.

[00:41:01]

You're talking, of course, about Isaac Newton, not my life.

[00:41:02]

Isaac Newton?

[00:41:03]

Yeah.

[00:41:03]

Okay, well, maybe you, but maybe my life.

[00:41:05]

We'll find out. Stay tuned.

[00:41:07]

Where the right thing for him to do was to read every math textbook he could get his hands on. He should talk to every smart professor, talk to his peers, do problem sets, whatever. And that's kind of what our models do today. And at some point, he was never going to invent calculus doing that; it didn't exist in any textbook. At some point he had to go think of new ideas, and then test them out and build on them, whatever else. And that phase, that second phase, we don't do yet. And I think you need that before it's something we want to call an AGI. Yeah.

[00:41:41]

One thing that I hear from AI researchers is that a lot of the progress that has been made over the past, call it five years, in this type of AI has been the result of just things getting bigger, right? Bigger models, more compute. Obviously there's work around the edges in how you build these things that makes them more useful, but there hasn't really been a shift on the architectural level of the systems that these models are built on. Do you think that that is going to remain true, or do you think that we need to invent some new process or new mode or new technique to get through some of these barriers?

[00:42:23]

We will need new research ideas, and we have needed them. I don't think it's fair to say there haven't been any here. I think a lot of the people who say that are not the people building GPT-4; they're the people sort of opining from the sidelines. But there is some kernel of truth to it. And the answer is, OpenAI has a philosophy of, we will just do whatever works. Like, if it's time to scale the models and work on the engineering challenges, we'll go do that. If now we need a new algorithmic breakthrough, we'll go work on that. If now we need a different kind of data, we'll go work on that. So we just do the thing in front of us, and then the next one, and then the next one, and then the next one. And there are a lot of other people who want to write papers about level one, two, three and whatever. And there are a lot of other people who want to say, well, it's not real progress, they just made this incredible thing that people are using and loving, and it's not real science. But our belief is, we will just do whatever we can to usefully drive the progress forward, and we're kind of open-minded about how we do that.

[00:43:30]

What is superalignment? You all just recently announced that you are devoting a lot of resources and time and computing power to superalignment, and I don't know what it is. So can you help me understand?

[00:43:43]

That's alignment that comes with sour cream and guacamole.

[00:43:46]

There you go.

[00:43:46]

A San Francisco taco shop. That's a very San Francisco-specific joke, but it's pretty good. I'm sorry. Go ahead, Sam.

[00:43:53]

Can I leave it at that?

[00:43:54]

I don't really want to.

[00:43:55]

I mean, that was such a good answer. No. So, alignment is how you get these models to behave in accordance with what the human who's using them wants, and superalignment is how you do that for super capable systems. So we know how to align GPT-4 pretty well, better than people thought we were going to be able to. When we put out GPT-2 and 3, people were like, this is irresponsible research, because these things are always going to just spew toxic shit, you're never going to get them under control. And it actually turns out we're able to align GPT-4 reasonably well, maybe too well.

[00:44:32]

Yeah, good luck getting it to talk about sex, is my official comment about GPT-4.

[00:44:38]

But in some sense that's an alignment failure, because it's not doing what you wanted there. But that's the social part of the problem; we can technically do it. What we don't yet know is what the new challenges will be for much more capable systems. And so that's what that team researches.

[00:44:54]

So what kinds of questions are they investigating or what research are they doing? Because I confess I lose my grounding in reality when you start talking about super capable systems and the problems that can emerge with them. Is this sort of a theoretical future forecasting team?

[00:45:13]

Well, they try to do work that is useful today, but for the theoretical systems of the future. So they'll have their first result coming out, I think, pretty soon. But, yeah, they're interested in these questions of as the systems get more capable than humans, what is it going to take to reliably solve the alignment challenge?

[00:45:38]

Yeah, and I mean, this is the stuff where my brain does feel like it starts to melt as I ponder the implications. Right? Because you've made something that is smarter than every human, but you, the human, have to be smart enough to ensure that it always acts in your interests, even though by definition it's way smarter.

[00:45:52]

Yeah, we need some help there.

[00:45:53]

Yeah.

[00:45:54]

I do want to stick on this issue of alignment, or superalignment, because I think there's an unspoken assumption in there. Well, you just put it as, alignment is sort of getting the model to behave the way the user wants, and obviously there are a lot of users with good intentions, but not all of them.

[00:46:12]

No, yeah. It has to be what society and the user can intersect on. There are going to have to be some rules here.

[00:46:21]

And where do you derive those rules? Because if you're Anthropic, you use the UN Declaration of Human Rights and the Apple terms of service, and that becomes

[00:46:32]

the most important document in rights governance.

[00:46:36]

If you're not just going to borrow someone else's rules, how do you decide which values these things should align themselves to?

[00:46:44]

We've been doing this thing, these democratic input governance grants, where we're giving different research teams money to go off and study different proposals, and there are some very interesting ideas in there about how to kind of fairly decide that. The naive approach to this that I have always been interested in, and maybe we'll try at some point, is: what if you had hundreds of millions of ChatGPT users spend an hour, or a few hours, a year answering questions about what they thought the default settings should be, what the wide bounds should be? Eventually you need more than just ChatGPT users. You need the whole world represented in some way, because even if you're not using it, you're still impacted by it. But to start, what if you literally just had ChatGPT chat with its users? It would be very important in this case to let the users make the final decisions, of course, but you could imagine it saying, hey, you answered this question this way. Here's how this would impact other users in a way you might not have thought of. If you want to stick with your answer, that's totally up to you.

[00:47:45]

But are you sure, given this new data? And then you could imagine GPT-5 or whatever just learning that collective preference set. And I think that's interesting to consider. Better than the Apple terms of service, let's say.

[00:48:03]

I want to ask you about this feeling. Kevin and I call it AI vertigo. I don't know, is this a widespread term that people use? There is this moment when you contemplate even just kind of the medium AI future. You start to think about what it might mean for the job market, your own job, your daily life, for society. And there is this kind of dizziness that I find sets in. This year, I actually had a nightmare about AGI, and then I sort of asked around, and I feel like for people who work on this stuff, that's not uncommon. I wonder for you if you have had these moments of AI vertigo, if you continue to have them, or is there at some point where you think about it long enough that you feel like you get your legs underneath you?

[00:48:47]

There were some. I can point to these moments. There were some very strange, extreme vertigo moments, particularly around the launch of GPT-3. But you do get your legs under you.

[00:49:00]

Yeah.

[00:49:01]

And I think the future will somehow be less different than we think. It's this amazing thing to say, right? Like, we invent AGI and it matters less than we think. It doesn't sound like a sentence that parses, and yet it's what I expect to happen.

[00:49:13]

Why is that?

[00:49:18]

There's, like, a lot of inertia in society, and humans are remarkably adaptable to any amount of change.

[00:49:25]

One question I get a lot, that I imagine you do too, is from people who want to know what they can do. You mentioned adaptation as being necessary on the societal level. I think for many years the conventional wisdom was that if you wanted to adapt to a changing world, you should learn how to code. Right? That was the classic advice.

[00:49:43]

May not be such good advice anymore.

[00:49:44]

Exactly. So now AI systems can code pretty well. For a long time, the conventional wisdom was that creative work was sort of untouchable by machines. If you were a factory worker, you might get automated out of your job, but if you were an artist or a writer, that was impossible for computers to do. Now we see that's no longer safe. So where is the sort of high ground here? Where can people focus their energy if they want skills and abilities that AI is not going to be able to replace?

[00:50:13]

My meta answer is, it's always the right bet to just get good at the most powerful new tools, the most capable new tools. And so when computer programming was that, you did want to become a programmer. And now that AI tools totally change what one person can do, you want to get really good at using them. So having a sense for how to work with ChatGPT and other things, that is the high ground, and we're not going back. That's going to be part of the world, and you can use it in all sorts of ways, but getting fluent at it, I think, is really important.

[00:50:52]

I want to challenge that because I think you're partially right in that I think there is an opportunity for people to embrace AI and sort of become more resilient to disruption that way. But I also think if you look back through history, it's not like we learn how to do something new and then the old way just goes away.

[00:51:10]

Right.

[00:51:11]

We still make things by hand. There's still an artisanal market. So do you think there's going to be people who just decide, you know what, I don't want to use this stuff?

[00:51:19]

Totally.

[00:51:20]

And there's going to be something valuable in their sort of, I don't know, non-AI-assisted work?

[00:51:29]

I expect that if we look forward to the future, things that we want to be cheap can get much cheaper, and things that we want to be expensive are going to be astronomically expensive.

[00:51:41]

Like what?

[00:51:41]

Real estate. Like handmade goods, art. And so, totally, there will be a huge premium on things like that. Even when machine-made products have been much better, there has always been a premium on handmade products, and I'd expect that to intensify.

[00:52:02]

This is also a bit of a curveball, but I'm very curious to get your thoughts: where do you come down on the idea of AI romances? Are these net good for society?

[00:52:11]

I don't want one, personally.

[00:52:12]

You don't want one?

[00:52:13]

Okay.

[00:52:14]

But it's clear that there's a huge demand for this, right? Yeah. Replika is building these, and they seem like they're doing very well. I would be shocked if this is not a multibillion-dollar company.

[00:52:23]

Right.

[00:52:24]

Someone will make one. I just personally think we're going to have a big cultural moment where, I think, Fox News is going to be doing segments about the generation lost to AI girlfriends and boyfriends, like, at some point within the next few years. But at the same time, you look at all the data on loneliness, and it seems like, well, if we can give people companions that make them happy during the day, it could be a net good thing.

[00:52:46]

It's complicated. Yeah, I have misgivings, but this is not a place where I think I get to impose what I think is good on other people. Totally. Okay.

[00:52:55]

But it sounds like this is not at the top of your product roadmap, building the boyfriend API.

[00:52:59]

No.

[00:53:00]

All right.

[00:53:00]

You recently posted on X that you expect AI to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes. Can you expand on that? What are some things that AI might become very good at persuading us to do? And what are some of those strange outcomes you're worried about?

[00:53:19]

The thing I was thinking about at that moment was the upcoming election. There's a huge focus on the US 2024 election. There's a huge focus on deepfakes and the impact of AI there. And I think that's reasonable to worry about, good to worry about. But we already have some societal antibodies towards people seeing doctored photos or whatever. And yeah, they're going to get more compelling, it's going to get worse, but we kind of know those are there. There's a lot of discussion about that. There's almost no discussion about the new things AI can do, that AI tools can do, to influence an election. And one of those is to carefully, one on one, persuade individual people with tailored messages. Tailored messages? Yeah. That's a new thing that the content farms couldn't quite do. Right.

[00:54:05]

And that's not AGI, but that could still be pretty harmful.

[00:54:08]

I think so, yeah.

[00:54:10]

I know we are running out of time, but I do want to push us a little bit further into the future than the sort of, I don't know, maybe five-year horizon we've been talking about. If you can imagine a good post-AGI world, a world in which we have reached this threshold, whatever it is, what does that world look like? Does it have a government? Does it have companies? What do people do all day?

[00:54:35]

Like, a lot of material abundance. People continue to be very busy, but the way we define work always moves. Like, our jobs would not have seemed like real jobs to people several hundred years ago. Right? This would have seemed like incredibly silly entertainment. It's important to me, it's important to you, and hopefully it has some value to other people as well. And the jobs of the future may seem, I hope they seem, even sillier to us, but I hope people get even more fulfillment, and I hope society gets even more fulfillment, out of them. But everybody can have a really great quality of life, to a degree that I think we probably just can't imagine now. Of course we'll still have governments. Of course people will still squabble over whatever they squabble over. Less different in all of these ways than someone would think, and then unbelievably different in terms of what you can get a computer to do for you.

[00:55:27]

One fun thing about becoming a very prominent person in the tech industry as you are, is that people have all kinds of theories about you. One fun one that I heard the other day is that you have a secret Twitter account where you are way less measured and careful.

[00:55:43]

I don't anymore. I did for a while. I decided I just couldn't keep up with the OpSec.

[00:55:48]

It's so hard to lead a double life.

[00:55:50]

What was your secret alt?

[00:55:53]

I mean, I had a good alt. A lot of people have good alts.

[00:55:56]

But your name is literally Sam Altman. I mean, it would have been weird if you didn't have one.

[00:56:00]

But I think I just got too well known or something to be doing that.

[00:56:05]

Yeah, well, a theory that I heard attached to this was that you are secretly an accelerationist, a person who wants AI to go as fast as possible. And that all this careful diplomacy that you're doing and asking for regulation, this is really just the sort of polite face that you put on for society, but deep down you just think we should go all gas, no brakes toward the future.

[00:56:25]

No, I certainly don't think all gas, no brakes toward the future, but I do think we should go to the future. And that probably is what differentiates me from most of the AI companies, which is, I think AI is good. I don't secretly hate what I do all day. I think it's going to be awesome. I want to see this get built. I want people to benefit from this. So all gas, no brakes? Certainly not. And I don't even think most people who say it mean it. But I am a believer that this is a tremendously beneficial technology and that we have got to find a way, safely and responsibly, to get it into the hands of the people, to confront the risks so that we get to enjoy the huge rewards. And maybe relative to the prior of most people who work on AI, that does make me an accelerationist. But compared to those accelerationist people, I'm clearly not them. So I'm somewhere in the middle, which is where I think you want the CEO of this company to be.

[00:57:21]

You're gas and brakes.

[00:57:24]

I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we're not careful about it, it can be quite disastrous and so we have to navigate it carefully.

[00:57:38]

Sam, thanks so much for coming on.

[00:57:40]

Hard Fork. Thank you, guys.

[00:58:07]

Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. Today's show was engineered by Alyssa Moxley. Original music by Marion Lozano and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti and Jeffrey Miranda. You can email us at hardfork@nytimes.com.