[00:00:00]

Have you seen these new screen savers that they're doing on the Apple computers?

[00:00:03]

No, what is that?

[00:00:04]

They are so pretty. So they just went out and they filmed a bunch of nature scenes. And you can set them as both your desktop background and your screen saver, but they have this amazing feature, which is that you can just shuffle them every 12 hours. And so now, every morning when I open up my laptop for the first time that day, I'm seeing some new incredible nature vistas. There is one that is a whale. There is one that is dolphins. There are these flyovers of these lush landscapes. And every day I think, the world is so beautiful. Now let me spend 14 hours sitting inside my house typing on a laptop.

[00:00:37]

Yeah, look at all the beautiful nature you could be encountering if you weren't hunched over a small screen.

[00:00:42]

I am now at a point in my life where my primary experience of nature is through a macOS screen saver. But you know what? It's something. And I want to thank the Apple Corporation.

[00:00:50]

There hasn't really been a lot of innovation in screen savers, so I'm glad to hear that we're making progress.

[00:00:54]

Do you remember in the 90s with the flying toasters?

[00:00:56]

I was just going to bring up the flying toasters. I miss the flying toasters every day.

[00:01:00]

I paid, and we're going to have to look up how much that cost, but I want to say it cost at least $30 in the 90s, which is $400 in 2023 terms, to buy a screen saver. But you would buy a CD-ROM, which stands for compact disc read-only memory, and you would put it in your CD-ROM drive and you would install the screen savers, and there would be flying toasters and all sorts of whimsical things. And that was the last time that we actually were in a good mood in this country. That's when we had the flying toasters.

[00:01:24]

Bring back the flying toasters.

[00:01:26]

We just need to get these back.

[00:01:27]

Onto our computers by any means necessary.

[00:01:30]

That's right. And by the way, if you're out there listening and you know what happened to the flying toasters, why didn't that business work? I would like to know.

[00:01:35]

Selling screen savers for $30?

[00:01:37]

Yes. We'll never know. Scientists have no theories.

[00:01:42]

We need a Harvard Business School case study to look into this.

[00:01:46]

If you're getting an MBA right now and you want to know why you can't sell flying toasters for money, we want to hear from you.

[00:02:00]

I'm Kevin Roose, a tech columnist at The New York Times.

[00:02:03]

I'm Casey Newton from Platformer, and this is Hard Fork. This week, we go to OpenAI's first big developer conference and build our own custom chatbot. Then, FTC Chair Lina Khan stops by to make her case for how the government should regulate AI. And finally, one of the worst things that's ever happened at a crypto conference.

[00:02:21]

And that's saying a lot.

[00:02:23]

It really is.

[00:02:37]

Casey, we had a nice field trip this week.

[00:02:38]

You know, one of my favorite things we get to do is, like, go out in the world and see tech being made in front of our very eyes. And we got to do that this week.

[00:02:45]

Totally. And this was a fun one. So on Monday, OpenAI hosted their first-ever Dev Day. This is a developer conference, basically, where companies that build stuff using OpenAI's technology come together, and OpenAI makes some announcements about its new products. It was sort of just like a little Woodstock for AI.

[00:03:03]

Yeah. And a lot of tech companies do these developer conferences, and sometimes they're relatively small, but every once in a while, a company gets big enough that in addition to just being an announcement of new APIs, it also becomes a bit of a show and a spectacle for the tech press. And that's actually one of my favorite moments in the history of the development of a tech company, because I love spectacles and I love shows.

[00:03:23]

Totally. So I got there early and there was already a line sort of around the block to get into this thing. It was a very intense process of actually getting in. There was, like, a bomb-sniffing dog at the door, which made me think, like, oh, God, are they announcing some sentient thing that people are going to be protesting? But what would you say was the vibe at Dev Day?

[00:03:41]

I think it was a lot of nervous anticipation. I was standing behind somebody as I waited in line who turned out to be a Hard Fork listener. And I was asking him, what do you want to see from this event? And he said, well, one, I want to see new technology that enables us to do things that we haven't been able to do before. And two, I want to see them introduce stuff that lets us do existing things much more easily. So those were the two big hopes that he went into the event with. And I think by the end of it, he was feeling good. Yeah.

[00:04:10]

So it was held in this event space in downtown San Francisco, which is actually, I learned, a former car dealership, which answered one of my big questions, which was, like, why is there a huge ramp in the middle of this event space?

[00:04:21]

You know what it was before it was a car dealership? The Fillmore West, a popular music venue. Really? Yeah. Creedence Clearwater Revival once played there. Ask your grandparents about them.

[00:04:30]

So there was this event. There was lots of good food. There were demo stations set up all over the place where OpenAI could sort of show off its latest tech. And then Sam Altman, CEO of OpenAI, got up on stage and did sort of a Steve Jobs-style presentation, showing off all of the latest stuff.

[00:04:47]

Can I say one more thing about the decor? Yes. So if you go to the OpenAI offices or if you went to Dev Day, the one thing that you will notice more than anything else is the number of plants.

[00:04:55]

So many plants.

[00:04:56]

This place is covered in green. And it felt a bit like the company was saying, look, I know that you think we're making scary robots that are going to turn into Skynet and destroy the world, but no, look at us. We come from the earth, just like these plants that are sitting right here. So I'm not saying that that was the actual goal, but I'm saying that is what I took from it.

[00:05:11]

Anthropic also has a lot of plants at its office. I think there's something going on with plants.

[00:05:16]

Something is going on with plants in the AI industry. And we will get to the bottom of it on this podcast.

[00:05:21]

So, Dev Day. Some of the big announcements that Sam Altman made were things like, OpenAI said that ChatGPT now has 100 million weekly users. He said that 92% of Fortune 500 companies are building on OpenAI's products. They also announced some sort of incremental improvements to their models. Something called GPT-4 Turbo now exists. It has, like, a larger context window. Its knowledge of world events is more up to date than the previous model's. They adjusted some of the pricing, so it's now cheaper to use some of this stuff if you're a developer. Some sort of minor updates to ChatGPT. And this program called Copyright Shield, which is a program where they will pay the legal expenses if you are a ChatGPT Enterprise or API user who gets sued over copyright infringement related to your use of one of their tools. Did any of these sort of more minor announcements stick out to you?

[00:06:19]

Well, I mean, the first thing that I would just take a step back and say is, we are witnessing the formation of a generationally important company. Right? The consumer internet has been pretty sleepy for a long time. You and I both lived through the crypto fever dream that Silicon Valley had. None of those companies even sniffed 100 million weekly users. Okay? So OpenAI is just big in a way that things very rarely get big. That's the first thing I would say. The second thing, on the announcement front, is that, look, I'm not an AI developer. Most of this stuff isn't very useful to me. But talking to the developers there, this is what they want to see. And I think this is just evidence of competition in the marketplace. Right? Because OpenAI has gotten criticism that its models are too expensive, and there is definitely a case to be made that OpenAI was actually just sort of catching up here to some things, but in ways that the developers liked. What did you think?

[00:07:17]

I thought it was a fascinating event. I talked to one developer who was there, and I asked him what he thought of the announcements, and he said, I don't know if this is supposed to be a wedding or a funeral. Because on one hand, they're announcing all these price cuts and things that will make life easier for developers. On the other hand, they just killed a bunch of startups that were building these features as independent products. A lot of startups have been building kind of supplemental services, what have sometimes been derisively called wrappers, on top of OpenAI's technology. And OpenAI just went ahead and built that stuff directly into its own products, which is already making life harder for some of these independent companies.

[00:07:54]

Yeah. So those were the introductory announcements from the event. But I would say that, at least from our perspective, all of that was a preamble to what I thought was the biggest announcement at the event, which were GPTs. And Kevin, why don't you tell us a little bit about what these things are and how they work?

[00:08:11]

Right. So GPTs basically are these custom chatbots that you can now make that can use private data, rather than sort of all of the data scraped from the internet, that can take actions and behave in a more agentic way, and that can be sort of tailored to an individual use case. OpenAI has sort of developed kind of a wizard, or a sort of guide, to help you make a chatbot even if you don't know how to code. It'll sort of ask you questions: What do you want this chatbot to do? How do you want it to speak? What tone do you want it to adopt? Is there any reference material you want to give it that it can use as kind of its context? So in the demo that Sam Altman showed of GPTs during his speech, he basically made, like, what he called a startup mentor, sort of a chatbot that gives advice to startup founders, which he based on a talk that he had given. He sort of uploaded the text of this talk into this GPT, and so it could basically give advice in the style of Sam Altman.

[00:09:11]

Yeah. And we should say this was, like, sort of a joke, right? Because before he was running OpenAI, Sam ran Y Combinator, a startup incubator. A lot of his life was just answering the same 15 questions from startup founders. And so he said, you know what, why don't I automate this? And he did it by uploading one lecture. I think if he wanted to take that project more seriously, he could have uploaded a lot more, right? He could have uploaded the text of YouTube speeches that he has given. That sort of thing probably would have improved the bot. But I did think that, as an example of what a custom version of GPT could do, this was pretty good. Yeah.

[00:09:43]

So this is something that a lot of companies have been demanding and asking for since ChatGPT came out, which is like, well, we have data. Maybe it's our employee handbook or some of our benefits information or our customer service manual, and we want a chatbot that can refer to that data, but we don't want to make that data public to everyone. We just want to sort of build our own versions of this stuff for internal use or very specific use cases. And so this is actually going to allow them to do that.

[00:10:11]

Yeah. So, huge day for the customer service industry.
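
(To make the mechanism above concrete: alongside GPTs, OpenAI announced an Assistants API that does roughly the same thing in code, attach a private document and have the model answer from it. A minimal sketch, assuming the openai Python package as it shipped around Dev Day; the handbook file, assistant name, and question are hypothetical examples, not anything shown at the event:)

```python
# Sketch of a private-data Q&A bot via the Assistants API with retrieval,
# announced alongside GPTs at Dev Day. The file path and question are
# hypothetical; assumes openai-python v1.x and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

# Upload the private document so the assistant can search it.
handbook = client.files.create(
    file=open("employee_handbook.pdf", "rb"),
    purpose="assistants",
)

# Create an assistant grounded in that file.
assistant = client.beta.assistants.create(
    name="Benefits Helper",
    instructions="Answer questions using only the attached employee handbook.",
    model="gpt-4-1106-preview",  # the GPT-4 Turbo preview model
    tools=[{"type": "retrieval"}],
    file_ids=[handbook.id],
)

# Conversations live in threads; a run executes the assistant on a thread.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Can I bring snacks that contain peanuts?",
)
run = client.beta.threads.runs.create(
    thread_id=thread.id, assistant_id=assistant.id
)
# Runs are asynchronous; polling for completion and reading the reply via
# client.beta.threads.messages.list(...) is omitted for brevity.
```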

[00:10:15]

So I want to talk about these GPTs, because I think it is something that we've talked about on this show before, this sort of next phase of AI chatbots. The first phase, I would say, was, like, chatbots that can talk about stuff, and sometimes do so in quite compelling and useful ways. But I think we're moving into this phase of AI agents, where these chatbots will not only be able to talk about things, they'll be able to actually take actions and do things on your behalf. So have you been playing around with the GPT feature at all?

[00:10:44]

Yes, I built my own GPT.

[00:10:46]

What does it do?

[00:10:46]

Well, so it's called Copy Editor, and it was a reader who suggested it to me after noticing a typo in one of my columns, shamefully. He said, hey, why don't you just run this thing through GPT-4 before you send it out? And I said, you know what, sure, why don't I do that? So I've been manually entering the prompt before I send out my column, just saying, hey, could you identify any potential spelling or grammatical errors in this? I paste in the text, and it usually catches one or two things, right? The only problem with that is that I have to sort of manually do it each time. Now that there is a sort of custom GPT that I can build, I have turned this into a bot, and it has multiple prompts, Kevin. So not only do I have it identify spelling and grammatical errors, I also can say, poke holes in my arguments. That was the first thing I did. The second thing I did was I started to simulate some of my common readers who often complain about my columns.

[00:11:37]

You have automated your haters.

[00:11:38]

I have automated my haters. On this podcast, we are automating the haters, and the future is now. So I said, imagine that you are a tech executive who believes that the press is often too cynical and closed-minded about new technologies. Critique this column from that perspective. And then I also have a button that says, imagine that you're an underrepresented minority and you feel like your voice is too often left out of tech journalism. Poke holes in my arguments. And look, I've only had access to this thing for a couple of days. My early experiments with using these bots, I would say, are, like, only medium. I don't think it's been anything transformative, but we are collectively going to push the technology in this direction until it is good at those things. And I'm curious to hear what you think of that.
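
(What Casey describes amounts to a small library of canned prompts in front of one model. A toy sketch of the same idea in plain API calls, assuming the openai Python package; the prompt wording paraphrases what he describes on air and is not his actual GPT configuration:)

```python
# Toy sketch of a "Copy Editor"-style bot as reusable prompts over one model.
# Prompt texts paraphrase the episode; the real GPT's instructions aren't public.
from openai import OpenAI

client = OpenAI()

PROMPTS = {
    "proofread": "Identify any potential spelling or grammatical errors in this column.",
    "poke_holes": "Poke holes in the arguments in this column.",
    "tech_exec": (
        "Imagine you are a tech executive who believes the press is often too "
        "cynical and closed-minded about new technologies. Critique this "
        "column from that perspective."
    ),
}

def review(column_text: str, mode: str) -> str:
    """Run one saved prompt against a draft and return the model's critique."""
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=[
            {"role": "system", "content": PROMPTS[mode]},
            {"role": "user", "content": column_text},
        ],
    )
    return response.choices[0].message.content

# Usage: print(review(open("draft.txt").read(), "poke_holes"))
```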

[00:12:22]

Yeah, so I've been really interested in this idea of being able to upload your own information and kind of use that as context for a chatbot's responses. That's a feature that has sort of existed, like Claude let you upload PDFs and a couple other file types. But there was never really an easy way to make a custom chatbot that could just refer to a specific piece of information that you gave it. So let me just show you some of what I've been building.

[00:12:47]

Yeah, let's see it.

[00:12:47]

So here I've got my ChatGPT, and then if you look over here, this is a chatbot that I built called Daycare Helper. My son goes to daycare. They have this parent handbook that has all these rules in it. I'm constantly looking things up in the parent handbook. Can I send him with snacks that have peanuts in them? When is daycare closed? Which holidays is it closed on? What are the policies for sick days?

[00:13:11]

Oh, I could guess that one. Don't bring your damn kid to school if he's sick, Kevin.

[00:13:16]

Well, there's a specific cutoff, and I always forget what the cutoff is. Like, if they have a temperature of 100, you're allowed to bring them in, but not if it's 101, stuff like that.

[00:13:25]

I love the idea of being, like, an edgelord parent who's always pushing the boundaries of bringing their sick kid to school.

[00:13:31]

So I uploaded the daycare parent handbook into a custom GPT. I built this Daycare Helper bot. It takes about five minutes, and now I can actually just go in and say, can I bring peanut snacks for my kid? And it will look that up in the PDF that I've provided it, and it will answer my question. It says, no, you cannot bring peanut snacks for your child.

[00:13:59]

That is against the rules. Wow, another victory for the woke mind virus. Peanuts are canceled.

[00:14:07]

So then I thought, well, what other documents do I have that I could upload and make a chatbot around? And I thought about this PDF that I got from my grandfather, actually, my grandfather who died years ago.

[00:14:20]

Did he pass down all of his cherished PDFs to you before he went?

[00:14:23]

Well, he had a lot of typed documents and one of the things that he wrote before he died was this investing advice book, basically. And he was an economist and loved to invest and think about investing. And he had like really good advice for his kids and grandkids about how they should invest their money. And this was typed up at one point and sort of passed down to the family and it's very long and kind of complicated. And so I've always struggled to sort of apply his lessons to my own financial decisions. But I was able to sort of upload Grandpa Roose's investing book to a custom GPT. And now I have a chatbot where I can ask my virtual AI grandfather what I should do with my money. So let's see this one.

[00:15:09]

Yeah, let's ask it if you should buy Tesla stock.

[00:15:12]

Should I buy Tesla stock? So this chatbot, modeled after my grandfather's financial advice, says, and it sort of has some preamble, it's crucial to evaluate your own investment goals.

[00:15:28]

Say the first thing it says, because it's actually quite folksy and colloquial.

[00:15:31]

Yeah. So it starts by saying, it's like I always say: before making any specific investment decisions, such as buying Tesla stock, it's crucial to evaluate your own investment goals, risk tolerance and investment horizons.

[00:15:42]

Now, is it true that he always said that?

[00:15:44]

Well, I don't know, but that's actually one of the custom instructions that I gave this chatbot, is it should be folksy and it should say things like, it's like I always say. So this actually did follow that instruction. I don't remember whether or not he was actually always saying things like, before buying Tesla stock, it's crucial to evaluate your own investment goals. I would say this is sort of like stock financial advice, but it is pulling from this, in this case, literally.

[00:16:09]

Stock financial advice, because it's about a stock.

[00:16:12]

So this is something that I plan on using to evaluate investment decisions. Now, I'm not actually going to buy Tesla stock. I'm not allowed to buy the stocks of companies that I cover, but it could be useful. And now I can share this with my relatives, and they can also ask the AI Grandpa Roose for investment advice.

[00:16:33]

That's sweet. Okay, now let's ask it. Are you proud of me, Papa?

[00:16:39]

Are you proud of me, Grandpa? I'm nervous. No, this is sweet. It says, Ah, my dear, pride is a modest word for the joy and admiration I feel when I think of you. It's like watching a sapling you've planted grow into a sturdy tree, each new leaf a testament to the care and effort invested.

[00:17:02]

Is this how your grandpa talks?

[00:17:05]

No, but it is very touching.

[00:17:08]

It's extremely touching.

[00:17:09]

But I would say these are the kinds of things that people can now experiment with doing using these custom GPTs.

[00:17:14]

So a couple of things I want to underline. Like, one, this is very cool, and this is very sweet. Two, in my early experiments with this, I don't think it actually draws from the custom stuff that much. I think that it is mostly just giving you stock GPT-4, or now GPT-4 Turbo, with a little sprinkling of personalization on it. And I think in many cases, that little sprinkling will be more than enough to make you feel like what you're seeing is magic. I think in a lot of cases, it's kind of faking it until it makes it. So that's not necessarily a criticism, but I do want people to set their expectations going in. Like, you cannot create an omniscient AI with this stuff. You can, I would say, get a kind of hint of where we're headed.

[00:17:57]

Totally, I would agree with that. And I think this is very early technology. It's still a little bit buggy, and there are still a lot of limitations on it. And one more place that we should sort of set expectations here is that when we talk about these being able to do things on your behalf, these agents, these GPTs that can take action, they can't do a lot. They're not, like, driving cars. They're not doing your taxes. There are still many, many areas where they just can't actually do that much.

[00:18:24]

Yeah. And so, look, a lot of this stuff you could just do today with ChatGPT or ChatGPT Plus. Right? But in order to do it, you might have to enter multiple prompts, you might have to continually refine what you're getting, and you might not be able to personalize it as much as you want, because you're not able to share these sort of reference documents that might be useful. Right? And so I think there are a lot of really cool things that these GPTs can do, but I would say they represent a step forward as opposed to maybe a leap forward.

[00:18:56]

Totally. And I do think there are some interesting threads here. One of them is that while you can't right now charge for a custom GPT, you can share them. You can get a link and put it on your website or on social media and say, like, I have created this bot that does X, Y or Z, and other people can now come and use that. Now, eventually, OpenAI said that they plan to open an app store of sorts for these GPTs, where you will be able to charge money for access to your custom chatbot, and OpenAI will take a cut of that money and pass some of that on to the creator of the chatbot.

[00:19:33]

Which, again, I'll be so curious to see how people are able to monetize these things. Because at least with the GPTs that I've built, this is a bunch of words that anybody could type into the box, right? So where is the actual value there? Is it in the custom instructions that you're uploading? I'll be very curious to see how that plays out.

[00:19:49]

Totally. So we should say, these are not going to be available to free customers who don't pay. They're only available to ChatGPT Plus and Enterprise customers right now, and they're still being rolled out. So we got early access to these features, but I think a lot of users have not gotten them yet.

[00:20:06]

Look, there is one thing that I think we should discuss about the safety implications of these GPTs. As far as what has been announced this week, this stuff feels pretty safe to me. But we are starting down a road, and I want to talk about that road, right? You alluded to it earlier. These things are becoming agentic. What does that mean? Well, most of the people in this world believe that we're going to get to a place where you can just give something like GPT a goal, and then GPT will decide how to execute that goal. Right? So maybe you say, hey, my parents are coming to town this weekend. Plan an itinerary for us. Right? You could do that in ChatGPT right now. It would give you a list of things to do. In the future, though, maybe it books your restaurant reservation. Maybe it gets you guys tickets so you can go to Alcatraz, right? And next thing you know, you just get an email that says, here's everything that I did on your behalf. And you will not have to say, here are my favorite restaurants, because it will sort of already know all of these things.

[00:21:06]

Maybe you hear that and say, great, that's, like, just stuff that I don't have to do anymore. And it is. But there are potentially dangerous implications of having agents out there.

[00:21:17]

Like what?

[00:21:17]

Well, let's say you tell an agent, get me a reservation at this restaurant, because it's my anniversary this weekend, and you should do absolutely whatever it takes to get me this reservation, and you have a budget of $100 to do this, right? Well, maybe the AI calls the restaurant and, using a synthesized voice, says, hey, I really need a restaurant reservation, it's my dying wish, and sort of emotionally manipulates the person at the other end, right? Maybe they are somehow able to figure out who already has a reservation, and they call that person and they threaten them. Maybe they hire a thug to go to the restaurant and shake them down.

[00:21:59]

I mean, yeah, this explains the very menacing call I got from the House of Prime Rib the other day.

[00:22:07]

We're joking.

[00:22:07]

But this is something that safety experts have been talking to us about for months now, which is the fact that by giving these AI agents access to not only information but tools, things that they can use to take action, whether it's making a call or writing an email or making a calendar appointment or interfacing in some way with other software, you do just expand the attack surface for someone who wants to use these things to wreak havoc.
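
(Mechanically, "giving an agent tools" looks something like the sketch below: you declare functions the model is allowed to call, and it returns structured arguments that your code then executes, which is exactly where the expanded attack surface comes from. The book_reservation function is hypothetical, used only to illustrate the general tool-calling pattern in the API, not anything OpenAI shipped for GPTs specifically:)

```python
# Sketch of tool calling: the model can request a function call, and the
# host code decides whether to execute it. book_reservation is hypothetical.
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "book_reservation",
        "description": "Book a restaurant reservation on the user's behalf.",
        "parameters": {
            "type": "object",
            "properties": {
                "restaurant": {"type": "string"},
                "party_size": {"type": "integer"},
                "time": {"type": "string", "description": "ISO 8601 date-time"},
            },
            "required": ["restaurant", "party_size", "time"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{"role": "user", "content": "Get me a table for two at 7 p.m. Saturday."}],
    tools=tools,
)

# If the model chose the tool, its arguments arrive as JSON. The safety
# question is what checks sit between this point and actually acting.
tool_calls = response.choices[0].message.tool_calls
```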

[00:22:34]

Yeah. I mean, what was announced this week by OpenAI is not that, right? But the world in which we are building agents is coming, so we are now starting down that road. And I think it is important to acknowledge that, because while I do think that OpenAI is generally responsible about the way that it rolls this stuff out and talks about it, it is also the company that is nudging this frontier up and up, month by month by month. And so if that's going to happen, we need to show people what is on the other side of that frontier.

[00:23:06]

Totally. I totally agree. And OpenAI was very cautious in how it talked about these agents. Sam Altman said that their strategy has been to sort of gradually deploy this kind of stuff so that they can see how people are using it and adjust the safety as needed. They also said that they are going to be, just like Apple does with its App Store, approving or rejecting various bots that people want to sell, and just making sure that they're not unsafe or violating some rule, you know.

[00:23:32]

So after the event, there was a little Q&A where I got to ask one question of Sam Altman, the CEO. And my question was just basically, like, how are you thinking about which powers you will give these GPTs and which ones you won't? And his answer was basically, like, we're just not at a point where we want these things to do any kind of high-level planning. So it's sort of much more about performing a simple action. We will link to your calendar, and we will say, yeah, you can book a meeting at this time. And again, that feels like a very safe space for them to be. I have no concerns about that. But a year from now, can I imagine them saying, you know what, we're going to let these agents do a little bit of higher-level planning? I absolutely can. So let's start talking about what that means now.

[00:24:13]

Totally. One other thing that I wanted to chat about with you was just the potential of all these AI agents to displace jobs. Right. My first thought when I see things like an AI chatbot being given a corporate benefits guide or a customer service manual and being able to answer questions based on that material is like, there go the jobs of a bunch of benefits administrators and customer service agents. And I'm not saying those jobs consist exclusively of answering people's dumb questions that they could have looked up in the PDF that you sent them. But I'm also thinking about how many jobs just do involve kind of taking questions, referencing some preordained material and spitting back an answer. That is a lot of jobs in this economy. And I do think that the AI agent era that we're entering into is one where a lot of jobs stand to be radically changed or even displaced.

[00:25:08]

Yeah, I think that right now in the economy, there is just kind of a lot of slack and redundancy in the system, where maybe your company, depending on its size, maybe it has ten benefits administrators. And I would not be surprised if there is somebody who is looking at the OpenAI keynote this week, which, by the way, in a day had over 700,000 views on YouTube. People are paying attention to this stuff. I can imagine someone looking at this and saying, maybe we need five benefits administrators next year, and then maybe the year after that, we can cut it down to two or three. That, I absolutely do think, is going to be a result of what we saw this week.

[00:25:40]

Totally. So I actually wanted to show you something else I have been working on, which is that I have been trying to build a Hard Fork GPT.

[00:25:47]

Now that's what I call a GPT.

[00:25:49]

Oh, come on. Okay.

[00:25:51]

I'll show you this.

[00:25:52]

I'll share my screen.

[00:25:53]

All right?

[00:25:54]

Okay. So yesterday, I took all of the transcripts of all 50-plus episodes that we have done.

[00:26:02]

Okay.

[00:26:02]

Our producer helped me combine them into a single document, and I uploaded that document into a custom GPT called Hard Fork Bot.

[00:26:12]

Amazing.

[00:26:13]

And now we can ask questions of our podcast, and it will try to respond using what it knows about our past episodes. So what would you like to ask the Hard Fork Bot?

[00:26:23]

Well, let's see. Can you ask, what did Sundar Pichai say on this podcast? Yeah. All right. Sundar Pichai, CEO of Google.

[00:26:33]

So it says, during the Hard Fork podcast interview, Sundar Pichai discussed a variety of topics, including Google's work on AI and the company's response to competition and innovation in the tech industry. Pichai talked about the AI conversational tool Bard, acknowledging the mixed initial reviews it received upon release. He emphasized the potential of AI to serve as a powerful assistant when fine-tuned to specific data with appropriate privacy and security measures. What do you think?

[00:26:56]

I think that's pretty good. But I want to ask it one more specific question.

[00:27:00]

Yeah, go ahead.

[00:27:01]

Did Sundar Pichai order the Code Red? Now, if you haven't followed this story, there was a story in The New York Times saying that Sundar had ordered what Google calls a Code Red, which is just basically, hey, everybody, we need to really nail this AI stuff. And when he came on the podcast, Sundar said that he did not, in fact, order a Code Red.

[00:27:20]

Wait, it already answered this in its last answer. It said, in response to questions about Google's urgency in AI and the Code Red that was reported, Pichai clarified that he did not issue a Code Red, but was communicating the need for urgency and to harness resources across the company to rise to the moment.

[00:27:34]

So I gotta say, this is the best thing that I've seen so far from one of these GPTs.

[00:27:37]

It's pretty good.

[00:27:38]

Yeah. And the reason that I say that is it is giving us highly accurate information, and I think it would be useful as a fact-checking tool for us, or, like, a way for us to find stuff that we said.

[00:27:49]

Totally. We could jog our memory. What have we said about NFTs in the last six months?

[00:27:55]

What did Kevin predict? That definitely did not happen.

[00:27:59]

We can also, I'll show you, give this to anyone. So we can make this public. And I think we actually should, so that our listeners can go and chat with our chatbot.

[00:28:08]

In the spirit of full transparency, I think we should share the Hard Fork Bot.

[00:28:11]

So I'm going to set this to public and I'm going to publish it. And now anyone can go chat with our chatbot.

[00:28:19]

This might be the first podcast that you can chat with via AI.

[00:28:23]

Kevin, there we go.

[00:28:24]

Incredible. When we come back: it's on like Lina Khan. FTC Chair Lina Khan joins us to talk AI regulation.

[00:28:58]

We got a big guest today. That's right. A big get, if you will.

[00:29:01]

It is a huge guest, and someone that I think both of us have been wanting to talk to for a long time now. Yeah.

[00:29:05]

So, Lina Khan is the chair of the Federal Trade Commission, the FTC, which is the sort of main regulator tasked with making markets in this country more competitive and protecting consumers from various corporate misdeeds. And I would say that she is sort of seen as the top cop when it comes to enforcing antitrust law in Silicon Valley.

[00:29:23]

That's right. And for the past several years, there has been a lot of pressure on government to take action in this regard. Congress has not passed one law during this time. And so it really has been up to the FTC to do something about the perceived lapses in competition and a million other things in the tech industry. And Lina Khan, I think, has been a really transformative figure since she took this post.

[00:29:46]

Totally. And we should just give the brief outline of her career. She was an unusual pick to lead the FTC. She was the youngest FTC chair in history. She was just in her early 30s when she was tapped to lead the agency. She sort of became famous in antitrust circles because of this law review article that she wrote about Amazon and antitrust while she was studying at Yale Law School. It went viral, which law review articles don't tend to do. And it made her sort of a rising star in the world of antitrust enforcement. So much so that when the Biden administration was looking for who should lead the FTC, they picked her.

[00:30:21]

That's right. And, you know, her big idea was that for many years, antitrust regulators had been overly focused on price, and that if prices weren't going up for consumers, antitrust regulators said, leave it alone. But Lina Khan came along, and she focused on Amazon as an example, and she said, here is a company that is working to lower prices but is being anticompetitive in all of these ways. And it really galvanized a whole movement, which sometimes gets derisively referred to as hipster antitrust. Although I think it's funny.

[00:30:49]

Yeah, I don't know that it's derisive. I think that's kind of cool.

[00:30:52]

Do you think it's not derisive or do you think it's not derisive? Because I struggled a little bit with how I was going to pronounce that. I don't know.

[00:30:57]

That was a toss up.

[00:30:58]

Yeah, we'll take that offline. But the point is, she really did get some new ideas into the mainstream about thinking about antitrust. But now that she is in this job, she is trying to execute them. And of course, it is much easier to have big ideas than to put them into practice in a huge federal agency.

[00:31:13]

Totally. So she has been leading these campaigns against some of the big tech companies, trying to sort of unwind some of these mergers that she believes are anticompetitive. The FTC sued to stop Microsoft from acquiring Activision Blizzard, the game studio. The FTC also sued Meta to stop it from acquiring Within, this VR company that makes a fitness app called Supernatural. Those cases have not succeeded. There's another case involving Amazon, where the FTC is alleging that the company acted in anticompetitive ways. So lots of action, not just talk, out of the Lina Khan FTC. But recently, she has been turning her focus to an area that we are very interested in, which is AI. Because this is a new market, one that is developing very quickly, and where there are lots of swirling questions about what is competitive and what is anticompetitive. And so it's very exciting to get to talk to her about that. And I really am interested to hear what she has to say.

[00:32:06]

All right. Well, with all these swirling questions, Kevin, I think it's time to bring her in.

[00:32:13]

Hey there.

[00:32:14]

Hi. Hello.

[00:32:16]

Nice to see you guys. Can you hear me?

[00:32:18]

Oh, no. I can't hear you.

[00:32:21]

I don't know if this yes, yes. Whatever you did just worked for some reason. When you guys can hear me, I can't hear you.

[00:32:29]

So you can't hear us now?

[00:32:30]

Can you hear us now?

[00:32:32]

I can't hear you guys.

[00:32:33]

Oh, no.

[00:32:35]

Okay.

[00:32:36]

How many tech reporters does it take to hear us now?

[00:32:39]

I can hear you. Perfect.

[00:32:40]

We did it.

[00:32:41]

Amazing.

[00:32:41]

Yay. Let's talk about technology now, guys.

[00:32:45]

Let's do it. Look, this is what modernizing the federal government is all about, is working through difficult challenges with a can do attitude.

[00:32:55]

We didn't even require a task force.

[00:32:57]

To fix that headset or an executive order.

[00:33:03]

Well, Lina Khan, welcome to Hard Fork.

[00:33:05]

Thanks for having me.

[00:33:06]

So you wrote an op-ed for The New York Times back in May about AI and the need to regulate it, basically making the argument that the last time there was a huge shift in technology, the sort of Web 2.0 era, policymakers just didn't really understand how big these companies would get and how deeply they would become entrenched in our lives. And you said that now, with AI, we may be at another one of these moments, and that regulators should act more quickly this time. When did you start realizing that AI was going to be this kind of an inflection point?

[00:33:38]

I mean, look, I think we've been hearing about AI and its potential for really revolutionizing technology in the same way that the microchip or the internet did. Obviously, the widespread availability of ChatGPT seemed to be a key catalyst in the public's awareness, and I think it set off a whole chain reaction within government to make sure that we're not repeating the missed opportunities and mistakes of Web 2.0.

[00:34:01]

Did you play around with ChatGPT when it came out? And did you have sort of experiences with AI yourself that informed your thinking as you approached this subject?

[00:34:09]

I did nothing as wild as Kevin's experience, thank God. But I'd read somewhere that somebody had gotten it to very effectively start contesting medical bills, which I had been navigating. And so I entered some of those prompts and actually got a successful outcome.

[00:34:29]

Wow. So you used ChatGPT to navigate a medical bill issue?

[00:34:32]

That's right.

[00:34:33]

That's amazing.

[00:34:34]

And it worked? You got your bill paid?

[00:34:36]

It worked, yes.

[00:34:37]

Amazing.

[00:34:38]

I know that you have been thinking through these issues for many months now. Last week, we also talked about the executive order that President Biden signed around AI. What does it direct the FTC to do? Is it going to shake up the way that you're approaching this subject at all?

[00:34:53]

So the executive order really suggests that the FTC use all of its tools, the competition tools, the consumer protection tools, the privacy tools, to make sure that Americans continue to enjoy these protections even in an AI world. And we've been very clear, you know, there may be conversations in Congress about new legislation that we may need, but there's no AI exemption from the laws already on the books. And so, as a law enforcer, we're going to keep enforcing the laws against collusion. We're going to keep enforcing the laws against certain types of discrimination. We're going to keep enforcing the laws against fraud and deception. And so right now, we're just focused on making sure that we're able to pivot and fully protect people even as these new tools come on the market.

[00:35:36]

This is less of a question about you and your capacity as the chair of the FTC and more just sort of personally as you've sort of dealt with these AI tools in your own life and as you've thought about the future. We talk a lot on the show about some of these more existential risks from AI, these ones that maybe don't get as much attention from regulators. This idea that AI could be used to create bioweapons or cyber weapons or to get sort of recursively better until it takes over and kills us all, is that something that you have worried about or that you are worried about?

[00:36:08]

Look, I've certainly seen some of the statements that we've seen coming from the executives about the kind of existential risks. I'm very grateful that it's not my job to think about that level of risk. Right? We're really focused on the here-and-now harms, be it with voice-cloning scams that are already defrauding families out of thousands of dollars, or automated systems, which are automating discrimination, right, because you can have error-ridden information that's being fed in without any checks. And so decisions about whether people have access to housing, or have access to credit, or whether they're being put in jail, can be made based on these really flawed algorithms. Or concerns about privacy, right? I think the other thing to be mindful of here is, with social media, we saw how, when you have business models that are premised on monetizing people's data, that can really run headfirst into people's privacy interests. Right? And so behavioral-ad-based business models incentivize the endless vacuuming up of people's data. I think similarly, we're now at a stage where companies have another vector of being interested in hoovering up as much data as possible as they're looking to feed and refine their models.

[00:37:16]

And so similarly, we need to be thoughtful about what does that mean for people's privacy? The FTC already this past year brought a couple of cases, including against Ring and Alexa, for using either sensitive videos or sensitive voice recordings in ways that ran into people's privacy interests. One of the arguments that we rejected in that case was the idea that justifications relating to training your models can override legal prohibitions on indefinite retention. And so we came out and said, even if indefinitely retaining this voice data could be used to further improve your algorithms, that can't override the legal prohibition on indefinitely retaining data. And so we're already encountering some of these key questions with regards to these models and algorithms. And so those are the types of issues we're focused on.

[00:38:07]

You just named a bunch of really important harms that I think tend to get short shrift from us reporter types, because it's more fun for us to think about an AI apocalypse. And I just wonder, as you sort of look out at the media landscape, if you do find yourself frustrated that people aren't writing about some of the scams and the frauds and the algorithmic discrimination that is already going on, in favor of sort of navel-gazing about Skynet.

[00:38:33]

I've been really impressed by the reporting that is happening. And I think it's actually sometimes beat reporters outside of tech, who are seeing, say, how this is affecting health care, or how this is affecting finance, or how this is affecting housing, that can sometimes be on the front lines of it, because they're already seeing that.

[00:38:49]

Right. And when you think of reporters who are on the beat doing a good job, any names come to mind? Anyway.

[00:39:01]

One thing that has been very confusing to me about this sort of moment that we're in with generative AI is thinking about competition. And I know this is a subject that your job is all about, sort of making markets more competitive. And I'm curious about how we would actually know the AI industry is competitive. I mean, that sort of seems like an obvious question. Maybe there is an obvious answer. But when I look out at an industry like AI, there's so much competition going on. It feels so competitive to me. But how would we know? How would we know that the project of making AI competitive was working? What would a very competitive AI industry look like to you?

[00:39:39]

Yeah, it's a good question. And again, I think this is where we need to be a bit more precise about what we are talking about in terms of the different layers. For example, how do you make sure that firms are not, say, conditioning access to their models on re-upping a cloud contract, right, or using their dominance in different lines of business in coercive ways to reinforce their position? So those are tactics that we don't want to see. I think you're right that fully envisioning what an open, competitive market in AI could look like can be difficult. But I think the proof of concept will be if we do see a whole set of apps and a whole set of downstream companies that are able to really enter and thrive and compete without necessarily being at the whims of a particular gatekeeper, or feeling like, in order to succeed, they have to bow down to one of the existing incumbents. Right? So if you go back to Microsoft's monopoly over the operating system, it was really the emergence of Netscape and Java and these middleware providers that basically created a platform-neutral way that disintermediated Microsoft's monopoly, and really allowed the Googles and Amazons of the world to thrive and grow in ways where they weren't subjected to Microsoft's gatekeeping.

[00:40:49]

One thing we heard this week is that OpenAI is building what amounts to an app store for chatbots, these kind of personalized, what they're calling GPTs, where developers will be able to build sort of custom GPTs for specific purposes. And OpenAI will essentially do what Apple does for iOS apps, which, you know, they'll approve or reject them, they'll take a cut of the revenue, basically, kind of the classic app store model. Does that worry you when you hear things like that? That some of the new AI companies are starting to build things that resemble the platforms of the last era?

[00:41:23]

I mean, look, we have to assess that on a case-by-case basis, based on the particular facts. But at a high level, we've certainly heard from founders and startups that were working on these really interesting apps, apps for which you would think there would be a whole lot of consumer demand. But really, the thing they had to think about was, okay, there are two companies that have their own private rules and regulations, and in order to get my app available to customers, I have to make sure I'm not running afoul of these private rules and regulations. And hey, even if I do meet them, on a whim, that could change.

[00:41:53]

Meaning Apple and Google, are those the two companies?

[00:41:55]

The app stores? Yeah, exactly. And so, you know, I think when you have these kind of key gatekeepers that are not necessarily governed by publicly accountable rules, I think we've seen how that can potentially harm innovation. And so we want to be thoughtful anytime you see that structure come up.

[00:42:11]

One interesting point that I've heard from founders and investors in the tech industry is about this idea that monopolies can sort of subsidize innovation. Like, take something like self-driving cars, which are very expensive to develop and put into place, and they're losing tons of money right now. These are not profitable endeavors, at least at first. But you did see Google, which used its sort of search monopoly profits to subsidize research and development into self-driving cars through Waymo, and now we have self-driving cars in San Francisco. Also things like sort of fundamental AI research. A lot of that has come out of companies like Google, Meta, other companies that we might say, in other contexts, are behaving anticompetitively. So what do you think about this idea that, sort of, with AI specifically, you kind of want a big, sort of profitable company to subsidize some of the costs of bringing this technology to the world?

[00:43:09]

It's a really good question. And this kind of goes back to an age-old debate around what is the ideal market structure to promote innovation. Is it, in fact, monopoly, because the monopolist will be able to sink its monopoly profits back into all of this R&D? Or, in fact, is it an open, competitive market? And this is called the Schumpeter-Arrow debate. And interestingly, you really have to drill down one layer to say, what kind of innovation do you want to promote? And the empirical debate comes out in saying, it's true that the monopolies can be quite good at producing incremental innovation. But historically, the breakthrough innovation, the kind of paradigm-shifting innovation, has traditionally come from outsiders, in part because these are people who are not already reliant on existing technologies. It's not a situation where Kodak is sitting on the digital camera because it doesn't want to cannibalize its existing sales. Sometimes people are just able to see possibilities that the existing giants aren't. And so I think, from a breakthrough innovation perspective, you want to keep the market open. The other thing we've seen, and again this goes back to the Microsoft case, is that oftentimes innovation comes not from direct replicas of the existing giants, but from adjacent markets that kind of open up new platforms.

[00:44:24]

And so similarly, from that vantage point, you at least want to make sure that the existing incumbents aren't able to squash those upstarts if they do come up with the next best idea.

[00:44:34]

Speaking of incumbents potentially squashing upstarts, I want to look at this idea of, should AI be developed as an open source thing, or should it be developed in a closed way? Right. There are some folks who read the executive order and thought, the government is starting to push us away from openness, and that going forward, in order to develop a very powerful AI model, there is going to be this sort of increasing number of rules and regulations that are going to ensure that only a very small number of companies are able to comply. I wonder if you have a view on that, or if the federal government has a view, on sort of open source versus closed.

[00:45:11]

Yeah. I mean, I won't claim to speak for the entire federal government. But it's so interesting you say that, because I was just out in San Francisco last week and met with a lot of founders and startups, was able to sit down with Y Combinator and hear from some of their companies, and you're exactly right, this was really top of mind for them. And it was interesting to hear from them why they worried, and how parts of the EO, and the fact that the EO was not clear about embracing openness, may have contributed to that. At the FTC, we're looking at openness quite closely. I think historically, we've seen that open source in the software context was a really important tool. And similarly here, open source models could potentially level the playing field and ensure that LLMs, large language models, aren't concentrated in the hands of a few gatekeepers. So I think we have some cautious optimism there. We'll want to be thoughtful about the specific context. The other thing we want to think about is, what do we really mean by openness? This is a term that arose in the context of software, and what it precisely means in the AI context, I think, is still an open question.

[00:46:13]

And so we're going to be wanting to look closely at the details and look at a whole set of variables, like who owns the open models, what are the licensing terms, what's the price and performance of running the models, and the security and safety concerns, to really figure out, is this truly open? Is openness just being used as a veneer to actually concentrate power? We've also, not too long ago, seen what's been known as the open-first, closed-later model, where firms will use openness as a way to build up their own scale and get a key foothold in the market, and then they'll flip the switch. Right? And we've seen how that can also be devastating for ecosystems and startups that have become so dependent. Or, if they don't flip the switch, they might just jack up the price, right, quadruple the price overnight. And so that's why we need to just be very thoughtful, even while we might be optimistic on the front end.

[00:47:06]

Yeah. I mean, it strikes me that your job is not just to protect the competitiveness of markets. It's also to protect consumers. And when I think about open source AI projects, I think, well, it's going to be a lot easier to do bad stuff to people with open source models.

[00:47:20]

Right.

[00:47:20]

It's going to be much easier to run a scam or a phishing attempt using an open source AI model, because it's not going to necessarily have those same guardrails built in. So how do you think about the trade-offs of openness versus a more closed approach, and the sort of two jobs that you have of protecting competition but also protecting consumers?

[00:47:40]

It's a really good question, and we have to be thoughtful about those trade-offs, and figuring out in what context the trade-offs may lend themselves toward actually having a more closed system. Right? So if you're talking about national security, for example, you might just decide, hey, the stakes here are so high that we really are going to decide to have this be more closed, as opposed to another context where, if you just get the liability regime right, you could have more openness without having total proliferation of scams and frauds through these tools. And so it's really, I think, going to be a context-by-context determination. I also don't know that we fully have to accept that there's going to be that deep trade-off. I think you're right that fewer entities and more closed systems can create clearer checkpoints, but I think that itself can create its own risks, right? So we've seen how consolidation and concentration can create, say, homogeneity. It can make it easier to hack systems. And so I think we've also seen, purely, say, even from a security perspective, how decentralization can be really helpful. So I think we want to think not just linearly about the trade-offs here, but actually understand that closed systems and centralization can carry their own risks here, too.

[00:48:56]

I want to ask about what you just called a liability regime. Because as I've been writing about this open-versus-closed issue, the open source folks often make a version of this argument, which is, look, just punish the crime, right? Did you perpetrate a fraud or a scam? Well, the fraud or the scam is illegal, and the fact that you might have used an AI tool is irrelevant. Right? So I'm curious how you view, like, can we just sort of use the laws that we have in place to punish violations of consumer protection? Or does the government have an interest in saying, no, we actually want to regulate the technology, because we think that we can maybe mitigate some of these harms before they're caused?

[00:49:35]

The FTC has had a whole set of experiences that make me think we absolutely want to be looking upstream. Right? So the FTC has been one of the key enforcers on robocalls, right, and a whole set of other kinds of frauds and scams. And what we've seen is that if you just look downstream, you can end up playing Whack-a-Mole. Right? These are fly-by-night actors. And so what we've been focused on is, where do we look upstream? Right? Can we go after the VoIP providers and some of the others that are actually enabling these robocalls? I think similarly, we want to think here, how do we look upstream at the actors that may have the authority and the knowledge and the resources to prevent the harm on the front end? I think what we need to be wary of is liability regimes that actually outsource the responsibility to under-resourced civil society, under-resourced academics, under-resourced government enforcers, who are then left doing the cleanup on the back end, rather than the actors with the power and the knowledge and the money to prevent the harm in the first instance.

[00:50:34]

Right. Which is what we saw with social media, where it's like, any harm that happens on social media, well, civil society can deal with it, you know, it's not these people's fault, they just built the tool. Right? Right.

[00:50:43]

We just make the tools.

[00:50:44]

Yeah.

[00:50:45]

It sounds like this trip that you took to San Francisco and Silicon Valley was educational, that you had a good experience and maybe that it started to influence some of your thinking on some of these issues. What was the vibe of that trip? What else did you learn? And were there other conversations you had with founders or investors or other people in the industry that provided food for thought?

[00:51:05]

It was a very warm reception. I think sometimes, especially in policy conversations, conversations about tech can be painted with a broad brush. Right? You kind of think of tech as a monolith, and it was clear that tech is not a monolith. Right? You have the bigger firms who've been around for a while, but then you also have the startups and the founders, and sometimes their interests align, but sometimes they don't. And so, as an enforcer, you want to make sure that you're hearing from all sides. I did a sit-down with Garry Tan, the head of Y Combinator, and he's come up with this term, little tech, and the interests of little tech, not just big tech. And so it was just really fantastic to get to hear from them.

[00:51:42]

So this week, the FTC submitted a comment to the U.S. Copyright Office, basically giving the agency's opinion, or some ways that it's thinking about, this issue of AI and copyright, which is a subject we've talked about on this show. We talked about it last week with a copyright lawyer. And as part of that comment, the FTC laid out some things that it heard during a discussion with artists and creators. Can you just give us a little insight into how you're thinking about AI and copyright? And why is that an issue that the FTC is concerned with? Why is that a competition issue?

[00:52:13]

So we don't work squarely on copyright issues, but we do work on competition and consumer protection issues, and so we've really been thinking about these issues through those prisms. We held this workshop the other week where we had authors and artists, graphic designers, fashion models, people who, on the whole, talked about how they were actually really excited about AI and how they had already seen how, in certain instances, these tools could be really useful to creative professionals. What they were really anxious about was the way in which they just woke up one day and their life's work had been ingested by these LLMs without their consent. In some instances, these models were now spitting back out versions of their work in ways that were undermining their reputations. And so from a consumer protection perspective, we want to make sure that the public is not being deceived, right? So people are not pretending to be these artists or pretending to be these authors. From a competition perspective, we want to make sure that firms are not able to take somebody else's work, directly appropriate it, and then enter into direct competition with them. So there's a consumer protection angle and a competition angle that we wanted to make sure we were fully understanding.

[00:53:31]

And the comment that we submitted to the Copyright Office just really summarizes some of those experiences.

[00:53:37]

When we were interviewing the copyright expert last week, my big takeaway from her was that artists and writers were not going to have a lot of protections on copyright grounds. Right? That if you feel upset that your work was trained on, you're probably not going to be able to win a copyright case. What I'm hearing you say is, whether or not that's true, the FTC might still want to come in and find ways to protect people whose work was maybe trained on without their consent. Is that what I'm hearing?

[00:54:03]

If we think that people are being deceived or there are competition problems, then we absolutely want to make sure that we're using our tools to provide those protections, which Congress has already granted.

[00:54:13]

Let's talk about the FTC's recent track record in antitrust enforcement. You all have pursued some high-profile cases against Meta and Microsoft, and you wound up losing both of those cases. And I just wonder, looking back, are you still glad you filed those cases? Would you do anything differently in pursuing them?

[00:54:32]

So, look, the Activision case is still on appeal, so I'll be limited in what I can say about that. On Meta-Within? Yeah, I think it was the right thing to do. We believe there was a law violation. We believe we had the evidence. There's been some evidence that's come out, actually, in the last month or so that also suggests we were right to bring the case.

[00:54:51]

What was that? I missed that. What was it?

[00:54:53]

It was some suggestions that Meta had in fact been pursuing its own in-house apps, the VR apps that it ended up trying to buy instead. More generally, though, whenever you're looking to protect nascent competition or potential competition, that can always be more challenging just from an evidentiary point of view, right? There's not always going to be a dead body there. It's really about protecting future competition that can be much more nascent. That challenge can be compounded in digital markets, where the type of evidence that you need is going to look different than the type of evidence you had in the smokestack industries. And so we're still fully developing the analytical tools to make sure we know how to fully protect competition in these markets and can convince the judge that, hey, I know this looks different, it's not going to be the type of evidence we saw in the smokestack industries, but these are digital markets, and protecting future and potential competition is key as well.

[00:55:47]

I'd always wanted to ask you about that case, because I'm somebody who's been very sympathetic to the idea that we need more antitrust enforcement in the tech industry. If we could go back in time and stop Meta from buying Instagram and WhatsApp, I think that would probably be a great thing. When it came to the FTC stepping in and saying, we don't want you to buy this company that makes this fairly rinky-dink VR app, that was just one that I struggled to wrap my head around. And so I wonder if it's worth just sort of saying why, in addition to going after these giants, where at least to me the case for antitrust enforcement is really clear, you're also taking a look at these smaller cases, what you just called nascent competition. Why is that of such interest to you?

[00:56:23]

So the case that we brought against Meta for its acquisition of Within laid out potential competition, and there were two aspects of that. So one was this idea that Meta actually already was producing in-house the technology that it ultimately tried to short-circuit innovation by just buying, and that the existence of Meta as a potential entrant into this market was also providing key competition when it came to Within and the other key players in that market. So there were two dimensions of the potential competition story there. I think you're absolutely right that when we're looking back now, there are lawsuits that are now trying to fix these things, you know. The FTC has a lawsuit saying that Facebook's acquisitions of Instagram and WhatsApp were anticompetitive, that they were designed to really allow Facebook to maintain its monopoly even as the market was transitioning from desktop to mobile. The Justice Department earlier this year brought a lawsuit against Google for what they allege is Google's monopoly in the ad tech stack. And they identified a whole set of acquisitions, including of DoubleClick and AdMob. You have to remember that these were acquisitions that, at the time, similarly seemed small, seemed trivial, seemed like they wouldn't really be a big deal.

[00:57:35]

And so I think we're in a process of looking to make sure we're not replicating those mistakes and trying to address these harms in the first instance rather than waiting a decade or so before we say, ah, we got it wrong.

[00:57:48]

Right. Well, we only have a few minutes left. One thing that's been on my mind is when you think about bringing an action against Elon Musk for violating X's consent decree with your agency, does that feel like more of a 2023 thing or a 2024 thing? Just sort of for coverage planning.

[00:58:06]

Look, we've said publicly that we want to make sure our consent decrees have teeth, that they're followed. Twitter has been under a consent decree at the FTC going back a decade, before the change in ownership. So that process is continuing.

[00:58:19]

Yeah. It must be nice after having taken on so many difficult cases, knowing that you at least have one easy one coming up.

[00:58:26]

No, it's not that easy, because they changed the name of the company, so now the old consent decree doesn't apply.

[00:58:31]

That rascal Musky's gotten away with it again.

[00:58:32]

Twitter? We don't know her. All right, last question. This is something that we ask many podcast guests: what is your p(doom)? Lina Khan, what is your probability that AI will kill us all?

[00:58:48]

I have to stay an optimist on this one, so I'm going to hedge on the side of lower risk there.

[00:58:54]

Are you zero?

[00:58:56]

No, not zero. Maybe like 15%.

[00:58:59]

All right. Yeah. And they say there are no techno-optimists in the government.

[00:59:05]

Chair Lina Khan, thank you so much for coming on Hard Fork. Really great to talk to you.

[00:59:09]

Thanks so much.

[00:59:09]

Thank you so much.

[00:59:14]

When we come back, crypto goes full laser eyes. Ow.

[00:59:46]

Well, Kevin, I'll tell you, in addition to all this news this week, there was one story that captivated me more than any other, and that was the events that transpired at ApeFest.

[00:59:55]

Oh, boy.

[00:59:56]

Now, I know you have been following the news, and I would like to tell you what we know. Kevin? Yeah?

[01:00:00]

What do we know?

[01:00:01]

Well, ApeFest is an endeavor organized by Yuga Labs, makers of the Bored Ape Yacht Club intellectual property. It is a famous series of NFTs. I think for a time, some of the priciest NFTs in the world were these pictures of, I think we could sort of say, like, stoner monkeys.

[01:00:22]

I would call them Bored Apes, to be technical. But yes, they were pictures that people, in some cases, were paying millions of dollars for. Celebrities: Jimmy Fallon, Paris Hilton, et cetera.

[01:00:32]

Eminem was on the stage of the MTV Video Music Awards promoting the Bored Apes. Right.

[01:00:37]

This was sort of supposed to be, like, the first crypto-era intellectual property empire, a new Disney, if you will, based around these very expensive NFTs. It was a whole thing.

[01:00:47]

And as crypto has kind of faded from the public consciousness, so have the Bored Apes. But they roared back this week, and I started seeing all these reports of people who had attended ApeFest experiencing a burning sensation. And normally, when that happens to you after a crypto conference, you probably just have chlamydia. But my understanding is that that's not what happened here. So you actually talked to somebody who went to ApeFest. Kevin, what happened?

[01:01:10]

I did. So I talked to a guy named Adrian Zduńczyk, who goes by Crypto Birb on social media, and he is a Bored Ape owner. He spent around $100,000 on it, and he was excited to go to ApeFest and meet other collectors of Bored Apes and sort of hang out in real life with the people that he spends a lot of time talking to online.

[01:01:31]

Because it is a community. These folks like to talk to each other about their apes and other crypto-related subjects. Yes.

[01:01:37]

So Adrian goes to ApeFest in Hong Kong, he spends time at this party, and he told me that the night after this party, he was back in his hotel room with his wife and he started feeling serious pain in his eyes.

[01:01:53]

That actually sounds quite scary to me.

[01:01:55]

It sounds horrible. I mean, this is a really horrible thing that happened to these people, so we shouldn't make fun of it. But I do think it is instructive of just how sloppy and kind of careless the crypto industry can be.

[01:02:05]

Absolutely.

[01:02:06]

So he starts feeling this pain in his eyes. He's trying everything to sort of make it go away. He couldn't figure out what was going on. He goes back the next day to ApeFest and runs into a guy who was at the same event the night before, and that guy says that he, too, was having eye pain. And then when Adrian goes on social media, he sees that lots of other people are reporting very similar problems, and they start putting together the pieces, and they realize that at this event, this party that had been thrown the night before at ApeFest, there was something called the bathroom wall, which was a room. It was not actually the bathroom, but it was a room that was kind of decorated in the style of a bathroom. It had toilets and art in it, and it also had these black lights, or what seemed like black lights, but which Adrian has since learned were probably UVA lights, which are the kind of lights that sometimes get installed in tanning beds. Yuga Labs, the company that is behind the Bored Ape Yacht Club, took a few days to conduct an investigation and announce their findings.

[01:03:11]

On Wednesday evening, they put out a statement on X saying that they had, quote, determined that UVA-emitting lights installed in one corner of the event was likely the cause of the reported issues related to attendees' eyes and skin. They also encouraged anyone experiencing pain to seek medical attention.

[01:03:28]

Do you know how messed up a situation has to be for you to get injured at an art exhibit? Like, my goodness, Kevin, I thought we knew every single way that crypto could hurt you. Right. It's like we knew that you might lose a bunch of money. We knew that your wallet might get hacked. Right. Never in a million years did I think these people will get their eyes burned out by an art exhibit at their community meetup. Yeah.

[01:03:51]

And I asked Adrian, I was like, does this make you trust the Bored Ape Yacht Club less? You're part of this community that has now subjected you to severe eye burns. Like, are you going to sell your Bored Ape? And he kind of said, like, no, not going to sell my Bored Ape. This kind of stuff happens.

[01:04:06]

It doesn't. But that's interesting that he said that.

[01:04:09]

But he told me that he felt the organizers definitely could have communicated better about this.

[01:04:14]

Look, this is obviously, like, a freak situation, and if you love crypto out there, you might be annoyed that I'm going to try to draw this maybe rather tenuous connection. But look, ever since the tide went out on crypto, I've been thinking about the people who said, we're actually glad that this happened. The tourists are gone. We can set about the hard work of just making great products and showing people how valuable this stuff really is. We're going to make it easier for people to onboard. We're going to protect them. We're going to make it safer so that you can feel comfortable recommending it to your friends and family. And I just want to say: that is not happening. Okay? In the, like, two years since the tide started to go out, that is not happening. Stuff like ChatGPT has come along and shown us what it looks like when people actually get excited about something. Crypto is not that. And just when it seemed like things could not get worse for them, they start burning people's eyes out at a danged art exhibit. Yeah.

[01:05:04]

So, not a good look for the Bored Ape Yacht Club. And if the care that they're putting into their events and the lighting of these events is any indication, this is not a company that I would trust with hundreds of thousands of dollars of my money. It is also just, like, a real twist of the knife in what has been a really bad month for the world of crypto.

[01:05:22]

Why? Did something else happen?

[01:05:25]

Well, it's no ApeFest, but there was a little trial in New York recently that just wrapped up.

[01:05:31]

Let's just say it was its own kind of monkey business.

[01:05:33]

Exactly. So Sam Bankman-Fried, the fallen wonder boy of crypto, former head of FTX, former billionaire, was convicted on seven counts of fraud after a very short deliberation by the jury. Did it surprise you?

[01:05:49]

It didn't surprise me. It's what everyone expected. If you talked to anyone around the case, they all saw this coming. It seemed like he did not do himself any favors at the trial. And I'm just left thinking, are there any sadder words in the United States of America than "former billionaire"? I don't know if there are, Kevin. I really don't know.

[01:06:06]

Who would you rather be right now: Sam Bankman-Fried or someone who got their eyes burned out at ApeFest?

[01:06:11]

Well, I don't know. But I know what the common thread is. Involvement in the cryptocurrency industry.

[01:06:16]

Yes. You will not only lose your money, but you may also lose your eyesight. Be careful out there, folks.

[01:06:23]

Be very careful out there.

[01:06:59]

Hard Fork is produced by Davis Land and Rachel Cohn. We're edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today's show was engineered by Chris Wood. Original music by Elisheba Ittoop, Marion Lozano, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergeson. Special thanks to Paula Szuchman, Pui-Wing Tam, Kate LoPresti, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Let's see those GPTs.