[00:00:00]

You are watching The Context. It is time for our new weekly segment, AI Decoded. Welcome to AI Decoded, where we look at some of the most eye-catching stories in the world of artificial intelligence. We start with The Independent newspaper and a warning from a computer scientist who says there's no evidence artificial intelligence can be controlled and made safe. Meanwhile, Sam Altman, CEO of OpenAI, the people behind ChatGPT, has been losing sleep over AI as well. He, too, worries things could go, quote, horribly wrong. That's in Futurism there. Reuters reports lawmakers at the European Parliament have ratified a provisional agreement on artificial intelligence rules. This is ahead of a vote by the legislative assembly in April that will pave the way for the world's first legislation on the tech. In The Guardian, an AI copyright infringement lawsuit brought by comedian Sarah Silverman against artificial intelligence company OpenAI has been partially dismissed in court. On the BBC website, built-in mini nuclear reactors could be the solution to providing data centers with the enormous amounts of energy they need to power AI. And finally, Valentine's Day has just been and gone, but it appears some Chinese women prefer a different kind of romantic partner.

[00:01:27]

One woman says her AI chatbot boyfriend has everything she could ask for in a man. He's kind, empathetic, and she says he knows how to talk to women better than real men do. Well, with me now is Stephanie Hare, technology author and journalist. Thank you very much for coming on the program. Great to see you. We've got lots to get through there. Should we start with some of the alarming, or potentially alarming, stories? You tell me whether it's alarming. Bad news first. Yeah, exactly. Let's see the bad news first, potentially. There is no evidence that AI can be controlled. What does that headline actually mean?

[00:02:03]

I thought it was quite strange, actually, because I thought, it's almost like, do you want evidence to prove a negative or a positive? I will leave that to the legal eagles out there. But this scholar, Roman Yampolskiy, a Russian computer scientist based at the University of Louisville, says that we can't control AI, and that he's done a detailed review of the existing scientific literature, which is going to be published in his forthcoming book, which is not called AI Doom. It's called AI: Unexplainable, Unpredictable, Uncontrollable. At least we know what side of the fence he's coming down on. He just says that it can't be fixed, and therefore we should all be worried. This isn't really new for anyone who's been following the AI debate very closely for several years, and in fact for the entire life of the field since it really started. But really in the past year, we will all remember several of the AI godfathers, the big Turing Prize winners, all saying that it might lead to nuclear war, pandemic risks. We had the AI Safety Summit here in the United Kingdom in November, and the UK, Singapore, and the United States have created AI safety institutes.

[00:03:06]

You could argue that actually there's quite a lot of evidence to suggest we could control AI, that we're controlling it all the time. We use AI all the time.

[00:03:14]

Okay, well, let's dive a bit deeper into this. Sam Altman. Remind us who he is and what his concerns are.

[00:03:21]

I was going to say, potentially undermining everything I just said is Sam Altman. He is the head of OpenAI, which created ChatGPT. That's what he's best known for, backed by Microsoft to the tune of $13 billion, so lots of people betting on him. He also likes to go around saying that he's very worried about AI, and that he thinks it could all lead to terrible things. It could go horribly wrong, really ruin society. But at the same time, his stated goal is to build artificial general intelligence. That's his dream. That's the sci-fi fantasy of when machines surpass human-level intelligence. So one might ask, why is Mr. Altman so worried and yet still building? There's a disconnect.

[00:04:05]

Interesting. And is the concern less about the huge Terminator 2 film-type scenario? I'm going to show my age a little bit there with that reference. It's a great film. Less about that stuff, more about things like unconscious bias getting into systems. Is that the greater concern? It's, I suppose, a bit more subtle and maybe a bit harder to actually control.

[00:04:28]

Exactly. That's precisely what he says that he's worried about. But again, that's exactly the tool that he's helping to build and disseminate into all of the businesses that are now using ChatGPT, even into government, et cetera. It's just very strange there. He also says that the industry should not be allowed to regulate itself, but he's one of the biggest lobbyists against regulation. So find me an AI that can figure this man out because I don't get it yet.

[00:04:55]

Okay, got you. Should we move on to regulation then? Yes. Hopefully, potentially, some efforts to harness it for the power of good rather than evil. EU lawmakers ratify political deal on artificial intelligence rules, says Reuters. Slightly complicated there. Can you say what's going on in the simplest way possible for us?

[00:05:19]

I can. Last year, Prime Minister Rishi Sunak of the United Kingdom said that AI was too complicated for us to regulate, that we need to figure it out first, which is weird because we are so close to the European Union, which is about to regulate it. We are going to have landmark global regulation coming into force within two years or so, by 2026, and it's going to affect all companies. It must be said that most AI apps are not considered high risk. They're low risk, so they won't be affected. The high-risk ones are things like social credit scoring, biometric surveillance, anything involving facial recognition technology. Very, very hot. They're also creating something called regulatory sandboxes. That's where developers can get in, play in the sandbox, just like children, and design, build, and test their apps before they're released into the wild. So there's a lot of good here.

[00:06:06]

A sandbox, I like that phrase, but that is a point they've been talking about before, because with the world of social media, that didn't happen so much. There wasn't that playing and testing by anyone outside the company before it was released. And so this is a deliberate attempt to say, right, should we do it the other way around this time, test it before it's released?

[00:06:27]

Yeah, I think we are trying to learn from the mistakes of the past with social media. I also think it's about confidence building. Artificial intelligence is something that we talk about all the time on this show. For the people who work on it, it's their bread and butter. But it has not percolated out into wider society as much as we might think. People who want to sell artificial intelligence are going to have to take society with them on a journey. That means building trust. For trust, you need transparency, explainability, and accountability. That is what the EU AI Act is the first regulation to do.

[00:06:59]

We're going to carry on with our explainability. We're doing our bit here. Well, you are. You're doing a bit. I'm asking the questions. Okay, this one is, I think, an interesting one, because we're going to get issues like this coming up again and again, aren't we? This is from The Guardian: Two OpenAI book lawsuits partially dismissed by California court. So let's get on to the specifics of the court case in just a moment. But first of all, what's the issue here?

[00:07:25]

So we've got the comedian Sarah Silverman and the novelist Paul Tremblay. They are, it must be said, two of many creatives and writers who are claiming that OpenAI has taken their copyrighted work and used it to train its algorithms, which is why, when you're using ChatGPT and it sounds so human, there's a reason: it's been trained on other humans' intellectual property, and those people didn't consent to it, and they didn't get paid for it. So now they're suing. And that brings us to this headline, which is that the lawsuit was partially dismissed by a California court. I'm afraid that this one is going to run and run, and it's not just going to be in California. We are seeing lawsuits multiplying across the United States on this. Some of them will be class actions, grouping different people together. And what they have to do is demonstrate the threshold of copyright violation, which, apparently, and I'm not a lawyer, is very high. That's going to be really tricky. But we've already had, again, Sam Altman, OpenAI chief, saying, We can't build generative AI if we don't take people's outputs and train our data sets on them.

[00:08:31]

We have to take people's books. We have to take their songs, their photos, their movies, everything, and that's just the price of poker.

[00:08:37]

It feeds into what we were talking about just a moment ago, again, which is that it scrapes everything: everyone's biases, everyone's flaws, everyone's mistakes, which then feed how the AI learns and develops. We're going to keep an eye on this particular court case, but you were saying there are going to be plenty of others as well.

[00:08:57]

Lawyers will be busy for years.

[00:08:59]

Okay, right. Let's look into an article on the BBC now. Future data centers may have built-in nuclear reactors. I don't know why, it sounds so dramatic. What could go wrong? But basically, in anything like this, you need energy to supply it, don't you? And the more computing power, the more energy. So they're trying to look at different solutions here.

[00:09:24]

Yes. So we already know generative AI, in particular, is very carbon-intensive. It's actually very water-intensive, too. You've got 100 million monthly active users of ChatGPT, each using half a liter of water every time they interact with the machine.

[00:09:38]

That's not something you'd think about, particularly.

[00:09:40]

Exactly. Well, and they don't want you to think about it. They keep all that information very, very quiet. But other people know about it and are doing research on it. We know, for instance, a normal data center needs 32 megawatts of power flowing through the building. For an AI data center, it's 80 megawatts, so 32 versus 80. Way more energy-intensive. How are you going to power that? We don't want to do dirty fossil fuels. Gross. Will we be able to use renewables? Maybe. We hope so. But some people think that we could go nuclear, using the technology that you would use to power a nuclear sub. The big difference being that nuclear submarines are managed by highly trained people who've passed a lot of security tests and checks. Putting this out into the commercial world feels slightly problematic.

[00:10:26]

Interesting. But presumably there will be regulation and safeguards. Well, I mean, presumably. That's one of the great quotes in this.

[00:10:36]

It says, We can do it, but we just have to get it past the regulators, which I love. One of them says, These guys have oodles of cash. The private sector is going to make it happen because there's just so much money, which is great. But then you've got Greenpeace's chief scientist saying, Look, we've got safety risks here of accidents. There's a whole question of nuclear waste. What are we going to do here? We need to get a nuclear scientist on this show and interrogate this.

[00:11:00]

Interesting. We will look into that. But also this idea of getting greener, ensuring that green energy powers this. This has got to be something that's absolutely front and center for all these companies, surely.

[00:11:11]

I think the AI boom is going to be a boon for anyone who's investing in renewables, because we need to power all of this. The two should go hand in hand. It should be symbiotic.

[00:11:21]

Okay, talking about hand in hand. The Japan Times: Better than real men, young Chinese women turn to AI boyfriends. Take it away.

[00:11:33]

Well, it's the brave new world of dating. Now, obviously, men have been having AI girlfriends for a long time. You can't watch a sci-fi film these days without seeing that as basically the plot. For anyone who finds this really shocking, I'm afraid to say it's just a gender reversal. At last, women are getting some action with their AI boyfriends, so to speak. What does this mean? When we read the article, it's actually not a tech story. It's a society story. You go to your AI boyfriend because you're lonely, because the economy is really bad, and it's hard to find people and make plans for the future, and you feel like no one understands you. She says she can talk about her period pains, which she can't do with a normal guy. Why is that? Human guys are perfectly capable of having that conversation. She doesn't feel she can. What's interesting is she then builds out her product suite. She doesn't just want an AI boyfriend to talk to on an app. She wants a robot boyfriend who can get into bed with her and give her body heat, like curling up with her robot man.

[00:12:29]

AI-powered to be empathetic and caring and available 24/7.

[00:12:34]

It sounds far away right now, but it could be one of those things that...

[00:12:38]

Well, I mean, we've just had Valentine's Day.

[00:12:40]

It was upon us faster than we knew. Right, we will leave it there, Stephanie. Thank you so much. Thank you. That's it. We'll do this again, same time next week.