[00:00:00]

You're watching The Context. It's time for our newly minted segment, AI Decoded. Welcome to AI Decoded. It's the time of the week when we dig deep into some of the most eye-catching stories in the world of artificial intelligence. We begin this week with this story from Tech Radar: a stunning achievement in humanoid robotics development. OpenAI and the startup robotics firm Figure AI have released a video this week demonstrating the real sci-fi capabilities of a new visual language model, and we'll be showing you a clip of that in just a minute's time. The Medical Express details how AI is being used now to detect heart defects in newborns. The model they have developed gives the correct diagnosis in up to 90% of cases. Fortune says artificial intelligence is supercharging the threat of election disinformation worldwide, making it easy for anyone, even with just a smartphone, to create fake but convincing content aimed at fooling voters. On a similar theme, The Mirror says Russia is already using AI deepfakes to spread chaos in third countries, a significant threat to the 2024 election, says a new US intelligence report. The Intercept reports there is now a network of AI-powered autonomous surveillance towers, about a thousand of them, that have been put up along America's southern border, a system that gives border guards a unified vision of unauthorized movement.

[00:01:26]

Finally, in The Guardian: what does the EU's new AI bill, approved yesterday, mean for us, the consumers? We'll give you the lowdown later in the program. Sitting alongside me tonight, our AI commentator and technology author, Stephanie Hare. Hello, fresh back from Paris. Yes. Nice to see you. Let's start, shall we, with this glimpse of the future that's already here. It's quite fascinating, this. A pretty remarkable example of the promise of artificial intelligence. Let's take a look.

[00:02:00]

I see a red apple on a plate in the center of the table, a drying rack with cups on a plate, and you standing nearby with your hand on the table.

[00:02:13]

Great. Can I have something to eat?

[00:02:18]

Sure thing.

[00:02:26]

Great. Can you explain why you did what you just did while you pick up this trash?

[00:02:36]

On it. So I gave you the apple because it's the only edible item I could provide you with from the table.

[00:02:50]

Great. Based on the scene right now, where do you think the dishes in front of you go next?

[00:02:59]

The dishes on the table, like that plate and cup, are likely to go into the drying rack next.

[00:03:07]

Great. Can you put them there? Of course.

[00:03:30]

It's remarkable, isn't it? I mean, everything that I've been shown so far in the world of AI, the avatars that I've seen, which are on screen, they're instructed by text. The difference with this is that it's seeing, it's tasting, it's feeling its environment, and that is what blows my mind.

[00:03:52]

Yeah, I thought it was fascinating that the voice even has... because it's obviously a fake voice, probably taken from real voice samples and maybe even melded together, but it had vocal fry. I often have vocal fry because I've got asthma and I have to cough a lot, and so you'll hear it in my voice. It's not a pure tone. It sounds a little bit harsh. They've even put vocal fry in to make it sound like a real human. That's crazy.

[00:04:15]

I was saying last week to Priya that one of the things somebody who's interacted with one of these models said to me is that they feel empathy, and that's why they feel empathy: because actually the voice is colloquial, and as you say, it feels real. There's a slight pause, there's a drag, isn't there, as it processes, but as that gets quicker, it's going to be the thing that some people would prefer to hang out with.

[00:04:41]

Yeah, and there's an ethical question in the way that we've even designed this. It's a choice to try to make it look and sound and feel human. It doesn't have to be that way. We could actually design robots to be really explicitly not human, other. Why have they taken someone's voice? Even as I was listening, the accent and the grammar are very specific to the United States West Coast. So he says, Can I have something to eat? And somebody goes, On it. This is a very American way of speaking that wouldn't necessarily translate globally. So I'm sure what you could do is say, I want it to speak with a French accent, or I want it to speak in Mandarin, and it would do exactly that to make you feel a connection to it. All of that is actually really manipulative as well.

[00:05:24]

This is called artificial general intelligence. Just to explain the technicalities to people, normally we're talking text to speech. This is speech to speech. As well as seeing its environment, it's able to process what you're saying to it, a bit like Alexa in our kitchens at the moment.

[00:05:45]

Yes. I was noticing, even as we were just watching that again, these are quite long pauses between the human interrogator and the robot. Also, the human is giving really clear, short, specific instructions. Is that going to work very well in a real-world context, where people are asking a robot to do something and it doesn't do it? Just like with human beings, we sometimes ask questions in a way that's not very clear, we have to repeat ourselves, we get angry at each other, frustrated, kick the robot. How's that going to work? This is great, and the potential for it is clearly there. You can imagine what this would mean if we got these robots truly up and running with very minimal lag time. Could they be rolled out to start replacing humans in all sorts of jobs? Yeah.

[00:06:25]

But of course, to pick up on that, obviously the database is growing. It will write and rewrite its own code. So as it makes mistakes, it will learn, and it will develop, and its language patterns will grow and evolve.

[00:06:41]

Yeah, that's the thing. It almost reminds me, in a weird way, of how children learn language, studying everybody around them. It's learning from us, as ever, as with all machines.

[00:06:51]

Yeah. Thirty years from now, when I'm a pensioner, it's going to be putting me to bed, isn't it? I just know it. I can already see it. Maybe that's the solution to social care problems, I don't know. Talking of which, I've chosen this story tonight because I'm conscious there is a flood of stories that will worry people tonight, but there is so much good in artificial intelligence, and this is a really good example: artificial intelligence that can detect heart defects in newborns, which is really important because with pulmonary hypertension, if you can catch it early, if you can treat it early, then the prognosis is pretty good. Yeah.

[00:07:27]

I mean, this is a classic case of where AI is really good, which is in healthcare, because you're looking at patterns, spotting anomalies, things that aren't right. And as ever, to reassure everyone, it would always be the human doctor that's going to be making the final call. This is a tool to help doctors diagnose better. And the percentages on some of them: the correct diagnosis in some 80 to 90% of cases, that's pretty good. Able to determine the correct level of disease severity in around 65 to 85% of cases, so it's better than chance. It's maybe not what you want to bet the farm on, but it's always followed up by human intervention. What's great is it also shows how it's making the decision. It will show the doctor: this is what I'm looking at, check here. Again, the learning from that over time should make it better.

[00:08:14]

I think of how many children left hospital, or did in the past, with holes in the heart, only for them to be found later in life. As you say, they've looked at the hearts of 192 newborns, hundreds of video recordings from different angles, as well as diagnoses by experienced pediatric cardiologists, 80 to 90% successful, which is pretty extraordinary. We got a Senate intelligence briefing this week. Avril Haines, the Director of National Intelligence, was talking about the threat that AI poses to elections. It's the super-election year. I think around, what, 50% of the world is going to elections. How is AI going to affect what we hold so dear?

[00:08:57]

What we saw a couple of years ago with elections was the role of social media in spreading misinformation and disinformation. You didn't need AI for that. You just needed lots of people on the platforms. What we're seeing now, though, is different. It's called deepfakes. You'll have image manipulation, you'll have video manipulation, really good quality still. Then the one that's really difficult for a lot of people to detect is audio. In that case, it might be that a candidate rings you. Exactly. That's a perfect example of it. People are less on their guard about that. Until very recently, with a lot of the image and video deepfakes, you could tell. It wasn't quite right. There was something off about it. It's just getting better and better all the time. You've still got people who will believe whatever they see on Facebook. I do not know why. They have been given plenty of evidence not to do that, but it's going to be a real public health, almost education, campaign that we're all going to have to do, to ask: how do we instill the critical skills people need to assess this? Then that becomes the question of who is the arbiter of truth.

[00:10:01]

The cost of it: I don't know if people followed the AI robocall in New Hampshire to its conclusion, but the guy who was behind it was a Democrat, and he spent $150 getting an artist who could do an impersonation of Joe Biden. He tweaked it, and there was $5 million of investigation and resourcing that was put into it just to deal with one call. That tells you how big the problem is.

[00:10:30]

He said he did it to illustrate the risk of using AI to deepfake like this. He was apparently trying to prove a point. That's his defense. That's my line and I'm sticking to it.

[00:10:41]

Just this Mirror story about Russian hackers and what they're able to do with these AI deepfakes. I remember the special counsel, when he investigated Russian interference in 2016, the findings of that were that there was a troll farm in Moscow that was pumping out all these bots. Now, presumably, you can do this much more cheaply and with fewer people.

[00:11:04]

Yeah. The thing is, when I was in Paris, I read a book called Le Mage du Kremlin, which is The Wizard of the Kremlin. It was wonderful. It's a bit of a hit in Paris. I really recommend it, though, because it's all about how Putin came up through the ranks and one of his top media advisors, who has a background in theater; the BBC has actually done a show on him. The goal in election interference isn't to try to change your mind on how to vote. It's to undermine your trust in the entire process so that you don't know what to believe. What's fact? What is fake? What's the point, therefore, of voting? Who cares? That's how you undermine democracy.

[00:11:40]

Yeah, they call that the liar's dividend. Let's talk about these towers, a thousand of them, I think, now along the southern border. Such a big election issue, the illegal migration over the US-Mexico border. Why are you concerned about this?

[00:11:55]

Well, so immigration is a big issue in a lot of countries. I don't want to particularly castigate the United States for this, but it is true on the southern border in particular, but also the northern border with Canada, which is often weaker. In many cases, lots of people have been getting in, and this upsets people for all sorts of reasons, as you can imagine. So what to do? It's expensive hiring humans to patrol. It's a huge country. These borders are thousands of miles long. So instead, invest billions of dollars, slip it into the Homeland Security budget to the tune of six billion, and build up machines, effectively, that are surveilling in a hopefully unified way, they aim, starting with the two land borders and eventually the Pacific and the Atlantic as well. Interesting. So first of all, the question is, does it work? It might not. This could just be a massive waste of taxpayer money, because we've invested in all sorts of surveillance systems before for border control. The other thing, though, is if it does work, what does that mean? Lots of people who are fleeing these countries and coming to the United States are doing so because that's the only option that they have.

[00:13:00]

There's going to be a... I know that's not politically correct for some people, but there's a human rights, asylum piece to this. There's also danger in crossing those borders. I don't know if this is the solution.

[00:13:11]

Okay, just very quickly, I want to quickly get your thoughts. I've only got a minute left, on AI and the bill that was put in place yesterday in Europe.

[00:13:17]

Good thing, bad thing? I think it's a great thing. That doesn't mean I think it's a perfect thing. It's an important good start. What does it mean? It means we finally have landmark legislation that's going to allow consumers and others to sue in the case of harm. It also sets a really nice level playing field, very clear if you want to operate in the European Union, grading by risk. The higher the risk, the greater the regulation. It's not going to affect most people, but where it does, things like predictive policing, social scoring, real-time facial recognition technology: banned. There are a few exceptions for law enforcement in cases of terrorism, which you probably would want. Other than that, I think it's a great start.

[00:13:56]

We got through them all. I always want more time in this segment, but that was a really good analysis of all the stories that are around today on AI. I hope you'll join us same time next week for this. Stephanie, thank you very much for your time, as ever.