[00:00:00]

You're watching The Context. It is time now for our new weekly segment, AI Decoded. Welcome to AI Decoded, that time of the week when we look in depth at some of the most eye-catching stories in the world of artificial intelligence. We're going to start this week with this from the South China Morning Post: US President Joe Biden and his Chinese counterpart Xi Jinping agreeing yesterday to start a dialogue on AI with respect to their defense systems. As part of the proposed agreement, AI would be banned from autonomous systems like drones, as well as systems used to control and deploy nuclear warheads. The New York Times asks whether we are witnessing the first AI election in Argentina. Both leading contenders have been using AI to create images and videos to promote themselves and attack each other. So which is real and which is fake? Deutsche Welle reports that artificial intelligence that analyzes speech patterns can detect type two diabetes with astonishing accuracy. A simple ten-second voice sample analyzed by AI could also help detect heart disease, Parkinson's or Alzheimer's. We'll discuss that, and also this story in Billboard magazine: YouTube has launched a new AI feature that gives creators the power to generate musical accompaniment based on text prompts using, and this is the key bit, the voices of well-known artists.

[00:01:26]

With us this week to unpack all the week's AI stories is Stephanie Hare, who's a researcher, commentator, and author on artificial intelligence. Thank you for being with us, Stephanie. It's timely that we're talking about President Biden and President Xi's agreement yesterday on AI, and particularly how it applies to defense systems, because they're currently in the room. What do you think will come of this dialogue that they're going to begin?

[00:01:50]

I think it's going to be a commitment to keep on having dialogues, rather than a treaty or some sort of ban, which is what we might hope to see, on allowing autonomous systems to decide things like nuclear command and control, or on allowing drones to just make decisions to kill people or not. Those things are technically possible. What we want is to keep a human in the loop, so that people are making the decisions as to whether or not to take a life. And that's part of military ethics as well as AI ethics.

[00:02:22]

Autonomous weapons are already a clear and present danger, aren't they? They'll become more intelligent, more nimble, more accessible. And I think what worries a lot of people is the advent of swarm technology: drones that can pick out certain people, or can operate together, and would be so small that you wouldn't be able to stop them. How do you think what they discussed yesterday pertains to those fears?

[00:02:50]

Well, I think what you're identifying here is that pretty much everything is going to be on the table. And while it's great that the United States and China are talking about this, both of them are also a little bit nervous about the other. And of course, it's not just them. Will North Korea or Iran or Israel or Russia agree to abide by something that the United States and China agree to? So this is what I mean when I say they're beginning a series of dialogues that would eventually need to be agreed as a sort of Geneva Convention, if you will, for the 21st century. We have new weapons. We're going to need new rules for warfare.

[00:03:24]

Let's talk about that New York Times story about Argentina's election, because Rishi Sunak was making this point at his Bletchley Park summit just a couple of weeks ago: there are so many important elections coming up, and AI is taking a really big role in all of them. What are you seeing in Argentina?

[00:03:42]

Well, we're seeing a couple of things: political parties using artificial intelligence either to create fake audio, so it sounds like a politician is saying something that he or she never did, or to alter photos or even videos in a fashion that's called deepfakes. So it looks real, but it isn't. And I think the big tell for most of us at the moment is if it makes you feel angry, right? If you really agree with it, if it really gets your gut going, you're probably being manipulated. So you need to be really careful about that. But it's not enough for individuals to be checking it. We've seen platforms like Meta, which runs Facebook and Instagram, say that any political parties that are using AI-generated content have to label it as such. So there's a role for everyone to play here in signaling to people what is AI-generated content and what's real.

[00:04:34]

Just quickly, are we seeing any evidence of AI targeting certain ads to certain people? Because I think that's what concerns people the most.

[00:04:44]

Absolutely. And that's the whole point of targeting: when people know what messages to craft for different demographics, they can get people riled up and thinking certain things. And you're going to be getting a different message than I will, and different people who are watching this will be getting messages that work for them and not for us. That's where this starts to become very complicated, and spotting it is going to be very difficult.

[00:05:05]

Interesting, there's a story in this week's Deutsche Welle about AI telling me what sort of health problems I have through the patterns in my voice. Tell me about that. How does it do that?

[00:05:18]

Okay, so this is fascinating technology using a voice sample. So if they were to record you or me for anywhere between six and ten seconds, they could run it through an algorithm that, along with other data, like our age or our weight or our height, would identify... It would...

[00:05:39]

Stephanie, are you there?

[00:05:41]

Stephanie, people who are walking around with type two diabetes don't know it.

[00:05:47]

Just pick that up because we just lost you there just for a second. Just tell me that last 20 seconds of what you were saying.

[00:05:53]

Yeah. So most people who have type two diabetes don't know it, and it's really expensive to test people. This would be a really fast way to get that diagnosis. And it's all done through your voice.
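To make the idea concrete: a screening pipeline like the one Stephanie describes would typically extract a few acoustic features from a short voice clip, combine them with demographic data such as age and weight, and pass them to a classifier. Below is a minimal, hypothetical Python sketch of that shape; the features, the synthetic training data, and the model choice are illustrative assumptions, not the system in the Deutsche Welle report.

```python
# Hypothetical sketch of a voice-based screening classifier.
# Not the system described in the report; features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def acoustic_features(clip: np.ndarray) -> np.ndarray:
    """Toy features from a ~6-10 second mono voice clip."""
    energy = float(np.mean(clip ** 2))                                    # overall loudness
    zero_crossings = float(np.mean(np.abs(np.diff(np.sign(clip)))) / 2)   # rough pitch proxy
    amplitude_var = float(np.std(np.abs(clip)))                           # rough shimmer-like measure
    return np.array([energy, zero_crossings, amplitude_var])

def make_example(rng, label):
    """Synthetic example: a fake 8-second voice clip plus age and BMI."""
    sample_rate = 16_000
    clip = rng.normal(scale=0.1 + 0.05 * label, size=sample_rate * 8)
    age, bmi = rng.uniform(30, 70), rng.uniform(20, 35)
    return np.concatenate([acoustic_features(clip), [age, bmi]]), label

rng = np.random.default_rng(0)
data = [make_example(rng, label) for label in rng.integers(0, 2, size=200)]
X = np.stack([features for features, _ in data])
y = np.array([label for _, label in data])

# Simple classifier over acoustic + demographic features.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print("screening probability for first sample:", model.predict_proba(X[:1])[0, 1])
```

In a real system the handful of toy features above would be replaced by clinically validated vocal biomarkers and the model trained on labeled patient data; the sketch only shows how a short clip and basic demographics could feed a single prediction.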

[00:06:05]

Let me quickly pick up this story in Billboard. YouTube have developed this technology: you can type in something that you want to create, and then nine artists who've signed up to this AI technique will sing it for you. I think we've got a clip. Can we play the clip? Let's play it.

[00:06:30]

Baby, we've got nothing in common but I know that I wish you've been wanting for so long.

[00:06:40]

So, Stephanie, they've agreed with nine artists, and you might recognize some of them: Alec Benjamin, John Legend, Sia. They are signing up. And what's interesting about this is that the music industry's had real concerns about AI and how artists' voices might be used, but this is their voices being used in a much more positive way.

[00:07:00]

Yeah, and this is exactly what we would hope to see. For a while, everybody's been so passive with AI, but now people are getting on the front foot and saying, okay, if change is coming, we have to adapt. So here you've got musicians sampling their voices and agreeing to be part of a tool that will help other musicians, and indeed anyone, really, to create songs. So we'll be seeing people getting their data used and getting paid for it, which is what hasn't been happening. And new people will be able to make music who couldn't do so before.

[00:07:28]

Wow. Amazing. I've been trying to use that today, Stephanie, but to no avail. I can't actually find it yet, but it's going to be the future. We'll all be creating things like that. Stephanie Hare, thank you very much for being with us. That's it. We're out of time. We'll do this again same time next week. Hope you'll join us for that.