[00:00:00]

You're watching The Context. It's time for our weekly segment, AI Decoded. Welcome to AI Decoded, that time of the week when we look in depth at some of the more eye-catching stories in the world of artificial intelligence. We're going to start with Futurism, which reports that the humans behind the accounts of virtual influencers, AI-generated characters that masquerade as the real thing on social media, are now pasting fake faces onto the bodies of real models. Meanwhile, Business Insider asks whether AI will put an end to Gen Z's obsession with becoming an influencer, after major brands begin to show interest in using AI models instead of humans. Mashable says a miscaptioned video falsely claiming to be an Iranian missile strike on Tel Aviv was promoted as legitimate by social media platform X. The fake headline, "Iran strikes Tel Aviv with heavy missiles," was apparently generated by X's own official AI chatbot. The Verge looks at AI and copyright in the US. A new bill could force tech companies to disclose any copyrighted materials they use to train their AI models. The bill from Representative Adam Schiff would require anyone making a training data set for AI to submit reports on its contents to the Register of Copyrights.

[00:01:36]

Exam marking has long been a pet hate of teachers everywhere, but not in Texas, where they plan to replace thousands of human exam markers with artificial intelligence. It's believed the AI-graded tests could save $20 million a year. That's from TechSpot. And finally, Popular Science features AI developer Google DeepMind, which is now able to train tiny off-the-shelf robots to play soccer. We'll show you a bit of that later. With me is Priya Lakhani, who's CEO of Century Tech, an artificial intelligence education technology company that develops AI-powered learning tools. Thanks very much for coming on the program.

[00:02:16]

Good to see you, Lewis.

[00:02:17]

Great to see you. Okay, we're going to start now with this AI influencer story, so, artificial influence. We can take a look at a little bit of what we are talking about: "These AI influencers are deepfaking fake faces onto real women's bodies without permission." The subheading there is, "It's a real problem." Is that how you see it?

[00:02:43]

Yeah, absolutely. The first story is about these people that are taking the real bodies of women, mostly, and then they're putting on a deepfaked face, right? And then they're posting that and earning money from it, most of the time on competitors of OnlyFans, which is a platform where creators can essentially put up their own content. Now, in the context of OnlyFans, more creators on OnlyFans are women than men. 85% of the top 10% of earners are women. An average creator, Lewis, just to give you a bit of context, earns about $180 a month. So this is a business for some of these creators, where they're using their own images, their own bodies, and creating that content. And so essentially their content and their videos are being ripped off, because you superimpose an AI face and post it elsewhere and you make money off that content. So firstly, you're violating personal rights. But the second thing you're doing is obviously infringing on the woman's, the creator's, intellectual property. And 300-odd creators, I think, have surpassed a million dollars in annual earnings. So they have a very good right to be very, very upset about this.

[00:03:54]

So it is a real problem. And just so you get it personally, this took me 60 seconds to do, actually, if we can put it up on the screen. So I created one of you, but actually I did the opposite, because there are so many tools that allow us to do this. So this literally took me less than a minute. I've done the reverse here: I've got your face, but I've actually changed your body. You look great. Casual, Lewis. And so that's a bit of fun. But can we have a look at it in this context? So I just created this before I got here, so I didn't have time to send it to the producers. But if you look at my iPad, Lewis. So I took a video of you on the BBC. Just lift it up. I'll lift it up. We've got you on the BBC presenting a story about TikTok. So hopefully the viewers can see that. I apologize, this is really last minute. And then, through Magic Hour AI, I've swapped your face with Christian's, Christian Fraser's.

[00:04:50]

Oh my, that is a terrifying thought in every way.

[00:04:53]

But you see how this is you in your day job. I've got your body, I've got the BBC.

[00:05:01]

Yeah, it stopped.

[00:05:02]

Yeah, it's stopped. Right. You and Christian together, that's what you guys would look like. But what's interesting about that, and I think your floor manager is laughing, no one can hear that right now, the reason that's really important to show is that that is you in your day job. And so creators are obviously creating this content, which we've shown viewers, and they're earning money from it. So not only is it a personal affront, but it's also a professional affront.

[00:05:28]

And you just knocked that up pretty quickly, and pretty disturbingly.

[00:05:32]

That was in the cab on the way here.

[00:05:33]

So how do you begin to try and stop that? How do you stop it? If it's so easy, how can you try and stop it?

[00:05:40]

Well, I think some of the stories that we're going to come on to will help with this. But obviously, the input here is an infringement. It's an infringement of IP rights. There are laws to protect copyright; we're going to talk about that in a minute, in a later story. But what's really important, when we're talking about this generative AI, is to separate it into two areas. Separate it into the input, and this is really important for the rest of how regulation and law develops in this area. Separate AI into the training data: what was the input? What data was used to train the artificial intelligence models? And then the output: what came out of the other side, right? Now, actually, in the cases in the story, the reason why it's just so blatant is because the output is also substantially similar, right, to the input. Okay, so how do you deal with that? Well, there are laws to protect people in these areas, but what I find really, really difficult is the fact that the law is sometimes so expensive and difficult to access for some people. So it's not actually as fair as saying, well, you've got recourse.

[00:06:42]

Okay, let's look at this. This is very, very linked, as you were saying. Business Insider: "Gen Z's fading dream. Human influencers are being replaced by artificial intelligence, and maybe that's a good thing."

[00:06:53]

Yeah, well, so apparently a lot of Gen Z, and I've seen figures ranging from half of Gen Z upwards, that's a lot of Gen Z, want to be influencers. That's the future career aspiration of these young people. About 25% of marketers work with influencers. And again, it's a job for them, right? So when it comes to influencers, they earn a lot of money. They create content. They partner with brands, for example, and brands find this really great, because the return on investment from an influencer promoting your content is higher on average than if you just advertise your content. A brand would earn an average of just over $5 in earned media value for every $1 spent on an influencer. So it's a huge market. But the reason why this article is basically saying your hopes and dreams are no longer going to be alive, Gen Z, is because people are using AI influencers. These are completely fictitious.

[00:07:43]

Which is what we're seeing right now. So this woman here is not a real woman. That's AI on top.

[00:07:49]

No, that's AI. Absolutely. Yeah, exactly. So they're using AI to actually... Why? Because it's cheaper. So if I can do that in a few minutes, and where you're not substantially copying, and this is a different story, this is not about copying real people's bodies, but if you can create truly original output content, I should say, otherwise every techie is going to have a go at me, saying, hang on, what was the input? But you can do that cheaper. Brands can do that cheaper. You don't have to pay people money. You could run many characters on platforms yourself. You don't have to deal with any egos. That would be pretty cool, right?

[00:08:23]

I don't know why you're looking at me. A presenter ego? Absolutely not.

[00:08:25]

I have absolutely no idea what you're talking about. So this is a potential solution to that issue. But I'll tell you what could happen, which is really interesting. If you end up with AIs out there influencing and partnering with brands... The area where influencers are most used is lifestyle, and second is beauty. The point is that you trust influencers. That's why they work. Are you going to trust an AI putting cream on its face saying, my visible lines in my 40s have disappeared? Actually, that authenticity of a human, I think, will become even more valuable in the future.

[00:08:58]

Therefore, still a potential career path. We're going to move on to Mashable. We're going to whip through a couple of these stories now. "Elon Musk's X pushed a fake headline about Iran attacking Israel." Talk us through this, because this is a potential real-world warning signal.

[00:09:16]

This is what people have actually been talking about for so long. The fact that we're just seeing it play out is really, really sad for the people that have been in this space for years talking about this. Essentially, all that happened was that there was content on X that was fake news, stating that Iran was responsible for these strikes. Actually, the footage was from Ukraine, I think, launching strikes against Russian forces in Crimea. But anyway, it was fake news. Then Grok, which is Elon Musk's big investment in generative AI that operates on the X platform, took essentially this content, saw it as a trending topic, made up, and this is an AI, its own headline, made up its own contextualized story, all fake news, and then actually posted that to X users. The problem with this, obviously, is that it's dangerous, and it reminds you of that 1938 Orson Welles broadcast of The War of the Worlds, where he was on the radio talking about Martians and aliens descending on Earth. People tuned in a bit late and there was mass hysteria. Apparently, people left their homes.

[00:10:26]

The influence you can have by posting fake news is real. This is really dangerous, and disinformation and fake news have to be tackled, and we need to hurry up in terms of regulation. I mean, X is already in trouble with the EU at the moment, and I think it's only a matter of time, really, for other states to take action as well.

[00:10:47]

Okay, we're going to have a look at legislation attempts anyway. The Verge, a new bill wants to reveal what's really inside AI training data. So what's the attempt here?

[00:10:57]

Yeah, okay. So this is very much linked to all of this, right? So, the data. What's happened here is Adam Schiff, who is, I believe, and you're going to correct me, a Democrat congressman, have I got that right? He's proposed a bill to the House of Representatives, where essentially any generative AI platform would have to list, if the bill were passed, so post-bill, not the current platforms or their current training data, just to be clear, any copyrighted sources that they've used to train their models, which actually gives you that transparency over what's been used. The EU already have this in their drafts of the EU AI Act. So essentially, it would put the US on a level playing field with the EU, and people in the UK are saying we should hurry up as well. And so it gives you that transparency over what the input content has been. So when there are cases, and there are plenty of lawsuits at the moment, for example, John Grisham and a group of authors, and The New York Times, have taken OpenAI to court, it would give content creators and copyright holders the ability to essentially check if their content has been used and if copyright has been infringed.

[00:12:03]

I see. But it's not law yet, so we'll keep an eye on it, see how it develops.

[00:12:07]

It's moving quickly.

[00:12:09]

We're going to stay in the US. Texas is replacing thousands of human exam scorers with AI. This is from TechSpot. How is this going to work?

[00:12:19]

This is really interesting, right? It costs millions and millions of pounds in the UK, dollars in the US, all over the world, to mark exam scripts. They've changed the exam in Texas so that it has more long-form answers, so fewer multiple-choice and exact short answers. They're going to be using trained AI models to mark students' answers. Now, before you look really skeptical, what I would say is, if you look at GCSEs in England, for example, at the GCSE papers and long-form answers, say in a history essay, it's pretty well established that marking can be a bit skew-whiff in cases, because you have a human marking, and it can sometimes be quite subjective, even if you have a rubric and a mark scheme. We've actually developed this technology at Century as well, when it comes to literacy. What you do is train your own proprietary models. You don't just plug it into a GPT. Why? Because when a third party upgrades its own models, you don't know what changes have been made, and a prompt that you entered today could actually give you a completely different answer next time.

[00:13:18]

Then you use the exam scripts, and you essentially have constant human evaluation, which is what Texas will do. They've trained it on 3,000 scripts. It'll save lots of money, and apparently they'll need only 2,000 people rather than 6,000 people this year. But it actually is an ongoing investment, because every time an exam takes place, you take lots of scripts, you look at the scripts where there was a low probability of them being marked properly, and you retrain your models. It's an ongoing process.
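The workflow described here, auto-marking with a confidence score, routing low-confidence scripts to human markers, then retraining on their corrections, can be sketched roughly as below. This is an illustrative toy under assumed names (`ToyMarker`, `marking_round`, the threshold value), not the actual Texas or Century Tech system:

```python
# Hypothetical sketch of a human-in-the-loop marking workflow: the model
# marks every script, low-confidence scripts go to humans, and the
# human-corrected marks become new training data for the next round.

class ToyMarker:
    """Stand-in for a trained marking model that returns (mark, confidence)."""
    def __init__(self):
        self.training_data = []

    def predict(self, script):
        # Toy heuristic: longer answers get higher marks; very short answers
        # get a low confidence score, i.e. a low probability of being marked
        # properly, so a human should check them.
        words = len(script.split())
        return min(10, words), (0.9 if words >= 5 else 0.4)

    def retrain(self, human_marked):
        # A real system would update model weights here; the toy just
        # accumulates the human-corrected examples.
        self.training_data.extend(human_marked)

def marking_round(model, scripts, threshold=0.8):
    """Split scripts into auto-marked and human-review piles."""
    auto, needs_human = [], []
    for s in scripts:
        mark, conf = model.predict(s)
        (auto if conf >= threshold else needs_human).append((s, mark))
    return auto, needs_human

model = ToyMarker()
scripts = ["a full long-form answer with plenty of detail", "too short"]
auto, needs_human = marking_round(model, scripts)

# A human re-marks the flagged script; the correction feeds retraining.
human_marked = [(s, 3) for s, _ in needs_human]
model.retrain(human_marked)
```

The ongoing cost Priya mentions lives in that last step: every exam cycle produces new flagged scripts, new human corrections, and another retraining pass.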

[00:13:47]

That seems to make sense. I can't imagine marking script after script, if you're a human teacher or examiner, is the most fun.

[00:13:53]

I bet some of them find it really fun, actually. I'm not going to make any judgment. But it's efficient. It's an efficiency.

[00:14:00]

We have 30 seconds left. Do you want to watch some cute little robots playing football? Of course you do. Let's take a look at that. Just in 20 seconds, Priya, what's going on?

[00:14:14]

Essentially, DeepMind have built these mini robots playing football. They use what's called deep reinforcement learning. It's a really, really powerful technique in artificial intelligence, where you look at all the permutations of how a football play might take place. You reward the system, or you punish it if it doesn't, for example, score a goal. Then basically, through brute force, they're training it with tons and tons of data, using deep learning as well. The technique is super interesting. There you go. Will one of those be life-sized and play for Man United and help my team? I don't know. Hopefully.
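The reward-and-punish idea described here can be shown with a minimal sketch. DeepMind's actual system uses deep neural networks in simulation; this toy instead uses tabular Q-learning on a made-up one-dimensional "pitch" (all names and numbers are illustrative), but the core loop, act, get rewarded for scoring, update value estimates, is the same idea:

```python
# Minimal tabular Q-learning toy: an agent on a 1-D pitch (positions 0..4)
# is rewarded only when it reaches the goal at position 4. Over many
# episodes it learns that moving right (+1) is the valuable action.
# Illustrative only; not DeepMind's deep-RL soccer setup.
import random

random.seed(0)
N = 5                                                  # goal at position N-1
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}   # state-action values

for episode in range(500):
    s = 0
    while s != N - 1:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < 0.1:
            a = random.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: Q[(s, act)])
        s2 = min(N - 1, max(0, s + a))                 # move, clamped to the pitch
        reward = 1.0 if s2 == N - 1 else 0.0           # reward for "scoring a goal"
        best_next = 0.0 if s2 == N - 1 else max(Q[(s2, -1)], Q[(s2, 1)])
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        Q[(s, a)] += 0.5 * (reward + 0.9 * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy moves toward the goal from every position.
policy = {s: max((-1, 1), key=lambda act: Q[(s, act)]) for s in range(N - 1)}
```

The "brute force" Priya mentions is exactly those repeated episodes: the agent stumbles around until a goal happens, the reward propagates backwards through the value table, and the learned policy improves.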

[00:14:47]

He's fallen over again. It is adorable when they fall over. All right, Priya, amazing. Thank you so much. We are out of time. Thank you.

Thanks, Lewis.