[00:00:00]

You are watching The Context. It's time for our new weekly segment, AI Decoded. We've got a lot to get through, but we begin with the Financial Times and the godfather of artificial intelligence, who's issued a stark warning about the technology in a lecture at the University of Oxford. The Washington Post says top military officials at the Pentagon are meeting with AI experts to accelerate the discovery and implementation of the most useful military AI applications. Meanwhile, on the Giant Freakin Robot science website, the University of Cambridge is proposing using remote kill switches and lockouts to mitigate a potential AI apocalypse. Creative publication Ad Age argues that when it comes to creativity, having a human touch is irreplaceable, and human traits like empathy and strategic thinking make all the difference. Upworthy focuses on artificial intelligence unlocking our past, and how college students are using AI to decode an ancient scroll burned in the eruption of Mount Vesuvius. Some of these texts, they say, could completely rewrite the history of key periods of the ancient world. And finally, in The Independent, ChatGPT apparently suffered a recent breakdown, with users complaining the AI system started speaking nonsense and sending out alarming messages.

[00:01:37]

Well, I said there was a lot to get to, didn't I? With me is Priya Lakhani, who's the CEO of Century Tech, an artificial intelligence education technology company that develops AI-powered learning tools. Hello to you. Hi. Look at all your papers. You are good to go. First thing, let's go back to the Financial Times, the headline: How Fatalistic Should We Be on AI? This is a speech by Geoffrey Hinton. Why should we care about what he says?

[00:02:04]

Geoffrey Hinton, as you said, is the godfather of AI. He essentially worked with teams and invented some of the key techniques and architecture that allow us to have artificial intelligence the way that we do today. We should listen to him. He's a very, very serious, very, very intelligent individual. He's done this Oxford lecture. This is not the first time he's given this warning, Sarah. His issue, and this is a really bold statement, he said it was a very strong claim, is that he thinks that these models, these AI models, like ChatGPT, Llama, all of these different types of large language models that create this generative AI, have a level of understanding. That's a really big claim, because what most people say is, no, it's just a lot of pattern recognition. You take a lot of data, like the internet, all the data from the internet, you throw it through these models, you've got these algorithms, and it shoots out an output. The output with AI is always a probability: is the probability of this word the highest of it being the correct word? Whereas he's saying they're starting to display a level of understanding, and that's quite a scary proposition.
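
A minimal Python sketch of that idea, with a made-up three-word vocabulary and made-up scores: the model's raw output is a score for every word, a softmax turns those scores into probabilities, and the highest-probability word becomes the next token.

```python
import math

# Toy next-token prediction: the model does not output a word directly,
# it outputs a score (logit) for every word in its vocabulary.
# The vocabulary and scores below are invented purely for illustration.
logits = {"dog": 2.1, "cat": 1.3, "banana": -0.5}

def softmax(scores):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {word: math.exp(s) for word, s in scores.items()}
    total = sum(exps.values())
    return {word: e / total for word, e in exps.items()}

probs = softmax(logits)
next_word = max(probs, key=probs.get)
print(probs)       # roughly {'dog': 0.66, 'cat': 0.29, 'banana': 0.05}
print(next_word)   # 'dog' -- the highest-probability word is chosen
```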

[00:03:05]

They aren't just statistical pattern recognition. So they're learning. They're learning. In this article, John Thornhill, who's one of my favorite writers at the FT, also talks about Noam Chomsky and how he contrasts human linguistic abilities, rooted in genetics, with machines that lack an inherent understanding of language. The issue is that you can't ignore what Geoffrey Hinton is saying. But at the same time, I have to say that I'm on the side of his critics, who are saying, Look, how does this stuff actually work? When we get to one of the later articles, you'll see how some of these models can spurt out a load of nonsense, and they're not quite human. But what's really important, and true, about his warnings is that he's warning about essentially racing ahead with the development of the technology. Why? Because he's saying this could mean massive job displacement, disinformation, and deep fakes, which we've covered in pretty much every segment of AI Decoded. Then he says, what if AI evolves with these intentions to control? Just to give you a little bit of context, there was a research paper by a group who simulated the Othello board game.

[00:04:14]

All they did was train a GPT with the moves of Othello, the board game, but they didn't give the GPT the rules. They didn't train it with the rules of the board game. Actually, what they found at the end, through this bit of research, was that the GPT understood the board game, what it was like, and the rules. And so there's an argument that eventually these language models could understand the rules of the world and world order. And then we can go to that really, really fantastic abductive reasoning test, the duck test. And there you can say, look, if it swims like a duck, if it quacks like a duck and it looks like a duck, it's probably a duck. And so that's where you get this huge issue of people saying, does it matter if it understands or if it's simulating understanding?
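
For a rough sense of how researchers test a claim like that, here is a minimal sketch of the "probing" idea behind the Othello-GPT result, using scikit-learn and random stand-in data rather than a real model: if a simple linear classifier trained on the model's internal activations can predict the state of each board square, the board state must be encoded inside the model, even though it was only ever shown move sequences.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in data: in the real experiment the activations come from a GPT
# trained only on Othello move sequences, and the labels are the true
# state of one board square after each move. Here both are random,
# purely to show the shape of the probing technique.
rng = np.random.default_rng(0)
hidden_states = rng.normal(size=(1000, 512))  # hypothetical per-move activations
square_state = rng.integers(0, 3, size=1000)  # 0 = empty, 1 = black, 2 = white

# A linear "probe": if this simple model predicts the square accurately
# on held-out moves, the board state is recoverable from the activations.
probe = LogisticRegression(max_iter=1000)
probe.fit(hidden_states[:800], square_state[:800])
print("probe accuracy:", probe.score(hidden_states[800:], square_state[800:]))
```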

[00:04:57]

So many big issues. Let's move on to the Washington Post, because this is the military. And one of the things Hinton was actually talking about was pointing out that governments are putting profits before safety. And when that comes to something like the military, I think everybody's alarm bells start ringing.

[00:05:14]

They do, although... The Washington Post have got this piece about the Pentagon talking to tech industry leaders, saying, How can we leverage AI in the future, but safely? But actually, what AI is fantastic at doing is analyzing a lot of information faster than humans can. What they're initially talking about is, for example, intelligence gathering. For example, if you have conversations where you have the transcripts of what people are saying, for a human to go through and analyze a vast amount of that takes an enormous amount of resource and an enormous amount of effort. Can we leverage artificial intelligence for those purposes, say for intelligence analysis? Then they've looked at training officers. Can we train members of the military on wartime scenarios? Although, two weeks ago, we covered a story with Christian Fraser on this program from The Mail. The Mail basically had a terrifying study where they looked at AI, large language models, and what they would do in five military conflict scenarios, and nearly all of them went to war, and some of them went nuclear pretty quickly. The AI is clearly not there, and that was very well recognized during this particular conference.

[00:06:18]

But there are efficiencies that can be gained, and they're not going to stop. It's just like nuclear. Do you want nuclear weapons in the world at all, ever? No. But the problem is, there's an arms race. There's a race for countries to build AI quicker than the other country over there.

[00:06:33]

I did think it was... I thought it was quite interesting in the article where it talks about how, on Tuesday, the Pentagon began meetings with tech industry leaders talking about AI. Normally, the military are out in front, and they're using the technology before everyone else has even caught up.

[00:06:46]

I think where that's from is further down in the article, and I'm pretty sure it's not that they've only just begun meetings. However, OpenAI, really interestingly, removed restrictions against military applications from its usage policies page only in January. I think that it is possible that, because they were having those meetings at this conference, there are lots of people vying for military contracts, which are nice and juicy and large, aren't they? It seems like there'll be commercial relationships, although the British were there, and the British said, actually, we're going to develop our own large language model solution, our own big AI solution, because we've got concerns that our staffers otherwise might be tempted to put very, very sensitive data into these other models that are operated and owned by third parties, and obviously, that's not safe.

[00:07:27]

Can we move on to the Giant Freakin Robot article? AI apocalypse kill switches could save humanity. Nothing like dramatic headlines.

[00:07:37]

It is a really dramatic headline. It's talking essentially about having a switch that's fitted into the underlying hardware. What's really, really important about this is that you've got these three key components to have artificial intelligence. You have all this data that you train these algorithms on, you have the algorithms, and then you have compute power. You've got the compute power that's used to execute the algorithms and execute the models. For the compute power, you have chips. So NVIDIA, for example, you've seen the share price rocket recently. You've got these computer chips that are trading anywhere between $20,000 and $40,000 a chip. And they basically go through a design, fabrication, and testing phase, and then they're distributed to these data centers. You don't have AI without them. This paper is brilliant. I read the whole thing. And what they're saying is, Look, it's actually quite difficult to regulate the software producers, the people who are developing the algorithms and creating the training data, because they could be anyone. But the supply of AI chips is highly inelastic. There are only a handful of parties in the world that do that.

[00:08:41]

Actually, if we create governance models over the producers of the chips, then you have at least a point of governance that could be pretty critical for policymakers if you want to have visibility and track and assess the development of AI. If you want to influence who can actually create AI, you can stop chips being given to various parties where you think they might use them for nefarious purposes. Then you can enforce standards. If you see a rogue output somewhere out there in the world, can you trace the output to the AI model, the AI model to the data center, and then essentially switch it off?

[00:09:17]

And then switch it off.

[00:09:18]

That model. Then it has a brilliant suggestion, which is to require compute providers to have something similar to what banking providers have, like KYC, know-your-customer checks. The idea is that you can then have that traceability. Honestly, for anyone who's interested in regulation and governance who hasn't read that paper, it's absolutely brilliant, and I recommend that they do.

[00:09:37]

Okay, there we go. That's an interesting one. Now, this is really absolutely fascinating, from Upworthy. The concept of a scroll that was burned almost to a crisp when Mount Vesuvius erupted 2,000 years ago, but now, using AI, people are able to decipher what's written, and that is quite astonishing.

[00:09:56]

It's astonishing. They're using computer vision, 3D scanning, and then machine learning to essentially be able to see it. So they virtually, and there's a film of this, they virtually unwrap the scroll, and it's brilliant. You can see it in the scroll challenge, the Vesuvius Challenge. The idea is that the ink is actually invisible to the human eye. They use these scanning techniques, and then they read the text using AI. Then these teams are asked to help decipher them. It was this amazing Dr. Seales who pioneered all of this. They had some of the AI models, but then they had a small team. By launching a challenge, and this is what's great about AI, if you launch a challenge, you get lots more people involved, lots more data, and then actually you can do a lot more. It's phenomenal. This is the intersection of archeology and artificial intelligence. Of all the examples I've ever given of AI on this program, I've never really talked about archeology, and it is absolutely phenomenal.
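
As a very rough sketch of the machine-learning step being described, here is what an ink-detection model can look like in Python, with entirely made-up data standing in for the real scans: small patches of scan intensities from the virtually unwrapped surface are labelled ink or no-ink where the ground truth is known, a classifier is trained on them, and it then predicts where the invisible ink sits on unread parts of the scroll.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Stand-in data: in the real Vesuvius Challenge pipeline, each sample is a
# small patch of CT-scan intensities around a point on the virtually
# unwrapped papyrus surface, labelled from fragments where ink is visible.
# Here the patches and labels are random, purely to illustrate the shape.
rng = np.random.default_rng(1)
patches = rng.normal(size=(500, 7 * 7))   # hypothetical 7x7 intensity patches
has_ink = rng.integers(0, 2, size=500)    # 1 = ink present, 0 = bare papyrus

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(patches[:400], has_ink[:400])

# Predict ink for unseen patches; stitching predictions like these back onto
# the unwrapped surface is what makes the hidden text readable.
print(model.predict(patches[400:410]))
```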

[00:10:53]

Well, I'm very pleased. I'm very pleased that you started here. But it is incredible. I think of artificial intelligence as such a futuristic thing, yet it's allowing us to learn about our past, our ancestors. It is fascinating... Yeah, what we're going to uncover about philosophy will be mind-blowing, I'm sure. Can we move on to Ad Age?

[00:11:11]

You and I talked about this already.

[00:11:13]

We did. Creatives and AI: why the human touch is irreplaceable. Now, I read this. It felt like a bit of a motivational tool. I don't know. Saying to ad people, Don't worry, AI is not going to take your jobs. You're fine. We still need the creative. But I'm not sure I was convinced, if I'm honest.

[00:11:31]

No, you shouldn't be. You shouldn't be. This is a very lovey-dovey piece, and I'm not sure whether I love it or loathe it. It's very sweet, by the way, whoever writes it. But all over social channels, you've seen: AI won't replace humans, but humans using AI will replace humans without AI. Now, this piece is all about, look, your human touch is irreplaceable when it comes to marketing tech. Can we just be a little bit practical here? That's true, I think. In the future, we'll crave that human connection. It will actually probably be what makes some things just very different. But at the moment, on this point about not replacing you as a marketer, there are many marketing jobs that I'm sure will still be out there, but there will also be many that won't, because of AI. At the moment, when you can increase global value by $15 trillion by 2030 and increase GDP globally by 28%, what companies are doing, 100%, is saying, Can I use these tools to automate and replace, to augment my labor force and make them more efficient, and to personalize? When it affects your OpEx, your operating expenditure, and you can reduce your labor costs, then that is what some companies will do.

[00:12:33]

There is some truth to the article. I have some sympathy for it, but it was written by a brand creative agency for marketers who, I'm assuming, are their clients at large companies. It's a very sweet article, but I just don't think it's true.

[00:12:48]

But it's essentially saying you still have your human creativity, and actually AI can enhance it. That is what the article is saying. There is some truth in that.

[00:12:55]

Some, although Geoffrey Hinton might disagree.

[00:12:59]

Going back to Geoffrey Hinton. Let's finish with ChatGPT which, for many people who don't know much about artificial intelligence, is probably the one thing they will have really heard about and may have even used. It's quite user-friendly, quite big. Apparently, it's had a meltdown, and it's been sending alarming messages to users. This is according to The Independent.

[00:13:17]

Yeah, so it did. It started sending out loads of gibberish to people. At one point in the past, actually, it was being really sassy and lazy. Look, it could either be bugs in the system, errors in the training data, issues with the algorithms, or what people are calling the temperature: if you set the temperature of the models higher, they become more random and more creative.

[00:13:37]

What does that mean, set the temperature?

[00:13:38]

Yeah, it's where you set, essentially, the ability for the model to start being a bit more random, so it just becomes super creative. You can set that to a point where it just spouts a load of nonsense. But this then goes back to saying, Well, look, do humans do this? If it's understanding the world and understanding how we operate, and it's already been trained, it's already been out there for so long, would we do that if our brains were intact? Just a few hours ago, and it's hot off the press, Google disabled the ability for images of people to be created on Google Gemini, because we saw in the last 24 to 48 hours it was generating misleading images of people of different races in terms of historical context. You can start to see this is all about the training data. It is about the algorithms and then about the output. It's highly mathematical. But at the same time, I have to heed Geoffrey Hinton's advice, because he knows what he's talking about in many senses, and that debate will continue. It's the hottest question in the field at the moment.
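
A minimal Python sketch of what "setting the temperature" means, with a made-up vocabulary and made-up scores: the model's raw scores are divided by the temperature before being turned into probabilities, so a low temperature makes the output almost deterministic, while a high one flattens the distribution until unlikely words, and eventually nonsense, start getting picked.

```python
import math
import random

# Made-up vocabulary and raw scores (logits), purely for illustration.
logits = {"the": 3.0, "a": 2.0, "purple": 0.1, "gibberish": -1.0}

def sample(logits, temperature):
    """Sample one word after temperature-scaling the scores."""
    scaled = {w: s / temperature for w, s in logits.items()}
    total = sum(math.exp(s) for s in scaled.values())
    probs = {w: math.exp(s) / total for w, s in scaled.items()}
    return random.choices(list(probs), weights=probs.values(), k=1)[0]

print([sample(logits, 0.2) for _ in range(5)])  # low temperature: nearly always "the"
print([sample(logits, 5.0) for _ in range(5)])  # high temperature: far more random
```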

[00:14:38]

Well, let's leave it on the hottest question. That's a good place to leave. Priya Lakhani, thank you so much. We are out of time. You will be pleased to know we will be doing this again same time next week.