[00:00:00]

Elon Musk is suing OpenAI for breach of contract. The billionaire entrepreneur says the US firm is now putting profit before its founding principle of developing AI responsibly. Mr. Musk, who helped set up the firm, says Microsoft has plowed billions into OpenAI and has, in effect, turned it into a subsidiary. The two companies deny the claim, but US regulators are investigating the parameters of Microsoft's investment. Mr. Musk left OpenAI in 2018 and has since set up his own rival. He has warned before that the unfettered use of generative artificial intelligence could pose an existential threat. Microsoft also just invested in one of the emerging AI companies in France this week, and it is already under investigation, Annabelle, over competition laws. Do you think Microsoft is starting to suffocate the space there is for innovation?

[00:00:54]

Certainly, I think it's problematic when it comes to artificial intelligence. It's a field where we need as many contenders as possible able to climb the ladder, and fewer large organizations creating this revolutionary technology but pulling the ladder up behind them. I think there are serious questions to be asked about Elon Musk's motivations: the world's richest man complaining about profit-making over benefit to humanity is something some may view as a little self-serving. That said, there are very serious questions that we need to ask at the government level across the world about whether AI is going to benefit humanity or whether it is going to harm mankind. If we have fears over the latter, which certainly some politicians have voiced, then how are they going to regulate it? In the UK, at least, the government was very slow to regulate the internet. When it eventually did, with the Online Safety Bill, it produced a very wide-reaching piece of legislation that arguably encompassed far more than it needed to, without addressing the fundamental concerns we have over safety online.

[00:02:09]

Yeah, I was about to ask you, Ian, whether it's actually a better thing that the big companies are in charge of this and controlling the rollout of this extremely powerful technology. But Annabelle makes a very good point that the big companies that have had charge of social media, Meta, Twitter, TikTok, have not done a very good job of that. So maybe we'd be better going the other way.

[00:02:38]

They've done a horrible job of it. Social media has basically regulated itself, which meant that it had no interest in taking responsibility for the well-being of either the people on its platforms or the political systems it operates in. And democracy and our citizens are worse off as a consequence. Now, when it comes to AI, governments are certainly trying to get a move on governance with much greater urgency, in part because of the lessons they have learned from the hands-off approach to social media. And companies are trying, with varying degrees of success, to cooperate with those governments. But it's not at all clear that this is going to succeed. The technology is moving a lot faster than the governments are, and that means the business models are going to matter a lot more for the governance. I do worry that when we think about the future of AI in three or five years' time, it's quite possible, given the amount of compute required and the amount of energy required to run that compute, that only a very small number of systemically important companies, maybe even working with governments, will be capable of operating at the cutting edge.

[00:03:52]

That will require much more assertive governance, if that's where the technology goes.

[00:03:58]

Just in terms of a quick thought from both of you, because we are a bit pressed for time. One social media post I saw was from Miles Taylor, one of our panelists this week, who has been talking to Senator Mark Warner, Ian. He said they have a dossier in their hands showing that the Russians are now able to create bots and personas on social media using AI in a much more efficient, much more dangerous way than they were able to in 2016 and 2020. The concern was that Senator Warner thinks we've just not had a conversation, really, within Congress about how they're going to stop that or how they're going to tackle it.

[00:04:35]

That's right. It's going to require a crisis first. We already saw one deepfake with a robocall pretending to be Biden in the run-up to the New Hampshire primary. There's going to be a lot more of this, and the companies are going to have to respond very, very quickly with governments as things start to break. We had that with cyber: an entire cybersecurity industry came up after things started to break. That's going to have to happen with AI.

[00:05:00]

Annabelle, what about this side? Because I know that there is an intelligence-led approach to it; MI5, I think, is involved. Are you convinced by what the government is saying about protecting the upcoming British election?

[00:05:14]

I certainly have concerns. Now, the country at the moment is being run by tech bro Rishi Sunak, who has wanted to try and lead the world on AI regulation; he hosted a global summit last autumn to that effect. But there are very serious concerns. What really worries me is the pace of change. It was ten years ago that we were worried about the internet and its ability to be weaponised in order to spread disinformation and discredit certain candidates in an election. Now we have the rise of AI and, as Ian says, deepfakes, which are going to mislead members of the public at a speed that may not allow corrections before they go to the polls. Certainly, there's a very real concern there, and I'm not sure the government yet has the tools to address it.