
Transcript

[00:00:02]

This is MIT Technology Review. Well, let's go. In Machines We Trust. I'm listening to a podcast about the automation of everything. You have reached your destination.

[00:00:26]

Hi, I'm Jennifer Strong, host of a new series about trust, artificial intelligence, and how these impact our daily lives. If seeing is believing, how do we begin to navigate a world where we can't trust our eyes or ears? And so you know what you're listening to: it's not just me speaking. I had some help from an artificial version of my voice, filling in words here and there. Meet Synthetic Jennifer. Hi there, folks. I can even click to adjust my mood.

[00:00:57]

Hi there. Yeah, let's not make it angry.

[00:01:02]

Privacy, bias, deepfakes, explainability: these are the most talked-about issues in AI, and they all come down to trust. We don't always notice as it blends right into our routines, but we rely on AI to make us more productive, manage our health, even match us on dating apps. So what's next? This is about the dismantlement and really the reorganization of work itself. We'll meet people designing that future. So we build algorithms that can understand your facial expressions, like your smiles or your frowns or eyebrow raises, and map that into an understanding of what your emotional and mental state is.

[00:01:43]

We had to build this AI system. This is a very large body of knowledge that no human doctor can hold in their head.

[00:01:51]

It's simply impossible. And beyond the gains, we'll also dig into thorny questions about ethics, consent, and fairness.

[00:02:01]

It isn't just about visas anymore. It isn't just about predictive policing anymore. Do I get benefits? Am I thought to be a benefit fraudster? Should I get bail? Is my child at risk of being abused, and maybe needs to have a social worker come and talk to us, or at risk of maybe being taken into care? We're still thinking about how to responsibly use these technologies and what the rules are going to be. We only figured out that cars need seatbelts after we started driving, and that's going to happen here, too.

[00:02:29]

But tech companies are filling more and more roles that used to belong to governments. This is about corporate power, and this is about the way in which these companies are producing technologies that make fantastical claims, almost always hidden behind veils of trade secrecy. They're unaudited. They are unexamined. I think we need consumers to step up and say, this is the kind of AI, or this is the kind of tech company, I'm going to buy from or partner with. And they can't do that if they don't really understand the consequences.

[00:02:59]

Right.

[00:03:00]

And as AI gets more sophisticated, those who would regulate it understand it even less.

[00:03:06]

The problem with this digitization of government, in buying these software systems, is that it risks kind of hiding them behind a process that looks technical, that looks neutral, and, kind of, well, it doesn't have anything to do with humans anymore.

[00:03:20]

It doesn't have to do with our decisions, right? It's just the recommendation of the software system.

[00:03:27]

In Machines We Trust, a podcast from MIT Technology Review, is coming soon to an airport near you. So please hit follow or subscribe on Apple, Google, Spotify, or wherever you get your podcasts. In Machines We Trust: a podcast about the automation of everything.