[00:00:02]

This is MIT Technology Review. This place was eerily quiet the first weekend of the curfew. There were very quiet police cars driving very slowly around here as well.

[00:00:16]

That's Toussaint Morrison showing my producer, Ryan Mosley, around a park in Minneapolis, Minnesota. It's a two-and-a-half-mile drive from where George Floyd was killed on May 25th, sparking what's likely to be the largest protest movement in American history.

[00:00:32]

About a half mile east was where an attempt was made to burn down the Fifth Precinct police station.

[00:00:37]

Morrison is a musician, actor, filmmaker, and in the past few months, he's also become an organizer of the Black Lives Matter movement in Minneapolis.

[00:00:46]

The anger is absolutely, perfectly justified. His skin is beautiful, his mind, his spirit and, you know, had to be. He's well known here. Activists know who he is.

[00:00:58]

Government officials know who he is. And he thinks the police probably do, too.

[00:01:03]

Have you had any conversations with people who are worried about being identified by police officers who might have photos of them at the protests, being part of the protests? There's definitely a fear. I learned about that a little bit too late. My government name and face is out there. So, yeah, that has definitely been a concern. And I've heard it from black folks to white folks to everything-in-between folks.

[00:01:28]

Although he says protesters are more concerned about their physical safety right now, such as being hit by a car. Identity outing has not been something I've heard as much as being fearful for one's life at an actual march.

[00:01:43]

We know from an investigation by BuzzFeed that police here have access to lots of surveillance tech, including Clearview's facial recognition software, which you'll remember from episode two. Plus, they have stingrays that act as cell towers to grab mobile data, an audio surveillance system called ShotSpotter, and a camera system with video analytics layered on top. Federal agents also flew a Predator drone over the protests. And Morrison believes these tools are targeted differently towards communities of color.

[00:02:14]

Now they don't need a reason to be suspicious. Oh, I saw that your face is at this rally, and I see that it lines up with something else. Well, what if the technology is wrong? They're not going to think that you possibly, you know, because they're trusting the computer. So that technology is heightening a danger that's already dangerous, and it's already pointed at people of color, disabled people, trans folks. We're the brunt of that technology. So when you create that technology, who does it affect the most?

[00:02:42]

I'm Jennifer Strong, and this is part four of our series on police and facial recognition, where we'll explore the way forward and what regulation might look like.

[00:02:52]

I don't know what's going to happen, but what I do know is that there will be no going back. You know, there's not going to be any going back.

[00:03:02]

Welcome to The Age of With. Predict what's possible in the age of with, then translate insight into trustworthy performance. Deloitte brings together end-to-end offerings with domain and industry insight to drive stronger outcomes through human and machine collaboration. Hmm, well, let's go. In Machines We Trust. I'm listening to a podcast about the automation of everything.

[00:03:36]

You have reached your destination. We don't know if police are using face recognition on the current wave of protests, but we do know two things. One, that many of them have the ability to do so. And two, it's happened in the past. Jameson Spivack is a policy associate at the Center on Privacy and Technology, an independent think tank based at Georgetown Law School.

[00:04:00]

Back in 2015, police in Baltimore used social media tracking on people protesting the death of Freddie Gray. Facial recognition helped police identify protesters with outstanding warrants, and they arrested them directly from the crowds. Spivack is worried about what this means for free speech if it continues.

[00:04:20]

This is really troubling because it discourages political speech and participation, which is protected by the First Amendment. So if people think they're being identified or arrested for a crime completely unrelated to the protests, they're not going to attend. So this is targeting and discouraging black political speech specifically and more broadly, it shifts the balance of power significantly towards governments. It gives them the ability to identify and track many people from a distance and in secret. And the government has never had the ability to surveil the public like this.

[00:04:52]

This is essentially a workaround that allows police to run warrantless searches.

[00:04:58]

This is how he sees regulation starting to take shape.

[00:05:01]

One option that has already been discussed, and already has been put into place, is to ban police use of this technology. Overall, it's flawed, it facilitates an unprecedented level of government surveillance, and police have been shown to misuse it. Similarly, another option is to place a moratorium on police use of facial recognition. And what this does is it gives the public and elected officials time to get up to speed on what this technology is, how it works, and how police are using it.

[00:05:29]

And then another option is to just pass regulation that allows police to use it, but has certain restrictions on their ability to use it.

[00:05:37]

Over the course of this reporting, we've spoken to several people who believe reforming the use of face ID just isn't possible. The ACLU says it should be nationally outlawed.

[00:05:48]

And so I'm curious what type of regulation he thinks would really make a difference. Things like requiring a probable cause-backed search warrant for any face recognition search, restricting the use to violent felonies, and prohibiting the use of face recognition for immigration enforcement. Narrow bans on the use of facial recognition in conjunction with things like drones, or in police-worn body cameras, or for ongoing surveillance, because face recognition should not be used in life-or-death situations. Another thing is to have mandatory disclosure to defendants that police used face recognition to identify and then eventually arrest them.

[00:06:27]

But even if those regulations don't come to pass, he says, we need testing to make sure it's accurate and it's not biased. Having reports about how it's used, and transparency, those are all good and they're all needed, but they're not enough. We really need these deeper reforms, he says.

[00:06:45]

It can't just be up to the companies making the technology to be responsible for the rules that govern it.

[00:06:50]

We need to be very vigilant and ask ourselves, are the things that these companies are supporting in terms of legislation, are they really going to protect people? Or is it just a way for the companies to have clarity about how the technology is regulated, but not really regulate it in a way that's strong enough, that actually protects people and that actually affects the companies' ability to produce it? I don't think that they are going to voluntarily give up selling this technology.

[00:07:18]

So it's really on lawmakers to step in. Most of the companies that are developing face recognition for police and for the government are smaller, more specialized companies that most people have not heard of.

[00:07:32]

One of those companies you've likely never heard of is NtechLab, even though it first made waves about five years ago when, as a brand-new startup, it beat Google and won an international competition, scoring 95 percent accuracy in one of the categories. Since then, the Russian company has repeatedly won biometrics competitions held by companies such as Amazon, by U.S. government agencies, and by universities. And the founder of the company is this man, Artem Kukharenko. NtechLab is best known for its app called FindFace, which lets people search social media profiles with photos on their phones.

[00:08:09]

FindFace.

[00:08:09]

Is it meant for a certain group of people, or do you want it to be available to anybody on social media?

[00:08:14]

It was available for anybody on the Internet.

[00:08:19]

This is how John Oliver described the app during a recent episode of HBO's Last Week Tonight.

[00:08:23]

If you want a sense of just how terrifying this technology could be if it becomes part of everyday life, just watch as a Russian TV presenter demonstrates an app called FindFace. If you find yourself in a cafe with an attractive girl

[00:08:37]

and you don't have the guts to approach her, no problem. All you need is a smartphone and the application FindFace.

[00:08:45]

The man in this video uses the app to take a photo of a woman at a different table. It instantly pulls up her profile on Russia's version of Facebook.

[00:08:53]

Just imagine that from a woman's perspective. I'll pick you up from your place at 8:00. Don't worry, I already know where you live.

[00:09:01]

The app was a viral hit, but these days NtechLab's attention is on live facial recognition, meaning the algorithm works on video, in real time. A system they installed in the city of Moscow is believed to be among the largest of its type in the world right now.

[00:09:16]

More than 100,000 video cameras are connected to the system, and the system proved to be very helpful and useful to the city.

[00:09:30]

So, 100,000 video cameras capturing a million faces per month. And he claims the system is very, very accurate. So it's only one false accept in 10 billion comparisons. It's a one with 10 zeros.

[00:09:46]

That kind of accuracy is unheard of, but we can't say it's impossible either. And I'll get to why in just a moment. What we do know is it's much harder to achieve accuracy on live video than it is on photos.
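To put that one-in-10-billion claim in perspective, here is a rough back-of-the-envelope sketch, in Python, of how many false accepts a system could still produce at the scale described above. The false accept rate and the million-faces-per-month figure come from the interview; the watchlist size is a hypothetical number added purely for illustration.

```python
# Back-of-the-envelope sketch (not from the episode's sources): how many false
# accepts a face recognition system would produce at the scale described above.
# The watchlist size is an assumption for illustration; the other figures come
# from the interview claims.

FALSE_ACCEPT_RATE = 1 / 10_000_000_000   # claimed: 1 false accept per 10 billion comparisons
FACES_PER_MONTH = 1_000_000              # faces captured per month, per the interview
WATCHLIST_SIZE = 100_000                 # hypothetical number of identities to match against

comparisons_per_month = FACES_PER_MONTH * WATCHLIST_SIZE
expected_false_accepts = comparisons_per_month * FALSE_ACCEPT_RATE

print(f"Comparisons per month: {comparisons_per_month:,}")
print(f"Expected false accepts per month: {expected_false_accepts:.1f}")
# With these assumptions, 100 billion comparisons yield about 10 false accepts
# a month: even a tiny per-comparison error rate still produces real-world
# misidentifications at city scale.
```

The point is only about arithmetic at scale: even a vanishingly small per-comparison error rate can still translate into regular misidentifications when billions of comparisons happen every month.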

[00:09:58]

Earlier in the series, we talked about trials of live facial recognition by London police that produced an accuracy rate of about 20 percent, and another one in New York City that during its testing period didn't produce even one correct match. But in Moscow, Kukharenko says his system is being used to solve crimes in real time, including at the world's largest soccer competition, the FIFA World Cup, in 2018.

[00:10:25]

In Moscow, more than 100 criminals were caught due to the system. NtechLab works with more than 100 clients in 20 countries, including U.S. chip maker Nvidia and the Chinese telecom Huawei. It also has smart city projects in Dubai, fintech projects in Europe, and retail partnerships in North and South America. The company submits some of its algorithms for testing by the U.S. government, but he says they can't do that for their most advanced work, because NIST, the National Institute of Standards and Technology, tests facial recognition algorithms on photos, and his latest systems use video. This is quite far from real life scenarios.

[00:11:09]

And if government bodies don't catch up with the tech, companies are more or less left to audit themselves.

[00:11:15]

The companies in the field have their own tests. In our company, we have a lot of different tests before we send it to production. But the problem is that there is no independent test which will be open, where anyone could see and anyone could test algorithms.

[00:11:38]

This summer, NtechLab added silhouette detection to its video platform. It's used to identify people in profile.

[00:11:45]

They've also taken on a new role with the global pandemic: measuring the distance between people and finding areas where a lot of people are standing close to each other, so that the city could improve the processes which are happening in these areas. It's also helped to stop the expansion of coronavirus.

[00:12:09]

That's Moscow now. But as we are in the middle of this global pandemic, how well does the technology work?

[00:12:15]

When someone's wearing a mask, it works with the same accuracy as without a mask. So it's almost the same, of course. And we also have a special algorithm which could tell whether there is a mask on a person and whether it's correctly worn or not.

[00:12:35]

Wearing a mask has typically caused the accuracy of these systems to drop, including in a pre-pandemic test of NtechLab by NIST. But the pandemic has created something of an arms race between companies trying to build systems that can read masked faces. The agency also says the company's algorithms are often among the more accurate ones it tests.

[00:12:55]

We simply don't know.

[00:12:57]

But masked or not, face recognition isn't the only thing happening on those video feeds. There's also license plate recognition and other kinds of detection.

[00:13:05]

And we combine all this video analytics together, so that it can work as a whole system and extract as much information from the video stream as possible. Ideally, the system will be able to extract as much information as humans can see in the video, but algorithms can do it with a much better speed. And if a human can process only one video stream at a time, the system could process hundreds and thousands of video streams at a time. Do you ever worry that somebody might take all of your hard work and use it to build a world you don't really want to live in?

[00:13:49]

I don't actually believe in this scenario, because it's a good scenario for somewhere, but it's a very unlikely scenario in real life. As a technology company, we will always try to tell people, so that people understand and make their decision whether they want it or not.

[00:14:13]

Like the founder of Clearview, he says it's up to us, everyday people and citizens all over the world, to decide whether and how to live with this technology. Now, considering the many issues around transparency and accountability, it would seem not quite that easy. But there are people trying hard to shoulder that responsibility, and you'll meet one in just a moment. At Deloitte, we believe the age of with is upon us. It's happening all around us: shared data, digital assistants, cloud platforms, connected devices.

[00:14:51]

It's not about people versus A.I., it's about the potential for people to collaborate with A.I. to discover and solve complex problems. But for organizations to enable innovation and break boundaries, they must harness this power responsibly. Deloitte's Trustworthy A.I. framework is a common language for leaders and organizations to articulate the responsible use of A.I. across internal and external stakeholders. Prepare to turn possibility into performance.

[00:15:16]

To take a closer look at the Deloitte Trustworthy A.I. framework, visit Deloitte dot com slash U.S. Trustworthy A.I. I guess the journey to where we are today has arisen from first sort of cracking the rose-colored lenses of "this is a technology that works" and demonstrating that it doesn't work for very specific people. And then later on, opening up this conversation around what does it actually mean for facial recognition to work? I'm Deborah Raji, and I'm a tech fellow at the AI Now Institute.

[00:15:52]

It's based at NYU and works to understand the social impact of facial recognition and other A.I. technologies.

[00:15:59]

How can we actually begin to have conversations around its restriction, around disclosure of its use? And how does that play out in terms of policy restrictions?

[00:16:09]

As an A.I. researcher, she has a superpower most of us don't. She can audit the algorithms that make face ID products work, so long as companies provide access, and her efforts are forcing change. The spark that sent her down this path came from what she describes as a horrible realization during her college internship at a machine learning startup.

[00:16:28]

Wait a second. Facial recognition doesn't actually work for everybody.

[00:16:32]

She was working on a computer vision model that would help clients flag inappropriate images as not safe for work. The trouble was, it flagged photos of people of color at a much higher rate. So she looked for the problem and she found it. The model was learning to recognize not safe imagery from pornography and safe imagery from stock photos. It turns out porn is much more diverse and that diversity caused the model to automatically associate dark skin with salacious content.

[00:17:03]

The startup refused to do anything about it, so she went to work on these issues with a woman we met earlier in the series, Joy Buolamwini, who as a grad student made a more diverse and balanced data set, and they used it to audit algorithms and face ID products already out on the market. This work has a whole lot to do with today's understanding of how these products fail women and people of color.
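The heart of that audit approach can be sketched in a few lines: rather than reporting one aggregate accuracy number, you break error rates out by demographic subgroup and compare them. The snippet below is a minimal, hypothetical illustration in Python, not the actual audit code or data.

```python
# Minimal sketch of a disaggregated audit: instead of reporting one overall
# accuracy number, break error rates out by subgroup and compare them.
# The records and group labels below are hypothetical, for illustration only.
from collections import defaultdict

# Each record: (subgroup, whether the face ID system identified the person correctly)
results = [
    ("darker-skinned women", False),
    ("darker-skinned women", False),
    ("darker-skinned women", True),
    ("lighter-skinned men", True),
    ("lighter-skinned men", True),
    ("lighter-skinned men", True),
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, correct in results:
    tallies[group][0] += int(not correct)
    tallies[group][1] += 1

for group, (errors, total) in tallies.items():
    print(f"{group}: error rate {errors / total:.0%} ({errors}/{total})")
```

An aggregate score would blur the two groups together, while the per-group breakdown shows every error landing on the same group, which is exactly the kind of disparity those audits surfaced in commercial face ID products.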

[00:17:25]

But it came at a cost.

[00:17:27]

The computer vision community at that time wasn't having these conversations around ethics and society and fairness like now. We're a lot more comfortable with this work. But there was a time when even the research community was very hostile and questioning, sort of like, what is this? What's your point here? What's the significance of this? That changed over time.

[00:17:49]

And she says she's found support within the companies they audit, too.

[00:17:53]

And even though their institutional level or corporate level stance was defensive, these individuals within these companies fought really hard to change the position of their companies and to push for some of these positions that we see today.

[00:18:05]

Amazon and Microsoft recently put a pause on selling their face ID systems to law enforcement. IBM stopped working on it altogether.

[00:18:14]

There's this kind of additional acknowledgement with these moratoriums to say, wait, actually, while this nuanced conversation is happening with respect to establishing this policy and this regulation that we desperately need, we're not going to sell that technology at the same time. And I think that realization and that gap is an important step forward in the conversation.

[00:18:32]

So there is urgency to this moment, which she is calling a pause. And during this pause, there's a lot we need to sort out. But if we rely on tech companies to go all the way with regulation, they're always going to fall short.

[00:18:45]

And she warns face ID is just the tip of the iceberg. It's much easier to have this conversation about faces than it is to have it about insurance data or medical data or even some of these Social Security systems, even though this exact situation of this disproportionate performance also applies to those cases.

[00:19:07]

So going forward, she wants disclosure and transparency. She's also calling for a proper evaluation system.

[00:19:14]

Ultimately, though, a lot of the power is in the hands of the policymakers because big tech companies should definitely not be controlling the conversation.

[00:19:23]

We may be at an inflection point in our relationship with facial recognition and with how it gets used. And yet it seems safe to say the adoption of this tech is likely to continue at a breakneck pace, leaving our understanding of its power and impact in the dust. Unless we really do stop and take a breath and set some rules for who gets access to images of our faces and what they can do with them.

[00:19:48]

When you come back for the next episode, we go swabbing for coronavirus on the New York City subway as we explore AI's role in getting the world's public transit systems back up and moving.

[00:20:03]

This episode was reported and produced by me, Tate Ryan-Mosley, Emma Cillekens, and Karen Hao. We had help from Benji Rosen. We're edited by Michael Reilly and Gideon Lichfield. Our technical director is Jacob Gorski. We'll see you back here in a couple of weeks. Thanks for listening. I'm Jennifer Strong. This is MIT Technology Review.