Transcribe your podcast
[00:00:06]

Pushkin. You're listening to Brave New Planet, a podcast about amazing new technologies that could dramatically improve our world or, if we don't make wise choices, could leave us a lot worse off. Utopia or dystopia? It's up to us. On September 26, 1983, the world almost came to an end. Just three weeks earlier, the Soviet Union had shot down Korean Air Lines Flight 007, a passenger plane with 269 people aboard.

[00:00:56]

I'm coming before you tonight about the Korean airline massacre.

[00:01:00]

President Ronald Reagan addressed the nation: the attack by the Soviet Union against 269 innocent men, women and children aboard an unarmed Korean passenger plane. This crime against humanity must never be forgotten, here or throughout the world.

[00:01:16]

Cold War tensions escalated, with the two nuclear powers on high alert. World War Three felt frighteningly possible. Then, on September 26th, in a command center outside of Moscow, an alarm sounded. The Soviet Union's early warning system reported the launch of multiple intercontinental ballistic missiles from bases in the United States.

[00:01:42]

Stanislav Petrov, a 44 year old member of the Soviet Air Defense Forces, was the duty officer that night.

[00:01:51]

His role was to alert Moscow that an attack was underway, likely triggering Soviet nuclear retaliation, an all out war.

[00:02:02]

Petrov spoke with BBC News in 2013. The sirens sounded very loudly, and I just sat there for a few seconds, staring at the screen with the word "launch" displayed in bold red letters. Minutes later, the siren went off again. The second missile was launched.

[00:02:21]

Then the third, and the fourth, and the fifth. The computer changed its alert from "launch" to "missile strike." Petrov's instructions were clear: report the attack on the motherland. But something didn't make sense. If the U.S. were attacking, why only five missiles rather than an entire fleet? And then I made my decision. I would not trust the computer. I picked up the telephone handset, spoke to my superiors and reported that the alarm was false.

[00:02:56]

But I myself was not sure until the very last moment.

[00:03:01]

I knew perfectly well that nobody would be able to correct my mistake if I had made one. Petrov, of course, was right. The false alarm was later found to be the result of a rare and unanticipated coincidence: sunlight glinting off high-altitude clouds over North Dakota at just the right angle to fool the Soviet satellites.

[00:03:25]

Stanislav Petrov's story comes up again and again in discussions of how far we should go in turning over important decisions, especially life-and-death decisions, to artificial intelligence. It's not an easy call.

[00:03:40]

Think about the split second decisions in avoiding a highway collision. Who will ultimately do better?

[00:03:47]

A tired driver or a self-driving car? Nowhere is the question more fraught than on the battlefield.

[00:03:56]

As technology evolves, should weapons systems be given the power to make life and death decisions? Or do we need to ensure there's always a human Stanislav Petrov in the loop? Some people, including winners of the Nobel Peace Prize, say that weapons should never be allowed to make their own decisions about who or what to attack.

[00:04:19]

They're calling for a ban on what they call killer robots.

[00:04:25]

Others think that idea is well-meaning but naive.

[00:04:31]

Today's big question: lethal autonomous weapons. Should they ever be allowed? If so, when? If not, can we stop them? My name is Eric Lander. I'm a scientist who works on ways to improve human health. I helped lead the Human Genome Project, and today I lead the Broad Institute of MIT and Harvard. In the 21st century, powerful technologies have been appearing at a breathtaking pace, related to the Internet, artificial intelligence, genetic engineering and more. They have amazing potential upsides, but we can't ignore the risks that come with them.

[00:05:17]

The decisions aren't just up to scientists or politicians. Whether we like it or not, we, all of us, are the stewards of a brave new planet. This generation's choices will shape the future as never before.

[00:05:38]

Coming up on today's episode of Brave New Planet, fully autonomous lethal weapons or killer robots? We hear from a fighter pilot about why it might make sense to have machines in charge of some major battlefield decisions. I know people who have killed civilians and in all cases where people made mistakes, it was just too much information. Things were happening too fast.

[00:06:07]

I speak with one of the world's leading robo-ethicists. Robots will make mistakes, too, but hopefully, if done correctly, they will make far, far less mistakes than human beings.

[00:06:17]

We'll hear about some of the possible consequences of autonomous weapons.

[00:06:22]

Algorithms interacting at machine speed faster than humans can respond might result in accidents.

[00:06:29]

And there's something like a flash war. I'll speak with a leader from Human Rights Watch. The Campaign to Stop Killer Robots is seeking new international law in the form of a new treaty.

[00:06:41]

And we'll talk with former Secretary of Defense Ash Carter, because I'm the guy who has to go out the next morning after some women and children have been accidentally killed.

[00:06:50]

And suppose I go out there, Eric, and I say, oh, I don't know how it happened. The machine did it. I would be crucified. I should be crucified. So stay with us.

[00:07:04]

Hey there. I'm Bill Nye, host of Science Rules, where we talk about all the ways in which science rules our universe. You never know what you might learn on our show.

[00:07:13]

Evolution does some pretty funky things, talking about birds, learning from other birds. This is what we call a delicious dilemma in astrophysics. Oh, hey, here's a thing this field doesn't actually understand. Stay tuned. Turn it up. Wow. There are worlds outside our solar system. There are thousands and thousands of other worlds. I can totally talk to this cuttlefish.

[00:07:34]

We're also bringing you expert analysis on the biggest science story of them all, the coronavirus.

[00:07:40]

This is about the health of the whole planet. Everybody has to take a calculated risk. I've just reviewed this literature. How bad does it have to get before everybody pays attention? Whatever your problem, wherever you are in the universe, science rules.

[00:07:56]

Science Rules is out right now. Subscribe on Stitcher, Apple Podcasts, Spotify or wherever you listen.

[00:08:04]

Chapter one, Stanley, the self-driving car. Not long after the first general purpose computers were invented in the 1940s, some people began to dream about fully autonomous robots, machines that used their electronic brains to navigate the world, make decisions and take actions.

[00:08:28]

Not surprisingly, some of those dreamers were in the U.S. Department of Defense, specifically the Defense Advanced Research Projects Agency, or DARPA, the visionary unit behind the creation of the Internet. They saw a lot of potential for automating battlefields, but they knew it might take decades. In the 1960s, DARPA funded the Stanford Research Institute to build Shakey the Robot, a machine that used cameras to move about a laboratory. In the 1980s, it supported universities to create vehicles that could follow lanes on a road.

[00:09:06]

By the early 2000s, DARPA decided that computers had reached the point that fully autonomous vehicles able to navigate the real world might finally be feasible. To find out, DARPA decided to launch a race. I talked to someone who knew a lot about it.

[00:09:26]

My name is Sebastian Thrun. I am, of course, the smartest person on the planet and the best looking. Just kidding.

[00:09:31]

Sebastian Thrun gained recognition when his autonomous car, a modified Volkswagen with a computer in the trunk and sensors on the roof, was the first to win the DARPA Grand Challenge.

[00:09:44]

The DARPA Grand Challenge was this momentous, government-sponsored robot race, an epic race. Can you build a robot that can navigate 130 punishing miles through the Mojave Desert? And the best robot made it like seven miles and then literally went up in fire. Many, many, many concluded it can't be done. In fact, many of my colleagues told me I would waste my time and my name if I engaged in this kind of super hard race.

[00:10:10]

And that made you more interested in doing it, of course.

[00:10:12]

And so you built Stanley. Yes. So my students built Stanley, and this started as a class. And Stanford students are great. If you tell them, go to the moon in two months, they're gonna go to the moon.

[00:10:23]

So then in 2005, the actual government-sponsored race: how did Stanley do?

[00:10:30]

We came in first. So we focused insanely strongly on software and specifically on machine learning. And that differentiated us from pretty much every other team that focused on hardware. By the way, there were five teams that finished this grueling race within one year. And it's the community of the people that built all these machines that really won.

[00:10:51]

So nobody made it through in the first race, and five different teams made it more than 130 miles through the desert just a year later.

[00:11:00]

Yeah. That's kind of amazing to me. That just showed how fast this technology can possibly evolve. And what's happened since then?

[00:11:09]

I worked at Google for a while, and then eventually this guy, Larry Page, came to me and says, hey, Sebastian, I thought about this long and hard. We should build a self-driving car that can drive on all streets in the world. And with my entire authority, I said that cannot be done. We had just driven a desert race. There were never pedestrians and bicycles and all the other people that we could kill in the environment.

[00:11:31]

And for me, just the sheer imagination that we would drive a self-driving car in San Francisco seemed almost like a crime.

[00:11:38]

So you told Larry Page, one of the two co-founders of Google, that the idea of building a self-driving car that could navigate anywhere was just nuts.

[00:11:48]

Yeah. A few days later, he came back and said, hey, Sebastian, look, I trust you. You're the expert. But I want to explain to Eric Schmidt, then the Google CEO, and co-founder Sergey Brin why it can't be done. Can you give me the technical reason? So I went home in agony, thinking about what is the technical reason why it can't be done.

[00:12:08]

And he got back to me the next day and said, so what is it? And I said, I can't think of any. And lo and behold, eighteen months later, with roughly ten engineers, we drove pretty much every street in California. Today, autonomous technology is changing the transportation industry. About 10 percent of cars sold in the U.S. are already capable of at least partly guiding themselves down the highway. In 2018, Google's self-driving car company, Waymo, launched a self-driving taxi service in Phoenix, Arizona, initially with human backup drivers behind every wheel, but now sometimes even without.

[00:12:48]

I asked Sebastian why he thinks this matters. We lose more than a million people in traffic accidents every year, almost exclusively to us not paying attention. When I was 18, my best friend died in a traffic accident, and it was a split-second poor decision from his friend who was driving and who also died. To me, this is just unacceptable. Beyond safety, Sebastian sees many other advantages for autonomy. During a commute, you can do something else; that means you're probably willing to commute further distances.

[00:13:26]

You could sleep or watch a movie at two a.m. And then eventually people can use cars even if they can't operate them. Blind people, old people, children, babies. I mean, there's an entire spectrum of people that are otherwise excluded. They would not be able to be mobile.

[00:13:43]

Chapter two, the Tomahawk.

[00:13:47]

So DARPA's efforts over the decades helped give rise to the modern self-driving car industry, which promises to make transportation safer, more efficient and more accessible. But the agency's primary motivation was to bring autonomy to a different challenge: the battlefield.

[00:14:05]

I traveled to Chapel Hill, North Carolina, to meet with someone who spends a lot of time thinking about the consequences of autonomous technology. We both serve on a civilian advisory board for the Defense Department.

[00:14:17]

My name is Missy Cummings. I am professor of electrical and computer engineering at Duke University. And I think one of the things that people find most interesting about me is that I was one of the U.S. military's first female fighter pilots in the Navy.

[00:14:34]

Did you always want to be a fighter pilot? So when I was growing up, I did not know that women could be pilots. And indeed, when I was growing up, women could not be pilots.

[00:14:44]

And it wasn't until the late 70s that women actually became pilots in the military.

[00:14:53]

So I went to college in 1984. I was at the Naval Academy. And then, of course, in 1986, Top Gun came out.

[00:15:02]

And then, you know, who doesn't want to be a pilot

[00:15:04]

after you see the movie Top Gun? Missy is tremendously proud of the 11 years she spent in the Navy. But she also acknowledges the challenges of being part of that first generation of women fighter pilots.

[00:15:17]

It's no secret that the reason I left the military was because of the hostile attitude towards women.

[00:15:24]

None of the women in that first group stayed in to make it a career.

[00:15:27]

The guys were very angry that we were there, and I decided to leave when they started sabotaging my flight gear. I just thought, this is too much. If something really bad happened, you know, I would die.

[00:15:40]

When Missy Cummings left the Navy, she decided to pursue a Ph.D. in human machine interaction.

[00:15:46]

In my last three years flying F-18s, there were about 36 people I knew that died, about one person a month. They were all training accidents. It just really struck me how many people were dying because the design of the airplane just did not go with the human tendencies.

[00:16:06]

And so I decided to go back to school to find out, you know, what can be done about that.

[00:16:11]

So I went to finish my PhD at the University of Virginia, and then I spent the next ten years at MIT learning my craft.

[00:16:20]

The person I am today is half because of the Navy and half because of MIT.

[00:16:24]

Today, Missy is at Duke University, where she runs the Humans and Autonomy Lab, or HAL for short. It's a nod to the sentient computer that goes rogue in Stanley Kubrick's film 2001: A Space Odyssey.

[00:16:40]

This mission is too important for me to allow you to jeopardize it.

[00:16:46]

I don't know what you're talking about, HAL. I know that you and Frank were planning to disconnect me, and I'm afraid that's something I cannot allow to happen.

[00:16:56]

And so I intentionally named my lab HAL, so that we were there to stop that from happening. Right. I had seen many friends die, not because the robot became sentient, in fact, but because the designers of the automation really had no clue how people would or would not use this technology.

[00:17:16]

It is my life's mission statement to develop human-collaborative computer systems that work with each other to achieve something greater than either could alone.

[00:17:26]

The Humans and Autonomy Lab works on the interactions between humans and machines across many fields. But given her background, Missy's thought a lot about how technology has changed the relationship between humans and their weapons.

[00:17:43]

There's a long history of us distancing ourselves from our actions. We want to shoot somebody. We wanted to shoot them with bows and arrows.

[00:17:52]

We wanted to drop bombs from five miles over a target. We want cruise missiles that can kill you from another country, right? It is human nature to back that distance up.

[00:18:03]

Missy sees an inherent tension. On one hand, technology distances ourselves from killing. On the other hand, technology is letting us design weapons that are more accurate and less indiscriminate in their killing. Missy wrote her PhD thesis about the Tomahawk missile, an early precursor of the autonomous weapons systems being developed today.

[00:18:27]

The Tomahawk missile has these stored maps in its brain, and as it's skimming along the nap of the earth, it compares the pictures that it's taking with the pictures in its database to decide how to get to its target.

[00:18:40]

The Tomahawk was kind of a set-it-and-forget-it kind of thing. Once you launched it, it would follow its map to the right place, and there was nobody looking over its shoulder.
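For the technically curious, here is a minimal sketch of the scene-matching idea Missy describes: compare what a sensor currently sees against a stored reference map and take the best-matching location as the position fix. It is a toy illustration only, not the Tomahawk's actual guidance software; the map, patch size, and correlation score are assumptions made up for the example.

```python
# Toy scene-matching sketch: find where an observed patch best fits inside a
# stored reference map, using normalized cross-correlation as the score.
# Illustrative only; not any real missile's navigation code.
import numpy as np

def locate(observed_patch: np.ndarray, stored_map: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) offset in stored_map that best matches observed_patch."""
    ph, pw = observed_patch.shape
    mh, mw = stored_map.shape
    obs = (observed_patch - observed_patch.mean()) / (observed_patch.std() + 1e-9)
    best_score, best_offset = -np.inf, (0, 0)
    for r in range(mh - ph + 1):
        for c in range(mw - pw + 1):
            window = stored_map[r:r + ph, c:c + pw]
            win = (window - window.mean()) / (window.std() + 1e-9)
            score = float((obs * win).mean())     # higher means a better match
            if score > best_score:
                best_score, best_offset = score, (r, c)
    return best_offset

# Usage: cut a known patch out of the "map" and check that we can find it again.
rng = np.random.default_rng(0)
terrain = rng.normal(size=(60, 60))          # stands in for stored map imagery
seen = terrain[17:17 + 12, 32:32 + 12]       # what the sensor "sees" in flight
print(locate(seen, terrain), "should match", (17, 32))
```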

[00:18:49]

Well, so the Tomahawk missile that we saw in the Gulf War, that was a fire-and-forget missile. A target would be programmed into the missile, and then it would be fired, and that's where it would go.

[00:19:07]

Then later, around 2000, 2003, GPS technology was coming online.

[00:19:10]

And that's when we got the Tactical Tomahawk, which had the ability to be redirected in flight. That success with GPS and the Tomahawk opened the military's eyes to the ability to use them in drones.

[00:19:24]

Today's precision-guided weapons are far more accurate than the widespread aerial bombing that occurred on all sides in World War Two, where some cities were almost entirely leveled, resulting in huge numbers of civilian casualties. In the Gulf War, Tomahawk missile attacks came to be called surgical strikes.

[00:19:47]

I know people who have killed civilians and I know people who have killed friendlies. They have dropped bombs on our own forces and killed our own people. And in all cases where people made mistakes, it was just too much information. Things were happening too fast. You've seen some pictures that you got in a brief hours ago, and you're supposed to know that what you're seeing now through this grainy image, 35,000 feet over a target, is the same image that you're being asked to bomb.

[00:20:20]

The Tomahawk never missed its target. It never made a mistake unless it was programmed as a mistake.

[00:20:28]

And that's old autonomy and it's only gotten better over time.

[00:20:35]

Chapter three, kicking down doors. The Tomahawk was just a baby step toward automation. With the ability to read maps, it could correct its course, but it couldn't make sophisticated decisions. But what happens when you start adding modern artificial intelligence?

[00:20:54]

So where do you see autonomous weapons going? If you could kind of map out, where are we today and where do you think we'll be 10, 20 years from now?

[00:21:03]

So in terms of autonomy and weapons, by today's standards, the Tomahawk missile is still one of the best ones that we have. And it's also still one of the most advanced.

[00:21:13]

Certainly there are research arms of the military who are trying very hard to come up with new forms of autonomy. There was the Perdix that came out of Lincoln Lab, and this was basically a swarm of really tiny UAVs that could coordinate together.

[00:21:33]

A UAV, an unmanned aerial vehicle, is military-speak for a drone. The Perdix, the drones that Missy was referring to, were commissioned by the Strategic Capabilities Office of the U.S. Department of Defense. These tiny flying robots are able to communicate with each other and make split-second decisions about how to move as a group.

[00:21:56]

Many researchers have been using bio-inspired methods. Bees, right? So bees have local and global intelligence, like a group of bees.

[00:22:04]

These drones are called a swarm: collective intelligence on a shared mission. A human can make the big-picture decision, and the swarm of micro drones can then collectively decide on the most efficient way to carry out the order in the moment. I wanted to know why exactly this technology is necessary, so I went to speak to someone who I was pretty sure would know. I'm Ash Carter.

[00:22:31]

Most people will probably have heard my name as the secretary of defense who preceded Jim Mattis.

[00:22:38]

You will know me in part from the fact that we knew one another way back in Oxford when we were both young scientists. And I guess I should start there. I'm a physicist.

[00:22:47]

When you were doing your Ph.D. in physics, I was doing my PhD in mathematics at Oxford. What was your thesis on?

[00:22:54]

It was on quantum chromodynamics. That was the theory of quarks and gluons.

[00:23:00]

And how in the world does somebody who's an expert in quantum chromodynamics become the secretary of defense? It's an interesting story.

[00:23:09]

The people who were the seniors in my field of physics, the mentors, so to speak, were all members of the Manhattan Project generation.

[00:23:22]

They had built the bomb during World War Two, and they were proud of what they'd done, because they believed that it had ended the war with fewer casualties than there otherwise would have been in a full-scale invasion of Japan, and also that it had kept the peace through the Cold War.

[00:23:41]

So they were proud of it. However, they knew there was a dark side, and they conveyed to me that it was my responsibility as a scientist to be involved in these matters. And the technology doesn't determine what the balance of good and bad is; we human beings do.

[00:24:02]

That was the lesson. And so that's what got me started.

[00:24:06]

And that ran from my very first Pentagon job, which was in 1981, right through until the last time I walked out of the Pentagon as secretary of defense, which was January of 2017.

[00:24:17]

Now, when you were secretary, there was a Strategic Capabilities Office that, it's been publicly reported, was experimenting with drones, making swarms of drones that could do things: communicate with each other, make formations. Why would you want such things?

[00:24:37]

That's a good question. Here's what you do with a drone like that. You put a jammer on it, a little radio beacon, and you fly it right into the eye of an enemy radar.

[00:24:50]

So all that radar sees is the energy emitted by that little drone.

[00:24:56]

And it's essentially dazzled or blinded. If there's one big drone, that radar is precious enough that the defenders are going to shoot that drone down.

[00:25:08]

But if you have so many out there, the enemy can't afford to shoot them all down. And since they are flying right up to the radar, they don't have to be very powerful.

[00:25:20]

So there's an application where lots of little drones can have the effect of nullifying enemy radar.

[00:25:28]

That's a pretty big deal for a few little micro drones. To learn more, I went to speak with Paul Scharre. Paul's the director of the Technology and National Security Program at the Center for a New American Security. Before that, he worked for Ash Carter at the Pentagon studying autonomous weapons. And he recently authored a book called Army of None: Autonomous Weapons and the Future of War.

[00:25:54]

Paul's interest in autonomous weapons began when he served in the Army.

[00:25:58]

I enlisted in the Army to become an Army Ranger. That was June of 2001. I did a number of tours overseas in the wars in Iraq and Afghanistan.

[00:26:09]

So I'll say one moment that stuck out for me,

[00:26:12]

where really the light bulb went on for me about the power of robotics in warfare. I was in Iraq in 2007.

[00:26:20]

We were on a patrol, driving along in a Stryker armored vehicle, and came across an IED, an improvised explosive device, a makeshift roadside bomb.

[00:26:28]

And so we called up bomb disposal folks.

[00:26:31]

So they show up, and I'm expecting the bomb tech to come out in that big bomb suit that you might have seen in the movie The Hurt Locker, for example, and instead, out rolls a little robot.

[00:26:42]

And I kind of thought, oh, that makes a lot of sense, have the robot defuse the bomb.

[00:26:47]

Well, it turns out there's a lot of things in war that are super dangerous, where it makes sense to have robots out on the front lines, giving people better standoff, a little bit more separation from potential threats.

[00:26:58]

The bomb-defusing robots are still remote-controlled by a technician. But Ash Carter wants to take the idea of robots doing the most dangerous work a step further.

[00:27:10]

Somewhere in the future, and I'm certain it will occur, I think there will be robots that will be part of infantry squads and that will do some of the most dangerous jobs in an infantry squad, like kicking down the door of a building and being the first one to run in and clear the building of terrorists or whatever.

[00:27:30]

That's a job that doesn't sound like something I would like to have a young American man or woman doing if I could replace them with a robot.

[00:27:41]

Chapter four, Harpies. Paul Scharre gave me an overview of the sophisticated unmanned systems currently used by militaries.

[00:27:52]

So I think it's worth separating out the value of robotics versus autonomy, removing a person from decision making.

[00:28:00]

So what's so special about autonomy? The advantages there are really about speed.

[00:28:07]

Machines can make decisions faster than humans.

[00:28:08]

That's why automatic braking in automobiles is valuable. It could have much faster reflexes than a person might.

[00:28:16]

Paul separates the technology into three baskets. First, semi-autonomous weapons.

[00:28:21]

Semi-autonomous weapons are widely used around the globe today, where automation is used to maybe help identify targets, but humans make the final decision about which targets to attack.

[00:28:33]

Second, there are supervised autonomous weapons.

[00:28:37]

There are automatic modes that can be activated on air and missile defense systems that allow these computers to defend the ship or ground vehicle or land base all on its own against these incoming threats. But humans supervise these systems in real time. They could, at least in theory, intervene.

[00:28:57]

Finally, there are fully autonomous weapons. There are a few isolated examples of what you might consider fully autonomous weapons, where there's no human oversight and they're used in an offensive capacity.

[00:29:10]

The clearest example today that's in operation is the Israeli Harpy drone, which can loiter over a wide area for about two and a half hours at a time to search for enemy radars. And then when it finds one, it can attack it all on its own, without any further human approval once it's launched.

[00:29:28]

The decision about which particular target to attack is delegated to the machine. It's been sold to a handful of countries: Turkey, India, China and South Korea.

[00:29:38]

I asked Missy if she saw advantages to having autonomy built into lethal weapons. While she had reservations, she pointed out that in some circumstances it could prevent tragedies.

[00:29:49]

A human has something called neuromuscular lag in them; it's about a half-second delay. So you see something, and you can execute an action a half second later.

[00:29:59]

So let's say that that guided weapon fired by a human is going into a building. And then right before it gets to the building at a half second, the door opens and a child walks out. It's too late. That child is dead.

[00:30:16]

But a lethal autonomous weapon that had a good enough perception system could immediately detect that and immediately guide itself to a safe place to explode.

[00:30:29]

That is a possibility in the future.
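To make that half-second figure concrete, here is a back-of-the-envelope sketch. The closing speed and software latency below are illustrative assumptions, not the parameters of any real weapon.

```python
# Rough arithmetic: how far a weapon travels during the human reaction delay
# described above, versus an assumed (hypothetical) automated abort latency.
def distance_during_delay(speed_m_per_s: float, delay_s: float) -> float:
    return speed_m_per_s * delay_s

weapon_speed = 250.0   # assumed closing speed, meters per second
human_lag = 0.5        # roughly the neuromuscular lag Missy describes
machine_lag = 0.05     # assumed perception-to-abort latency for software

print(f"Human-in-the-loop: ~{distance_during_delay(weapon_speed, human_lag):.0f} m "
      f"traveled before any correction")
print(f"Automated abort:   ~{distance_during_delay(weapon_speed, machine_lag):.0f} m")
```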

[00:30:34]

Chapter five, bounded morality. Some people think, and this point is controversial, that robots might turn out to be more humane than humans. The history of warfare has enough examples of atrocities committed by soldiers on all sides. For example, in the middle of the Vietnam War, in March 1968, a company of American soldiers attacked a village in South Vietnam, killing an estimated 504 unarmed Vietnamese men, women and children, all noncombatants.

[00:31:10]

The horrific event became known as the My Lai massacre. In 1969, journalist Mike Wallace of 60 Minutes sat down with Private Paul Meadlo, one of the soldiers involved in the massacre.

[00:31:24]

I might have killed about 10, 15. Men, women and children? Men, women and children. And babies? And babies. You're married, right? Children? Two. How can a father of two young children shoot babies? I don't know, it's just one of them things. Of course, the vast majority of soldiers do not behave this way, but humans can be thoughtlessly violent.

[00:31:52]

They can act out of anger, out of fear. They can seek revenge. They can murder senselessly.

[00:31:58]

Can robots do better? After all, robots don't get angry. They're not impulsive. I spoke with someone who thinks that lethal autonomous weapons could ultimately be more humane.

[00:32:10]

My name is Ronald Arkin. I'm a Regents Professor at the Georgia Institute of Technology in Atlanta, Georgia. I've been a roboticist for close to 35 years. I've been in robot ethics for maybe the past 15.

[00:32:24]

Ron wanted to make it clear that he doesn't think these robots are perfect, but they could be better than our current option.

[00:32:31]

I am absolutely not pro lethal autonomous weapons systems, because I'm not pro lethal weapons of any sort. I am against killing in all of its manifold forms. But the problem is that humanity persists in entering into warfare. As such, we must protect the innocent in the battle space far better than we currently do.

[00:32:52]

So Ron thinks that lethal autonomous weapons could prevent some of the unnecessary violence that occurs in war.

[00:32:59]

Human beings don't do well in warfare in general, and that's why there's so much room for improvement. There's untamed fire, there's mistakes, there's carelessness. And in the worst case, there's the commission of atrocities. And unfortunately, all those things lead to the deaths of noncombatants. And robots will make mistakes, too. They probably will make different kinds of mistakes, but hopefully, if done correctly, they will make far, far less mistakes than human beings do, in certain narrow circumstances where human beings are prone to those errors.

[00:33:32]

So how will the robots follow these international humanitarian standards?

[00:33:37]

The way in which we explored it initially is looking at something referred to as bounded morality, which means we look at very narrow situations. You are not allowed to drop bombs on schools and hospitals and mosques or churches. So the point is, if you know the geographic locations of those, you can demarcate those on a map, use GPS, and you can prevent someone from pulling a trigger. But keep in mind, these systems are not only going to decide when to engage, but also where not to engage a target.

[00:34:10]

They can be more conservative. I believe the potential exists to reduce non-combatant casualties and collateral damage in almost all of its forms over what we currently have.
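Here is one way the geofencing piece of Ron's bounded-morality idea could be sketched in code: demarcate protected sites as zones and refuse to engage anything inside them. The zone, coordinates, and decision rule are invented for illustration; this is a thought experiment, not a real targeting system.

```python
# Toy geofence check: a target inside any demarcated no-strike zone is refused.
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    polygon: list[tuple[float, float]]   # (latitude, longitude) vertices

def inside(point: tuple[float, float], polygon: list[tuple[float, float]]) -> bool:
    """Standard ray-casting point-in-polygon test."""
    x, y = point
    result = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                result = not result
    return result

def may_engage(target: tuple[float, float], no_strike_zones: list[Zone]) -> bool:
    """Return False if the target lies inside any demarcated no-strike zone."""
    return not any(inside(target, z.polygon) for z in no_strike_zones)

# Usage with made-up coordinates:
school = Zone("school", [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)])
print(may_engage((0.5, 0.5), [school]))   # False: inside the protected zone
print(may_engage((2.0, 2.0), [school]))   # True: outside all zones
```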

[00:34:23]

So autonomous weapons might operate more efficiently, reduce risk to one's own troops, operate faster than the enemy, decrease civilian casualties and perhaps avoid atrocities.

[00:34:36]

What could possibly go wrong? Chapter six, what could possibly go wrong? Autonomous systems can do some pretty remarkable things these days, but of course, robots just do what their computer code tells them to do. The computer code is written by humans or in the case of modern artificial intelligence, automatically inferred from training data.

[00:35:04]

What happens if a robot encounters a situation that the human, or the training data, didn't anticipate? Well, things could go wrong in a hurry.

[00:35:15]

One of the concerns with autonomous weapons is that they might malfunction in a way that leads them to begin erroneously engaging targets, robots run amok.

[00:35:27]

And this is particularly a risk for weapons that could target on their own. Now, this builds on a flaw,

[00:35:36]

a known malfunction of machine guns today called a runaway gun. A machine gun begins firing for one reason or another, and because of the nature of a machine gun, where one bullet's firing cycles the automation and brings in the next bullet, once it starts firing, it'll continue firing bullets.

[00:35:56]

The same sort of runaway behavior can result from small flaws in computer code, and the problems only multiply when autonomous systems interact at high speed. Paul Scharre points to Wall Street as a harbinger of what can go wrong.

[00:36:13]

And we end up someplace like where we are in stock trading today, where many of the decisions are highly automated and we get things like flash crashes.

[00:36:24]

What the heck is going on down here? I don't know, there is fear, this is capitulation, really. I mean, it is.

[00:36:30]

In May 2010, computer algorithms drove the Dow Jones down by nearly 1000 points in 13 minutes, the steepest drop it had ever seen in a day.

[00:36:42]

The concern is that a world where militaries have these algorithms interacting at machine speed faster than humans could respond might result in accidents.

[00:36:54]

And there's something like a flash war. By flash war, you mean this thing just cycling out of control somehow?

[00:37:01]

Right.

[00:37:01]

But the algorithms are merely following their programming, escalating the conflict into a new area of warfare, a new level of violence, in a way that might make it harder for humans to then dial things back and bring things back under control.
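To see how simple automated rules interacting at machine speed can spiral, here is a toy model in the spirit of that flash-war worry: two programs that each respond with slightly more activity whenever the other crosses a threshold. It is a thought experiment under made-up numbers, not a model of any real military system.

```python
# Toy escalation loop: each side reacts only to the other's previous move, and
# the interaction outruns any plausible human reaction time. Numbers are invented.
def flash_escalation(steps: int = 20, human_reaction_tick: int = 10) -> None:
    a, b = 1.0, 0.0                      # arbitrary starting "activity" levels
    threshold, overmatch = 0.5, 1.3      # respond to anything above threshold with 30% more
    for t in range(steps):
        new_a = b * overmatch if b > threshold else a
        new_b = a * overmatch if a > threshold else b
        a, b = new_a, new_b
        marker = ("  <- a human might first plausibly step in around here"
                  if t == human_reaction_tick else "")
        print(f"tick {t:2d}: A={a:8.2f}  B={b:8.2f}{marker}")

flash_escalation()
```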

[00:37:17]

The system only knows what it's been programmed or been trained to know.

[00:37:20]

A human can bring together all of these other pieces of information about context; a human can understand what's at stake.

[00:37:28]

So there's no Stanislav Petrov in the loop. That's the fear, right? If there's no Petrov there to say no, what might the machines do on their own? Chapter seven, Slaughterbots. The history of weapons technology includes well-intentioned efforts to reduce violence and suffering that end up backfiring.

[00:37:55]

I tell in the book the story of the Gatling Gun, which was invented by Richard Gatling during the American Civil War.

[00:38:01]

And he was motivated to invent this weapon, which was a forerunner of the machine gun, as an effort to reduce soldiers' deaths in war.

[00:38:10]

He saw all of these soldiers coming back maimed and injured from the civil war. He said, wouldn't it be great if we needed fewer people to fight?

[00:38:17]

So he invented a machine that could allow four people to deliver the same lethal effects in the battlefield as 100.

[00:38:24]

Now, the effect of this wasn't actually to reduce the number of people fighting.

[00:38:28]

And when we got to World War One,

[00:38:30]

we saw massive devastation and a whole generation of young men in Europe killed because of this technology.

[00:38:37]

And so I think that's a good cautionary tale as well, that sometimes the way the technology evolves and how it's used may not always be how we'd like it to be used.

[00:38:47]

And even if regular armies can keep autonomous weapons within the confines of international humanitarian law. What about rogue actors?

[00:38:56]

Remember those autonomous swarms we discussed with Ash Carter, those tiny drones that work together to block enemy radar?

[00:39:04]

What happens if the technology spreads beyond armies? What if a terrorist adds a gun or an explosive and maybe facial recognition technology to those little flying bots?

[00:39:16]

In 2017, Berkeley professor Stuart Russell and the Future of Life Institute made a mock documentary called Slaughterbots as part of their campaign against fully autonomous lethal drones.

[00:39:28]

The nation is still recovering from yesterday's incident, which officials are describing as some kind of automated attack which killed 11 U.S. senators at the Capitol Building.

[00:39:38]

They flew in from everywhere, but attacked just one side of the aisle. It was chaos. People were screaming.

[00:39:45]

Unlike nuclear weapons, which are difficult to build,

[00:39:48]

you know, it's not easy to obtain or work with weapons-grade uranium,

[00:39:52]

the technology to create and modify autonomous drones is getting more and more accessible.

[00:39:57]

All of the technology you need from the automation standpoint either exists in the vehicle already or you can download from GitHub.

[00:40:06]

I asked former Secretary of Defense Ash Carter if the U.S. government is concerned about this sort of attack.

[00:40:12]

You're right to worry about drones.

[00:40:14]

And of course, it only takes a depraved person who can go to a store and buy a drone to at least scare people and quite possibly threaten people, hanging a gun off of it or putting a bomb of some kind on it.

[00:40:31]

And then suddenly people don't feel safe going to the Super Bowl or landing at the municipal airport. And we can't have that. I mean, certainly as your former secretary of defense, my job was to make sure that we didn't put up with that kind of stuff. I'm supposed to protect our people. And so how do I protect people against drones in general?

[00:40:52]

They can be shot down, but they can put more drones up than I can conceivably shoot at.

[00:40:59]

Not to mention shooting at things in a Super Bowl stadium is an inherently dangerous solution to this problem.

[00:41:05]

And so there's a more subtle way of dealing with drones.

[00:41:09]

I will either jam or take over the radio link and then you just tell it to fly away and go off into the countryside somewhere and crash into a field. All right.

[00:41:23]

So help me out: if I have enough autonomy, couldn't I have drones without radio links that just get their assignment and go off and do things? Yes.

[00:41:33]

And then your mind as a defender goes to something else. Now that they've got their idea of what they're looking for set in their electronic mind, let me change what I look like. Let me change what the stadium looks like to it. Let me change what the target looks like.

[00:41:49]

And for the Super Bowl, what do I do about that? Well, once I know I'm being looked at, I have the opponent in a box. Few people know how easy facial recognition is to fool because I can wear the right kind of goggles or stick ping pong balls in my cheeks.

[00:42:11]

There's always a stratagem. Memo to self: next time I go to Gillette Stadium for a Patriots game, bring ping pong balls. Really. Chapter eight, the moral buffer. So we have to worry about whether lethal autonomous weapons might run amok or fall into the wrong hands. But there may be an even deeper question: could fully autonomous lethal weapons change the way we think about war?

[00:42:45]

I brought this up with Army of None author Paul Scharre.

[00:42:48]

So one of the concerns about autonomous weapons is that it might lead to a breakdown in human moral responsibility for killing in war. If the weapons themselves are choosing targets, then people no longer feel like they're the ones doing the killing.

[00:43:02]

Now, on the plus side of things, that might lead to less post-traumatic stress in war; all these things have real burdens that weigh on people.

[00:43:12]

But some argue that the burden of killing should be a requirement of war.

[00:43:18]

It's worth also asking if nobody slept uneasy at night, what does that look like?

[00:43:24]

Would there be less restraint in war and more killing as a result?

[00:43:28]

Missy Cummings, the former fighter pilot and current Duke professor, wrote an influential paper in 2004 about how increasing the gap between a person and their actions creates what she called a moral buffer.

[00:43:43]

People ease the psychological and emotional pain of warfare by basically superficially layering in these other technologies to kind of make them lose track of what they're doing. And this is actually something that I do think is a problem for lethal autonomous weapons.

[00:44:03]

If we send a weapon and we tell it to kill one person and it kills the wrong person, then it's very likely that people will push off their sense of responsibility and accountability onto the autonomous agent, because they say, well, it's not my fault. It was the autonomous agent's fault.

[00:44:21]

On the other hand, Paul Scharre tells a story about how, when there's no buffer, humans rely on an implicit sense of morality that might be hard to explain to a robot.

[00:44:33]

There was an incident earlier in the war where I was part of an Army Ranger sniper team up on the Afghanistan Pakistan border.

[00:44:40]

And we were watching for Taliban fighters infiltrating across the border.

[00:44:45]

And when dawn came, we weren't nearly as concealed as we had hoped to be.

[00:44:50]

And very quickly, a farmer came out to relieve himself in the fields and saw us.

[00:44:55]

And we knew that we were compromised. What I did not expect was what they did next, which was they sent a little girl to scout out our position. She was maybe five or six.

[00:45:05]

She was not particularly sneaky.

[00:45:07]

She stared directly at us, and we heard the chirping of what we later realized was probably a radio that she had on her, that she was reporting back information about us.

[00:45:16]

And then she left. Not long after, some fighters did come, and then the gunfight that ensued brought out the whole valley, so we had to leave.

[00:45:24]

But later that day, we were talking about how we would treat a situation like that. Something that just didn't come up in conversation was the idea of shooting this little girl. Now, what's interesting is that under the laws of war, that would have been legal. The laws of war don't set an age for combatants; your status as a combatant is based on your actions, and by scouting for the enemy,

[00:45:46]

she was directly participating in hostilities.

[00:45:49]

If you had a robot that was programmed to perfectly comply with the laws of war, it would have shot this little girl. There are sometimes very difficult decisions that are forced on people in war, but I don't think this was one of them.

[00:46:02]

But I think it's worth asking: how would a robot know the difference between what's legal and what's right? And how would you even begin to program that into a machine? Chapter nine, the Campaign to Stop Killer Robots. The most fundamental moral objection to fully autonomous lethal weapons comes down to this: as a matter of human dignity, only a human should be able to make the decision to kill another human.

[00:46:33]

Some things are just morally wrong, regardless of the outcome, regardless of whether or not, you know, torturing one person saves a thousand. Torture is wrong. Slavery is wrong. And from this point of view, one might say, well, look, it's wrong to let a machine decide whom to kill. Humans have to make that decision.

[00:46:54]

Some people have been working hard to turn this moral view into binding international law.

[00:47:00]

So my name is Mary Wareham. I'm the advocacy director of the arms division of Human Rights Watch. I also coordinate this coalition of groups called the Campaign to Stop Killer Robots, and that's a coalition of 112 non-governmental organizations in about 56 countries that is working towards a single goal, which is to create a prohibition on fully autonomous weapons.

[00:47:27]

The campaign's argument is rooted in the Geneva Conventions, a set of treaties that establish humanitarian standards for the conduct of war. There's the principle of distinction which says that armed forces must recognize civilians and may not target them.

[00:47:44]

And there's the principle of proportionality which says that incidental civilian deaths can't be disproportionate to an attack's direct military advantage.

[00:47:55]

The campaign says killer robots fail these tests. First, they can't distinguish between combatants and noncombatants or tell when an enemy is surrendering. Second, they say deciding whether civilian deaths are disproportionate inherently requires human judgment. For these reasons and others, the campaign says fully autonomous lethal weapons should be banned.

[00:48:21]

Getting an international treaty to ban fully autonomous lethal weapons might seem like a total pipe dream, except for one thing: Mary Wareham and her colleagues already pulled it off for another class of weapons, landmines.

[00:48:37]

The signing of this historic treaty at the very end of this century is this generation's pledge to the future.

[00:48:45]

The International Campaign to Ban Landmines and its founder, Jody Williams, received the Nobel Peace Prize in 1997 for their work leading to the Ottawa convention, which banned the use, production, sale and stockpiling of anti-personnel mines. While 164 nations joined the treaty, some of the world's major military powers never signed it, including the United States, China and Russia. Still, the treaty has worked and even influenced the holdouts.

[00:49:17]

So the United States did not join. But it went on to, I think, prioritize clearance of anti-personnel land mines and remains the biggest donor to clearing landmines and unexploded ordnance around the world. And then under the Obama administration, the U.S. committed not to use anti-personnel land mines anywhere in the world other than to keep the option open for the Korean Peninsula. So slowly over time, countries do, I think, come in line.

[00:49:46]

One major difference between banning landmines and banning fully autonomous lethal weapons is this:

[00:49:53]

Well, it's pretty clear what a landmine is. But a fully autonomous lethal weapon? That's not quite as obvious. Six years of discussion at the United Nations have yet to produce a crisp definition.

[00:50:07]

Trying to define autonomy is also a very challenging task. And this is why we focus on the need for meaningful human control.

[00:50:16]

So what exactly is meaningful human control? The ability for the human operator and the weapons system to communicate, and the ability for the human to intervene in the detection, selection and engagement of targets and, if necessary, to cancel the operation.
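As a sketch of how meaningful human control might be expressed in software, consider a rule where nothing proceeds without an explicit, timely human approval, and silence defaults to abort. The function names, timeout, and decision flow below are hypothetical, offered only to make the concept concrete.

```python
# Toy "human on the trigger" gate: the system may only proceed if an operator
# explicitly approves within a time window; anything else aborts.
import time
from typing import Callable

def request_engagement(target_id: str,
                       human_decision: Callable[[str], str],
                       timeout_s: float = 10.0) -> str:
    """Ask a human operator to approve or reject. Silence is never consent."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        answer = human_decision(target_id)   # e.g., reads the operator console
        if answer == "approve":
            return "engage"
        if answer == "reject":
            return "abort"
        time.sleep(0.1)                      # undecided: keep waiting, human retains control
    return "abort"                           # timeout defaults to abort

# Usage with a stand-in operator who rejects everything:
print(request_engagement("radar-site-7", lambda tid: "reject"))   # -> "abort"
```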

[00:50:31]

Not surprisingly, international talks about the proposed ban are complicated.

[00:50:36]

I will say that a majority of the countries who have been talking about killer robots have called for a legally binding instrument, an international treaty. You've got countries who want to be helpful, like France, which has proposed working groups; Germany, which has proposed political declarations on the importance of human control. There are a lot of proposals, I think, from Australia about legal reviews of weapons. Those efforts are being rebuffed by a smaller handful of what we call militarily powerful countries who don't want to see new international law.

[00:51:11]

The United States and Russia have probably been amongst the most problematic in dismissing the calls for any form of regulation.

[00:51:19]

As with the landmines, Mary Wareham sees a path forward even if the major military powers don't join at first.

[00:51:24]

We cannot stop every potential use. What we want to do is stigmatize it, so that everybody understands that even if you could do it, it's not right and you shouldn't. Part of the campaign's strategy is to get other groups on board, and they're making some progress. I think a big move in our favor came in November, when the United Nations secretary general, António Guterres, made a speech in which he called for them to be banned under international law.

[00:51:54]

Machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant, and should be banned by international law. Artificial intelligence researchers have also been expressing concern. Since 2015, more than 4,500 AI and robotics researchers have signed an open letter calling for a ban on offensive autonomous weapons beyond meaningful human control.

[00:52:32]

The signers included Elon Musk, Stephen Hawking and Demis Hassabis, the CEO of Google's DeepMind.

[00:52:40]

An excerpt from the letter, quote: if any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Chapter ten, to ban or not to ban. Not everyone, however, favors the idea of an international treaty banning all lethal autonomous weapons.

[00:53:14]

In fact, everyone else I spoke to for this episode, Ron Arkin, Missy Cummings, Paul Scharre and Ash Carter, opposes it. Interestingly, though, each had a different reason and a different alternative solution. Robo-ethicist Ron Arkin thinks we'd be missing a chance to make war safer.

[00:53:35]

Technology can, must and should be used to reduce non-combatant casualties. And if it's not going to be this, you tell me what you are going to do to address that horrible problem that exists in the world right now with all these innocents being slaughtered in the battle space. Something needs to be done. And to me, this is one possible way.

[00:53:55]

Paul Scharre thinks a comprehensive ban is just not practical.

[00:54:00]

Instead, he thinks we should focus on banning lethal autonomous weapons that specifically target people, that is anti-personnel weapons. In fact, the landmine treaty bans anti-personnel landmines but not, say, anti-tank landmines.

[00:54:18]

One of the challenging things about anti-personnel weapons is that you can't stop being a person if you want to avoid being targeted. So if you have a weapon that's targeting tanks, you can come out of a tank and run away. That's a good way to effectively surrender and render yourself out of combat.

[00:54:35]

If it's even targeting, say, handheld weapons, you could set down your weapon and run away from it.

[00:54:40]

So do you think it would be practical to actually get either a treaty or at least an understanding that countries should forswear anti-personnel lethal autonomous weapons?

[00:54:52]

I think it's easier for me to envision how you might get to actual restraint. You make sure that the weapon that countries are giving up is not so valuable that they can't still defeat those who might be willing to cheat. And I think it's really an open question how valuable autonomous weapons are. But my suspicion is that they are not as valuable or necessary in an anti-personnel context.

[00:55:16]

Former fighter pilot and Duke professor Missy Cummings thinks it's just not feasible to ban lethal autonomous weapons.

[00:55:25]

Like, you can't ban people developing computer code. It's not a productive conversation to start asking for bans on technologies that are almost as common as the air we breathe. Right. So we are not in the world of banning nuclear technologies. And because it's a different world, we need to come up with new ideas.

[00:55:47]

What we really need is that we make sure that we certify these technologies in advance.

[00:55:53]

How do you actually do the tests to certify that the weapon does at least as well as a human? That's actually a big problem, because no one on the planet, not the Department of Defense, not Google, not Uber, not any driverless car company, understands how to certify autonomous technologies.

[00:56:11]

So, for driverless cars: they can come to an intersection and they will never prosecute that intersection the same way. A sun angle can change the way that these things think. We need to come up with some out-of-the-box thinking about how to test these systems to make sure that they're seeing the world, and I'm doing that in air quotes, in a way that we are expecting them to see the world. And this is why we need a national agenda to understand how to do testing, to get to a place that we feel comfortable with the results. Suppose you're successful and you get the Pentagon and the driverless car folks to actually do real-world testing.

[00:56:53]

What about the rest of the world? What's going to happen?

[00:56:57]

So one of the problems that we see in all technology development is that the rest of the world doesn't agree with our standards. It is going to be a problem going forward.

[00:57:11]

So we certainly should not circumvent testing because other countries are circumventing testing. Finally, there's former Secretary of Defense Ash Carter. Back in 2012, Ash was one of the few people who were thinking about the consequences of autonomous technology. At the time, he was the third-ranking official in the Pentagon, in charge of weapons and technology. He decided to draft a policy, which the Department of Defense adopted. It was issued as Directive 3000.09, Autonomy in Weapons Systems.

[00:57:46]

So I wrote this directive that said, in essence, there will always be a human being involved in the decision making when it comes to lethal force in the military of the United States of America.

[00:58:01]

I'm not going to accept autonomous weapons in a literal sense, because I'm the guy who has to go out the next morning after some women and children have been accidentally killed and explain it to a press conference or a foreign government or a widow. And suppose I go out there, Eric, and I say, oh, I don't know how it happened. The machine did it.

[00:58:23]

Are you going to allow your secretary of defense to walk out and give that kind of excuse? No way I would be crucified.

[00:58:33]

I should be crucified for giving a press conference like that.

[00:58:36]

And I didn't think any future secretary of defense should ever be in that position or allow him or herself to be in that position.

[00:58:44]

That's why I wrote the directive, because Ash wrote the directive that currently prevents U.S. forces from deploying fully autonomous lethal weapons. I was curious to know what he thought about an international ban.

[00:58:56]

I think it's reasonable to think about a national ban, and we have one. Do I think it's reasonable that I get everybody else to sign up to that? I don't, because I think that people will say they'll sign up and then not do it. In general, I don't like fakery in serious matters, and that's too easy to fake.

[00:59:19]

It's too easy to fake, meaning to fake that

[00:59:22]

they have forsworn those weapons, and then we find out that they haven't. And so it turns out they're doing it and they're lying about doing it or hiding that they're doing it. We've run into that all the time. Remember, the Soviet Union signed the biological weapons convention. They ran a very large biological warfare program.

[00:59:43]

They just said they didn't. All right. But take the situation now.

[00:59:47]

What would be the harm of the U.S. signing up to such a thing, but at least building the moral opprobrium around lethal autonomous weapons? Because you're building something else at the same time, which is an illusion of safety for other people.

[01:00:03]

You're conspiring in a circumstance in which they are lied to about their own safety. And I feel very uncomfortable doing that.

[01:00:13]

Paul Scharre sums up the challenge well.

[01:00:16]

Countries are widely divergent in their views on things like a treaty. But there's also been some early agreement that at some level we need humans involved in these kinds of decisions. What's not clear is at what level. Is it the level of people choosing every single target, or people deciding at a higher level what kinds of targets are to be attacked?

[01:00:38]

How far are we comfortable removing humans from these decisions?

[01:00:42]

If we had all the technology in the world, what decisions would we want humans to make in war, and why? What decisions require uniquely human judgment, and why is that? And I think if we can answer that question, we'll be in a much better place to grapple with the challenge of autonomous weapons going forward. Conclusion, choose your planet. So there you have it: fully autonomous lethal weapons. They might keep our soldiers safer, minimize casualties and protect civilians.

[01:01:22]

But delegating more decision making to machines might have big risks. In unanticipated situations, they might make bad decisions that could spiral out of control. With no Stanislav Petrov in the loop, they might even lead to flash wars. The technology might also fall into the hands of dictators and terrorists. And it might change us as well, by increasing the moral buffer between us and our actions. But as war gets faster and more complex, would it really be practical to keep humans involved in decisions?

[01:02:00]

Is it time to draw a line? Should we press for an international treaty to completely ban what some call killer robots? What about a limited ban, or just a national ban in the U.S.? Or would all this be naive? Would nations ever believe each other's promises? It's hard to know, but the right time to decide about fully autonomous lethal weapons is probably now, before we've gone too far down the path. The question is, what can you do? A lot.

[01:02:33]

It turns out you don't have to be an expert, and you don't have to do it alone. When enough people get engaged, we make wise choices. Invite friends over, virtually for now, in person when it's safe, for dinner and debate about what we should do. Or organize a conversation for a book club or a faith group or a campus event. Talk to people with firsthand experience, those who have served in the military or been refugees from war. And don't forget to email your elected representatives to ask what they think.

[01:03:08]

That's how questions get on the national radar. You can find lots of resources and ideas at our website, Brave New Planet dot org. It's time to choose our planet. The future is up to us. I don't want a truly autonomous car. I don't want to come to the garage and have the car say, I've fallen in love with the motorcycle and I won't drive you today because I'm autonomous.

[01:03:46]

Brave New Planet is a co-production of the Broad Institute of MIT and Harvard, Pushkin Industries and The Boston Globe with support from the Alfred P. Sloan Foundation. Our show is produced by Rebecca Douglas with Merridew theme song composed by Ned Porter, Mastering and Sound Design by James Gava, fact checking by Joseph Fridmann and a stint and enchante special thanks to Christine Heenan and Rachel Roberts at Clarendon Communications.

[01:04:14]

To Lee McGuire, Kristen Zerilli and Justin Levine at the Broad, to Mia Lobel and Heather Fain at Pushkin, and to Eli and Edythe Broad, who made the Broad Institute possible. This is Brave New Planet. I'm Eric Lander.