[00:00:05]

You're listening to Brave New Planet, a podcast about amazing new technologies that could dramatically improve our world or, if we don't make wise choices, could leave us a lot worse off. Utopia or dystopia. It's up to us. On July 16th, 1969, Apollo 11 blasted off from the Kennedy Space Center near Cape Canaveral, Florida. Twenty-five million Americans watched on television as the spacecraft ascended toward the heavens, carrying Commander Neil Armstrong, lunar module pilot Buzz Aldrin and command module pilot Michael Collins.

[00:00:55]

Their mission: to be the first humans in history to set foot on the moon. Four days later, on Sunday, July 20th, the lunar module separated from the command ship and soon fired its rockets to begin its lunar descent. Five minutes later, disaster struck about a mile above the moon's surface. Program alarms 1201 and 1202 sounded loudly, indicating that the mission computer was overloaded. And then, well, every American knows what happened next on that fateful flight.

[00:01:47]

Good evening, my fellow Americans. President Richard Nixon addressed a grieving nation. Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest in peace. These brave men, Neil Armstrong and Edwin Aldrin, know that there is no hope for their recovery, but they also know that there is hope for mankind in their sacrifice.

[00:02:22]

He ended with the now famous words: for every human being who looks up at the moon in the nights to come will know that there is some corner of another world that is forever mankind. Wait a minute, that never happened. The moon mission was a historic success. The three astronauts returned safely to tickertape parades and a celebratory 38 day world tour. Those alarms actually did sound, but they turned out to be harmless. Nixon never delivered that speech.

[00:03:01]

His speechwriter had written it, but it sat in a folder labeled "In Event of Moon Disaster."

[00:03:09]

Until now. The Nixon you just heard is a deep fake, part of a seven minute film created by artificial intelligence deep learning algorithms. The fake was made by the Center for Advanced Virtuality at the Massachusetts Institute of Technology as part of an art exhibit to raise awareness about the power of synthesized media. Not long ago, something like this would have taken a lot of time and money. But now it's getting easy. You can make new paintings in the style of French impressionism, revive dead movie stars, help patients with neurodegenerative disease, or soon maybe take a class on a tour of ancient Rome.

[00:03:52]

But as the technology quickly becomes democratized, we're getting to the point where almost anyone can create a fake video of a friend, an ex-lover, a stranger or a public figure that's embarrassing, pornographic or perhaps capable of causing international chaos.

[00:04:11]

Some argue that in a culture where fake news spreads like wildfire and political leaders deny the veracity of hard facts, deep fake media may do a lot more harm than good. Today's big question, will synthesized media unleash a new wave of creativity, or will it erode the already tenuous role of truth in our democracy? And is there anything we can do to keep it in check? My name is Eric Lander, I'm a scientist who works on ways to improve human health.

[00:04:56]

I helped lead the Human Genome Project and today I lead the Broad Institute of MIT and Harvard. In the 21st century, powerful technologies have been appearing at a breathtaking pace related to the Internet, artificial intelligence, genetic engineering and more. They have amazing potential upsides, but we can't ignore the risks that come with them. The decisions aren't just up to scientists or politicians. Whether we like it or not, we, all of us, are the stewards of a brave new planet.

[00:05:28]

This generation's choices will shape the future as never before.

[00:05:35]

Coming up on today's episode of Brave New Planet, I speak with some of the leaders behind advances in synthesized media. You could certainly, by the way, generate stories that could be fresh and interesting and new and personal for every child.

[00:05:51]

We got e-mails from people who were quadriplegic and they asked us if we could make them dance.

[00:05:57]

We hear from experts about some of the frightening ways that bad actors can use deep fakes.

[00:06:03]

Predators would chime in and say, you can absolutely make a deep fake sex video of your ex with 30 pictures. I've done it with 20.

[00:06:11]

Here's the things that keep me up at night. All right. A video of Donald Trump saying I've launched nuclear weapons against Iran.

[00:06:18]

And before anybody gets around to figuring out whether this is real or not, we have a global nuclear meltdown. And we explore how we might prevent the worst abuses.

[00:06:27]

It's important that younger people advocate for the Internet that they want. We have to fight for it. We have to ask for different things.

[00:06:38]

Stay with us. Chapter one. Abraham Lincoln's head. To begin to understand the significance of deep fake technology, I went to San Francisco to speak with a world expert on synthetic media.

[00:06:58]

My name is Alexei, or sometimes I'm called Alyosha, Efros, and I'm a professor at UC Berkeley in the Computer Science and Electrical Engineering Department. My research is on computer vision, computer graphics, machine learning, various aspects of artificial intelligence.

[00:07:19]

Where'd you grow up? I grew up in St. Petersburg in Russia. I was one of those geeky kids playing around with computers or dreaming about computers. My first computer was actually the first Soviet personal computer.

[00:07:38]

So you actually are involved in making sort of synthetic content, synthetic media?

[00:07:44]

That's right. Alexei has invented powerful artificial intelligence tools, but his lab also has a wonderful ability to use computers to enhance the human experience. I was struck by a remarkable video on YouTube created by his team at Berkeley.

[00:08:00]

So this was a project that actually was done by my students who didn't even think of this as anything but a silly little toy project of trying to see if we could get a geeky computer science student to move like a ballerina.

[00:08:20]

In the video, one of the students, Caroline Chan, dances with the skill and grace of a professional despite never having studied ballet. The idea is you take a source actor, like a ballerina. There is a way to detect the limbs of the dancer, have a kind of a skeleton extracted, and also have my student just move around and do some geeky moves. And now we are basically just going to try to synthesize the appearance of my student, driven by the skeleton of the ballerina, and put it all together.

[00:08:58]

And then we have our grad student dancing pirouettes like a ballerina, through artificial intelligence.

[00:09:06]

Caroline's body is puppeteered by the dancer.

[00:09:09]

We weren't even going to publish it, but we just released a video on YouTube called Everybody Dance Now. And somehow it really touched a nerve.

[00:09:21]

While there's been an explosion recently of new ways to manipulate media, Alexei notes that the idea itself isn't new.

[00:09:29]

It has a long history.

[00:09:31]

I can't help but ask, given that you come from Russia: one of the premier users of doctored photographs, I think, was Stalin, who used the ability to manipulate images for political effect.

[00:09:47]

How did they do that? Can you think of examples of this? And like, what was the technology then?

[00:09:53]

The urge to change photographs has been around basically since the invention of photography. For example, there is a photograph of Abraham Lincoln that still hangs in many classrooms. That's fake.

[00:10:07]

It's actually Calhoun with Lincoln's head attached. Alexei's referring to John C. Calhoun, the South Carolina senator and champion of slavery. A Civil War-era portrait artist superimposed a photo of Lincoln's head onto an engraving of Calhoun's body because he thought Lincoln's gangly frame wasn't dignified enough.

[00:10:31]

And so they just said, OK, what can we use? Calhoun. Let's slap Lincoln's head on his body. And then, of course, as soon as you go into the 20th century, as soon as you get to dictatorships, this is a wonderful toy for a dictator to use. So, again, Stalin was a big fan of this. He would get rid of people in photographs once they were out of favor or once they got jailed or killed; he would just basically get them scratched out with reasonably crude techniques.

[00:11:06]

Hitler did it.

[00:11:07]

Mao did it. Castro did it. Brezhnev did it.

[00:11:10]

I'm sure U.S. spy agencies have done it. Also, we have always manipulated images with a desire to change history.

[00:11:18]

This is Hany Farid. He's also a professor at Berkeley and a friend of Alexei's.

[00:11:23]

I'm a professor of computer science and I'm an expert in digital forensics.

[00:11:27]

Where Alexei works on making synthetic media, Hany has devoted his career to identifying when synthetic media is being used to fool people.

[00:11:37]

That is, spotting fakes. He regularly collaborates on this mission with Alexei.

[00:11:43]

So I met Alyosha Efros ten, twenty years ago. He is a really remarkably creative and clever guy, and he has done what I consider some of the most interesting work in computer vision and computer graphics over the last two decades. And if you really want to do forensics well, you have to partner with somebody like Alyosha. You have to partner with a world class mind who knows how to think about the synthesis side so that you can synthesize the absolute best content and then think about how to detect it.

[00:12:15]

I think it's interesting that if you're somebody on the synthesis side and also developing the forensics, there's a little bit of a Jekyll and Hyde there. And I think it's really fascinating.

[00:12:23]

You know, the idea of altering photos, it's not entirely new. How far back does this go?

[00:12:30]

So we used to have, in the days of Stalin, a highly talented, highly skilled, time consuming, difficult process of manipulating images, removing somebody, erasing something from the image, splicing faces together. And then we moved into the digital age, where now a highly talented digital artist could remove one face and add another face. But it was still time consuming and required skill.

[00:12:56]

In 1994, the makers of the movie Forrest Gump won an Oscar for visual effects for their representations of the title character interacting with historical figures like President John F. Kennedy.

[00:13:11]

How does it feel to be an All-American? Very good. Congratulations. How do you feel? I gotta pee. I believe he said he had to go pee.

[00:13:21]

Now, computers are doing all of the heavy lifting of what used to be relegated to talented artists. The average person now can use sophisticated technology to not just capture the recording, but also manipulate it and then distribute it.

[00:13:34]

The tools used to create synthetic media have grown by leaps and bounds, especially in the past few years.

[00:13:41]

And so now we have technology broadly called deep fake, but more specifically should be called synthesized content, where you point an image or a video or an audio to an AI or machine learning system, and it will replace the face for you. And it can do that in image, it can do that in a video, or it can synthesize audio for you in a particular person's voice. It's become straightforward to swap people's faces. There's a popular YouTube video that features tech pioneer Elon Musk's adult face on a baby's body.

[00:14:18]

And there's a famous meme where actor Nicolas Cage's face replaces those of leading movie actors, both male and female. You can put words into people's mouths and make them jump and dance and run. You can even resurrect powerful figures and have them deliver a fake speech about a fake tragedy from an altered history. Chapter two. Creating Nixon. The text of Nixon's moon disaster speech that we heard at the top of the show is actually not fake. As I mentioned, it was written for President Nixon as a contingency speech and thankfully never had to be delivered.

[00:15:03]

It's an amazing piece of writing. It was written by Bill Safire, who was one of Nixon's speechwriters.

[00:15:10]

This is artist and journalist Francesca Panetta. She's the co-director of the Nixon fake on MIT's moon disaster team. She's also the creative director at MIT's Center for Advanced Virtuality.

[00:15:24]

I was doing experimental journalism at the Guardian newspaper.

[00:15:29]

I ran The Guardian's virtual reality studio for the last three years. The second half of the moon disaster team is sound artist Halsey Burgund. My name is Halsey Burgund. I'm a sound artist and technologist, and I've had a lot of experience with lots of sorts of audio enhanced with technology, though this is my first experience with synthetic media, especially since I typically focus on authenticity of voice. And now I'm kind of doing the opposite.

[00:15:55]

So together, Halsey and Francesca chose to recreate a tragic moment in history that never actually happened.

[00:16:03]

I think it all started with it being the fiftieth anniversary of the moon landing last year. And add on top of that an election cycle in this country and dealing with information, which is obviously very important in election cycles. It was like light bulbs went on and we got very excited about pursuing it.

[00:16:21]

It's possible to make mediocre fakes pretty quickly and cheaply. But Francesca and Halsey wanted high production values. So how does one go about making a first rate fake presidential address?

[00:16:33]

There are two components. There's the visuals and there's the audio, and they are completely different processes. So we decided to go with a video dialogue replacement company called Canny AI, who would do the visuals for us.

[00:16:48]

And then we decided to go with Respeecher, a voice dialogue replacement company, for the voice of Nixon.

[00:16:56]

They tackled the voice first, the more challenging of the two mediums. What we were told to do was to get two to three hours worth of Nixon talking. That was pretty easy because the Nixon Library has hours and hours of Nixon mainly giving Vietnam speeches.

[00:17:13]

The communist armies of North Vietnam launched a massive invasion of South Vietnam.

[00:17:18]

That audio was then chopped up into chunks between one and three seconds long. We found this incredibly patient actor called Lewis Wheeler. Lewis would listen to the one second clip and then he would repeat it. And do what I believe was right. And do what I believe was right.

[00:17:42]

Respeecher would say to us things like, we need to change the diagonal attention, which meant nothing to us.

[00:17:48]

Yes, we have a whole lot of potential band names going forward. That's not the first. Synthetic Nixon is another good one.

[00:17:58]

So once we have our Nixon model made out of these thousands of tiny clips, it means that whatever our actor says will come out then in Nixon's voice. So then what we did was record the contingency speech of Nixon, and it meant that we got Lewis's acted performance, but in Nixon's voice. What about the video part? I mean, the video is much easier.

[00:18:24]

We're talking a couple of days here and a tiny amount of data, just with Lewis's iPhone. We filmed him reading the contingency speech once, a couple of minutes of him just chatting to camera, and that was it.

[00:18:39]

Fate has ordained that the men who went to the moon to explore in peace will stay on the moon to rest, rest in peace.

[00:18:48]

You know, we were told by Canny AI that everything would be the same in the video, apart from just the area around the mouth. So every gesture of the hand, every blink, every time he moved his face, all of that would stay the same. But just the mouth basically would change.

[00:19:06]

So we used Nixon's resignation speech. To have served in this office

[00:19:11]

is to have felt a very personal sense of kinship.

[00:19:16]

It was the speech where Nixon looked the most somber, where he seemed to have the most emotion in his face.

[00:19:23]

So what actually went on in the computer? Artificial intelligence sometimes sounds inscrutable, but the basic ideas are quite simple. In this case, it uses a type of computer program called an autoencoder. It's trained to take complicated things, say spoken sentences or pictures, encode them in a much simpler form, and then decode them to recover the original as best it can. The encoder tries to reduce things to their essence, throwing away most of the information but keeping enough to do a good job of reconstructing it. To make a deep fake,

[00:20:02]

here's the trick: train a speech autoencoder for Nixon-to-Nixon and a speech autoencoder for actor-to-actor, but force them to use the same encoder.

[00:20:15]

Then you can input the actor and decode him as Nixon. If you have enough data, it's a piece of cake.
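To make the shared-encoder trick concrete, here is a minimal toy sketch in PyTorch. It is an illustration of the idea only, not the actual system Respeecher or the moon disaster team used; the feature sizes and network shapes are assumptions.

```python
# Toy sketch of the shared-encoder / two-decoder trick behind voice conversion.
# Illustration only; real systems are far larger and work on richer audio features.
import torch
import torch.nn as nn

FEAT = 128   # size of one audio feature frame (assumed for illustration)
CODE = 32    # size of the compressed "essence" the encoder keeps

encoder = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, CODE))        # shared
decoder_nixon = nn.Sequential(nn.Linear(CODE, 64), nn.ReLU(), nn.Linear(64, FEAT))  # Nixon-to-Nixon
decoder_actor = nn.Sequential(nn.Linear(CODE, 64), nn.ReLU(), nn.Linear(64, FEAT))  # actor-to-actor

params = (list(encoder.parameters()) +
          list(decoder_nixon.parameters()) +
          list(decoder_actor.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(nixon_batch, actor_batch):
    """Each decoder learns to reconstruct its own speaker,
    but both are forced through the same shared encoder."""
    opt.zero_grad()
    loss = (loss_fn(decoder_nixon(encoder(nixon_batch)), nixon_batch) +
            loss_fn(decoder_actor(encoder(actor_batch)), actor_batch))
    loss.backward()
    opt.step()
    return loss.item()

# After training: encode the actor's performance, then decode with the Nixon decoder
# to get the actor's delivery rendered in Nixon's "style".
with torch.no_grad():
    actor_clip = torch.randn(10, FEAT)   # stand-in for real actor audio features
    fake_nixon = decoder_nixon(encoder(actor_clip))
```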

[00:20:27]

Around their carefully created video, the moon disaster team created an entire art installation: a 1960s living room with a fake vintage newspaper sharing the fake tragic news, while a fake Nixon speaks solemnly on a vintage black and white television.

[00:20:45]

Some people, when they were watching the installation, they watched a number of times. You'd see them, they'd watch it once, and they would watch again staring at the lips to see if they could see any lack of synchronicity. We had some people who thought that perhaps Nixon had actually recorded this speech as a contingency speech for it to go into television.

[00:21:06]

Lots of folks who are listening, viewing, and even press folks just immediately said, oh, the voice is real, or whatever. They said these things that weren't accurate because they just felt like there wasn't even a question. I suppose that is what we wanted to achieve. But at the same time, it was a little bit eye opening and, like, a little scary, you know, that that could happen.

[00:21:29]

Chapter three. Everybody dance. What do you see as just the wonderful upsides of having technologies like this?

[00:21:42]

Yeah, I mean, AI art is becoming a whole field in itself. So creatively there is enormous potential.

[00:21:50]

One of the potential positive educational uses of deep fake technology would be to bring historical figures back to life, to make learning more durable. I think one could do that with bringing Abraham Lincoln back to life and having him deliver speeches.

[00:22:04]

Film companies are really excited about reenactments. We're already beginning to see this in films like Star Wars, where we're bringing people like Carrie Fisher back to life. I mean, that is at the moment not being done through deep fake technology. This is using fairly traditional techniques of CGI at the moment. So we still have to see our first deep fake big cinema screen release. But this is just to come, like the technology is getting better and better.

[00:22:31]

Not only will we be able to potentially bring back actors and actresses who are no longer alive and have them star in movies, but an actor could make a model of their own voice and then sell the use of that voice to anybody to do a voiceover of whatever is wanted. And so they could have 20 of these going on at the same time. And the sort of restriction of their physical presence is no longer there. And that might mean that Brad Pitt is in everything, or it might just mean that lower budget films can afford to have some of the higher cost talent.

[00:23:05]

At that point, you know, the top 20 actors could just do everything. Yes. There's no doubt that there will be winners and losers from these technologies. But the potential of synthetic media goes way beyond the arts. There are possible medical and therapeutic applications.

[00:23:20]

There are companies that are working very hard to allow people who have either lost their voice or who never had a voice to be able to speak in a way that is either how they used to speak or in a way that isn't a canned voice that everybody has. Alexei Efros.

[00:23:37]

His students discovered potential uses of synthetic media in medicine quite unintentionally while working on their Everybody Dance Now project that could turn anyone into a ballerina.

[00:23:50]

We were kind of surprised by all the positive feedback we got. We got emails from people who were quadriplegic and they asked us if we could make them dance, and it was very unexpected. So now we are trying to get the software to be in a state where people can use it because, yeah, it somehow did hit a nerve with folks.

[00:24:14]

Chapter four. Unicorns in the Andes.

[00:24:20]

The past few years have seen amazing advances in the creation of synthetic media through artificial intelligence, the technology now goes far beyond fitting one face over another face in a video. A recent breakthrough has made it possible to create entirely new and very convincing content out of thin air.

[00:24:41]

The breakthrough, called generative adversarial networks, or GANs, came from a machine learning researcher at Google named Ian Goodfellow. Like autoencoders, the basic idea is simple but brilliant. Suppose you want to create amazingly realistic photos of people who don't exist. Well, you build a GAN consisting of two computer programs: a photo generator that learns to generate fake photos, and a photo discriminator that learns to discriminate, or identify, fake photos from a vast collection of real photos.

[00:25:20]

You then let the two programs compete, continually tweaking their code to outsmart each other. By the time they're done, the GAN can generate amazingly convincing fakes.
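As a rough illustration of that generator-versus-discriminator competition, here is a bare-bones GAN training loop in PyTorch. It runs on made-up two-dimensional "data" rather than photos; the sizes and learning rates are assumptions for the sketch, not the settings of any real face-generating system.

```python
# Bare-bones GAN sketch: a generator learns to fool a discriminator on toy data.
import torch
import torch.nn as nn

NOISE, DATA = 16, 2  # latent noise size and toy data dimension (assumed)

G = nn.Sequential(nn.Linear(NOISE, 32), nn.ReLU(), nn.Linear(32, DATA))           # generator
D = nn.Sequential(nn.Linear(DATA, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()) # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for "a vast collection of real photos": points clustered near (2, 2).
    return torch.randn(n, DATA) * 0.1 + 2.0

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), NOISE))

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(real.size(0), 1)) + \
             bce(D(fake.detach()), torch.zeros(real.size(0), 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to make the discriminator call its fakes "real".
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(real.size(0), 1))
    g_loss.backward()
    opt_g.step()
```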

[00:25:31]

You can see for yourself if you go to the website thispersondoesnotexist.com.

[00:25:38]

Every time you refresh the page, you're shown a new uncanny image of a person who, as the website says, does not and never did exist.

[00:25:49]

Francesca and I actually tried out the website. This young Asian woman, she's got a great complexion, envious of that. Neat black hair with a fringe, pink lipstick, and a slightly dreamy look as she's kind of gazing off to her left.

[00:26:10]

Oh, here's a woman who looks like she could be a neighbor of mine in Cambridge, probably about 65, she's got nice wireframe glasses, layered hair, her earrings don't actually match, but that could just be her distinctive style.

[00:26:27]

I mean, of course, she doesn't really exist.

[00:26:31]

It's hard to argue that GANs aren't creating original art. In fact, an artist collective recently used a GAN to create a French impressionist style portrait. When Christie's sold it at auction, it fetched an eye popping 432 thousand dollars.

[00:26:52]

Alexei Efros, the Berkeley professor, recently pushed GANs a step further, creating something called CycleGANs. By connecting two GANs together in a clever way, CycleGANs can transform a Monet painting into what seems like a photograph of the same scene, or turn a summer landscape into a winter landscape of the same view. Alexei's CycleGANs seem like magic. If you were to add in virtual reality, the possibilities become mind blowing.
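The "clever way" the two GANs are connected is, at its core, a cycle-consistency constraint: translating Monet-to-photo and then photo-back-to-Monet should give back roughly the original painting. Below is a toy sketch of just that extra loss term, with invented tensor sizes and simple linear layers standing in for the real generator networks.

```python
# Sketch of the cycle-consistency idea behind CycleGAN (toy tensors, illustration only).
import torch
import torch.nn as nn

D_IMG = 256  # flattened toy "image" size -- assumed for illustration

G_ab = nn.Linear(D_IMG, D_IMG)  # translate domain A (Monet) -> domain B (photo)
G_ba = nn.Linear(D_IMG, D_IMG)  # translate domain B (photo) -> domain A (Monet)
l1 = nn.L1Loss()

def cycle_loss(real_a, real_b):
    """Going A->B->A (and B->A->B) should reproduce the input: that constraint
    is what lets CycleGAN learn translations without paired examples."""
    return l1(G_ba(G_ab(real_a)), real_a) + l1(G_ab(G_ba(real_b)), real_b)

# In a full CycleGAN this term is added to the usual adversarial losses
# from two discriminators, one per domain.
monet = torch.randn(8, D_IMG)
photo = torch.randn(8, D_IMG)
loss = cycle_loss(monet, photo)
```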

[00:27:26]

You may be reminiscing about walking down the Boulevard Saint-Germain in Paris, and with a few clicks you are there, and you're walking down the block and you're looking at all the buildings, and maybe you can even switch to a different year. And that is, I think, very exciting as a way to mentally travel to different places.

[00:27:51]

So if you do this in VR, I mean, imagine classes going on a class visit to ancient Rome. That's right.

[00:27:58]

You could imagine, from how a particular city like Rome looks now, trying to extrapolate how it looked in the past.

[00:28:07]

It turns out that GANs aren't just transforming images. I spoke with a friend who's very familiar with another remarkable application of the technology.

[00:28:18]

My name is Reid Hoffman. I'm the host of the podcast Masters of Scale. I'm a partner at Greylock, which is where we're sitting right now, co-founder of LinkedIn, and then a variety of other AI centric hobbies.

[00:28:29]

Reid is a board member of an unusual organization called OpenAI.

[00:28:34]

OpenAI is highly concerned with artificial general intelligence, human level intelligence. I helped Sam Altman and Elon Musk start it up. The basic concern was that if one company created and deployed that, it could be destabilizing in all kinds of ways. And so the thought is, if it could be created, we should make sure that there is essentially a non-profit that is creating this, and that it can make that technology available at selective time slices to industry as a whole, government, etc.


[00:29:09]

Last year, OpenAI released a program that uses GANs to write language from a short opening prompt. The system, called GPT-2, can spin a convincing article or story. Instead of a deep fake video, it's deep fake text. It's pretty amazing, actually. For example, OpenAI researchers gave the program the following prompt.

[00:29:35]

In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains.

[00:29:43]

Even more surprising to the researchers was the fact that the unicorns spoke perfect English.

[00:29:48]

GPT-2 took it from there, delivering nine crisp paragraphs on the landmark discovery. I asked Fran to read a bit from the story.

[00:29:58]

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions were exploring the Andes Mountains when they found a small valley, with no other animals or humans.

[00:30:11]

Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow. Pérez and the others then ventured further into the valley. By the time we reached the top of one peak, the water looked blue, with some crystals on top, said Pérez. Pérez and his friends were astonished to see the unicorn herd.
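For readers curious how little it takes to reproduce this kind of prompt continuation today, here is a short sketch using the publicly released GPT-2 weights through the Hugging Face transformers library. The sampling settings here are arbitrary choices, and the output will differ from the unicorn story above on every run.

```python
# Sketch: continue a prompt with the publicly released GPT-2 model
# (via the Hugging Face "transformers" library; output varies run to run).
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = ("In a shocking finding, scientists discovered a herd of unicorns living in a "
          "remote, previously unexplored valley in the Andes Mountains.")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=200,                        # keep the continuation short
    do_sample=True,                        # sample instead of always taking the likeliest word
    top_p=0.9,                             # nucleus sampling, a common setting for coherent text
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```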

[00:30:35]

So tell me some of the great things you can do with language generation. Well, say, for example, entertainment: generate stories that could be fresh and interesting and new and personal for every child.

[00:30:47]

Embed educational things in those stories. They're drawn in by the fact that the story involves them and their friends, but it also now brings in grammar and math and other kinds of things as they're doing that. Generate explanatory material, the kind of education that works best for this audience, for this kind of person: we want to have this kind of math or this kind of physics or this kind of history or this kind of poetry explained in the right way. And also in the same language.

[00:31:18]

Right. Like, you know, your native language.

[00:31:21]

When OpenAI announced its breakthrough program for text generation,

[00:31:25]

it took the unusual step of not releasing the full powered version because it was worried about the possible consequences.

[00:31:32]

Now, part of the OpenAI decision to say we're going to release a smaller model than the one we did is because we think that the deep fake problem hasn't been solved.

[00:31:43]

And by the way, some people complained about that because they said, well, you're slowing down our ability to make progress. And so far, the answer is, hey, look, when these are released to the entire public, we cannot control the downsides as well as the upsides. From art to therapy to virtual time travel, personalized stories and education,

[00:32:06]

synthetic media has amazing upsides. What could possibly go wrong? Chapter five. What could possibly go wrong? The downsides are actually not hard to find. The ability to reshape reality brings extraordinary power, and people inevitably use power to control other people.

[00:32:30]

It should be no surprise, therefore, that 96 percent of deep fake videos posted online are nonconsensual pornography: videos, almost always of women, manipulated to depict sex acts that never actually occurred. I spoke with a professor who studies deep fakes, including digital attempts to control women's bodies.

[00:32:52]

I'm Danielle Citron and I am a law professor at Boston University School of Law. I write about privacy, technology, automation, my newest work and my next book is going to be about sexual privacy. So I worked in and around consumer privacy, individual rights, civil rights. I write a lot about free speech and then automated systems.

[00:33:13]

When did you first become aware of deep fakes?

[00:33:17]

Do you remember when this crossed your radar? I do. So there was a Reddit thread devoted to, you know, fake pornography movies of Gal Gadot, Emma Watson. But the Reddit thread sort of spooled out to not just celebrities, but ordinary people. And so you had predators asking each other, how do I make a deep fake sex video of my ex-girlfriend? I have 30 pictures. And then other predators would chime in and say, look at this YouTube tutorial.

[00:33:43]

You can absolutely make a deep fake sex video of your ex with 30 pictures. I've done it with 20.

[00:33:49]

In November 2017, an anonymous predator began posting synthesized porn videos under the pseudonym Deep Fakes, perhaps a nod to the deep learning technology used to create them, as well as the 1970s porn film Deep Throat.

[00:34:07]

The Internet quickly adopted the term deep fakes and broadened its meaning beyond pornography. To create the videos, he used celebrity faces from Google image search and YouTube videos and then trained an algorithm on that content together with pornographic videos. Have you seen deep fake pornography videos? Yes. So still pretty crude. So you probably can tell that it's a fake. But for the person who's inserted into pornography, it's devastating. You use the neural network technology, the artificial intelligence technology, to create out of digital whole cloth pornography videos, using probably real pornography and then inserting the person into the pornography, so they become the actress, if it's a female, and it's usually a female, in that video.

[00:35:01]

My name is Noelle Martin, and I am an activist and law reform campaigner in Australia.

[00:35:11]

Noelle is 26 years old and she lives in Perth, Australia.

[00:35:15]

So the first time that I discovered myself on pornographic sites was when I was 18 and,

[00:35:27]

out of curiosity, decided to do a Google image search of myself. In an instant, like in less than a millisecond, my life completely changed.

[00:35:38]

At first it started with photos, still images stolen from Noelle's social media accounts.

[00:35:44]

They would then doctor my face from ordinary images, superimposing it onto the bodies of women, depicting me having sexual intercourse.

[00:35:58]

It proved impossible to identify who was manipulating Noelle's image in this way. It's still unclear today, which made it difficult for her to seek legal action.

[00:36:07]

I went to the police. Soon after, I contacted government agencies, tried getting a private investigator. Essentially, there's nothing that they could do. The sites are hosted overseas. The perpetrators are probably overseas. The reaction was, at the end of the day, I think you can contact the webmasters to try and get things deleted.

[00:36:31]

You know, you can adjust your privacy settings so that, you know, nothing is available to anyone publicly. It was an unwinnable situation.

[00:36:40]

Then things started to escalate. In 2018, Noelle saw a synthesized pornographic video of herself.

[00:36:48]

And I believe that it was done for the purposes of silencing me, because I've been very public about my story and advocating for change. So I had actually got an email from a fake email address, and, you know, I clicked the link. I was actually at work. It was a video of me having sexual intercourse. The title had my name. The face of the woman in it was edited so that it was my face. And, you know, all the tags were like Noelle Martin, Australia, feminist.

[00:37:27]

And it didn't look real.

[00:37:31]

But the context of everything with the title, with my face, with the tags, all points to me being depicted in this video.

[00:37:41]

The fakes were of poor quality, but porn consumers aren't a discriminating lot and many people reacted to them as if they were real.

[00:37:50]

The public reaction was horrifying to me. I was victim blamed and slut shamed, and it's definitely limited the course of where I can go in terms of career and employment.

[00:38:02]

Noelle finished a degree in law and began campaigning to criminalize this sort of content.

[00:38:08]

My advocacy and my activism started off because I had a lived experience of this, and I experienced it at a time when the distribution of altered intimate images or altered intimate videos wasn't criminalised in Australia. And so I had to petition, meet with my politicians in my area. I wrote a number of articles. I spoke to the media, and I was involved in the law reform in Australia in a number of jurisdictions, in Western Australia and New South Wales. And I ended up being involved in two press conferences with the attorneys general of each state at the announcement of the law that was criminalising this abuse.

[00:38:58]

Today, in part because of Noelle's activism, it is illegal in Australia to distribute intimate images without consent, including intimate images and videos that have been altered. Although it doesn't encompass all malicious synthetic media, Noelle has made a solid start. Chapter six. Scissors and Glue. The videos depicting Noelle Martin were nowhere near as sophisticated as those made by the moon disaster team. They were more cheap fakes than deep fakes. And yet the fakes didn't have to be perfect to be devastating.

[00:39:39]

The same turns out to be true in politics. To understand the power of fakes. You have to understand human psychology.

[00:39:47]

It turns out that people are pretty easy to fool. John Kerry.

[00:39:51]

He was running for president of the U.S. His stance on the Vietnam War was controversial. Jane Fonda, of course, was a very controversial figure back then because of her anti-war stance.

[00:40:02]

What have we become as a nation if we call the men heroes that were used by the Pentagon to try to exterminate an entire people? What business have we to try to exterminate a people?

[00:40:10]

And somebody had created a photo of the two of them sharing a stage at an anti-war rally with the hopes of damaging the Kerry campaign. The photo was fake, and they had never shared a stage together. They just took two images, probably put them into some standard photo editing software, like Photoshop, and just put a headline around it, and out to the world it went. And I will tell you, I remember the most fascinating interview I've heard in a long time was right after the election.

[00:40:37]

Kerry, of course, lost and a voter was being interviewed and asked how they voted and he said he couldn't vote for Kerry. And the interviewer said, well, why not? And the gentleman said, I couldn't get that photo of John Kerry and Jane Fonda out of my head. And the interviewer said, well, you know, that photo is fake. And the guy said, much to my surprise, yes, but I couldn't get it out of my mind.

[00:40:58]

And this shows you the power of visual imagery. Like even after I tell you something is fake, it still had an impact on somebody. And I thought, wow, we're in a lot of trouble, because it is very, very hard to put the cat back into the bag. Once that content is out there, you can't undo it.

[00:41:15]

So seeing is believing, even above thinking.

[00:41:19]

Yeah, that seems to be the rule. There is very good evidence from the social science literature that it's very, very difficult to correct the record after the mistakes are out there.

[00:41:28]

Law professor Danielle Citron also notes that humans tend to pass on information without thinking, which triggers what she calls information cascades.

[00:41:39]

An information cascade is a phenomenon where we have so much information overload that when someone sends us some information and we trust that person, we pass it on. We don't even check its veracity. And so information can go viral fairly quickly, because we're not terribly reflective, because we act on impulse.

[00:42:00]

Danielle says that information cascades have been given new life in the 21st century through social media.

[00:42:07]

Think about the 20th century phenomenon where we got most of our information from trusted sources, trusted newspapers, trusted major US TV channels. Growing up, we only had a few, you know, we didn't have a million. And they were adhering to journalistic ethics and commitments to truth and neutrality, a notion that you can't publish something without checking it. Now we are publishing information ourselves, and most of the time we're relying on our peers and our friends. Social media platforms are designed to tailor our information diet to what we want and to our pre-existing views.

[00:42:44]

So we're locked in a digital echo chamber. We think everybody agrees with us. We pass on that information. We haven't checked the veracity. It goes wild. We're especially likely to pass it on if it's negative and novel. Why is that? It's just one of our weaknesses. We know how gossip goes like wildfire online. So, like, Hillary Clinton is running a sex ring. That's crazy. Oh, my God. Did you hear about that?

[00:43:14]

I'll post it on Facebook. Eric, you pass it on. We just can't help ourselves. And it is much in the way that we love sweets and fats and pizza. You know, we indulge.

[00:43:26]

We don't think. In some sense, this phenomenon is an old phenomenon, right? There's the famous observation by Mark Twain about how a lie gets halfway around the world before the truth gets its pants on.

[00:43:39]

Yeah, the truth is still in the bedroom getting dressed. And we often will see the lie, but the rebuttal is not seen. It's often lost in the noise of the defamatory statements. That is not new. But what is new is that a number of things about our information ecosystem are force multipliers.

[00:44:02]

Chapter seven. Truth Decay. Many experts are worried that the rapid advances in making fakes, combined with the catalyst of information cascades, will undermine democracy. The biggest concerns have focused on elections globally.

[00:44:25]

We are looking at highly polarized situations where this kind of manipulated media can be used as a weapon.

[00:44:34]

One of the main reasons Francesca and Halsey made their Nixon deep fake was to spread awareness about the risks of misinformation campaigns before the 2020 US presidential election.

[00:44:46]

Similarly, a group showcased the power of deep fakes by making videos in the run up to the UK parliamentary election, showing the two bitter rivals, Boris Johnson and Jeremy Corbyn, each endorsing the other.

[00:45:00]

I wish to rise above this divide and endorse my worthy opponent, the Right Honourable Jeremy Corbyn, to be prime minister of our United Kingdom. Back Boris Johnson to continue as our prime minister.

[00:45:12]

But you know what? Don't listen to me. I think I may be one of the thousands of fakes on the Internet using powerful technologies to tell stories that aren't true.

[00:45:24]

So this just kind of indicates how candidates and political figures can be misrepresented. And you just need to feed them into, you know, people's social media feeds for them to be seeing this at times when the stakes are pretty high.

[00:45:40]

So far, we haven't yet seen sophisticated deep fakes in U.S. or UK politics. That might be because fakes will be most effective if they're timed for maximum chaos, say, close to Election Day, when newsrooms won't have the time to investigate and debunk them. But another reason might be that cheap fakes made with basic video editing software are actually pretty effective. Remember the video that surfaced of House Speaker Nancy Pelosi in which she appeared intoxicated and confused?

[00:46:13]

We want to give this president the opportunity to do something historic for our country.

[00:46:22]

Both President Trump and Rudy Giuliani shared the video as fact on Twitter. The video was just a cheap fake: someone simply slowed down Pelosi's speech to make her seem incompetent. But maybe elections won't be the biggest targets. Some people worry that deep fakes could be weaponized to foment international conflict.

[00:46:44]

Berkeley professor Hany Farid has been working with the U.S. government's media forensics program to address this issue.

[00:46:51]

DARPA, the Defense Department's research arm, has been pouring a lot of money over the last five years into this program. They are very concerned about how this technology can be a threat to national security, and also, when we get images and videos from around the world in areas of conflict, how do we know if they're real or not? Is this really an image of a U.S. soldier who has been taken hostage?

[00:47:14]

How do we know?

[00:47:15]

So what do you see as some of the worst case scenarios? Here's the things that keep me up at night. All right. A video of Donald Trump saying I've launched nuclear weapons against Iran. And before anybody gets around to figuring out whether this is real or not, we have a global nuclear meltdown. And here's the thing. I don't think that that's likely, but I also don't think that the probability of that is zero and that should worry us, because while it's not likely, the consequences are spectacularly bad.

[00:47:45]

Lawyer Danielle Citron worries about an even more plausible scenario.

[00:47:50]

Imagine a deep fake of a well-known American general burning a Koran. And it is timed at a very tense moment in a particular country, whether it's Afghanistan, and it could then lead to physical violence.

[00:48:09]

And you think this could be made? No general, no Koran actually used in the video, just programmed.

[00:48:15]

You can use the technology to mine existing photographs, kind of easy, especially with someone like, take Jim Mattis when he was our defense secretary. Jim Mattis, you know, actually taking a Koran and ripping it in half and saying something about all Muslims. Imagine the chaos in diplomacy, the chaos for our soldiers abroad in a Muslim country. It would be inciting violence without question.

[00:48:42]

While we haven't yet seen spectacular fake videos used to disrupt elections or create international chaos,

[00:48:49]

We have seen increasingly sophisticated attacks on public policy making.

[00:48:55]

So we've got an example in 2017 where the FCC solicited public comment on the proposal to repeal net neutrality.

[00:49:04]

Net neutrality is the principle that Internet service providers should be a neutral public utility. They shouldn't discriminate between websites, say, slowing down Netflix streaming to encourage you to purchase a different online video service. As President Barack Obama described in 2014, there are no gatekeepers deciding which sites you get to access.

[00:49:27]

There are no toll roads on the information superhighway.

[00:49:30]

Federal communications policy had long supported net neutrality, but in 2017, the Trump administration favored repealing the policy.

[00:49:40]

There were 22 million comments that the FCC received, but 96 percent of those were actually fake.

[00:49:49]

The interesting thing is, the real comments were opposed to repeal, whereas the fake comments were in favor. A Wall Street Journal investigation exposed that the fake public comments were generated by bots.

[00:50:04]

It found similar problems with public comments about payday lending. The bots varied their comments in a combinatorial fashion so that the content wasn't identical. With a little sleuthing, though, you could see that they were generated by computers. But with technology increasingly able to generate completely original writing, like OpenAI's program that wrote the story about unicorns in the Andes, it's going to become hard to spot the fakes.
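The template-style bot comments were detectable precisely because they recombined fixed phrase slots. Here is a tiny sketch of one way to flag that kind of combinatorial duplication with simple word-overlap similarity; the comments are hypothetical, and this is not the method the Wall Street Journal investigation actually used.

```python
# Sketch: flag template-generated comments by their unusually high pairwise word overlap.
# Hypothetical example data; not the Wall Street Journal's actual methodology.
from itertools import combinations

comments = [
    "I strongly oppose the Obama-era rules and urge the FCC to restore internet freedom.",
    "I firmly oppose the Obama-era regulations and ask the FCC to restore internet freedom.",
    "I strongly oppose the Obama-era regulations and urge the commission to restore internet freedom.",
    "Please keep net neutrality; it protects small businesses and ordinary users like me.",
]

def jaccard(a, b):
    """Word-set overlap between two comments (1.0 = identical vocabulary)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

suspicious = [
    (i, j, round(jaccard(comments[i], comments[j]), 2))
    for i, j in combinations(range(len(comments)), 2)
    if jaccard(comments[i], comments[j]) > 0.6   # threshold chosen only for illustration
]
print(suspicious)  # pairs of near-duplicate, likely templated comments
```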

[00:50:33]

So there was this Harvard student, Max Weiss, who used GPT-2 to kind of demonstrate this. And I went on his site yesterday, and he's got this little test where you need to decide whether a comment is real or fake. So you go on and you read it, and you decide whether it's been written by a bot or by a human. So I did this, and the ones that seemed to be, you know, really well-written and quite narrative, discursive generally,

[00:51:01]

I was picking them as human. I was wrong almost all the time. It was amazing and alarming.

[00:51:07]

In our democracy, public comments have been an important way in which citizens can make their voices heard.

[00:51:13]

But now it's becoming easy to drown out those voices with millions of fake opinions. Now the downfall of truth likely won't come with a bang, but a whimper, a slow, steady erosion that some call truth decay.

[00:51:30]

If you can't believe anything you read or hear or see anymore, I don't know how you have a democracy. I don't know, frankly, how we have civilized society. If everybody is going to live in an echo chamber believing their own version of events, how do we have a dialogue if we can't agree on basic facts?

[00:51:45]

In the end, the most insidious impact of deep fakes may not be the deep fake content itself, but the ability to claim that real content is fake.

[00:51:56]

It's something that Danielle Citron refers to as the liar's dividend.

[00:52:01]

The liar's dividend is that the more you educate people about the phenomenon of deep fakes, the more the wrongdoer can disclaim reality. Think about what President Trump did with the Access Hollywood tape.

[00:52:16]

I'm automatically attracted to beautiful... I just start kissing them. It's like a magnet. Just kiss. I don't even wait. And when you're a star, they let you do it. You can do anything. Whatever you want. Grab them by the pussy. You can do anything.

[00:52:28]

Initially, Trump apologized for the remarks. Anyone who knows me knows these words don't reflect who I am. I said it, I was wrong and I apologize.

[00:52:40]

But in 2017, a year after his initial apology, and with the idea of deep fake content starting to gain attention, Trump changed his tune.

[00:52:51]

Upon reflection, he said, they're not real. That wasn't me. I don't think that was my voice. That's the liar's dividend in practice. The Trump comments about Access Hollywood were remarkable, but lately, more subtle than that, he has said, I'm not sure that was my voice.

[00:53:08]

Right. Well, that's the corrosive gaslighting. Chapter eight. A life stored in the cloud. Deep fakes have the potential to devastate individuals and harm society. The question is, can we stop them from spreading before they get out of control? To do so, we need reliable ways to spot the fakes.

[00:53:41]

So the good news is there are still artifacts in the synthesised content, whether those are images, audio or video that we, as the experts can tell apart. So when, for example, The New York Times wants to run a story with a video, we can help them validate it.

[00:53:55]

What are the real sophisticated experts looking at? So the eyes are really wonderful forensically, because they reflect back to you what is in the scene. So I'm sitting right now in a studio, there's maybe about a dozen or so lights around me, and you can see this very complex set of reflections in my eyes. So we can analyze fairly complex lighting patterns, for example, to determine if this is one person's head spliced onto another person's body, or if the two people standing next to each other were digitally inserted from another photograph.

[00:54:27]

I could spend another hour telling you about the many different forensic techniques that we've developed. There's no silver bullet here. It really is a sort of time consuming and deliberate and thoughtful process, and it requires many, many tools. And it requires people with a fair amount of skill to do this.

[00:54:42]

Hany Farid also has quite a few detection techniques that he won't speak about publicly, for fear the deep fake creators will learn how to beat his tests.

[00:54:51]

I don't create a GitHub repository and give my code to all my adversaries. I don't have just one forensic technique, I have a couple of dozen of them. So that means you, as the person creating the fake, now have to go back and implement 10, 20 different techniques, and you have to do it just perfectly. And that makes the landscape a little bit more tricky for you to manage.

[00:55:11]

As technology makes it easier to create deep fakes, a big problem will be the sheer amount of content to review. So the average person can download software repositories.

[00:55:22]

And so it's getting to the point now where the average person can just run these as if they're running any standard piece of software. There are also websites that have cropped up where you can pay them 20 bucks and you tell them, please put this person's face into this person's video, and they will do that for you. And so it doesn't take a lot to get access to these tools. Now, I will say that the output of those is not quite as good as what we can create inside the lab.

[00:55:46]

And you just know what the trend is. You just know it's going to get better and cheaper and faster and easier to use.

[00:55:51]

Detecting the fakes will be a never ending cat and mouse game. Remember how generative adversarial networks, or GANs, are built by training a fake generator to outsmart a detector? Well, as detectors get better, fake generators will be trained to keep pace. Still, detectives like Hany and platforms like Facebook are working to develop automated ways to spot deep fakes rapidly and reliably.
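As one hedged illustration of what an automated detector can look like in its simplest form, here is a toy frame-level real-versus-fake classifier in PyTorch. It shows the general approach of learning artifacts that separate real frames from synthesized ones; it is not any detector Hany Farid or Facebook has actually deployed, and the frame size and labels are placeholders.

```python
# Toy frame-level real/fake classifier: a sketch of automated deep fake detection.
# Not a production detector; real systems use far richer features and training data.
import torch
import torch.nn as nn

classifier = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 1),   # assumes 64x64 input frames
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(frames, labels):
    """frames: (N, 3, 64, 64) tensor of video frames; labels: 1.0 = fake, 0.0 = real."""
    opt.zero_grad()
    loss = loss_fn(classifier(frames).squeeze(1), labels)
    loss.backward()
    opt.step()
    return loss.item()

# Placeholder batch standing in for labeled real and fake frames.
frames = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,)).float()
train_step(frames, labels)
```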

[00:56:21]

That's important because more than 500 additional hours of video are being uploaded to YouTube every minute.

[00:56:29]

I don't mean to sound defeatist about this, but I'm going to lose this war. I know this because it's always going to be easier to create content than it is to detect it. But here's where I will win: I will take it out of the hands of the average person. So think about, for example, the creation of counterfeit currency. With the latest innovations brought on by the Treasury Department, it is hard for the average person to take their inkjet printer and create compelling fake currency.

[00:56:56]

And I think that's going to be the same trend here: if you're using some off the shelf tool, if you're paying somebody on a website, we're going to find you, and we're going to find you quickly. But if you are dedicated, highly skilled, and you have the time and the effort to create it, we are going to have to work really hard to detect those.

[00:57:12]

Given the challenges of detecting fake content, some people envision a different kind of techno fix. They propose developing airtight ways for content creators to mark their own original video as real. That way, we could instantly recognize an altered version, since it wouldn't be identical.

[00:57:33]

Now, there are ways of authenticating at the point of recording, and these are what I call control capture systems. So here's the idea. You use a special app on your mobile device that, at the point of capture, cryptographically signs the image or the video or the audio. It puts that signature onto the blockchain. And the only thing you have to know about the blockchain is that it is an immutable, distributed ledger, which means that that signature is essentially impossible to manipulate.

[00:57:59]

And now all of that happened at the point of recording. If I was running a campaign today and I was worried about my candidate's likeness being misused, absolutely. Every public event that they were at, I would record with a control capture system and I'd be able to prove what they actually said or did at any point in the future.
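A minimal sketch of the signing half of such a control capture system, using Python's standard hashlib plus an Ed25519 key pair from the widely used cryptography package. The file name is hypothetical, and anchoring the signature on a blockchain, as Farid describes, would be a separate step not shown here.

```python
# Sketch: hash a freshly captured video file and sign the hash at the point of capture.
# Uses the "cryptography" package's Ed25519 keys; posting the signature to a
# blockchain (as described above) would be an additional step not shown here.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def file_digest(path: str) -> bytes:
    """SHA-256 digest of the captured file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# The capture app would hold this key; here we just generate one for illustration.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

digest = file_digest("capture.mp4")        # hypothetical recording made by the app
signature = private_key.sign(digest)       # what would be published or anchored

# Later, anyone with the public key can check that a video matches the original capture.
public_key.verify(signature, digest)       # raises InvalidSignature if tampered with
```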

[00:58:17]

So this approach would shift the burden of authentication to the people creating the videos, rather than publishers or consumers.

[00:58:26]

Law professor Danielle Citron has explored how this solution could quickly become dystopian: we might see the emergence of essentially an audit trail of everything you do and say, all of the time.

[00:58:37]

Danielle refers to the business model as immutable life logs in the cloud.

[00:58:43]

In a way, we sort of have already seen it. There are health plans where, if you wear a Fitbit all the time and you let yourself be monitored, it lowers your insurance, you know, your health insurance rates. But you can see how, if the incentives are there in the market to self surveil, whether it's for health insurance, life insurance, and the like, we're going to see the unraveling of privacy by ourselves. You know, corporations may very well, because the CEO is so valuable, say you've got to have a log, an immutable audit trail of everything you do and say.

[00:59:19]

So when that deep fake comes up the night before the IPO, you can say, look, the CEO wasn't taking the bribe, wasn't having sex with a prostitute. And we have proof, because we have an audit trail, we have a log.

[00:59:33]

So we were imagining a business model that hasn't quite come up.

[00:59:39]

But we have gotten a number of requests from insurance companies, as well as other companies, to say, we're interested in this idea.

[00:59:48]

So how much has to be in that log? Does it have to be a whole video of your life?

[00:59:52]

That is a great question, one that terrifies us. So it may be that you're logging geolocation, you're logging video, who you see, people talking and who they're interacting with. And that might be good enough to prevent the mischief that would hijack the IPO.

[01:00:08]

So your whole life is online. Yes.

[01:00:11]

Stored securely somewhere, locked down, protected in the cloud. It is, at least for a privacy scholar, worrying. There are so many reasons why we ought to have privacy that aren't about hiding things. It's about creating spaces and managing boundaries around ourselves and our intimates and our loved ones. So I worry that if we entirely unravel privacy, (a) in the wrong hands it is very dangerous, right? And (b) it changes how we think about ourselves and humanity.

[01:00:47]

Chapter nine. Section 230. So techno fixes are complicated. What about passing laws to ban deep fakes, or at least deep fakes that don't disclose that they're fake?

[01:01:01]

So video and audio are speech, and our First Amendment doctrine is very much protective of free speech. And the Supreme Court has explained that lies, just lies themselves without harm, are protected speech. When lies cause certain kinds of harm, we can regulate them: defamation of private people, threats, incitement, fraud, impersonation of government officials.

[01:01:26]

What about lies concerning public figures like politicians? California and Texas, for instance, recently passed laws making it illegal to publish deep fakes of a candidate in the weeks leading up to an election. It's not clear yet whether the laws will pass constitutional muster.

[01:01:45]

So you're saying, in an American context, right, we are just not going to be able to outright ban fakes.

[01:01:52]

Yeah, we can't have a flat ban, and I don't think we should. It would fail on doctrinal grounds, but ultimately it would also prevent the positive uses.

[01:02:01]

Interestingly, in January 2020, China, which has no First Amendment protecting free speech, promulgated regulations banning deep fakes. The use of AI or virtual reality now needs to be clearly marked in a prominent manner, and the failure to do so is considered a criminal offence. To explore other options for the U.S., I went to speak with a public policy expert.

[01:02:28]

My name is Joan Donovan, and I work at the Harvard Kennedy School's Shorenstein Center, where I lead a team of researchers looking at media manipulation and disinformation campaigns.

[01:02:38]

Joan is head of the Technology and Social Change Research Project and her staff studies how social media gives rise to hoaxes and scams. Her team is particularly interested in precisely how misinformation spreads across the Internet.

[01:02:54]

Ultimately, underneath all of this is the distribution mechanism, which is social media and platforms and platforms have to rethink the openness of their design because that has now become a territory for information warfare.

[01:03:11]

In early 2020, Facebook announced a major policy change about synthesized content.

[01:03:17]

Facebook has now issued policies on deep fakes, saying that if it is an AI generated video and it's misleading in some other contextual way, then they will remove it.

[01:03:31]

Interestingly, Facebook banned the moon disaster team's Nixon video even though it was made for educational purposes, but didn't remove the slowed down version of Nancy Pelosi, which was made to mislead the public. Why? Because the Pelosi video wasn't created with artificial intelligence.

[01:03:51]

For now, Facebook is choosing to target deep fakes but not cheap fakes.

[01:03:57]

One way to push platforms to take a stronger stance might be to remove some of the legal protections that they currently enjoy. Under Section 230 of the Communications Decency Act, passed in 1996, platforms aren't legally liable for content posted by their users.

[01:04:16]

The fact that platforms have no responsibility for the content they host has an upside. It's led to the massive diversity of online content we enjoy today, but it also allows a dangerous escalation of fake news. Is it time to change Section 230 to create incentives for platforms to police false content? I asked the former head of a major platform, LinkedIn co-founder Reid Hoffman.

[01:04:44]

For example, let's take, you know, my view of what the response to the Christchurch shooting should be. It is to say, well, we want you to solve not having terrorism, murder or murderers displayed to people. So we're simply going to do a fine of ten thousand dollars per view. Shootings occurred at mosques in Christchurch, New Zealand, in March 2019.

[01:05:06]

Graphic videos of the event were soon posted online.

[01:05:12]

If five people saw it, that's fifty thousand dollars. But if you become a meme and a million people see it, that's ten billion dollars.

[01:05:20]

Yes, right. So what we're really trying to do is get you to say, let's make sure that the meme never happens.

[01:05:27]

OK, so that's a governance mechanism there. Yes. You fine the channel? The platform, yes. Based on the number of views. It would be a very general way to say, now you guys have to solve it, now it's up to you to figure it out.

[01:05:41]

What about other solutions?

[01:05:43]

If we are to make regulation, it should be about the amount of staff in proportion to the number of users, so that they can get a handle on the content. But can they be fast enough? Maybe the viral spread should be slowed down enough to allow them to moderate. Let's put it this way. The stock market has certain governors built in when there are massive changes in a stock price.

[01:06:12]

There are decelerators that kick in, brakes that kick in. Should the platforms have brakes that kick in before something can go fully viral?

[01:06:20]

So in terms of deceleration, there are things that they do already that accelerate the process that they need to think differently about, especially when it comes to something turning into a trending topic. So there needs to be an intervening moment before things get to the home page and get to trending where there is a content review.
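Borrowing the stock-market analogy, here is a minimal sketch of a virality "brake": if a post's share velocity crosses a threshold, it is held for human review before being amplified further. The threshold, the time window, and the behavior are invented for illustration and do not describe any platform's actual policy.

```python
# Sketch of a "virality circuit breaker": pause amplification when share velocity spikes.
# Thresholds and behavior are illustrative assumptions, not any platform's real policy.
import time
from collections import defaultdict, deque

SHARES_PER_MINUTE_LIMIT = 1000   # assumed threshold before human review kicks in
WINDOW_SECONDS = 60

share_times = defaultdict(deque)   # post_id -> timestamps of recent shares
held_for_review = set()

def record_share(post_id, now=None):
    """Record one share. Returns True if the post may keep spreading,
    False if the brake has tripped and the post is held for moderation."""
    now = time.time() if now is None else now
    window = share_times[post_id]
    window.append(now)
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()               # drop shares outside the sliding window

    if post_id in held_for_review:
        return False
    if len(window) > SHARES_PER_MINUTE_LIMIT:
        held_for_review.add(post_id)   # trip the breaker: route to content review
        return False
    return True

# Example: the 1001st share within a minute trips the brake.
for i in range(1001):
    ok = record_share("post-123", now=1000.0 + i * 0.01)
print(ok)  # False -> held for review before trending
```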

[01:06:43]

There's so much to say here. But I want to think particularly about listeners who are in their 20s and 30s and are very tech savvy. They're going to be part of the solution here. What would you say to them about what they can do? I think it's important that younger people advocate for the Internet that they want. We have to fight for it. We have to ask for different things. And that kind of agitation can come in the form of posting on the platform, writing letters, joining groups like Fight for the Future and trying to work on getting platforms to do better and to advocate for the kind of content that you want to see more of.

[01:07:35]

The important thing is that our society is shaped by these platforms, and so we're not going to do away with them, but we don't have to make do with them either. Conclusion. Choose your planet. So there you have it, stewards of the brave new planet: synthetic media, or deep fakes. People have been manipulating content for more than a hundred years, but recent advances in AI have taken it to a whole new level of verisimilitude. The technology could transform movies and television: favorite actors from years past starring in new narratives, along with actors who never existed; patients regaining the ability to speak in their own voices; personalized stories created on demand for any child around the globe, matching their interests, written in their dialect, representing their communities.

[01:08:41]

But there's also great potential for harm: the ability to cast anyone in a pornographic video, weaponized media dropping days before an election or provoking international conflicts. Are we going to be able to tell fact from fiction? Will truth survive? And what does it mean for our democracy? Better fake detection may help, but it'll be hard for it to keep up, and logging our lives on a blockchain to protect against misrepresentation doesn't sound like an attractive idea. Outright bans on deep fakes are being tried in some countries, but they're tricky in the U.S. given our constitutional protections for free speech.

[01:09:25]

Maybe the best solution is to put the liability on platforms like Facebook and YouTube, if we can. Joan Donovan is right: to get the future you want, you're going to have to fight for it. You don't have to be an expert and you don't have to do it alone. When enough people get engaged, we make wise choices. Deep fakes are a problem that everyone can engage with. Brainstorm with your friends about what should be done. Use social media. Tweet at your elected representatives to ask if they're working on laws like those in California and Texas.

[01:10:01]

And if you work for a tech company, ask yourself and your colleagues if you're doing enough. You can find lots of resources and ideas at our Web site, bravenewplanet.org. It's time to choose our planet. The future is up to us. Brave New Planet is a co-production of the Broad Institute of MIT and Harvard, Pushkin Industries and The Boston Globe, with support from the Alfred P. Sloan Foundation. Our show is produced by Rebecca Douglas with Merridew, theme song composed by Ned Porter, mastering and sound design by James Gava, fact checking by Joseph Fridmann and a Stitt and Enchante.

[01:10:53]

Special thanks to Christine Heenan and Rachel Roberts at Clarendon Communications.

[01:10:58]

To Lee McGuire, Kristen Zerilli and Justin Levine, our colleagues at the Broad; to Mia Lobel and Heather Fain at Pushkin; and to Eli and Edythe Broad, who made the Broad Institute possible. This is Brave New Planet. I'm Eric Lander.