[00:00:00]

Are you ready? iHeart's fiction podcast Tumanbay reaches its thrilling final season with season four. Listen and follow Tumanbay on the iHeartRadio app, Apple Podcasts, or wherever you listen to podcasts. Take me to Tumanbay.

[00:00:27]

But now, we wait. Hey, it's Charlamagne tha God here, and it is a privilege, an honor, to introduce to you a new podcast, Straight Shot, No Chaser, hosted by a queen named Tezlyn Figaro. She is the hood whisperer in this game of politics, debuting on my new Black Effect Podcast Network on iHeartRadio. This is Tezlyn Figaro. On my podcast, we'll cover politics, Black life, racial justice, and food for the soul.

[00:00:51]

It's fire. Come sip this tea with me. Subscribe now and listen to Straight Shot, No Chaser with Tezlyn Figaro on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:01:01]

Hey everybody. Josh here. We wanted to include a note before this episode, which is about existential risks, threats that are big enough to actually wipe humanity out of existence. Well, we recorded this episode just before the pandemic, which explains the weird lack of mention of covid when we're talking about viruses.

[00:01:21]

And when this pandemic came along, we thought perhaps a wait and see approach might be best before just willy nilly releasing an episode about the end of the world. So we decided to release this now, still in the thick of things, not just because the world hasn't ended, but because one of the few good things that's come out of this terrible time is the way that we've all kind of come together and given a lot of thought about how we can look out for each other.

[00:01:48]

And that's exactly what thinking about existential risks is all about. So we thought there would be no better time than right now to talk about them. We hope this explains things, and that you realize we're not releasing this glibly in any way.

[00:02:03]

And instead we hope that it makes you reflective about what it means to be human and why humanity is worth fighting for.

[00:02:12]

Welcome to Stuff You Should Know, a production of iHeartRadio's HowStuffWorks. Hey, and welcome to the podcast. I'm Josh Clark, and there's Charles W. "Chuck" Bryant over there, and there is guest producer Dave sitting in yet again, at least the second time. I believe he's already picked up that he knows not to speak, per the custom established by Jerry. But, yeah, he did not. Did he? No. So, yeah, I guess it is twice that Dave's been sitting in.

[00:02:44]

What if he just blurted out, "two times"?

[00:02:48]

From the other side of the room? I wouldn't have the heart to tell him not to do that. I think he would catch the drift from, like, the record scratch that just materialized out of nowhere. Right.

[00:03:00]

Not many people know that we have someone on permanent standby by a record player waiting just in case we do something like that.

[00:03:07]

And that person is Tommy Chong. Hi, Tommy. Do I smell bong water? Yeah. Wow. It really reeks of it. Yeah, probably so. I mean, hats off to him for sticking to his bit, you know. Cheech is like, hey, I want a good long spot on Nash Bridges, so I'll say whatever you want me to say about pot.

[00:03:25]

Like, I'm just into the gummies now, right. Tommy Chong, like, tripled down. Yeah. And he sold the bongs, didn't he, and those pee test beaters.

[00:03:37]

Pee test beaters, OK? I'm suddenly trying to think of how to say something like that, uh, a way to defeat a urine test. Oh, well, that's the fancy way to say it, I guess. I don't know.

[00:03:50]

I know the street guys call them pee test beaters, but "pee test beaters" is a bad name, about as good as, like, Diarrhea Planet. Sure. Actually, I think Diarrhea Planet's got it beat, but still. All right. So, Chuck, we're talking today about a topic that is near and dear to my heart: existential risks. That's right. Which, I don't know if you've gathered that or not, but I really, really am into this topic.

[00:04:20]

Yeah, all around. As a matter of fact, I did a 10-part series on it called The End of the World with Josh Clark, available everywhere you get podcasts right now.

[00:04:30]

I managed to smash that down. That's kind of what this is, a condensed version. And forever, like, I've wanted to SYSK-ify the topic of existential risks, like, do it with you.

[00:04:43]

I wanted to do it with you. This was going to be a live show at one point. I think even before that I was like, hey, you want to do an episode on this? And you were like, this is pretty dark stuff. We're doing it now.

[00:04:56]

So the only time I said that was when you actually sent me the document for the live show and I went, I don't know about a live version of this.

[00:05:02]

So I guess I guess that must have been before the end of the world then, huh? Oh, yeah. This was like eight years ago. Well, I'm glad you turned down the live show because it may have lived and died there. Yeah.

[00:05:12]

So you might not have made all those End of the World big bucks, right? Exactly. Man, I'm rolling in it. My mattress is stuffed with them.

[00:05:22]

And, you know, bucks are always just the only way of qualifying or quantifying the success of something, you know.

[00:05:30]

Yeah. There's also Academy Awards. Right, Oscars, and that's it. Peabodys. Big money or public awards ceremonies. Okay.

[00:05:38]

Granted, the other reason I wanted to do this episode was because one of the people who was a participant and interviewee in The End of the World with Josh Clark, a guy named Dr. Toby Ord, recently published a book called The Precipice. And it is like a really in-depth look at existential risks, the ones we face and, you know, what's coming down the pike and what we can do about them and why. Who's hot, who's not. Right, exactly.

[00:06:03]

Cheers and jeers. Who wore it best. Right, exactly. And it's a really good book. And it's written just for, like, the average person to pick up and be like, I hadn't heard about this, and to reach the end of it and say, I'm terrified, but I'm also hopeful. And the other reason I wanted to do this episode, to let everybody know about Dr. Ord's book, or Toby's book

[00:06:21]

(it's impossible to call him doctor, he's just a really likable guy) is because he actually turned the tone of The End of the World around almost single-handedly. It was really grim.

[00:06:34]

I remember I interviewed him early on. And also, you remember, I started, like, listening to The Cure a lot. Sure. It just got real dark there for a little while, which is funny, that The Cure is my conception of, like, really dark. Anyway.

[00:06:49]

Death metal guys out there laughing. All right.

[00:06:51]

But talking to him, he just kind of steered the ship a little bit. And by the end of it, because of his influence, The End of the World actually is a pretty hopeful series. So my hat's off to the guy for doing that, but also for writing this book, The Precipice.

[00:07:07]

Hats off, sir.

[00:07:08]

So we should probably kind of describe what existential risks are. I know that, you know, in the document it's described many, many times. But the reason it's described many, many times is because there's a lot of nuance to it. And the reason there's a lot of nuance to it is because we kind of tend to walk around thinking that we understand existential risks based on our experience with previous risks. Right. But the problem with existential risks is they're actually new to us, and they're not like other risks because they're just so big.

[00:07:41]

And if something happens, one of these existential catastrophes befalls us. That's it. There's no second chance. There's no do over. And we're not used to risks like that.

[00:07:51]

That's right. Nobody is, because we are all people. Right. And the thought of all human beings being gone, or at least not being able to live as regular humans live and enjoy life, and not live as Matrix batteries. Sure. Because, you know, technically in The Matrix, those are people. Yeah, but that's no way to live, the people in the pods.

[00:08:16]

Yeah. Yeah. So I'm saying I wouldn't want to live that way, but that's another version of existential risk is not necessarily that everyone's dead, but you could become just a matrix battery. Yeah. And not flourish or move forward as a people.

[00:08:27]

Right, exactly. But with existential risks in general, the general idea of them is that, like, if you are walking along and you suddenly get hit by a car, you no longer exist, but the rest of humanity continues on existing. Correct. With existential risks,

[00:08:46]

it's like the car that comes along and hits not just one human, but all humans. So it's a risk to humanity itself. And that's just kind of different, because all of the other risks that we've ever run across either give us the luxury of time or proximity, meaning that we have enough time to adapt our behavior to it, to survive it and continue on as a species.

[00:09:15]

Right. Or there's not enough of us in one place to be affected by this this risk that took out, say, one person or a billion people.

[00:09:23]

Right. Like, if all of Europe went away, that is not an existential risk.

[00:09:28]

No. And some people might say that'd be sad, and it would be really sad. And I mean, take, you know, say 99 percent of the people who live on Earth. If they all died somehow, it would still possibly not be an existential risk, because that one percent left living could conceivably rebuild civilization.

[00:09:48]

That's right. We're talking about giving the world back to Mother Nature and just seeing what happens. Do you remember that series, I think it was a book to start, The World Without Us? No.

[00:10:02]

Oh. So why do I think I know that? It was a big deal when it came out, and then they made, like, maybe a Science Channel or Nat Geo series about it, where this guy describes how our infrastructure would start to crumble if humans just vanished tomorrow, how nature would reclaim everything we've done and undo it, you know, after a month, after a year, after ten thousand years. I've heard of that.

[00:10:25]

It's really cool stuff. Yeah.

[00:10:26]

There's a Bonnie Prince.

[00:10:28]

Billy, my idol, has a song called "It's Far from Over." And that's sort of a Bonnie "Prince" Billy look at the fact that, hey, even if all humans leave, it's not over. Yeah.

[00:10:40]

Like new animals are going to new creatures are going to be born. Right. The Earth continues. Yeah. And he also has a line, though, about like you, but you better teach your kids to swim. That's a great line. Yeah, it's good stuff.

[00:10:51]

Did I ever tell you I saw that guy do karaoke with his wife once? Oh, really? You know, our friend Toby. Oh, sure. At his wedding. Yeah. I would not have been able to enjoy being at that wedding. It's just such a shame.

[00:11:04]

Boy, I don't know what I would do. It would ruin my time, really. It really would, because I would second-guess everything I did. I mean, I even talked to the guy once backstage, and that ruined my day. It really did.

[00:11:18]

Because you spent the rest of the time just thinking about how, you know, it was actually fine.

[00:11:23]

He was very, very, very nice guy. And we talked about Athens and stuff, but that's who I just went to see in D.C., Philly, in New York.

[00:11:30]

Nice. Went on a little follow-him-around tour for a few days. Did he sing that song about the world going on, or life going on? He did.

[00:11:39]

So let's just cover a couple of things that people might think are existential risks that actually aren't. OK.

[00:11:46]

Yeah, I mean, I think a lot of people might think of some global pandemic that could wipe out humanity. There could very well be a global pandemic that could kill a lot of people, but it's probably not going to kill every living human. Right. It would be a catastrophe, sure, but not an X risk.

[00:12:05]

Yeah, I mean, because humans have antibodies that we develop. And so people who survive that flu have antibodies that they pass on in the next generation. And so that that disease kind of dies out before it kills everybody off.

[00:12:17]

And the preppers, at the very least, they'll be fine. They'd be safe.

[00:12:21]

What about calamities like a mudslide or something like that?

[00:12:26]

You can't mudslide the whole Earth. You can't. And that's a really good point. This is what I figured out in researching this. After doing The End of the World, after talking to all these people, it took researching this article for me to figure out that it's time and proximity. Yeah. Those are the two things that we use to survive, and if you take away time and proximity, we're in trouble. And so mudslides are a really good example of proximity, where a mudslide can come down a mountain and take out an entire village of people, and has.

[00:12:55]

Yes, and it's really sad and really scary to think of. I mean, we saw it with our own eyes.

[00:13:00]

We stood in a field that was now, what, like eight or nine feet higher than it used to be.

[00:13:05]

Yeah. And you could see the track. This is in Guatemala. We went down to visit our friends at CoEd. The trees were much sparser; you could see the track of the mud.

[00:13:15]

And they're still there, because the people are still down there. It was a horrible tragedy, and it happened in a matter of seconds. It just wiped out a village. But we don't all live under one mountain. And so if a bunch of people are taken out, the rest of us still go on. So there's the time and there's the proximity.

[00:13:32]

Yeah, I think a lot of people in the eighties might have thought, because of movies like WarGames and movies like The Day After, that global thermonuclear war would be an X risk. And as bad as that would be, it wouldn't kill every single human being.

[00:13:47]

No, no, they don't think so.

[00:13:49]

They started out thinking that, as a matter of fact. Nuclear war was one of the first things that we identified as a possible existential risk. Sure. And if you kind of look at the history of the field, for the first several decades that was the focus, the entire focus, of existential risk. Yeah. Like, Bertrand Russell and Einstein wrote a manifesto about how we really need to be careful with these nukes, because we're going to wipe ourselves out.

[00:14:15]

Carl Sagan, you remember our amazing nuclear winter episode. Yeah, yeah. That came from, you know, studying existential risk. And then in the 90s, a guy named John Leslie came along and said, hey, there's way more than just nuclear war that we could wipe ourselves out with, and some of it is taking the form of this technology that's coming down the pike. And that was taken up by one of my personal heroes, a guy named Nick Bostrom.

[00:14:40]

Yeah, he's a philosopher out of Oxford, and he is one of the founders of this field. And he's the one that said, or one of the ones that said, you know, there are a lot of potential existential risks, yeah, and nuclear war is peanuts. Bring it on, right?

[00:14:59]

But I don't know if Bostrom specifically believes, he probably does, that we would be able to recover from a nuclear war.

[00:15:07]

That's the idea as you rebuild as a society after whatever zombie apocalypse or nuclear war happens.

[00:15:13]

Yeah. Again, say it killed off 99 percent of people. To us, that would seem like an unimaginable tragedy, because we lived through it. But if you zoom back out and look at the lifespan of humanity, sure, not just the humans alive today, but all of humanity, it would be a very horrible period in human history, but one we could rebuild from over, say, 10,000 years. Yeah. To get back to the point where we were before the nuclear war.

[00:15:38]

And so ultimately, it's probably not an existential risk.

[00:15:42]

Yeah, it's tough. This is a tough topic for people, because I think people have a hard time with that long of a view of things. And then whenever you hear the big comparisons of, you know, how long people have been around and how old the Earth is and that stuff, it kind of hits home. But it's tough for people that live, you know, 80 years to think about, oh, well, ten thousand years.

[00:16:05]

Yeah, we'll be fine.

[00:16:06]

And even, you know, when I was researching that, this got brought up a lot. Like, where do we stop caring about people, yeah, that are descendants? You know, we care about our children, our grandchildren.

[00:16:16]

I just care about my daughter. That's about it. That's where it ends. Heck with the grandchildren. You don't have grandchildren yet. Yeah, but wait till they come along. Everything I've ever heard is that being a grandparent is even better than being a parent. And I know some grandparents.

[00:16:31]

OK, let's say I'm not dead before my daughter eventually has a kid if she wants to. OK, I would care about that grandchild. Right. But after that little whippersnapper, forget it. OK, yeah. My kids, kids, kids, who cares.

[00:16:44]

That's about where it ends, like, where I care about people and humanity as a whole.

[00:16:50]

I think that's what you've got to do. You can't think about, like, your eventual descendants exactly. You've just got to think about people. Right. Yeah. That really helps, people you don't know.

[00:17:00]

Now it's kind of requisite to start caring about existential risks, to start thinking about people, not just. Well, let's talk about it. So Toby Ord made a really good point in his book, The Precipice, right. That you care about people on the other side of the world that you've never met. Yeah.

[00:17:17]

So I'm saying like that happens every day. Right. So what's the difference between people who live on the other side of the world that you will never meet and people who live in a different time that you will never meet? Why would you care any less about these people, human beings that you'll never meet, whether they live on the other side of the world or in the same place you do, but at a different time?

[00:17:38]

I think, I mean, I'm not speaking for me, but I think if I were to step inside the brain of someone who thinks that, they would think, like, it's a little bit of a self...

[00:17:51]

It's a bit of an ego thing because, you know, like, oh, I'm helping someone else. So that does something for you in the moment. Right? Like someone right now on the other side of the world that maybe I've sponsored is doing better because of me.

[00:18:02]

Gotcha. And I got a little kick out of it, and from Sally Struthers. Yeah, that does help Sally Struthers. It helps put food on her plate. She's still with us?

[00:18:12]

I think so. I think so too. But I feel really bad.

[00:18:15]

I certainly haven't heard any news of her death. You know, people would talk about that. And the record scratch would have just happened.

[00:18:21]

Right. So I think that is something, too. And I think there are also a certain amount of people that just believe you're worm dirt.

[00:18:33]

There is no benefit to the afterlife as far as good deeds and things. So, like, once you're gone, it's just who cares? Because it doesn't matter. Yeah, there's no consciousness.

[00:18:44]

Yeah, well, I mean, if you were at all, like, piqued by that stuff, I would say definitely read The Precipice, because one of the best things that Toby does, and he does a lot of stuff really well, is describe why it matters; he is a philosopher, after all. So he says, like, this is why it matters: not only does it matter because you're keeping things going for the future generations, you're also continuing on what the previous generations built. Like, who are you to just be like, oh, we're going to drop the ball?

[00:19:14]

No, I agree. That's a very self-centered way to look at things, totally. But I think you're right. I think there are a lot of people who look at it that way. So you want to take a break?

[00:19:22]

Yeah, we can take a break now and maybe we can dive into Mr Bostrom or doctor, I imagine.

[00:19:27]

Sure. Bostrom's five different types.

[00:19:31]

Are there five? No, there's just a few. OK, a few different types of existential risks. We could make up a couple that are not. Let's not.

[00:19:52]

Hey, it's Bobby Bones, executive producer of Make It Up As We Go, the brand new podcast from Audio Up and iHeartRadio, brought to you exclusively by Unilever's Knorr and Magnum brands. The story follows a songwriter's journey, as well as the songs themselves and how they make it to country radio, from executive producer Miranda Lambert and creators Scarlett Burke and Jared Gutstadt, a story inspired by the competitive world of Nashville writing rooms, featuring original music by Scarlett Burke, director and executive producer, and featuring some of the biggest names in country.

[00:20:33]

Make It Up As We Go, only on the iHeart Podcast Network, in association with Audio Up media, created by Scarlett Burke and Jared Gutstadt. All right, Chuck. So one of the things you said earlier is that existential risks, the way we think of them typically, is that something happens and humanity is wiped out and we all die and there's no more humans forever and ever.

[00:21:22]

That's an existential risk. That's one kind, really, and that's the easiest one to grasp, which is extinction.

[00:21:28]

Yeah. And that kind of speaks for itself. Just like dinosaurs are no longer here. That would be us.

[00:21:34]

Yes, no longer here, and that's it. And I think that's one of those other things, too. It's kind of like how people walk around, like, yeah, I know I'm going to die someday. But if you sat them down and were like, do you really understand that you're going to die someday?, they might start to panic a little bit, you know, and realize, I haven't actually confronted that. I just know that I'm going to die.

[00:21:54]

Yeah. Or if you knew the date, that'd be weird. It'd be like a Justin Timberlake movie.

[00:21:59]

Would that make things better or worse for humanity? I would say better probably, right? I think it'd be a mixed bag, I think some people would be able to do nothing but focus on that and think about all the time they're wasting. And other people would be like, I'm going to make the absolute most out of this.

[00:22:15]

Well, I guess there are a couple of ways you can go, and it probably depends on when your date is. If you found out your date was a ripe old age, you might be like, well, I'm just going to try and lead the best life I can. That's great. You find out you live fast and die hard at 27. Yeah, die hard or you might die harder. You might just be like, screw it. Or you might really ramp up your good works.

[00:22:38]

Yeah, it depends what kind of person you are. Yeah.

[00:22:40]

And more and more, I'm realizing it depends on how you were raised, too, you know. Like, we definitely are responsible for carrying ourselves as adults.

[00:22:52]

You can't just say, well, I wasn't raised very well or I was raised this way. So whatever, like you have a responsibility for yourself and who you are as an adult. Sure. But I really feel like the way that you're raised to really sets the stage and put you on a path that that can be difficult to get off of because it's so hard to see for sure, you know. Yeah, because that's just normal to you, because that's what your family was.

[00:23:13]

Yeah, that's a good point.

[00:23:14]

So anyway, extinction is just one of the types of existential threats that we face. A bad one.

[00:23:21]

Yeah. Permanent stagnation is another one. And that's the one we kind of mentioned, danced around a little bit. And that's like some people are around.

[00:23:30]

Not every human died in whatever happened, but whoever is left is not enough to either repopulate the world, or to progress humanity in any meaningful way, or to rebuild civilization back to where it was.

[00:23:46]

And it would be that way permanently, which is, in itself, kind of tough to imagine too, just like the genuine extinction of humanity is tough to imagine. The idea of, well, there's still plenty of humans running around, how are we never going to get back to that place?

[00:24:01]

And that may be the most depressing one, I think. I think the next one is the most depressing, but that's pretty depressing. But one example that's been given for that is, let's say we say, all right, this climate change, we need to do something about that. So we undertake a geoengineering project that isn't fully thought out, and we end up causing, like, a runaway greenhouse effect.

[00:24:23]

Right, we make it worse. And there's just nothing we can do to reverse course, and so we ultimately wreck the Earth. Yeah, that would be a good example of permanent stagnation. That's right. Then there's this next one. So, yes, agreed, permanent stagnation is pretty bad. I wouldn't want to live under that.

[00:24:40]

But at least you can run around and, like, do what you want. I think the the total lack of personal liberty and the flawed realization one is what gets me.

[00:24:50]

Yeah. They all get me. Sure. Flawed realization is the next one. And that's that's sort of like the Matrix example. Right.

[00:25:00]

Which is that there's some technology that we invented that eventually makes us its little batteries in pods, like, oops.

[00:25:09]

Right, basically. Or there's just someone in charge, whether it's a group or some individual or something like that. It's basically a permanent dictatorship that we will never be able to get out from under, yeah, a global one, because this technology we've developed

[00:25:27]

is being used against us, and it's so good at keeping tabs on everybody and squashing dissent before it grows that there's just nothing anybody could ever do to overthrow it. Yeah. And so it's a permanent dictatorship where we're not doing anything productive, we're not advancing. Say it's like a religious dictatorship or something like that: all anybody does is go to church and support the church or whatever, and that's that. And so what Dr. Bostrom figured out is that there are fates as bad as death.

[00:26:02]

Sure, there are possible outcomes for the human race that are as bad as extinction, that still leave people alive, even, like, in kind of a futuristic way, like the flawed realization one goes, but where you wouldn't want to live the lives that those humans live. And so humanity has lost its chance of ever achieving its true potential. That's right. And those qualify as existential risks as well.

[00:26:29]

That's right. You wouldn't want to live in the Matrix. No. Not at all. Or in a post-apocalyptic, altered Earth.

[00:26:36]

Uh, yeah, the Matrix, basically. Or, like, Thundarr the Barbarian. That's what I imagine with the permanent stagnation.

[00:26:44]

So there are a couple of big categories for existential risks, and they are either nature-made or man-made. Um, the nature ones, you know, there's always been the threat that a big enough object hitting planet Earth could do it. Right, like, that's always been around; it's not like that's some sort of new realization. But it's just so rare that it's not likely. Right.

[00:27:13]

All of the natural ones are pretty pretty rare compared to the human made ones.

[00:27:17]

Yeah. Like, I don't think science wakes up every day and worries about a comet or an asteroid or a meteor. No.

[00:27:24]

And it's definitely worth saying that the better we get at scanning the heavens, the safer we are, eventually, when we can do something about it. If we see one of those heading our way now, what do we do?

[00:27:36]

Just hit the gas and move the Earth over? Send Superman out there? Right. And there's nothing we can do about any of these anyway, so maybe that's also why science doesn't wake up worrying, right? Yeah.

[00:27:47]

So you've got near Earth objects. You've got celestial stuff like collapsing stars that produce gamma ray bursts. And then even back here on Earth, like a super volcanic eruption could conceivably put out enough soot that it blocks photosynthesis. And we did a show on that. Yeah. Sends us into essentially a nuclear winter, too. That would be bad. But like you're saying, there's these are very rare and there's not a lot we can do about them now.

[00:28:11]

Instead, the focus of people who think about existential risks, and there are, like, a pretty decent handful of people who are dedicated to this now, they say that the anthropogenic, or human-made, ones are the ones we really need to mitigate, because they're human-made, so they're under our control, and that means we can do something about them, more than, say, a comet. Yeah, yeah.

[00:28:38]

But it's a bit of a double-edged sword, because you think, oh, well, since we could stop this stuff, that's really comforting to know. But we're not, right? Like, we're headed down a bad path in some of these areas, for sure. So because we are creating these risks and not thinking about these things in a lot of cases, they're actually worse, even though we could possibly control them.

[00:29:03]

Right. It's definitely makes it more ironic, too. Yeah, right.

[00:29:08]

So there are a few that have been identified and there's probably more that we haven't figured out yet or haven't been invented yet. But one of the big ones just I think almost across the board, the one the existential risk analysts worry about the most is A.I. artificial intelligence.

[00:29:24]

Yeah, and this is the most frustrating one because it seems like it would be the easiest one to not stop in its tracks, but to divert along a safer path. Right.

[00:29:38]

The problem with that is that the people who have dedicated themselves to figuring out how to make that safer path are coming back and saying, this is way harder than we thought it was going to be. Yeah. Oh, really? Yeah. And so at the same time, while people recognize that there needs to be a safe path for A.I. to follow, there's this other path that it's on now, which is known as the unsafe path.

[00:30:05]

Yeah, that's the one that's making people money. So everybody's just going down the road.

[00:30:09]

Now, these other people are trying to figure out the safer one, because, as the computer in WarGames would say, maybe the best option is to not play the game. Sure. And if there is no safe option, then maybe A.I. should not happen, or we need to...

[00:30:28]

And this is almost heresy to say, but we need to put the brakes on A.I. development so that we can figure out the safer way and then move forward. But we should probably explain what we're talking about with "safe" in the first place, right?

[00:30:42]

Yeah. I mean, we're talking about creating a superintelligent A.I. that basically is so smart that it starts to self-learn and is beyond our control, and it's not thinking, oh, wait a minute, one of the things I'm programmed to do is make sure we take care of humans. Right.

[00:31:01]

And it doesn't necessarily mean that some A.I. is going to become superintelligent and say, I want to destroy all humans. Right. That's actually probably not going to be the case. It will be that this superintelligent A.I. is carrying out whatever it was programmed to do, and it would disregard humans. Exactly. And so if our goal of staying alive and thriving comes into conflict with whatever this A.I.'s goal is, whatever it was designed to do, we would lose.

[00:31:29]

Yeah, because it's smarter than us. By definition, it's smarter than us. It's out of our control. And probably one of the first things it would do when it became superintelligent is figure out how to prevent us from turning it off.

[00:31:42]

Right. Well, yeah. The failsafe, the all-important failsafe, that it could just disable.

[00:31:47]

Exactly. Right, you can't just, like, sneak up behind it with a screwdriver or something like that; you could get shot. Right.

[00:31:53]

And the robots would be like, "what was that?" in a robot voice. That's called designing friendly, or aligned, A.I., and some of the smartest people in the field of A.I. research have stopped figuring out how to build A.I. and have started to figure out how to build friendly A.I.

[00:32:12]

Yeah, aligned and aligned with our goals and needs and desires.

[00:32:16]

Yeah. And Nick Bostrom actually has a really great thought experiment about this called the paperclip problem. Yeah. And you can hear it on The End of the World.
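For readers who want the misalignment idea made concrete, here is a minimal, purely illustrative sketch, loosely inspired by the paperclip thought experiment rather than taken from Bostrom or the episode. Every number and name in it (the resource pool, the conversion rate, the "human share") is a hypothetical stand-in; the point is only that an optimizer maximizing the one quantity in its objective will ignore any constraint it was never given.

```python
# Toy sketch, all values hypothetical: an optimizer maximizes the only
# quantity in its objective (paperclips) and ignores a constraint it was
# never given (resources that humans need to keep living).

def run_paperclip_optimizer(total_resources=100.0, human_share_needed=30.0,
                            conversion_rate=15.0, steps=10):
    paperclips, resources = 0.0, total_resources
    for _ in range(steps):
        # Objective says "more paperclips"; nothing says "leave some for
        # humans", so every reachable unit of resource gets converted.
        converted = min(resources, conversion_rate)
        resources -= converted
        paperclips += converted
    humans_ok = resources >= human_share_needed
    return paperclips, resources, humans_ok

if __name__ == "__main__":
    clips, left, humans_ok = run_paperclip_optimizer()
    print(f"paperclips made: {clips:.0f}")
    print(f"resources left: {left:.0f} (humans needed 30, ok? {humans_ok})")
```

Nothing in the loop is hostile to humans; the harm falls out of an objective that simply never mentions them, which is the alignment point the hosts are making.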

[00:32:28]

Yeah, I like that, driving listeners there. I think the next one is nanotech, and nanotech is, I mean, something that's very much within the realm of possibility, as is A.I., actually. That's not super far-fetched either. Well, superintelligent A.I.? Yeah, yeah, it's definitely possible. Yeah.

[00:32:48]

And that's the same with nanotechnology we're talking about. And I've seen this everywhere from little tiny robots that will just be dispersed and clean your house. Right.

[00:32:58]

And all the way down to, like, the atomic level, where they can, like, reprogram our body right from the inside.

[00:33:06]

Or little tiny robots that can clean your car. Yeah, those are the three. Those are three things.

[00:33:13]

So two of them are cool. One of the things about these nanobots is that because they're so small, they'll be able to manipulate matter on, like, the atomic level, and the usefulness of that is mind-boggling. You'd send them in, and they're going to be networked, so we'll be able to program them to do whatever and control them. Right. The problem is, if they're networked and they're under our control, they could fall under the control of somebody else, or, say, a superintelligent A.I. Yeah.

[00:33:43]

Then we would have a problem. Yes. Because they can rearrange matter on the atomic level. Yeah. So who knows what they would start rearranging that we wouldn't want them to rearrange.

[00:33:53]

Yeah, it's like that Gene Simmons sci-fi movie in the 80s. Oh, yeah. I want to say it was Looker. No, I always confuse those two. Looker is the other one; this is Runaway. Runaway. I think one inevitably followed the other on HBO.

[00:34:10]

They had to have been a double feature because they could not be more linked in my mind. Same here. You know, I remember Albert Finney was in one. I think he was in Looker. He was. And Gene Simmons was in Runaway as the bad guy, of course. Oh, yeah. But did a great job. And Tom Selleck was the good guy.

[00:34:25]

Yeah, Tom Selleck. But the idea in that movie was not nanobots. They were little insect-like robots, but they just weren't nano-sized. Right.

[00:34:36]

And so the reason these could be so dangerous is not their size, but that there are just so many of them. Yeah. And while they're not big and can't punch you in the face or stick you in the neck with a needle or something like the Runaway robots, they can do all sorts of stuff to you molecularly, and you would not want that to happen.

[00:34:56]

Yeah, this is pretty bad. There's an engineer out of MIT named Eric Drexler. He is a big, big name in molecular nanotech.

[00:35:06]

He if he's listening right now, right up to when you said his name, he was just sitting there saying, please don't mention me, really, you know, because he's tried to back off from his grey goo hypothesis.

[00:35:16]

Right. So, yeah, this is the idea that there are so many of these nanobots, and they can harvest their own energy and they can self-replicate, right, like little bunny rabbits. And there would be a point where there was runaway growth, such that the entire world would look like grey goo, because it's covered with nanobots. Yeah. And since they can harvest energy from the environment, they would eat the world. They'd wreck the world.
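To see why "runaway growth" gets scary so fast, here is a small back-of-the-envelope sketch of exponential self-replication. The nanobot mass, the target mass, and the doubling time are invented round numbers for illustration, not figures from Drexler or the episode; only the doubling logic matters.

```python
import math

# Hypothetical round numbers, purely to illustrate exponential doubling.
nanobot_mass_kg = 1e-15       # assumed mass of one self-replicating nanobot
target_mass_kg = 1e15         # rough stand-in for "everything it could eat"
doublings_per_day = 24        # assumed: the population doubles once an hour

# Doublings needed for one bot's lineage to reach the target mass:
doublings = math.log2(target_mass_kg / nanobot_mass_kg)
print(f"doublings needed: {doublings:.0f}")            # about 100
print(f"time at one doubling per hour: about {doublings / doublings_per_day:.0f} days")
```

Under those made-up assumptions, roughly a hundred doublings, a handful of days, is all that separates one bot from planet-scale goo, which is why the scenario hinges entirely on whether self-replication is designed in at all.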

[00:35:40]

Basically, that's the hypothesis. That's scary. You're right.

[00:35:43]

So he took so much flak for saying this, because apparently it scared people enough back in the 80s that nanotechnology was, like, kind of frozen for a little bit. Yeah. And so everybody went after Drexler, and so he's backed off from it, saying, like, this would be a design flaw. This wouldn't just naturally happen with nanobots. You'd have to design them to harvest energy themselves and to self-replicate, so just don't do that. Yeah. And so the thing is, it's like, yes, he took a lot of flak for it, but it was also a contribution to the world: he pointed out two big flaws that could happen that now are just, like, a sci-fi trope.

[00:36:19]

Right.

[00:36:20]

But when he when he thought about them, they weren't self-evident or obvious. Yeah. I mean, I feel bad.

[00:36:27]

We even said his name, but it's worth saying. Clyde Drexler. Right.

[00:36:32]

Clyde the Glide. That's the guy. That's right. Biotechnology is another pretty scary field. There are great people doing great research with infectious disease. Part of that, though, involves developing new bacteria, new viruses, new strains that are even worse than the pre-existing ones, right, as part of the research. And that can be a little scary too, because, I mean, it's not just the stuff of movies. There are accidents that happen, protocols that aren't followed.

[00:37:03]

Right. And this stuff can or could get out of a lab. Yeah.

[00:37:08]

And it's not one of those "could get out of a lab" things; it has gotten out of labs. It happens. I don't want to say routinely, but this has happened so many times. Yeah. That when you look at the track record of the biotech industry, it's just like, how are we not all dead right now? It's crazy.

[00:37:24]

Like lost broken arrows, lost nuclear warheads. Exactly, but with little tiny, horrible viruses. And then you factor in that terrible track record with them actually altering viruses and bacteria to make them more deadly, yeah, to do those two things: to reduce the time that we have to get over them, right, so they make them more deadly; and then to reduce proximity, to make them more easily spread, more contagious, so they spread more quickly.

[00:37:51]

Right. And kill more, more quickly as well. Then you have potentially an existential risk on your hand for sure.

[00:37:58]

We've talked in here a lot about the Large Hadron Collider. We're talking about physics experiments as the I guess this is the last example that we're going to talk about.

[00:38:07]

Yeah. And I should point out that physics experiments do not show up anywhere in Ord's Precipice book. OK, this one is kind of, like, yeah, I mean, it's further out there.

[00:38:18]

There's plenty of people who agree that this is a possibility. But a lot of existential theorists are like, I don't know.

[00:38:26]

Well, you'll explain it better than me. But the idea is that we're doing all these experiments, like the Large Hadron Collider, to try and figure out stuff we don't understand. Right. Which is great, but we don't exactly know where that all could lead.

[00:38:41]

Yeah, because we don't understand it enough. You can't say this is totally safe. Right.

[00:38:46]

And so if you read some physics papers, and this isn't, like, Rupert Sheldrake, morphic-fields kind of stuff, right?

[00:38:55]

It's actual physicists who have said, well, actually, using this version of string theory, yeah, it's possible that this could be created in the Large Hadron Collider, right, or more likely in a more powerful collider that's going to be built in the next 50 years or something like that.

[00:39:11]

The Super Large Hadron Collider, the Duper. Yeah, I think that's the nickname for it. Oh, man, I hope the next one does end up being the Duper. It's pretty great, right?

[00:39:22]

Yeah, I guess so. But it also is a little, kind of, you know. I don't know, I like it. All right, so they're saying that a few things could be created accidentally within one of these colliders when they smash the particles together. A microscopic black hole. My favorite: the low-energy vacuum bubble, no good, which is a little tiny version of our universe that's more stable, like a more stable, lower-energy version.

[00:39:49]

And so if it were allowed to grow, it would grow at the speed of light. It would overwhelm our universe and be the new version of the universe.

[00:39:58]

Yeah, that's like when you buy the baby alligator or the baby boa constrictor python you think is so cute, right?

[00:40:03]

And then it grows up and eats the universe, and we're screwed. The problem is, this new version of the universe is set up in a way that's different than our version. And so all the matter, including us, that's arranged just so for this version of the universe would be disintegrated in the new version. So it's like the Snap.

[00:40:23]

But can you imagine if all of a sudden a new universe just grew out of the Large Hadron Collider accidentally and, at the speed of light, just ruined this universe forever? If we just accidentally did this with a physics experiment, I find that endlessly fascinating and also hilarious. Yeah, just the idea. I think the world will end ironically somehow. It's entirely possible.

[00:40:48]

So maybe before we take a break, let's talk a little bit about climate change, because a lot of people might think climate change is an existential threat. Uh, you know, it's terrible and we need to do all we can, oh, yeah, but even the worst-case models probably don't mean an end to humanity as a whole. As a whole. Like, it means we're living much further inland than we thought we ever would, and we may be in much tighter quarters than we ever thought we might be in.

[00:41:18]

A lot of people might be gone, but it's probably not going to wipe out every human being.

[00:41:23]

Yeah, it'll probably end up being akin to that same line of thinking, the same path as a catastrophic nuclear war. Yeah, which I guess you could just say nuclear war, sure, catastrophic is kind of built into the idea, but we would be able to adapt and rebuild. Yeah, it's possible that our worst-case scenarios are actually better than what will actually happen. So just like with a total nuclear war, it's possible that it could be bad enough that it could be an existential risk.

[00:41:55]

It's possible climate change could end up being bad enough that it's an existential threat. From our current understanding, they're probably not existential risks. Right.

[00:42:05]

All right. Well, that's a hopeful place to leave for another break. And we're going to come back and finish up with why all this is important. Should be pretty obvious, but we'll summarize it.

[00:42:36]

OK, Chuck. So one thing about existential risks that people like to say is, well, let's just not do anything.

[00:42:51]

And it turns out, from people like Nick Bostrom and Toby Ord and other people around the world who are thinking about this kind of stuff, if we don't do anything, we probably are going to accidentally wipe ourselves out. Like, doing nothing is not a safe option.

[00:43:08]

Yeah, but Bostrom is one who has developed a concept, a hypothetical one, called technological maturity, which would be great. And that is sometime in the future where we have invented all these things, but we have done so safely and we have complete mastery over it all. There won't be those accidents. There won't be the grey goo. There won't be the A.I. that's not aligned.

[00:43:31]

Yeah, because we'll know how to use all this stuff. That's right. Like you said. Right. We're not mature in that way right now.

[00:43:37]

No, actually, we're at a place that Carl Sagan called our technological adolescence where we're becoming powerful, but we're also not wise. Right.

[00:43:45]

So that makes sense. The point where we're at now, technological adolescence, where we're starting to invent the stuff that actually can wipe humanity out of existence, but before we've reached technological maturity, where we have safely mastered it and have the wisdom to use all this stuff, that's probably the most dangerous period in the history of humanity. And we're entering it right now. And if we don't figure out how to take on these existential risks, we probably won't survive from technological adolescence all the way to technological maturity.

[00:44:19]

We will wipe ourselves out one way or another, because, and this is really important to remember, all it takes is one existential catastrophe. Not all of these have to take place; it doesn't have to be some combination. Just one bug with basically 100 percent mortality has to get out of a lab. Just one accidental physics experiment has to slip up. Just one A.I. has to become superintelligent and take over the world. Like, just one of those things happening, and then that's it.

[00:44:50]

And again, the problem with existential risks that makes them different is we don't get a second chance. One of them befalls us. That's that. That's right.

[00:45:01]

It depends on who you talk to, if you want to get into maybe just a projection of our chances as a whole, as humans. Toby Ord right now says, what, a one in six chance over the next hundred years?

[00:45:15]

Yeah, he always follows that with Russian roulette. Yeah. Other people say about ten percent. There's some different cosmologists. There's one named Lord Martin Rees who puts it at fifty fifty. Yeah.

[00:45:28]

He actually is a member of the Center for the Study of Existential Risk.

[00:45:32]

And we didn't mention before that Bostrom founded something called the Future of Humanity Institute, FHI, which is pretty great. I mean, and then there's one more place I want to shout out.

[00:45:43]

It's called the Future of Life Institute. It was founded by Max Tegmark and Jaan Tallinn, co-founder of, I think, Skype. Oh, really? I think so. All right.

[00:45:53]

Well, you should probably also shout out the Church of Scientology. No doubt.

[00:45:58]

No, the geniuses. Yeah, that's the one. Yes, that's what I was thinking of. Well, they get confused a lot.

[00:46:04]

This is a pretty cool little thing you did here with how long humans have been around, because I was kind of talking before about the long view of things. So I think your rope analogy is pretty spot-on here.

[00:46:16]

So that's J.L. Schellenberg's rope analogy. Well, I didn't think you wrote it.

[00:46:20]

I wish. But you included it. So what we were talking about, like you were saying, is it's hard to take that long view. But if you step back and look at how long humans have been around: Homo sapiens has been on Earth about 200,000 years, which seems like a very long time.

[00:46:36]

It does. And even modern humans like us have been around for about fifty thousand years, which seems like a very long time as well. That's right. But if you think about how much longer the human race, humanity, could continue to exist as a species, that's nothing. It's virtually insignificant. And J.L. Schellenberg puts it like this: let's say humanity has a billion-year lifespan, and you translate that billion years into a 20-foot rope.

[00:47:05]

OK, that's easy enough to picture.

[00:47:08]

Just to show up at the eighth-of-an-inch mark on that 20-foot rope, our species would have to live another three hundred thousand years from the point we've already lived to.

[00:47:19]

Yes, we would have to live 500,000 years total just to show up as that first eighth of an inch. That first eighth of an inch on that 20-foot-long rope says it all; that's how long humanity might have ahead of us. And that's kind of a conservative estimate. Yeah, some people say once we reach technological maturity, we're fine. We're not going to go extinct, because we'll be able to use all that technology, like having A.I. track all those near-Earth objects.

[00:47:45]

Right. And say, well, this one's a little close for comfort. I'm going to send some nanobots out to disassemble it. We will remove ourselves from the risk of ever going extinct when we hit technological maturity. So a billion years is definitely doable for us.
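For anyone who wants to check the rope numbers quoted above (a billion-year lifespan on a 20-foot rope, about 500,000 years per eighth of an inch, and roughly 300,000 more years for our 200,000-year-old species to reach the first mark), here is the arithmetic as a quick sketch:

```python
# Schellenberg's rope analogy as described in the episode: humanity's possible
# 1-billion-year future laid out along a 20-foot rope.
rope_inches = 20 * 12                       # 20 feet = 240 inches
years_total = 1_000_000_000                 # assumed billion-year lifespan
years_per_eighth_inch = years_total / rope_inches / 8

print(f"one eighth of an inch is about {years_per_eighth_inch:,.0f} years")
# Homo sapiens so far: roughly 200,000 years, i.e. not yet at the first mark.
print(f"years still needed to reach the first eighth-inch mark: "
      f"about {years_per_eighth_inch - 200_000:,.0f}")
```

That works out to about 520,000 years per eighth of an inch and about 320,000 years still to go, which matches the round figures the hosts use.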

[00:47:58]

Yeah, and it's why we should care about it, because it's happening right now. I mean, there is already A.I. that is unaligned. We've already talked about the biotech in labs; accidents have already happened, and they happen all the time. Yeah. And there are experiments going on with physics where we think we know what we're doing. Right. But accidents happen, and this is an accident that you can't recover from. You know, there's no whoopsies.

[00:48:27]

Right. Let me try that again. Right, exactly.

[00:48:30]

Because we're all toast. So this is why you have to care about it. And I wish there were more people that cared about it.

[00:48:36]

Well, it's becoming more of a thing. And if you talk to Toby Ord, he's like, so just like, say, the environmental movement was, you know, the moral push, and we're starting to see some results from that now, yeah, but when they started back in the 60s and 70s, nobody had ever heard of that. Yeah, I mean, it took decades. He's saying that's about where we are now with existential risk.

[00:48:59]

People are going to start to realize, like, oh man, this is for real, and we'd better do something about it, because we could live a billion years if we manage to survive the next hundred. Which means you and me, Chuck, and, like, all of us alive right now are in one of the most unique positions any humans have ever been in: we have the entire future of the human race basically resting in our hands, because we're the ones who happen to be alive when humanity entered its technological adolescence.

[00:49:27]

Yeah, and it's a tougher one than save the planet because it's such a tangible thing when you talk about pollution and it's very easy to put on a TV screen or in a classroom. And it's not so easily dismissed because you can see it in front of your eyeballs and understand it.

[00:49:45]

This is a lot tougher education wise because 99 percent of people hear something about nanobots and grey goo or A.I. and just think, come on, man, that's the stuff of movies. Yeah.

[00:49:58]

And I mean, that's a it's sad that we couldn't just dig into it further, because when you really do start to break it all down and understand it, it's like, no, this totally is for real. And it makes sense. Like this is highly possible and maybe even likely. Yeah.

[00:50:14]

And not the hardest thing to understand. It's not like you have to understand nanotechnology, right. To understand its threat.

[00:50:19]

Right, exactly. Well put. The other thing about all this is that not everybody is on board with it. There are even people who hear about this kind of stuff and are like, nah, you know, this is overblown. Pie in the sky, it's overblown. Or sort of the opposite of pie in the sky. What would that be, the ground? We're in opposite-of-the-sky territory. It's a turkey drumstick in the dirt.

[00:50:42]

OK, that's kind of the opposite of a pie. Sure. OK, I think I may have just come up with a colloquialism. I think so. Um, so some people aren't convinced. Some people say, no, A.I. is nowhere near being even close to human-level intelligent, let alone superintelligent. Yeah.

[00:51:01]

So, like, why spend money? Because it's expensive. Right. And other people are like, yeah, if you start diverting, you know, research into figuring out how to make A.I. friendly, I can tell you China and India aren't going to do that, and so they're going to leapfrog ahead of us and we're going to be toast competitively. Right. So there's a cost to it, an opportunity cost, and an actual cost. So for a lot of people, it's basically the same arguments people make against mitigating climate change.

[00:51:28]

Yeah, same same thing. Kind of.

[00:51:30]

So the answer is terraforming. Terraforming.

[00:51:35]

Well, that's not the answer. The answer isn't either one of those. Right. Studying terraforming kind of is, right? OK, the answer is to study this stuff, sure, figure out what to do about it. But it wouldn't hurt to learn how to live on Mars.

[00:51:47]

Right, or just off of Earth. Because in the exact same way that a whole village is at risk when it lives under a mountain and a mudslide comes down: if we all live on Earth and something happens to life on Earth, that's it for humanity. But if there's, like, a thriving population of humans who don't live on Earth, who live off of Earth, then if something happens on Earth, humanity continues on. So learning to live off of Earth is a good step in the right direction.

[00:52:16]

But that's a plan B. That's plan A.1, or 1A, to be sure.

[00:52:23]

Yes, it's tied for first, like it's something we should be doing at the same time is studying and learning to mitigate existential risk. Yeah.

[00:52:30]

And I think it's got to be multipronged, because the threats are multipronged. Sure, absolutely. And there's one other thing that I really think you've got to get across. Well, like we said, if, say, the U.S. starts to invest all of its resources into figuring out how to make friendly A.I., but India and China continue on, like, the current path, it's not going to work.

[00:52:52]

And the same goes if every country in the world said, no, we're going to figure out friendly A.I., but just one dedicated itself to continuing on this path: all the progress that the rest of the countries in the world made would be totally negated by that one.

[00:53:09]

Yeah. So we got to get the it's got to be a global effort.

[00:53:11]

It has to be species wide effort, not just with A.I., but with all of these understanding all of them and mitigating them together. Yeah, that could be a problem.

[00:53:19]

So thank you very much for doing this episode with me.

[00:53:24]

Oh, me? Yeah, I thought you were talking to Dave.

[00:53:27]

Yeah, well, Dave too. We appreciate you too, Dave. But big ups to you, Charles, because Jerry is like I'm not sitting in that room.

[00:53:33]

She's like, I'm not listening to you blather on about existential risk for an hour. So, one more time: Toby Ord's The Precipice is available everywhere you buy books. You can get The End of the World with Josh Clark wherever you get podcasts. If this kind of thing floats your boat, check out the Future of Humanity Institute and the Future of Life Institute. And they have a podcast hosted by Ariel Conn, and she had me on back in December of 2018 as part of a group that was talking about existential hope.

[00:54:06]

So you can go listen to that, too, if you're like, this is a downer, I want to think about the bright side. Sure. There's that whole Future of Life Institute podcast. Yeah. So what about you? Are you, like, convinced of this whole thing, that this is an actual thing we need to be worrying about and thinking about? Oh, no. No, really? No.

[00:54:25]

I mean, I think that sure, there are people that should be thinking about this stuff and that's great as far as like me.

[00:54:34]

Like, what can I do?

[00:54:36]

Well, I ran into that. Like, there's not a great answer for that. It's more like, start telling other people is the best thing that the average person can do. Hey, man, we just did that in a big way. We did. And we just told, like, five hundred million people. Now we can go to sleep. OK, you got anything else?

[00:54:53]

I got nothing else. All right. Well then since Chuck said he's got nothing else, it's time for Listener Mail.

[00:55:01]

Uh, yeah. This is the opposite of all the smart stuff we just talked about, I just realized. "Hey, guys, love you, love Stuff You Should Know. On a recent airplane flight, I listened to and really enjoyed the coyote episode, wherein Chuck mentioned 'wolf bait' as a euphemism for farts." Yeah.

[00:55:20]

Coincidentally, on that same flight, uh, were Bill Nye the Science Guy and Anthony Michael Hall, the actor.

[00:55:29]

What is this star-studded airplane flight? Wow. He said, "So naturally, when I arrived at my home, I felt compelled to rewatch the 1985 film Weird Science," in which Anthony Michael Hall stars. And I remember this now that he mentions it: in that movie, Anthony Michael Hall uses the term wolf bait as a euphemism for pooping, dropping wolf bait, which makes sense now, that it would be actual poop and not a fart.

[00:55:56]

Did you say his name before? Who, the guy who wrote this? No, your friend who used the word wolf bait. Oh, yeah, sure. OK, so is Eddie, like, a big Weird Science fan, or an Anthony Michael Hall fan? No, I think he's just a Kelly LeBrock fan. Yeah, that must be it. OK.

[00:56:11]

It has been a full circle day for me and one that I hope you will appreciate hearing about. And that is Jake.

[00:56:16]

Man, can you imagine being on a flight with Bill Nye and Anthony Michael Hall? Who do you talk to? Who do you hang with? I don't know. I'd just be worried that somebody was going to, like, take over control of the plane and fly it somewhere to hold us all hostage and make those two, like, perform.

[00:56:33]

Or what if Bill Nye and Anthony Michael Hall are in cahoots, maybe, and they take the plane hostage?

[00:56:38]

Yeah, it'd be very suspicious if they didn't talk to one another, you know what I mean? I think so. Who is that?

[00:56:44]

That was Jake. Thanks, Jake. That was a great email, and thank you for joining us. If you want to get in touch with us like Jake did, you can go on to stuffyoushouldknow.com and get lost in the amazingness of it. And you can also just send us an email to stuffpodcast@iheartradio.com. Stuff You Should Know is a production of iHeartRadio's HowStuffWorks. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.