[00:00:00]

Hey, this is Dana Schwartz. You may know my voice from Noble Blood, Haileywood, or Stealing Superman. I'm hosting a new podcast, and we're calling it Very Special Episodes. A very special episode is stranger than fiction.

[00:00:15]

It sounds like it should be the next season of True Detective. These Canadian cops trying to solve this mystery of who spiked the chowder on the Titanic set.

[00:00:22]

Listen to Very Special Episodes on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:00:31]

Hello, this is Susie Essman and Jeff Garlin. I'm here, and we are the hosts of the History of Curb Your Enthusiasm podcast. Now, we're going to be rewatching and talking about every single episode, and we're going to break it down and give behind-the-scenes knowledge that a lot of people don't know. And we're going to be joined by special guests, including Larry David and Cheryl Hines, Richard Lewis, Bob Odenkirk, and so many more. And we're going to have clips, and it's just going to be a lot of fun. So listen to the History of Curb Your Enthusiasm on the iHeartRadio app, Apple Podcasts, or wherever you happen to get your podcasts.

[00:01:04]

Hi, everyone, it's Josh, and for this week's select, I've chosen our 2018 episode, Some Games You Would Surely Lose to a Computer. It's a philosophical discussion about AI that's disguised as an episode on computer games. Honestly, we didn't plan it to be like that. It just turned out that way. We're pretty happy that it did. And in light of the recent advances with machine learning, like ChatGPT, a few of the things we say seem naively quaint now. Plus, it has a dollop of our TechStuff colleague Jonathan Strickland at the end, so that's a bonus. I hope you enjoy.

[00:01:43]

Welcome to Stuff You Should Know, a production of iHeartRadio.

[00:01:53]

Hey, and welcome to the podcast. I'm Josh Clark. There's Charles W. Chuck Bryant. There's Jerry over there. I'm just going to come out and tell everybody: they're making fun of me for some weird reason. Vaguely, in weird ways, but I'm all right. So, Chuck, I have a story for you.

[00:02:10]

Okay.

[00:02:11]

I'm going to take us back to the 1770s and the swinging town of Vienna. Not Virginia, not Vienna, Georgia, which, you know, that's how they pronounce it, right? Vy-anna.

[00:02:25]

Vy-anna. Sausages.

[00:02:26]

Right? Vienna, Austria.

[00:02:28]

You ever been there?

[00:02:30]

Vienna, Austria? No. Been to Brussels. That was pretty close.

[00:02:34]

Vienna is lovely, I'm sure.

[00:02:37]

I think it's a lot like Brussels.

[00:02:39]

Very clean. Lovely town. I just remembered it being very clean.

[00:02:43]

Yeah, very clean. Gorgeous architecture. Weird little angled side streets that are very narrow. Very pretty town. So we're in Vienna, and there is a dude skulking about, going to the royal palace in Vienna. His name is Wolfgang von Kempelen, and he's an inventor. He's an engineer. He's a pretty sharp dude, and he's got with him what would come to be known as the Turk. But he called it the Mechanical Turk, or the Automaton Chess Player, and that's what it was. It was a wooden figure that moved mechanically, seated at a cabinet, and on top of the cabinet was a chessboard. And when he brought it out to show to the royal court, it was cool, kind of, but nothing they hadn't seen before, because automata were kind of a hip thing by then.

[00:03:42]

Yeah. People loved building these, engineering these automata machines to do various things. And people were just knocked out by the fact that you could hide these gears and levers behind wood or a cloth, and it looks as though there's a real... well, not real, but you know what.

[00:04:02]

I mean, that it's like a real machine.

[00:04:04]

Yeah. But they weren't fooled into thinking, like, is that a real man? But for their time, it was so advanced-looking that it's like us seeing Ex Machina in the movie theater.

[00:04:17]

Sure.

[00:04:18]

Does that make sense?

[00:04:19]

Yeah, no, it does make sense. But imagine seeing, like, Ex Machina and being like, I've seen this before. This isn't anything special. Okay.

[00:04:26]

Yeah. And this thing, to be clear, looked like a... is it Zoltar or Zoltan from Big?

[00:04:33]

Zoltar. Zoltan. I don't know. It's one of those two.

[00:04:37]

One of those two. Like, this guy's wearing a turban, and it's in a glass case. Like a bust. Like a chest up thing.

[00:04:45]

Yeah. He's seated at this cabinet, so there's no need for legs or anything like that. Yeah. But the thing, this is what was amazing about the Turk: he could play chess, and he could play chess really well. So, yeah, he was like an automaton, and he moved all herky-jerky or whatever, but he could play you in chess, which was a huge advance at the time. Like, this is something that wouldn't come up again until the 1990s, more than 200 years later. This thing, this automaton, could play a human being in chess and beat them.

[00:05:22]

Well, yeah. And it looked like when the game started, it would look down at the chessboard and cock his head like, what should my first move be?

[00:05:29]

Right.

[00:05:30]

And if people... I love this part. If people tried to cheat. Apparently Napoleon tried to cheat this thing. Because this guy, it debuted at the Viennese court, but then it went on a world tour.

[00:05:42]

Yeah. And it was taken over by a successor to the guy who toured with it even further.

[00:05:48]

People went nuts for this stuff.

[00:05:49]

They did. They loved it because they were like, this is crazy. I can't believe what I'm seeing. Most people, though, were not taken in by it. They're like, there's some trick here.

[00:05:56]

Sure.

[00:05:57]

But von Kempelen and the guy who came after him, I don't remember his name, they would demonstrate. You could open this cabinet, and you could see all the workings of the mechanical Turk inside.

[00:06:09]

Right. So what I was saying is, if this thing sensed a cheater, like Napoleon supposedly was, Napoleon would move a piece out of turn, illegally, or something. This dude, the Turk, Turk 182, would pick up the chess piece and move it back, as if to say, like, no, no, Napoleon, I see what you're doing. And then if the person attempted to move it again, I don't know how many times, maybe two or three times, eventually it would just wipe its hand across the board and knock off all the pieces. Game over.

[00:06:40]

Right.

[00:06:40]

Which is pretty great. Yeah, it's a nice little feature.

[00:06:43]

Yeah. But it even showed even more that this thing was thinking for itself.

[00:06:49]

Yeah.

[00:06:49]

That's the key here. Right? Sure. Chess had been, for a very long time, viewed as only something that a human would be capable of, because it took a human intellect. And there was actually a guy, an English engineer, I think he was a mechanical engineer, his name was Robert Willis. He said that chess was, quote, the province of intellect alone. So the idea that there was this automaton playing chess blew people away. But again, people figured out, like, okay, there's something going on here. We think that von Kempelen is controlling this thing remotely, somehow, maybe using magnets or whatever. Other people hit upon the idea that there was a small person inside the cabinet who would hide when the workings were shown, when the cabinet was open to show the workings. And then when the cabinet was closed again and the mechanical Turk started playing, the person had crawled back out and was actually controlling it. This seems to be the case, that there was a person controlling it. But the idea that it was a machine that could think and beat humans in chess had kind of unsettling implications.

[00:08:02]

Yeah. This author, Philip Thicknesse. Great name. British author.

[00:08:09]

Sure.

[00:08:10]

Philip Thicknesse. Yeah, he said... and people, like you said, had all those more complicated explanations. This article you sent astutely points out that he followed Occam's razor and basically said, he's got a little kid in there. He's got a little Bobby Fischer in there that's really good at chess. And that's what's going on. And other people speculated that, you know, little people might be in there, just adults who would fit in. Then, you know, there's the explanation that he would open it up and shine a candle around and say, nothing to see here, right, everyone? So, should we reveal the real deal?

[00:08:50]

Sure. I think I did already.

[00:08:53]

Well, I don't think you spelled it out as well.

[00:08:55]

We'll spell it out.

[00:08:56]

There was a little person in there.

[00:08:58]

Yeah.

[00:08:58]

Not just one little person, but they would travel around and recruit people. I guess people would get tired of being in there, or they'd forget about.

[00:09:05]

Them and they'd starve and have to replace them.

[00:09:08]

But it really was a trick. There was a little person in there. They did the same thing as, like, the magic act when they saw a person in half. The lady just gets into a tiny little ball in one section of that box.

[00:09:21]

But my thing is, like, this is not a satisfying explanation to me, Chuck.

[00:09:25]

I think it's great.

[00:09:26]

How did the person keep up with the board above?

[00:09:30]

Well, I don't know if they ever proved exactly how it was going.

[00:09:34]

That's what I'm saying.

[00:09:35]

Oh, okay. Whether or not... I think the Turk was just hollowed out, and you would just put your arms through the arm holes.

[00:09:47]

So you would crawl up into the Turk.

[00:09:49]

Yeah. You would become the Turk. You and the Turk would fuse.

[00:09:52]

That's what some people thought. I think that's what Edgar Allan Poe thought, too.

[00:09:56]

He loved it, are you kidding me?

[00:09:58]

Right? Other people thought that the little person was underneath in the cabinet operating the Turk with levers and stuff like that.

[00:10:07]

Well, there could have been a mirror or something, I guess, like a little telescopic mirror.

[00:10:11]

That's what's getting me, is how would they keep up with the game?

[00:10:14]

Right.

[00:10:14]

You could keep track of the game, but how could you see where the other person moved? You would know where you moved, but you wouldn't be able to see where the other person moved. That's what I don't get.

[00:10:22]

Just mirrors. Smoke and mirrors.

[00:10:23]

Maybe so. But the point is, it was a fake. It was a fraud. But it raised some really big questions about the idea of a machine beating a person at something like chess.

[00:10:35]

Yeah. And it really piqued the mind of one Charles Babbage. He was a kid, or young, at least, at the time when he saw the Turk in person. And a few years afterward, he began work on something called the Difference Engine, which was a machine that he designed to calculate mathematics automatically. So some point to this as kind of maybe the beginnings of humans trying to create AI.

[00:11:04]

Well, yeah. With Babbage's differential machine, or difference machine.

[00:11:08]

Yeah, Difference Engine. But at the very least, this is the first example that I know of of man versus machine. Even though it was really man versus man, because there was a man in the machine.

[00:11:21]

Right. It was a fraud. Yeah.

[00:11:23]

But it sparked that idea.

[00:11:25]

It definitely did. And that's something that chess in particular has always been like, this idea of, like, if you can teach a machine to play chess, you really achieved a milestone.

[00:11:36]

Sure.

[00:11:36]

And there's been plenty of programs, most notably Deep Blue, which we'll talk about. But there's been this idea that part of AI is chess, teaching it to play chess. But the people who develop AI never set out to make a chess-playing AI just to make a machine that can play chess.

[00:12:00]

Right.

[00:12:00]

That's not the point. Chess has always been this way to demonstrate the progress of artificial intelligence.

[00:12:06]

Yeah. Because it's a complex game that you can't just program; it almost has to learn.

[00:12:14]

Well, it depends on how you come at it at first. Right. So initially, they did try to program it. Okay. From basically 1950 to 2010, 60 years, that is how they approached AI and chess: you figured out how to break chess down and explain it to a computer. Now, ideally, you would have this computer, this AI, this artificial intelligence, be able to think about every possible outcome of a move before making it.

[00:12:56]

Right?

[00:12:57]

That's just not possible. Still today, we don't have computers that can do that. Right. So what you have to do is figure out how to create shortcuts for the machine, give it best practices, that kind of thing. And that was actually laid out in 1950 by a guy named Claude Shannon, who's the father of information theory. And he wrote a paper with a pretty on-the-nose title called Programming a Computer for Playing Chess. And you have to say it like that when you say the name.

[00:13:23]

Yeah, it's got a question mark at the end.

[00:13:25]

Right. But he laid out two big things. One is creating an evaluation function for the different moves, and the other one is called minimax. And those were the two things that Shannon laid out, and they established about 50 or 60 years of development in teaching an AI to play chess.

[00:13:48]

Yeah. So this evaluation function is just sort of the very basis of it all, kind of where it starts, which is you kind of create a numerical evaluation based on the state of the board at that moment.

[00:14:03]

Right.

[00:14:03]

And assign a real number evaluation to it. So the highest number that you would shoot for is obviously getting checkmate, getting a king in checkmate.

[00:14:16]

Right. So what you've just done, by assigning a number to a state, like the pieces on a board, is to say: shoot for this number.

[00:14:27]

Right.

[00:14:28]

The higher the number... like, you're going to give this AI the rule: the higher the number, the more desirable the move that could lead to that higher evaluation. That evaluation function is what you want to use.

[00:14:39]

Right. Like capture the knight or capture the queen. Capture the queen would have a higher evaluation number, right, exactly.
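
To make that evaluation idea concrete, here is a minimal sketch in Python, assuming a plain material count with the conventional piece values. The board representation and names are illustrative only, not anything Shannon or Deep Blue actually used:

```python
# Minimal chess evaluation sketch: sum up material. Positive favors
# White, negative favors Black. Real engines also score position,
# mobility, king safety, and so on.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def evaluate(board):
    """`board` is assumed to map squares like "e4" to piece codes:
    uppercase for White ("Q"), lowercase for Black ("q")."""
    score = 0
    for piece in board.values():
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# Checkmate is handled separately with an extreme value (say +/-10000),
# so the engine always prefers lines that deliver mate.
```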

[00:14:45]

So that's the function. And there's another one called the minimax.

[00:14:49]

Yeah, this is pretty great where you.

[00:14:50]

Want to minimize the maximum. And this is another shortcut that they.

[00:14:53]

Taught computers, maximum loss.

[00:14:55]

That is right.

[00:14:56]

Yeah.

[00:14:56]

So what they taught computers to do is this. No computer can look through an entire game, every possible outcome.

[00:15:04]

Right.

[00:15:04]

But there are computers that can look pretty far down the line at every possible outcome. And what you can say is, okay, you want to find the evaluation that is the worst-case scenario, the maximum loss, and then find the move that will minimize the possibility of that outcome.

[00:15:28]

Yeah. And this is... you're only limited by your processing power. But you're looking not only at the state of the board right now, but, if I make this move and I move the pawn to this spot, what are the next three moves that could possibly happen as a result of this move?

[00:15:44]

Right.

[00:15:45]

And you're only limited, like I said, by processing power. So obviously, the more juice you have, the more moves ahead you can look.

[00:15:52]

Exactly. And then they just shy away from ones with a higher function number.

[00:15:57]

Exactly.

[00:15:58]

Or a lower function number, depending on how you've programmed it.

[00:16:00]

Right.

[00:16:01]

But they're making these decisions based on these rules. And then there are other things you can do, like little shortcuts to say, if a decision tree leads to your own king being in checkmate, don't even think about that move any further, don't evaluate any longer, just abandon it, because we would never want to make that move. Right. So there's all these shortcuts you can do. And that's what they did to teach computers. That's what Deep Blue did when it beat Garry Kasparov in 1997. It was this huge, massive computer that knew a lot, a lot about chess. It had a lot of rules, a lot of incredibly intricate programming that was extremely sharp, and it actually won. It became the first computer to beat a human chess grandmaster in a regulation match.
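
Here is a minimal sketch of that minimax idea in Python, assuming the game supplies three hypothetical hooks (get_moves, apply_move, evaluate); a real engine like Deep Blue layers pruning shortcuts and hand-tuned heuristics on top of this skeleton:

```python
import math

def minimax(state, depth, maximizing, get_moves, apply_move, evaluate):
    """Depth-limited minimax: choose the move that minimizes the
    maximum loss, looking `depth` moves ahead."""
    moves = get_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None  # leaf: just score the board

    best_score = -math.inf if maximizing else math.inf
    best_move = None
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximizing, get_moves, apply_move, evaluate)
        if maximizing and score > best_score:
            best_score, best_move = score, move
        elif not maximizing and score < best_score:
            best_score, best_move = score, move
    return best_score, best_move
```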

[00:16:55]

And I don't think Kasparov gets enough credit for being willing to do this, because it was a big deal for him to lose. In the chess community and the AI community, it sent shockwaves. And everyone that was alive remembers, even if you didn't know anything about either one, remembers Deep Blue being all over the news. It was a really big deal. And Kasparov put his name on the line and lost.

[00:17:21]

Yeah. And I was wondering, Chuck, how you would get somebody to do that.

[00:17:26]

I'm sure a mountain of cash, I.

[00:17:29]

Guess that would probably be part of it. But he probably got... I mean, I don't know.

[00:17:34]

I bet that's out there.

[00:17:35]

Just.

[00:17:35]

I just didn't look it up.

[00:17:36]

So that's possible. It's also possible that they said, look, man, this is chess we're talking about or whatever, but really what you're doing is helping advance artificial intelligence.

[00:17:46]

Right? Because we're not really trying ultimately to win chess games. We're trying to cure cancer.

[00:17:52]

Yeah, we're going to take your title because we're going to beat you, or our machine's going to beat you. But even still, you're going to be helping with cancer. Think of the cancer, Kasparov. That's probably what they said.

[00:18:03]

Should we take a break? Yeah, let's wait. Well, should we tease our special guest first?

[00:18:09]

Okay. I can smell him.

[00:18:10]

I don't think we even said we're going to have a special guest later in the episode: Mr. Jonathan Strickland of TechStuff.

[00:18:17]

Nice.

[00:18:18]

It's been a long time, like years, since we had Strick on.

[00:18:21]

The last time we had Strick on was like 2009, with the Necronomicon episode.

[00:18:26]

Where's he been besides sitting in between us every day?

[00:18:29]

It's been a Strickland drought, is what it's been.

[00:18:31]

Yeah. So Strickland's coming later, but we're going to come back after this and talk a little bit more about man versus machine.

[00:18:42]

Welcome to Stuff You Should Know.

[00:18:50]

Hi, I'm Susie Essman.

[00:18:52]

And I am Jeff Garlin.

[00:18:53]

Yes, you are. And we are the hosts of the History of Curb Your Enthusiasm podcast. We're going to watch every single episode. It's 122, including the pilot, and we're going to break them down.

[00:19:05]

By the way, most of these episodes I have not seen for 20 years.

[00:19:08]

Yeah, me too. We're going to have guest stars and people that are very important to the show, like Larry David.

[00:19:13]

I did once try and stop a woman who was about to get hit by a car. I screamed out, watch out.

[00:19:18]

And she said, don't you tell me what to do.

[00:19:20]

And Cheryl Hines, why can't you just.

[00:19:22]

Lighten up and have a good time?

[00:19:24]

And Richard Lewis, how am I going.

[00:19:26]

To tell him I'm going to leave now? Can you do it on the phone? Do you have to do it in person? What is this, canceling cable?

[00:19:30]

You have to go in. He's a human being. He's helped you.

[00:19:32]

And then we're going to have behind the scenes information. Tidbit.

[00:19:35]

Yes, tidbit is a great word.

[00:19:37]

Anyway, we're both a wealth of knowledge about this show, because we've been doing it for 23 years. So subscribe now, and you can listen to the History of Curb Your Enthusiasm on the iHeartRadio app, Apple Podcasts, or wherever you happen to get your podcasts.

[00:19:52]

Hey, this is Dana Schwartz. You may know my voice from Noble Blood, Haileywood, or Stealing Superman. I'm hosting a new podcast, and we're calling it Very Special Episodes.

[00:20:03]

One week, we'll be on the case.

[00:20:05]

With special agents from NASA as they crack down on black market moon rocks. H. Ross Perot is on the other side, and he goes, hello, Joe, how.

[00:20:13]

Can I help you?

[00:20:14]

I said, Mr. Perot, what we need is $5 million to get back a moon rock.

[00:20:18]

Another week, we'll unravel a 90s Hollywood mystery.

[00:20:21]

It sounds like it should be the next season of True Detective or something. These Canadian cops trying to solve this 25-year-old mystery of who spiked the chowder on the Titanic set.

[00:20:30]

A very special episode is stranger than fiction. It's normal people plopped down in extraordinary circumstances. It's a story where you say, this should be a movie. Listen to Very Special Episodes on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:20:50]

What up, guys? Hola, ¿qué tal? It's your girl Chiquis, from the Chiquis and Chill and Dear Chiquis podcasts. You've been with me for seasons one and two, and now I'm back with season three. I am so excited. You guys, get ready for all-new episodes where I'll be dishing out honest advice and discussing important topics like relationships, women's health, and spirituality. For a long time, I was afraid of falling in love, so I had to... and this is a mantra of mine, or an affirmation, every morning where I tell myself: it is safe for me to love and to be loved. I've heard this a lot, that people think that I'm conceited, that I'm a mamona. And a mamona means that you just think you're better than everyone else. I don't know if it's because of how I act in my videos. Sometimes I'm like, I'm a baddie. I don't know what it is, but I'm chill. It's Chiquis and Chill.

[00:21:35]

Hello.

[00:21:36]

Listen to Chiquis and Chill and Dear Chiquis as part of the My Cultura podcast network on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:21:57]

Okay, dude. So what we just described was how AI was taught to play things like chess, or to think: you take something, you figure out how to break it down into little rules and things that a computer can think of, right? And then follow these kind of rules to make the best decision. That's how it used to be. The way that it's done now, that everybody's doing now, is where you are creating a machine that teaches itself.

[00:22:26]

Yeah, that's the jam. That was the breakthrough.

[00:22:29]

You may have noticed back in about 2013, 2014. All of a sudden, things like Siri and Alexa got way better at what they are doing. They got way less confused.

[00:22:44]

Oh, really?

[00:22:45]

Your navigation app got a lot better. The reason why is because this new type of AI, this new type of machine learning that can teach itself and learn on its own, just hit the scene, and they just started exploding. And one of the things that they were first trained on was games.

[00:23:04]

Yeah. And it makes sense. And if you thought chess was complicated and difficult, when it comes to these new AIs that they're teaching to teach themselves game strategy, they said, we might as well dive into the Chinese strategy game Go, because it has been called the most complex game ever devised by man. That was actually a quote from Demis Hassabis, a neuroscientist and the founder of DeepMind. They were purchased by Google, or were they always part of Google?

[00:23:41]

I don't know if they were a spun-off branch or whether they were purchased, but it's one of Google's AI outfits.

[00:23:47]

Well, they're one of the teams, yeah, that are designing these new programs. And to give you an idea of how complex Go is, it deals with a board with different stones, and there are ten... how do you even say that?

[00:24:02]

Ten to the 170th power.

[00:24:04]

So that means 170 zeros. Take that number, and that's the number of possible configurations of a Go board.
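
As a quick back-of-the-envelope check on that figure, assuming a 19x19 board where every point is empty, black, or white:

```python
# 361 points, 3 states each: an upper bound on Go board configurations.
raw = 3 ** 361
print(len(str(raw)) - 1)  # ~172, i.e. 3^361 is on the order of 10^172
# Removing illegal positions (stones with no liberties) brings the
# commonly cited count of legal positions down to roughly 2 x 10^170.
```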

[00:24:12]

Right. So like you say, chess is very complex and complicated, and it's very difficult to master Go. And I've never played Go. Have you?

[00:24:21]

No.

[00:24:21]

So it's supposedly, it's easy to learn.

[00:24:24]

Right. But very complicated in its simplicity.

[00:24:26]

Right? Exactly. It's extremely difficult to master. And there was a guy in the late '90s, and I'm guessing that he was saying this after Deep Blue beat Kasparov. It was an astrophysicist from Princeton. He said that it would probably be 100 years before a computer beats a human at Go. To give you an idea of just how complex Go is: Deep Blue had just beaten Kasparov, and this guy's saying it'll still be 100 years before anyone gets beat at Go by a computer.

[00:24:58]

And he was someone who knew about.

[00:25:00]

This stuff, who's an astrophysicist.

[00:25:01]

He wasn't just some schmo at home.

[00:25:03]

And drunk in his recliner, just making asinine predictions.

[00:25:10]

And again, we've said this before, but I want to reiterate: the people, I think AlphaGo is the name of this program, the people that created this at DeepMind, they wanted to stress that this is a problem-solving program. We're just teaching it this game at first, to make it learn and to see if it can get good at what it does. But they said it is built with the idea that it can take on any task that has a lot of data that is unstructured, where you want to find patterns in the data and then decide what to do.

[00:25:42]

Right.

[00:25:42]

And that's kind of like what we were talking about. It crunches down all these possible options, aka data, to decide what move should I make, right. And you could apply that. Ideally, they're going to apply this to Alzheimer's and cancer and all sorts of things.

[00:25:57]

Right. That's general purpose thinking, right?

[00:26:00]

Yeah.

[00:26:00]

And thinking on the fly, too, when faced with novel stuff. So one of the reasons why it's good to use games like chess or Go or whatever is that those are called perfect information games, where both players, or anybody watching, have all the information that's available on the board. There are definite rules, there's structure. It's a good proving ground. But as we'll see, AI makers are getting further and further away from those structured games as their AI becomes more and more sophisticated, because the structure and the limitations aren't necessarily needed anymore, because these things are starting to be able to think on their own in a very generalized and even creative way.

[00:26:42]

Yeah, it's really interesting. Like you said earlier, before the break, we don't have computers that can run all the possibilities. So in the case of AlphaGo, this program teaches itself by playing itself in these games, in Go specifically. And the more it plays itself, the more it learns, and the more ability it has during a game to choose a move by narrowing down possibilities. So instead of, well, there are 20 million different variations here, by playing itself, it's able to say, well, in this scenario, there are really only 50 different moves that I could or should make.

[00:27:24]

Right.

[00:27:25]

That's kind of a simplified way to say it, right?

[00:27:27]

No, but it's true. But that's exactly right. And what they're doing is basically the same thing that a human does. It's going back to its memory banks.

[00:27:35]

Yeah, exactly.

[00:27:36]

Its experience. And saying, well, I've been faced with something like this before, and this is what I used, and it was successful 40 out of 50 times. I'll do this one. This is a pretty reasonable move.

[00:27:46]

Yeah.

[00:27:47]

That is what humans do.

[00:27:48]

Yeah. Not only. I mean, boy, we screwed up the chess episode, but I get the idea that when you're a chess master, you don't just think, what do the numbers say and what does the book say? But, oh, man, I did this move that one time, and it didn't go as the book said.

[00:28:04]

Right.

[00:28:04]

So that's now factored into my thinking.

[00:28:06]

Right. Except imagine being able to learn from scratch and get to that point in eight days, or eight hours.

[00:28:15]

Yeah.

[00:28:15]

So that Go team... the first iteration of AlphaGo, I think they started working on it in 2014. And at the end of 2016, they unleashed it secretly onto a Go website. And it started just wiping the floor with everybody. Everybody's like, this thing's pretty good. Oh, it's AlphaGo.

[00:28:39]

What year is this?

[00:28:40]

That was the end of 2016.

[00:28:42]

Okay, so chess had already come and gone by this point. You can download a program that's like deep blue. Right?

[00:28:50]

That's a great point. Yeah. Today, the stuff you play chess with on your laptop is even more advanced than Deep Blue was in the '90s. It's just on your laptop. So this is Go. This is the end of 2016. At the end of 2017, AlphaGo was replaced with AlphaGo Zero. It learned what AlphaGo had taken two or three years to learn in 40 days, by teaching itself.

[00:29:21]

And it beat the master.

[00:29:22]

Yeah.

[00:29:23]

And finally, in May of 2017, AlphaGo took on Ke Jie, the highest-ranked Go player in the world. Don't know if he or she still is.

[00:29:35]

No, Lee Sedol is the current, or was, until AlphaGo beat him.

[00:29:42]

Oh, man.

[00:29:42]

Yeah.

[00:29:43]

Do they get knocked off, and AlphaGo is the... that's, that's not fair.

[00:29:49]

If it's match play and the human player has accepted a challenge from the computer, I don't see why it wouldn't be the world champion.

[00:29:58]

Or do they just now say on websites like human champion, maybe in italics with like a sneer?

[00:30:05]

Right? Maybe, yeah.

[00:30:08]

Interesting.

[00:30:09]

What do they call that? Wetware, like your brain, your neurons and all that?

[00:30:13]

What?

[00:30:14]

Instead of hardware, it's wetware.

[00:30:15]

Oh, I don't know about that.

[00:30:17]

I think that's the term for it.

[00:30:18]

What does that mean, though?

[00:30:19]

It means like, you have a substrate, right? Your intelligence, your intellect is based on your neurons, and they're firing all that stuff. And it's wet and squishy and meat. Then there's hardware that you can do the same thing on, you can build intelligence on, but it's hardware, it's not wetware.

[00:30:37]

Oh, interesting.

[00:30:38]

So that's probably it. It's the wetware champion versus the hardware champion.

[00:30:42]

But wetware is italicized with the sneer. So, where things really got interesting... because you were talking earlier about, what is it with the chess and Go, what are they called? What kind of games?

[00:30:55]

Perfect information games.

[00:30:57]

Right. Then you think, and my first thought when you said that was, well, yeah, but then there's games like poker, like Texas hold 'em, where there's a set of rules. But poker is not about the set of rules. It is about sitting down in front of, whatever, five or six people and lying, bluffing and getting away with it.

[00:31:18]

And being your game.

[00:31:19]

Being bluffed. Like, there's so many human emotions and contextual clues and microexpressions and all these things. Like, surely you could never, ever teach a machine to win at Texas hold 'em poker. Yeah.

[00:31:35]

It'll be a hundred years at least before that happens, I predict.

[00:31:39]

No, they did it. And more than one team has done it.

[00:31:44]

Yeah. I read there was one from Carnegie Mellon called Libratus AI.

[00:31:50]

Go melon heads.

[00:31:51]

Yeah, go Thornton Melon.

[00:31:55]

Yeah. University of Alberta has one called DeepStack.

[00:32:00]

That was the one I read about.

[00:32:02]

Okay.

[00:32:02]

And it actually, here's the thing. If you read the release on it, you're like, you don't know how this thing works, do you?

[00:32:10]

Oh, really?

[00:32:11]

Yeah. And I'm pretty sure they don't fully get it, because that's one of the problems, actually. We talk about this in the existential risks series.

[00:32:18]

That is to be released.

[00:32:20]

Right. That there is a type of machine learning where the machine teaches itself, but we don't really understand how it's teaching itself.

[00:32:28]

That's probably the scariest one, right?

[00:32:30]

Or what it's learning, but that's the most prevalent one. That's what a lot of this is like, these machines. It's like, here's chess. Go figure it out. And they go, okay, got it. How'd you do that? Wouldn't you like to know?

[00:32:44]

So that's the scariest presentation you will see on AI, is when someone says, well, how does all this work? And they go, we just know it.

[00:32:52]

Can beat a human at poker. But the thing about DeepStack at the University of Alberta is that it learned, somehow, some sort of intuition, because that's what's required. It's not just the perfect information, where you have all the information on the board. With poker, you don't know what the other person's cards are, and you don't know if they're lying or bluffing or what they're doing. So that's an imperfect information game. So that would require intuition. And apparently not one but two different research groups taught AI to intuit.

[00:33:26]

Yeah. Carnegie Mellon came out in January of 2017 with its Libratus AI. And they said they spent 20 days playing 120,000 hands of Texas hold 'em with four professional poker players, and won. Smoked them, basically. They weren't playing with real money, obviously, but that would have been great.

[00:33:50]

They were playing with Skittles, like me as a kid.

[00:33:52]

Funded their next project. Libratus was up by $1.7 million. And one of the quotes from one of the poker players, that he made to Wired magazine: I felt like I was playing against someone who was cheating, like it could see my cards. I'm not accusing it of cheating. It was just that good.

[00:34:08]

Right?

[00:34:09]

So that's a really interesting thing, man, that they could teach... self-teach a program, or a program could teach itself, intuition.

[00:34:19]

Right.

[00:34:20]

That's creepy.

[00:34:20]

Yeah.

[00:34:21]

I thought this part was interesting. The Atari stuff. This gets pretty fun. Google DeepMind let its AI wreak havoc on Atari, 49 different Atari 2600 games, to see if it could figure out how to win. And apparently the most difficult one was Ms. Pac-Man, which is a tough game still, man. Ms. Pac-Man. They nailed it. It's still one of the great games.

[00:34:49]

But their deep Q-network algorithm beat it. I think it got the highest score, 999,900 points. And no human or machine has ever achieved that high score, from what I understand.

[00:35:05]

Amazing. And the way this one does it, the hybrid reward architecture that it uses, is really interesting. It says here it generates a top agent that's like a senior manager, and then 150 other individual agents. So it's almost like they've devised this artificial structural hierarchy of these little worker agents that go out and collect, I guess, data and then move it up the chain to this top agent, right?

[00:35:36]

And then this thing says, okay, I think that you're probably right, what these agents are probably doing. And I don't know this is exactly true, but there are models out there like this where the agent says, you have a 90% chance of success at getting this pellet if we take this action, somebody else says, you got an 82% chance of evading this ghost if we go this way. And then the top agent, the senior manager, can put all this stuff together and say, well, if I listen to this guy and this guy, not only will I evade this ghost, I'll go get this pellet. And it's based on what confidence level that the lower agents have in success in recommending these moves. And then the top agent weighs these things.
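
A tiny sketch of that "senior manager" idea, assuming each sub-agent reports a score per candidate action and the top agent simply sums them; the actual hybrid reward architecture learns a separate value function per reward component, so this only illustrates the aggregation step:

```python
def top_agent(sub_agent_scores):
    """sub_agent_scores: list of dicts mapping action -> estimated value
    (e.g. a pellet agent's and a ghost agent's confidence per move)."""
    totals = {}
    for scores in sub_agent_scores:
        for action, value in scores.items():
            totals[action] = totals.get(action, 0.0) + value
    return max(totals, key=totals.get)  # action with best combined score

pellet_agent = {"left": 0.90, "right": 0.20}  # 90% chance of a pellet left
ghost_agent = {"left": 0.82, "right": 0.10}   # 82% chance of evading left
print(top_agent([pellet_agent, ghost_agent]))  # -> "left"
```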

[00:36:26]

Wow, they should give them a little cap.

[00:36:29]

But all this is happening like that.

[00:36:31]

Oh, yeah.

[00:36:32]

You know what I'm saying? This isn't like, well, hold on, hold on, everybody. What is... Harvey, Harvey, what do you have to say? Well, let's get some Chinese in here and hash it out. And everybody sits there and orders some Chinese food. Then you wait for it to come, and then you pick up the meeting from that point on. And then finally Harvey gives his idea, but he forgot what he was talking about, so he just sits down and eats his egg roll.

[00:36:53]

Well, here's a pretty frightening survey. There was a survey of more than 350 AI researchers, and they had the following things to say. And these are the pros that are doing this for a living. They predicted that within ten years, AI will drive better than we do; by 2049, it will be able to write a best-selling novel, AI will generate this; and by 2053, be better at performing surgery than humans are.

[00:37:22]

Again, one of the things about the field of artificial intelligence, which you know.

[00:37:27]

A lot about now.

[00:37:30]

It is famous for making huge predictions that did not pan out.

[00:37:34]

Sure.

[00:37:35]

But it's also famous for beating predictions that have been levied against it. But there is something in there, Chuck, that stands out to me, and that's the idea of an AI writing a novel. Like, for a very long time, I thought, well, yeah, okay, you can teach a robot arm to put a car part or something somewhere, if you wanted to just follow these mechanical things. Or it can use logic and reason. But to create, that's different, right? That was like the new frontier. It used to be chess, and then it was Go. The next frontier is creativity, and they're starting to bang on that door big time. There's a game-designing AI called Angelina out of the University of Falmouth, which I always want to say Foulmouth. Yeah, but we'll just call it Falmouth like it's supposed to be. And Angelina actually comes up with ideas for new games. Not like a different level or something, like, you should put a purple loincloth on that guy, that'll look kind of cool. Like new games, but whacked-out games that humans would never think of. One example I saw is in a dungeon battle royale game.

[00:38:49]

A player controls, like, ten players at once, and some you have to sacrifice, to be killed to save the others. Like, just stuff that a human wouldn't necessarily think of, this AI is coming up with.

[00:39:00]

Well, I mean, when you think of creativity, especially something like writing a novel or a film: aren't there only seven stories? Isn't that sort of the thinking, that basically every dramatic story is a variation of one of seven things?

[00:39:15]

Yeah. So, I mean, you can look at AI as scary in some ways, it very much is and can be, but there's also definitely a level of excitement of the whole thing. And the idea that there are artificial minds that are coming online or that have come online now that are out there, that they'll just naturally, by definition, see things differently than we do. And the idea that they can come up with stuff that we've never even thought of, that is just going to knock our socks off, hopefully in good ways. That's a really cool thing. And so maybe there's just seven, as far as humans know, but there's an unlimited amount if you put computer minds to thinking about these kind of things. That's the premise of it.

[00:39:59]

Right. So the robot would be like, you never thought of boy meets girl meets.

[00:40:04]

Well, trilobite.

[00:40:07]

But see, even that's a variation of.

[00:40:10]

Probably just imagine something that we've never even thought of.

[00:40:14]

Well, do you know how they should do this, if they do do that? Just release a book and not tell anyone that it was written by an AI program, because if they do that, then it's going to be so under... Yeah, they should secretly release this book, and then after it's a New York Times bestseller, say, meet the WOPR, the author of this. His interests are.

[00:40:40]

Roller skating, playing tic-tac-toe, and global thermonuclear war.

[00:40:44]

All right, should we take a break and get Strickland in here?

[00:40:46]

Yeah, we're going to end the Strickland drought, because it is about to rain Strickland in this piece. You're gross.

[00:41:05]

Hey, this is Dana Schwartz. You may know my voice from Noble Blood, Haileywood, or Stealing Superman. I'm hosting a new podcast, and we're calling it Very Special Episodes.

[00:41:16]

One week, we'll be on the case with special agents from NASA as they crack down on black market moon rocks.

[00:41:22]

H. Ross Perot is on the other.

[00:41:24]

Side, and he goes, hello, Joe.

[00:41:26]

How can I help you? I said, Mr.

[00:41:27]

Perot, what we need is $5 million to get back a moon rock.

[00:41:31]

Another week, we'll unravel a 90s Hollywood mystery.

[00:41:34]

It sounds like it should be the next season of True Detective or something. These Canadian cops trying to solve this 25-year-old mystery of who spiked the chowder on the Titanic set.

[00:41:43]

A very special episode is stranger than fiction. It's normal people plopped down in extraordinary circumstances. It's a story where you say, this should be a movie. Listen to Very Special Episodes on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:42:04]

Hi, I'm Susie Essman.

[00:42:05]

And I am Jeff Garlin.

[00:42:07]

Yes, you are. And we are the hosts of the History of Curb Your Enthusiasm podcast. We're going to watch every single episode. It's 122, including the pilot, and we're going to break them down.

[00:42:18]

By the way, most of these episodes.

[00:42:20]

I have not seen for 20 years.

[00:42:22]

Yeah, me, too. We're going to have guest stars and people that are very important to the show, like Larry David.

[00:42:27]

I did once try and stop a woman who was about to get hit by a car.

[00:42:30]

I screamed out, watch out.

[00:42:31]

And she said, don't you tell me what to do.

[00:42:34]

And Cheryl Hines, why can't you just.

[00:42:36]

Lighten up and have a good time?

[00:42:38]

And Richard Lewis, how am I going.

[00:42:39]

To tell him I'm going to leave now? Can you do it on the phone? Do you have to do it in person? What is this, canceling cable? You have to go in, and he's.

[00:42:44]

A human being, he's helped you.

[00:42:46]

And then we're going to have behind the scenes information. Tidbit.

[00:42:49]

Yes, tidbit is a great word.

[00:42:51]

Anyway, we're both a wealth of knowledge about this show, because we've been doing it for 23 years. So subscribe now, and you can listen to the History of Curb Your Enthusiasm on the iHeartRadio app, Apple Podcasts, or wherever you happen to get your podcasts.

[00:43:05]

What up, guys? Hola, ¿qué tal?

[00:43:06]

It's your girl.

[00:43:07]

Chiquis, from the Chiquis and Chill and Dear Chiquis podcasts. You've been with me for seasons one and two, and now I'm back with season three. I am so excited, you guys. Get ready for all-new episodes where I'll be dishing out honest advice and discussing important topics like relationships, women's health, and spirituality. For a long time, I was afraid of falling in love, so I had to... and this is a mantra of mine, or an affirmation, every morning where I tell myself: it is safe for me to love and to be loved. I've heard this a lot, that people think that I'm conceited, that I'm a mamona. And a mamona means that you just think you're better than everyone else. I don't know if it's because of how I act in my videos. Sometimes I'm like, I'm a baddie. I don't know what it is, but I'm chill. It's Chiquis and Chill.

[00:43:50]

Hello.

[00:43:52]

Listen to Chiquis and Chill and Dear Chiquis as part of the My Cultura podcast network on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.

[00:44:13]

Okay, we're back. And get this: the scent of Strick has permeated our place. And it's a beautiful scent.

[00:44:20]

It smells like a soldering gun and a circuit board and a field of lavender and a protein bar.

[00:44:28]

That's fair.

[00:44:29]

I was going to say Drakkar Noir, but that would have been a lie.

[00:44:32]

Is that how you say it? I always called it Drakar.

[00:44:35]

Dracar. No, that's fair.

[00:44:37]

Drakar.

[00:44:38]

I always pronounced it Beneton colors. That was what I wore.

[00:44:43]

Oh, is that what you wore?

[00:44:44]

Yeah, during what I call the year of cologne.

[00:44:47]

I had a couple.

[00:44:51]

Uh, this is scintillating. Why am I...

[00:44:54]

Oh, so we know that you already know, because we talked via email about this. But we'll tell everybody else. We have brought you in here because you are the master of tech, and we are talking tech today, which we've talked about without you before. But frankly, Chuck and I and Jerry huddled, and we said it's just not quite as good without Strick. So let's try something different. Gotcha.

[00:45:15]

And we're talking about games and machine versus man and that whole evolution and how that's gone super crazy over the last few years.

[00:45:25]

Games without frontiers, Peter Gabriel would say. War without tears.

[00:45:30]

We've talked a lot about the evolution of machine learning and how now it's starting to take off like a rocket because they can teach themselves.

[00:45:37]

Right.

[00:45:38]

But one thing we haven't really talked about are solved games. I mean, we talked about chess. We talked about go.

[00:45:45]

Right?

[00:45:46]

Would those constitute solved games?

[00:45:48]

Not really. So, a solved game is the concept where, if you were to assume perfect play on either side of the game, you would always know how it was going to end. Which... we always assume perfect play.

[00:46:00]

Right. Yeah, that's kind of our bag.

[00:46:02]

It's stuff you should know. Motto.

[00:46:03]

So, perfect play, just meaning that no one ever makes a mistake. So very much the way I do my work.

[00:46:08]

Right. Stuff you should know. Motto.

[00:46:10]

Exactly. So if you were to take a game like tic-tac-toe and you assume perfect play on both sides, it is always going to end in a draw, which.

[00:46:18]

Is what's in WarGames.

[00:46:20]

Yes.

[00:46:20]

Right.

[00:46:20]

The only way to win is not to play.

[00:46:22]

Right?

[00:46:23]

Yes.

[00:46:23]

So unlike tic-tac-toe, a game like Connect Four, whoever goes first is always going to win. Assuming perfect play.
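
Tic-tac-toe is small enough to check that claim exhaustively. Here is a minimal solver in Python confirming that perfect play from the empty board is a draw (Connect Four's first-player win is the same kind of computation, just vastly larger):

```python
from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
         (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != " " and b[i] == b[j] == b[k]:
            return b[i]
    return None

@lru_cache(maxsize=None)
def value(b, player):
    """Perfect-play value of board `b` (a 9-char string, X moves first):
    +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(b)
    if w:
        return 1 if w == "X" else -1
    if " " not in b:
        return 0
    nxt = "O" if player == "X" else "X"
    scores = [value(b[:i] + player + b[i + 1:], nxt)
              for i, c in enumerate(b) if c == " "]
    return max(scores) if player == "X" else min(scores)

print(value(" " * 9, "X"))  # 0: always a draw under perfect play
```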

[00:46:32]

Really? Both sides?

[00:46:33]

Yes. I don't think I've played Connect Four... or not in a long time. That's the one where you drop the little tokens, kind of like checkers.

[00:46:43]

We did an interstitial with playing Connect four. Remember?

[00:46:45]

I was faking it, though, and you had perfect play, so I knew it was useless.

[00:46:48]

No, I was going to say that I'm so humiliated by all the Connect four games that I've lost, starting even.

[00:46:55]

Yeah, but I mean, perfect play, that's something that obviously only the best players typically achieve with significantly complex games. Obviously, the simpler the game, the easier it is to play perfectly.

[00:47:09]

Right.

[00:47:09]

Tic-tac-toe. Once you've mastered the basics of tic-tac-toe and the other person has, you're never really going to win unless someone has just made a silly mistake because they weren't paying attention.

[00:47:21]

Like, they put a star instead of an x. Right.

[00:47:23]

Which doesn't count. Automatically disqualifies you. One thing I found that's very enjoyable is playing with little kids who haven't figured out that tic-tac-toe is very easy to play. Yes.

[00:47:33]

Smash their face in the board and rub it in.

[00:47:35]

Yeah.

[00:47:36]

Same reason why I like to join in on little league games, because I can really whale that ball out of the park. Yeah. It really makes me feel like a man.

[00:47:43]

That's the most tech stuffy thing you've ever said, you really whale that ball out of the park.

[00:47:47]

Well, to be fair, I did just do a TechStuff episode about the technology behind baseball bats, so it's still fresh on the mind.

[00:47:53]

Nice.

[00:47:54]

I have to listen to that, actually.

[00:47:55]

It's a lot of fun. So there have been a lot of games that have been solved. Checkers was one that was solved relatively recently; back in the early 90s, it was played against a computer called Chinook.

[00:48:09]

C-H-I-N-O-O-K. Yeah, like the helicopter or the winds that blow through Alberta.

[00:48:14]

Exactly. And so there are certain games that are more easily solved than others. You do it through an algorithm. But other games, like chess, are more complicated, because in chess you have multiple moves that you can do, where you can move a piece back the way it came. Right. You're not committed to going a specific direction with certain pieces.

[00:48:36]

Never thought about that.

[00:48:37]

Like with a knight, you could go right back to where you started on your next move if you wanted to. And that creates more complexity.

[00:48:45]

Right.

[00:48:45]

So the more complex the game, the more difficult it is to solve. And some games are not solvable simply because you'll never know what the full state of the game is from any given moment. Did you have a chance to talk about the difference between perfect knowledge and imperfect knowledge in a game?

[00:49:04]

Yeah, we talked about that some.

[00:49:06]

Yeah. So computers, obviously, they do really well if they understand the exact state of the game all the way through. If they have perfect knowledge, all of.

[00:49:15]

The information is there on the board. Right? Right.

[00:49:18]

And all players can see all information at all times. But games like poker, which you guys talked about, obviously, you have imperfect information. You only know part of the state of the game. That's why those games have been more difficult, more challenging for computers to get better than humans until relatively recently. And there have been two major ways of doing that. You either throw more processing power at it, like you get a supercomputer, or you create neural networks, artificial neural networks, and you start teaching computers to, quote unquote, learn the way people do.

[00:49:51]

So we talked about that. Yeah. And one of the things that we talked about was how there's this idea that the programmers, especially, say, the people who are making programs that are playing poker and are getting good at poker, aren't exactly sure how the machines are learning to play poker or what they're learning. They're just getting better at poker. Do they know how they're learning poker, or do they just know that they're learning poker and that they're good at it now? Where's the intuition? How is that being learned?

[00:50:21]

An excellent question. The way it typically is learned, especially with artificial neural networks, is that you set up the computer to play millions of hands of poker that are randomly assigned. So it's truly as random as computers can get; that's a whole philosophical discussion that I don't think we're ready to go into right now. But you have games come up where the computer is playing itself millions upon millions of times and learning every single time how the statistics play out, how different betting strategies play out. It's sort of partitioning its own mind to play against itself. And through that process, it's as if you, as a human player, were playing thousands of games with your friends, and you start to figure out: oh, when I have these particular cards in my hand, and let's say we're playing Texas hold 'em and the community cards are these, then I know that, generally speaking, maybe three times out of ten I end up winning, so maybe I shouldn't bet, right? Well, the computer is doing that, but on a scale that far dwarfs what any human can do, and in a fraction of the amount of time.
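
A toy illustration of that self-play loop, assuming a deliberately trivial "high card wins" game instead of real Texas hold 'em; the point is just that the "intuition" is nothing more than statistics accumulated over millions of self-played hands (all names here are hypothetical):

```python
import random
from collections import defaultdict

wins, plays = defaultdict(int), defaultdict(int)

def play_hand():
    # Each side draws a card from 2..14 (14 = ace); high card wins.
    mine, theirs = random.randint(2, 14), random.randint(2, 14)
    plays[mine] += 1
    if mine > theirs:
        wins[mine] += 1

for _ in range(1_000_000):  # millions of self-played hands
    play_hand()

def should_bet(card, threshold=0.5):
    # "Intuition": bet only when this card has historically won enough.
    return wins[card] / plays[card] > threshold

print(should_bet(4), should_bet(13))  # likely False, True
```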

[00:51:31]

Right, got you.

[00:51:33]

It's intuition in the sense of it's just done it so much.

[00:51:38]

Right. But does that mean it's completely ignoring micro expressions and facial cues, so that doesn't even come into play?

[00:51:47]

Yeah, it doesn't. I should say, Strickland just nodded. I was.

[00:51:50]

I was waiting for.

[00:51:50]

How many years have you been doing?

[00:51:52]

Well, I still nod when I do a solo show. And I do a lot of expressive dance.

[00:51:56]

What do you think, Jonathan? I don't know, Jonathan.

[00:51:59]

It gets lonely in here, guys. But yes, what you're saying: all the tells, right? The tells that you would use as a human player, the computer does not pick up on, typically. Just data. Yes, typically. What it would do is it would study the outcomes of the games from a purely statistical perspective.

[00:52:17]

Sorry.

[00:52:17]

Well, that makes more sense.

[00:52:18]

Most of these poker games tend to be computer-based poker games, so it's not that it's playing... it's not like there's a computer that says, push ten more chips into the table, eye tic. Right, exactly. It's a little winky-face emoticon, like, I don't have good cards. No, it's all usually over, sort of like Internet poker, which a lot of the people who play professional poker cut their teeth on.

[00:52:43]

Sure.

[00:52:43]

Especially in the more recent generations of professional poker players.

[00:52:48]

Kids today. Yeah, they don't know what it's like being in a smoky saloon, like Moneymaker.

[00:52:53]

When Moneymaker rose to the top a few years ago, more like a decade ago now, he had come from the world of Internet poker, and so he was using those same sort of skills in a real-world setting. But obviously, there are subtle things that we humans do in our expressions that computers do not pick up on. And in fact, that leads us sort of into the realm of games where computers don't do as well as humans.

[00:53:19]

Yeah. Is that list you sent a joke, or is it real?

[00:53:22]

No, that's real. It does seem like it's weird. Like one of the games on there is Pictionary, for example.

[00:53:27]

Right.

[00:53:27]

Tag or tag. Yeah, but some of these, they sound silly, but when you start to think about them in terms of computation and robotics, you start to realize how incredibly complex it is from a technical perspective, but how incredibly easy it is for your average human being. Okay, so with humans, a game of tag, once you know the basics, it's all instinct. You know what to do. You run after the person, you try to catch up with them, and you tag them, but you also.

[00:53:54]

Know, push them in the back as hard as you can.

[00:53:56]

Well, if you're Josh, you push them as hard as you can. But most of us, we tag, and we're not trying to cause harm. Robots, however, robots, not so good on.

[00:54:05]

The second tech stuffiest thing you've said.

[00:54:07]

I'm just saying, Isaac Asimov's rules of robotics aside, robots are not very good at judging how hard they have to hit something in order to make contact. Right? Yeah, they're not as good. Even your bipedal robots that walk around like people, even the ones that can run and do flips and stuff. Have you.

[00:54:26]

Seen that one the other day, that the footage of that thing running and jumping, it's really impressive and super creepy.

[00:54:32]

Yeah, but even so, that's a best-of clip. If you ever see the clips where they show all the times the robots.

[00:54:41]

Falling over or pouring hot coffee in someone's head.

[00:54:44]

Yes, but they always play those clip shows to Yakety Sax.

[00:54:48]

Yes, this is true. So DARPA had its big robotics challenge a few years ago where they had bipedal robots try to go through a scenario that was simulating the Fukushima nuclear disaster. So the interesting thing was the robot had to complete a series of tasks that would have been mundane to humans, things like open up a door and walk through it and pick up a power tool and use it against a wall. And you can watch the footage of some of these robots doing things like being unable to open the door. Because they can't tell if they need to pull or push or they open the door, but then immediately fall over the threshold of the door.

[00:55:31]

Right.

[00:55:31]

And when you see that, you realize as advanced as robotics is, as advanced as machine learning has become, and as incredible as our technology has progressed, there are still things that are fundamentally simple to your average human that are incredibly complicated from a technical standpoint.

[00:55:49]

Like, a six-year-old can play Jenga better than a robot.

[00:55:52]

Right. Okay. But the thing is, we're talking robots here. And as we go more and more online, and our world becomes more and more web-based rather than reality-based, doesn't the fact that a robot can't walk through a door matter less and less? And the machine learning, the intellect, the robot reasoning you just blew my mind with, isn't that becoming more and more vital and important and something we should be paying attention to?

[00:56:22]

It absolutely is something we should pay attention to. I mean, we have robotic stock traders executing thousands of trades per second, right? So fast that we've had stock market booms and crashes that lasted less than a second because of them.
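
A toy sketch can show the feedback loop behind those sub-second booms and crashes. This is a hypothetical illustration, not anything from the episode and not a model of any real trading system: a crowd of identical momentum-following bots each sees a dip, sells into it, and their combined selling becomes the next, bigger dip.

```python
# Toy flash-crash feedback loop. Purely illustrative: the strategy,
# bot count, and impact number are all invented for this sketch.

price = 100.0
history = [price]

def momentum_bot(history):
    """Crude momentum rule: sell into a falling price, buy into a rising one."""
    move = history[-1] - history[-2]
    return -1 if move < 0 else (1 if move > 0 else 0)

N_BOTS = 1000     # identical bots, all reacting on every "tick"
IMPACT = 0.0001   # fractional price impact per net unit of order flow

price -= 0.5      # one large sell order nudges the price down...
history.append(price)

for tick in range(20):  # ...and the bots' reactions feed on themselves
    net_flow = sum(momentum_bot(history) for _ in range(N_BOTS))
    price *= 1 + IMPACT * net_flow
    history.append(price)
    print(f"tick {tick:2d}: price {price:8.2f}")
```

In twenty simulated ticks the price collapses by nearly ninety percent, with no bot doing anything but following its own simple rule at machine speed. Real flash crashes are far more complicated, but this is roughly the intuition Strickland is gesturing at.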

[00:56:39]

So the robot army that will ultimately defeat us is not something from the Terminator. It's invisible.

[00:56:45]

Right?

[00:56:45]

It's online, or it will be online.

[00:56:48]

It's what's determining our retirement.

[00:56:51]

Right. Yeah. The global economy or our municipal water supply or whatever.

[00:56:57]

Yeah. The fascinating thing to me about this is not just that we're training machine intelligence to learn and to perform at a level better than humans, but that we're putting a lot of trust in those systems, in things that have a real, incredible impact on our lives. A significant enough impact that if things were to go south, it would be really bad for us, and not in that Terminator respect. Terminator is a terrifying dystopian science fiction story. But when you realize what could really happen behind the scenes, you think: oh, the robots don't have to do any physical harm to us to really mess things up.

[00:57:39]

Right.

[00:57:40]

So there are certainly some cases for us to be very vigilant in the way we deploy artificial intelligence.

[00:57:48]

Do it right from the outset.

[00:57:50]

Exactly.

[00:57:51]

But isn't it too late?

[00:57:52]

Depends.

[00:57:53]

No, not necessarily.

[00:57:57]

I don't think it's too late, but I think it's getting to that point of no return very, very quickly.

[00:58:04]

By December of this year.

[00:58:06]

Yeah. Well, if you're someone like Elon Musk, you'd say, if we don't do something now, we're totally going to plummet off the edge of the cliff.

[00:58:16]

But now is a window that is rapidly...

[00:58:20]

Now is a time when we've got a deadline. We don't know exactly when that deadline is going to be up, but we know it's not getting further out; we're just getting closer to it. And a lot of this is covered in deep conversations that have been going on for ages in the artificial intelligence and machine learning fields, to the point where you even have bodies like the European Union debating concepts like granting personhood to artificial intelligence. So this is a really fascinating and deep subject, and the games thing is a great entry point into having that conversation. I'm lucky if I can win a game of chess against another human being.

[00:59:06]

Oh, yeah, right. I can't even describe chess.

[00:59:10]

My big thing is I do that knight thing. I call it the knight shuffle. I just move them back and forth, right?

[00:59:14]

I just castle. If I can castle, then I'm so happy.

[00:59:20]

And that's the third tech stuffiest thing. They come in threes.

[00:59:24]

Well, Strick, thank you for stopping by.

[00:59:26]

I think you should stick around for listener mail.

[00:59:28]

I think you should, too.

[00:59:29]

I'd love to.

[00:59:30]

And throw out any funny comments that you have.

[00:59:33]

I'll throw out comments, and then Jerry can decide which ones are funny.

[00:59:37]

Okay. All right, fair enough. All right, so if you want to know more about AI, go listen to tech stuff. Strick does this every week. What days?

[00:59:45]

Monday, Tuesday, Wednesday, Thursday, and Friday.

[00:59:49]

Wow. That's amazing, buddy. And wherever you find your podcast.

[00:59:53]

Yes.

[00:59:54]

Okay. And you've been doing it for years, so if you love this, there's a whole big backlog. 900 plus episodes.

[01:00:00]

You're celebrating your ten-year anniversary as well, right?

[01:00:02]

Yes, I sure am. We'll be turning ten at tech stuff on June 11.

[01:00:07]

Man.

[01:00:07]

Congratulations.

[01:00:08]

Happy anniversary.

[01:00:09]

Well, since I said happy anniversary, it means it's time for listener mail.

[01:00:14]

Guys, I'm going to call this "Matt Groening and Cultural Relativism." How about that?

[01:00:20]

Nice.

[01:00:21]

Hey, guys. Love your podcast so much. The massive archive makes for endless learning and entertainment. My favorite part is you're such rad guys, including Strickland. And I could totally imagine... How did they know? I could totally imagine myself getting a beer with you two, but without Strickland. Your Simpsons episodes were absolutely perfect. I used to live in Portland and drove on Flanders and Lovejoy Streets a lot.

[01:00:43]

Wait, is this Matt Groening?

[01:00:44]

No.

[01:00:45]

Okay.

[01:00:45]

Matt Groening drew Bart in the sidewalk cement behind Lincoln High School in downtown Portland. You can Google that. I would like to offer one interesting observation, though. I've noticed that on several episodes, you guys have said that you are cultural relativists. Is that pronounced right? Yeah. But then in nearly every episode, I hear you pass moral judgments on all the messed-up stuff that people do, whether it's racism, freak shows, or crematoriums burying bodies on the sly. You guys are never shy to condemn something that deserves to be condemned. Reminds me of something I read from Yale sociologist Philip Gorski, who points out that our own relativism is rarely as radical as our theory requires. We can't be complete relativists in our daily lives. He then gives the example of how academic social scientists who are die-hard relativists get furious and moralistic at the data fudging of other researchers. Anyway, love the show, guys. Love tech stuff especially, and will forever be indebted to you for your hilarity and knowledgeability. Cheers, Jesse Lusco. P.S. Go tech stuff.

[01:01:53]

That's sweet.

[01:01:53]

How about that?

[01:01:54]

Yeah, thanks a lot, Jesse. There was an actual episode, and I don't remember which one it was, where we abandoned our cultural relativism. Do you remember? Because we used to just be like, no judgment, no judgment, we just can't judge. And then finally we were like, you know what? No, that's not true. We changed our philosophy to include the idea that there are moral absolutes that are universal. Although sometimes we're just judgy, even beyond that.

[01:02:21]

Look at us.

[01:02:21]

Yeah. Well, if you want to get in touch with us, you can send us an email to stuffpodcast@howstuffworks.com, and you can send John an email to techstuff@howstuffworks.com. Nice. And then hang out with us at our home on the web, stuffyoushouldknow.com. And just go to tech stuff.

[01:02:39]

Just search it in Google. I come up all the time.

[01:02:41]

Fair enough.

[01:02:46]

Stuff you should know is a production of iHeartRadio. For more podcasts from iHeartRadio, visit the iHeartRadio app, Apple Podcasts, or wherever you listen to your favorite shows.

[01:04:06]

What up, guys?

[01:04:07]

Hola, ¿qué tal? It's your girl Chiquis from the Chiquis and Chill and Dear Chiquis podcasts. And guess what? We're back for another season. Get ready for all-new episodes where I'll be dishing out honest advice and discussing important topics like relationships, women's health, and spirituality. I'm sharing my experiences with you guys, and I feel that everything I've gone through has made me a wiser person. And if I can help anyone else through my experiences, I feel like I'm living my godly purpose. Listen to Chiquis and Chill and Dear Chiquis on the iHeartRadio app, Apple Podcasts, or wherever you get your podcasts.