[00:00:00]

The following is a conversation with Pamela McCorduck. She's an author who has written on the history and the philosophical significance of artificial intelligence. Her books include Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who's considered to be the father of expert systems, The Edge of Chaos, The Futures of Women, and many more books. I came across her work in an unusual way, by stumbling on a quote from Machines Who Think that is something like: artificial intelligence began with the ancient wish to forge the gods.

[00:00:37]

That was a beautiful way to draw a connecting line between our societal relationship with A.I., from the grounded day-to-day science, math, and engineering, to popular stories in science fiction and myths of automatons that go back for centuries. Through her literary work, she has spent a lot of time with the seminal figures of artificial intelligence, including the founding fathers of A.I. from the 1956 Dartmouth summer workshop where the field was launched. I reached out to Pamela for a conversation in hopes of getting a sense of what those early days were like and how their dreams continue to reverberate through the work of our community today.

[00:01:20]

I often don't know where the conversation may take us, but I jump in and see. Having no constraints, rules, or goals is a wonderful way to discover new ideas. This is the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter: Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Pamela McCorduck.

[00:02:08]

In 1979, your book Machines Who Think came out. In it, you interview some of the early pioneers and explore the idea that A.I. was born not out of math and computer science, but out of myth and legend. So tell me, if you could, the story of how you first arrived at the book, the journey of beginning to write it.

I had been a novelist.

[00:02:38]

I'd published two novels, and I was sitting under the portal at Stanford one day, in the house we were renting for the summer, and I thought, I should write a novel about these weird people. And then I thought, ah, don't write a novel, write a history. Simple. Just go around, you know, interview them, splice it together.

[00:03:02]

Voila, instant book. Ha ha ha. It was much harder than that, but nobody else was doing it, and so I thought, oh, this is a great opportunity. And there were people, John McCarthy, for example, who thought it was a nutty idea. The field had not evolved yet, and so on, and he had some mathematical thing he thought I should write instead. And I said, no, John, I am not a woman in search of a project.

[00:03:33]

This is what I want to do. I hope you'll cooperate. And he said, oh, mutter, mutter, well, OK, it's your time.

[00:03:41]

And what was the pitch? I mean, it was such a young field at that point. How do you write a personal history of a field that's so young?

[00:03:51]

I said, this is wonderful. The founders of the field are alive and kicking and able to talk about what they're doing.

Did they sound, or feel, like founders at the time?

[00:04:02]

Did they know that they had founded something?

Oh, yeah, they knew what they were doing was very important. What I now see in retrospect is that they were at the height of their research careers, and it's humbling to me that they took time out from all the things they had to do as a consequence of being there, to talk to this woman who said, I think I'm going to write a book.

[00:04:31]

You know, it was amazing, just amazing.

[00:04:34]

Who stands out to you? Maybe looking back 63 years to the Dartmouth conference: of course Marvin Minsky was there, McCarthy was there, Claude Shannon, Allen Newell, Herb Simon, some of the folks you mentioned.

Right.

Then there are other characters, right?

[00:04:55]

One of your co-authors was not at the workshop itself, but he was there, I think, as an undergraduate. And, of course, Joe Traub. I mean, all of these were players, not at Dartmouth, but in that era, right, at CMU and so on. So who are the characters, if you could paint a picture, that stand out to you from memory? Those people you've interviewed, and maybe not the people who were just in the atmosphere.

In the atmosphere. Well, of course, the four founding fathers were extraordinary guys.

[00:05:32]

They really were.

Who were the founding fathers?

Allen Newell, Herbert Simon, Marvin Minsky, and John McCarthy. They were the four who were not only at the Dartmouth conference, but Newell and Simon arrived there with a working program called the Logic Theorist. Everybody else had great ideas about how they might do it, but.

[00:05:54]

They weren't going to do it yet.

And you mentioned Joe Traub, my husband. I was immersed in A.I. before I met Joe, because I had been Ed Feigenbaum's assistant at Stanford, and before that I had worked on a book edited by Feigenbaum and Julian Feldman called Computers and Thought. It was the first textbook of readings in A.I., and they only did it because they were trying to teach A.I. to people at Berkeley and there was nothing, you know, you'd have to send them to this journal and that journal.

[00:06:32]

This was not the Internet, where you could go look up an article. So I was fascinated from the get-go.

[00:06:40]

I was an English major, you know, what did I know?

[00:06:44]

And yet I was fascinated.

And that's why you have that historical, literary background, which I think is very much a part of the continuum of A.I., that it grew out of that same impulse.

What drew you to A.I.? How did you even think of it back then? What were the possibilities, the dreams? What was interesting to you?

[00:07:15]

Uh, the idea of intelligence outside the human cranium. This was a phenomenal idea.

[00:07:22]

And even when I finished Machines Who Think, I didn't know if they were going to succeed. In fact, the final chapter is very wishy-washy, frankly.

[00:07:33]

Would A.I. succeed?

The field did. Yeah. Yeah.

[00:07:37]

So was there the idea that A.I. began with the wish to forge the gods, the spiritual component, that we crave to create this other thing greater than ourselves? Was that there for those guys?

[00:07:53]

I don't think so. Newell and Simon were cognitive psychologists. What they wanted was to simulate aspects of human intelligence, and they found they could do it on the computer.

[00:08:10]

Minsky just thought it was a really cool thing to do. Likewise McCarthy. McCarthy had gotten the idea in 1949, when he was a Caltech student and he listened to somebody lecture. It's in my book; I forget who it was. And he thought, oh, that would be fun to do.

[00:08:34]

How do we do that? And he took a very mathematical approach. Minsky was a hybrid, and Newell and Simon were very much cognitive psychology: how can we simulate various things about human cognition?

[00:08:50]

What happened over the many years is, of course, our definition of intelligence expanded tremendously.

[00:08:58]

I mean, these days, biologists are comfortable talking about the intelligence of the cell, the intelligence of the brain, not just the human brain, but the intelligence of any kind of brain. Cephalopods.

[00:09:14]

I mean, an octopus is really intelligent by any measure.

[00:09:20]

We wouldn't have thought of that in the 60s, even the 70s. So all these things have worked their way in.

[00:09:27]

And I did hear one behavioral primatologist, Frans de Waal, say A.I. taught us the questions to ask.

Yeah, this is what happens, right?

[00:09:41]

It's when you try to build it that you start to actually ask the questions. It puts a mirror to ourselves.

Yeah, right.

So you were there in the middle of it. It seems like not many people were asking the questions that you were, trying to look at this field the way you were.

[00:09:59]

I was solo. When I went to get funding for this, because I needed somebody to transcribe the interviews and I needed travel expenses, I went to everything you could think of: the NSF, DARPA. There was an Air Force place that doled out money. And each of them said, well, that's a very interesting idea, but we'll think about it.

[00:10:36]

And the National Science Foundation actually said to me in plain English: hey, you're only a writer, you're not a historian of science. And I said, yeah, that's true, but, you know, the historians of science will be crawling all over this field. I'm writing for the general audience.

[00:10:53]

So I thought, and they still wouldn't budge. I finally got a private grant, without knowing who it was from, from Ed Fredkin at MIT. He was a wealthy man and he liked what he called crackpot ideas, and he considered this a crackpot idea, and he was willing to support it. I am ever grateful.

[00:11:16]

Let me say that, you know, some would say that a history-of-science approach to A.I., or even just the history of A.I., nothing like the book that you've written has been written since.

[00:11:29]

I don't know, maybe I'm not familiar with them, but there certainly aren't many.

If we think bigger than just these few decades, what are the roots of A.I.?

Oh, they go back so far. Yes, of course there's all the legendary stuff, the Golem, the early robots of the 20th century. But they go back much further than that. If you read Homer, Homer has robots in the Iliad, as a classical scholar was pointing out to me just a few months ago.

[00:12:08]

He said, well, you said you just read the Odyssey. The Odyssey is full of robots. It is? I said. Yeah, he said. How do you think Odysseus's ship gets from one place to another? It doesn't have crewmen to do that. It's magic, it's robots. Oh, I thought, how interesting.

[00:12:28]

So we've had this notion of A.I. for a long time.

[00:12:33]

And then toward the end of the 19th century, the beginning of the 20th century, there were scientists who actually tried to make this happen some way or another, not successfully.

[00:12:44]

They didn't have the technology for it.

[00:12:47]

And of course, Babbage, in the 1850s and 60s: he saw that what he was building was capable of intelligent behavior.

[00:13:00]

And when he ran out of funding, the British government finally said, that's enough.

[00:13:05]

He and Lady Lovelace decided, oh, well, why don't we, you know, play the ponies with this?

[00:13:12]

So he had other ideas for raising money.

[00:13:18]

To reach back once again, I think people don't actually know that robots appear, and ideas of robots, that far back. You talk about the Hellenic and the Hebraic points of view. Oh, yes. Can you tell me about this?

[00:13:32]

I defined it this way. The Hellenic point of view is robots are great.

[00:13:37]

You know, they are party help. They help the god Hephaestus in his forge; I presume he made them to help him, and so on and so forth. And they welcome the whole idea of robots. The Hebraic view has to do with.

[00:13:56]

I think it's the second commandment: thou shalt not make any graven image. In other words, you'd better not start imitating humans, because that's just forbidden.

[00:14:14]

A lot of the reaction to artificial intelligence has been a sense that this is somehow wicked, somehow blasphemous, that we shouldn't be going there. Now, you can say, yeah, but there are going to be some downsides, and I say, yes, there are, but blasphemy is not one of them.

[00:14:39]

You know, there is a kind of fear that feels almost primal. Are there religious roots to that? Because so much of our society has religious roots, and so there is a feeling of, like you said, blasphemy, of creating the other, of creating something. It doesn't have to be artificial intelligence; it's creating life in general. It's the Frankenstein idea.

[00:15:06]

I have the annotated Frankenstein on my coffee table. It's a tremendous novel. It really is just beautifully perceptive.

[00:15:16]

Yes, we do fear this, and we have good reason to fear it, because it can get out of hand.

Maybe you can speak to that fear, the psychology of it, if you've thought about it. You know, there's a practical set of fears, concerns, in the short term, if we actually think about artificial intelligence systems. You can think about bias, of discrimination in algorithms. You can think about social networks that have algorithms which recommend the content you see, and thereby these algorithms control the behavior of the masses.

[00:15:53]

There are these concerns, but to me it feels like the fear that people have is deeper than that. So have you thought about the psychology of it?

I think in a superficial way I have. There is this notion that if we produce a machine that can think, it will outthink us and therefore replace us.

I guess that's a primal fear, of almost a kind of mortality. So, around that time, you said you worked at Stanford with Ed Feigenbaum.

[00:16:34]

Mm hmm.

[00:16:35]

So let's look at that one person through history, clearly a key person, one of the many, in the history of A.I. How has he changed? How has A.I. changed around him? How has Stanford changed, in the last... how many years are we talking about here?

Oh, since sixty-five.

Sixty-five. So maybe there's something about him that could be bigger.

[00:17:00]

But because he was a key person in expert systems, for example, how have these folks who you've interviewed

[00:17:10]

in the 70s, in '79, changed through the decades?

In any case, I know him well. We are dear friends; we see each other every month or so.

[00:17:28]

He told me that when Machines Who Think first came out, he really thought all the front matter was kind of baloney.

[00:17:38]

Ten years later, he said, no, I see what you're getting at. Yes, this is an impulse that has been a human impulse for thousands of years: to create something outside the human cranium that has intelligence.

[00:17:58]

I think it's very hard, when you're down at the algorithmic level and you're just trying to make something work (which is hard enough), to step back and think of the big picture.

[00:18:11]

It reminds me of when I was in Santa Fe. I knew a lot of archaeologists; archaeology was a hobby of mine.

[00:18:21]

I would say, yeah, well, you can look at the shards and say, oh, this came from this tribe, and this came from this trade route, and so on. But what about the big picture? And a very distinguished archaeologist said to me, they don't think that way.

[00:18:37]

You know, they're trying to match the shard to where it came from. Where did this corn, the remainder of this corn, come from? Was it grown here? Was it grown elsewhere?

[00:18:51]

And I think this is true of A.I., or any scientific field. You're so busy doing the hard work.

[00:19:01]

And it is hard work, so you don't step back and say, oh, well, now let's talk about, you know, the general meaning of all this.

[00:19:09]

Yes. So none of them did? Even Minsky, McCarthy?

[00:19:16]

Oh, those guys did. Yeah.

[00:19:17]

The Founding Fathers did early on or pretty early on, but in a different way from how I looked at it.

[00:19:26]

The two cognitive psychologists, Newell and Simon, they wanted to imagine reforming cognitive psychology so that we would really, really understand the brain.

[00:19:40]

Yes. Minsky was more speculative. And John McCarthy saw it as, and I think I'm doing him right by this, he really saw it as a great boon for human beings to have this technology, and that was reason enough to do it.

[00:20:02]

And he had wonderful fables about how, if you do the mathematics, you will see that these things are really good for human beings. And if you had a technological objection, he had a technological answer: here's how we could get over that, and then blah, blah, blah. And one of his favorite things was what he called the literary problem, which, of course, he presented to me several times.

[00:20:33]

That is: in literature, there are conventions. One of the conventions is that you have a villain and a hero.

[00:20:48]

And the hero in most literature is human, and the villain in most literature is a machine. And he said, oh, that's just not the way it's going to be, but that's the way we're used to it. So when we tell stories about A.I., it's always with this paradigm.

I thought, yeah, he's right. You know, looking back at the classics, R.U.R. is certainly the machines trying to overthrow the humans.

[00:21:17]

Frankenstein is different. Frankenstein is a creature; he never has a name. Frankenstein, of course, is the guy who created him, the human, Dr. Frankenstein.

[00:21:33]

This creature wants to be loved, wants to be accepted. And it is only when Frankenstein turns his head, in fact runs the other way, and the creature is without love, that he becomes the monster he later becomes.

So who's the villain in Frankenstein? It's unclear, right?

It is unclear.

[00:22:02]

Yeah, it's really the people who drive him away. Right. They bring out the worst.

[00:22:11]

That's right. They give him no human solace and he is driven away. You're right.

[00:22:20]

He becomes at one point the friend of a blind man. He serves this blind man, and they become very friendly.

[00:22:29]

But when the sighted people of the blind man's family come in, it's: ah, you've got a monster here.

[00:22:37]

So it's very didactic in its way. And what I didn't know is that Mary Shelley and Percy Shelley were great readers of the literature surrounding abolition in the United States, the abolition of slavery, and they picked that up wholesale.

[00:22:56]

You know, you are making monsters of these people because you won't give them the respect and love that they deserve.

[00:23:03]

If we could get philosophical for a second:

[00:23:08]

do you worry that once we create machines that are a little bit more intelligent (let's look at the Roomba vacuum cleaner), this darker part of human nature, where we abuse the other, the somebody who's different, will come out?

[00:23:30]

I don't worry about it.

[00:23:31]

I could imagine it happening. But I think that what A.I. has to offer the human race will be so attractive that people will be won over.

[00:23:46]

So you have looked deep into these people, had deep conversations, and it's interesting to get a sense of the stories, the way they were thinking, and the way it changed.

[00:23:59]

The way your own thinking about A.I. changed, as you mentioned with McCarthy. What about the years at CMU, Carnegie Mellon, with Joe?

[00:24:11]

Joe was not in A.I.; he was in algorithmic complexity. Was there always a line between A.I. and computer science? For example, is A.I. its own place of outcasts?

[00:24:27]

There was that feeling. There was a kind of outcast period for A.I. For instance, in 1974, the new field of computer science was hardly ten years old, and it was asked by the National Science Foundation, I believe, though it may have been the National Academies, I can't remember, to tell your fellow scientists where computer science is and what it means.

[00:25:01]

And they wanted to leave out A.I., and they only agreed to put it in because Don Knuth said, hey, this is important, you can't just leave that out.

Really? Don Knuth?

Yes.

Of all the people.

Yes.

[00:25:20]

But you see, an A.I. person couldn't have made that argument; they wouldn't have been believed. But Knuth was believed. Yes.

[00:25:27]

So Joe worked on the real stuff.

[00:25:32]

Joe was working on algorithmic complexity, but he would say, in plain English, again and again: the smartest people I know are in A.I.

Really?

[00:25:42]

Oh, yes, no question. Anyway, Joe loved these guys. What happened was that, I guess as I started to write Machines Who Think, Herb Simon and I became very close friends. He would walk past our house on Northumberland Street every day after work, and I would just be putting the cover on my typewriter, and I would lean out the door and say, Herb, would you like a sherry? And Herb almost always would like a sherry.

[00:26:14]

So he'd stop in, and we'd talk for an hour, two hours. My journal says: we talked this afternoon for three hours.

[00:26:23]

What was on his mind at the time?

[00:26:25]

In terms of the A.I. side of things, we didn't talk too much about that; we talked about other things in life. We both loved literature, and Herb had read Proust in the original French, twice, all the way through. I can't; I read it in English, in translation. So we talked about literature, we talked about languages, we talked about music, because he loved music, and we talked about art, because he was actually enough of a painter that he had to give it up: he was afraid it was interfering with his research, and so on.

[00:27:05]

So, no, it was really just chat, but it was very warm.

[00:27:10]

So one summer I said to Herb, you know, my students have all the really interesting conversations. I was teaching at the University of Pittsburgh then, in the English department. You know, they get to talk about the meaning of life and that kind of thing. And what do I have? I have university meetings, where we talk about the photocopying budget and, you know, whether the course on Romantic poetry should be one semester or two. So Herb laughed.

[00:27:39]

He said, yes, I know what you mean. But, he said, you know, you could do something about that. Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every Sunday night, and we would have essentially an open house. People knew it wasn't for small talk; it was really for some topic of depth. But my advice, he said, would be that you choose the topic ahead of time.

[00:28:10]

Fine, I said. So the following...

[00:28:12]

We exchanged mail over the summer; that was U.S. post in those days, because you didn't have personal email. Right.

[00:28:21]

And I decided I would organize it, and there would be eight of us: Allen Newell and his wife; Herb Simon and his wife, Dorothea; a novelist in town, a man named Mark Harris, who had just arrived, and his wife, Josephine (Mark was most famous then for a novel called Bang the Drum Slowly, which was about baseball); and Joe and me. So, eight people. We met monthly, and we just sank our teeth into really hard topics, and it was great fun.

[00:29:03]

How have your own views around artificial intelligence changed through the process of writing Machines Who Think, and afterwards, the ripple effects?

[00:29:15]

I was a little skeptical that this whole thing would work out, but it didn't matter. To me it was so audacious, the whole thing, A.I. generally. And in some ways...

[00:29:31]

It hasn't worked out the way I expected so far. That is to say, there is this wonderful lot of apps, thanks to deep learning and so on, but those are algorithmic. And in the area of symbolic processing, there is very little yet. Yes, and that's the field that lies waiting for industrious graduate students.

[00:30:03]

Maybe you can tell me about some figures that popped up in your life in the 80s, with expert systems, where there were the symbolic A.I. possibilities of, you know, what most people think of as A.I. if you dream of the possibilities: really, expert systems. And those hit a few walls, and there were challenges there. And I think, yes, they will reemerge again with some new breakthroughs and so on. But what did that feel like, both the possibility and the winter that followed, the slowdown?

[00:30:37]

And, you know, this whole thing about an A.I. winter is, to me, a crock.

[00:30:43]

The winters, yes. Because I look at the basic research that was being done in the 80s, which was supposedly dead, and God, it was really important. It was laying down things that nobody had thought about before. But it was basic research; you couldn't monetize it. Right.

[00:31:01]

Hence the winter.

[00:31:02]

And so, you know, scientific research goes in fits and starts. It isn't this nice, smooth thing where, oh, this follows this follows this. No, you know, it just doesn't work that way.

[00:31:16]

The interesting thing is the way winters happen: it's never the fault of the researchers.

[00:31:21]

It's some source of hype, of overpromising. Well, no, let me take that back.

[00:31:28]

Sometimes it is the fault of the researchers. Sometimes certain researchers might overpromise the possibilities. They themselves believe that we're just a few years away. I just recently talked to Elon Musk, and he believes we'll have autonomous vehicles in a year, a year with mass deployment.

[00:31:50]

For the record, this is 2019. Right now he's talking 2020.

To do the impossible...

[00:31:57]

You really have to believe it.

[00:32:00]

And I think what's going to happen when you believe it, because there are a lot of really brilliant people around him, is that some good stuff will come out of it. Some unexpected, brilliant breakthroughs will come out of it when you really believe it, when you work that hard. I believe that, and I believe autonomous vehicles will come. I just don't believe it will be in a year.

[00:32:19]

I wish. But nevertheless, autonomous vehicles are a good example. There's a feeling, many companies have promised by 2021, by 2022: Ford, GM, basically every single automotive company has promised they'll have autonomous vehicles. So that kind of overpromising is what leads to the winter, because we'll come to those dates, there won't be autonomous vehicles, and there'll be a feeling: well, wait a minute, if we took your word at that time, that means we just spent billions of dollars and made no money.

[00:32:53]

And there's a counter-response, where everybody gives up on it, sort of intellectually, at every level; the hope just dies, and all that's left is a few basic researchers. So you're uncomfortable with some aspects of this idea?

Well, it's the difference between science and commerce.

So you think science prevails?

Science goes on the way it does. But science can really be killed by not getting proper funding, or timely funding. I think Great Britain was a perfect example of that.

[00:33:34]

The Lighthill report, I don't remember the year, essentially said there's no use in Great Britain putting any money into this, it's going nowhere. And this was all about social factions in Great Britain: Edinburgh hated Cambridge, and Cambridge hated Manchester. You know, somebody else can write that story, but it really did have a hard effect on research there.

[00:34:07]

Now they've come roaring back with DeepMind. Yeah, but that's one guy and the visionaries around him.

[00:34:18]

But just to push on that, it's kind of interesting: you have this dislike of the idea of an A.I. winter. Where is that coming from?

Oh, because I just don't think it's true.

There were particular periods of time, though. It's a romantic notion, certainly.

[00:34:39]

Yeah, well, you know, I admire science perhaps more than I admire commerce.

[00:34:48]

Commerce is fine.

[00:34:49]

Hey, you know, we all got to live, but, uh.

[00:34:55]

Science has a much longer view than commerce, and continues almost regardless.

[00:35:07]

Well, it can't continue totally regardless, but it goes on almost regardless of what's salable and what's not, what's noticeable and what's not. So the winter is just something that happens on the commerce side, not the science side.

[00:35:21]

That's a beautifully optimistic, inspiring message, and I agree with you. I think if we look at the key people who work in A.I., the key scientists in most disciplines, they continue working out of love for science. No matter what, you can always scrape up some funding to stay alive, and they continue working diligently. But there certainly is a huge amount of funding now, and there's a concern, on the A.I. and deep learning side, that we might, with overpromising, hit another slowdown in funding, which does affect the number of students, you know, that kind of thing.

[00:36:04]

Yeah, it does.

So the kind of ideas in Machines Who Think, did you continue that curiosity through the decades that followed?

Yes, I did.

[00:36:14]

And what was your historical view of how the A.I. community, all the conversations, all the work, evolved? Has it persisted the same way from its birth?

[00:36:26]

No, of course not. As we were just saying, symbolic A.I. really kind of dried up and it all became algorithmic. And I remember a young student telling me what he was doing, and I had been away from the field long enough; I'd gotten involved with complexity at the Santa Fe Institute.

[00:36:52]

I thought: algorithms? Yeah, they're in the service of something larger, but they're not the main event. No, they became the main event. That surprised me.

[00:37:03]

And we all know the downside of this. We all know that if you're using an algorithm to make decisions based on a gazillion human decisions, baked into it are all the mistakes that humans make: the bigotries, the short-sightedness, and so on.

[00:37:25]

So you mentioned the Santa Fe Institute, and you've written the novel The Edge of Chaos, inspired by the ideas of complexity, a lot of which have been extensively explored at the Santa Fe Institute.

[00:37:41]

Right. I mean, it's another fascinating topic, emergent complexity from chaos. Nobody knows how it happens, really, but it seems to be where all the interesting stuff does happen. Right.

[00:37:57]

So how did that first novel, or complexity in general and the work at Santa Fe, fit into the bigger puzzle of the history of A.I., or maybe even your personal journey through it?

One of the last projects I did,

[00:38:15]

concerning A.I. in particular, was looking at the work of Harold Cohen, the painter. Harold was deeply involved with A.I.

[00:38:28]

He was a painter first. And what his program AARON, which was a lifelong project, did was reflect his own cognitive processes.

[00:38:45]

OK. Harold and I, even though I wrote a book about him, had a lot of friction between us, and I thought, this is it, you know. The book died; it was published and fell into a ditch.

[00:39:02]

This is it, I'm finished, it's time for me to do something different. By chance, this was a sabbatical year for my husband, and we spent two months at the Santa Fe Institute, two months at Caltech, and then the spring semester in Munich, Germany. Those two months at the Santa Fe Institute were so restorative for me. The institute was very small then; it was in some kind of office complex on the old Santa Fe Trail.

[00:39:42]

Everybody kept their door open, so you could crack your head on a problem, and if you finally didn't get it, you could walk in to see Stuart Kauffman or any number of people and say, I don't get this.

[00:39:58]

Can you explain?

[00:40:00]

And one of the people I was talking to about complex adaptive systems was Murray Gell-Mann.

[00:40:09]

And I told Murray what Harold Cohen had done, and I said, you know, this sounds to me like a complex adaptive system.

[00:40:18]

And he said, Yeah, it is. Well, what do you know?

[00:40:22]

Harold's AARON had all these kissing cousins all over the world, in science and in economics and so on and so forth.

[00:40:30]

I was so relieved. I thought, OK, your instincts are OK, you're doing the right thing. I didn't have the vocabulary, and that was one of the things the Santa Fe Institute gave me. If only I could have rewritten that book, but it had just come out.

[00:40:46]

I couldn't rewrite it, but I would have had the vocabulary to explain what AARON was doing. OK, so I got really interested in what was going on at the institute.

[00:40:59]

The people were, again, bright and funny and willing to explain anything to this amateur. George Cowan, who was then the head of the institute, said he thought it might be a nice idea if I wrote a book about the institute. I thought about it — I had my eye on some other project, God knows what — and I said, I'm sorry, George, I'd really love to do it, but, you know, it's just not going to work for me at this moment.

[00:41:29]

He said, I think it would make an interesting book. Well, he was right and I was wrong. I wish I'd done it.

[00:41:35]

But that's interesting. I hadn't thought about that — that was a road not taken that I wish I'd taken. Well, you know, we've all done that.

[00:41:43]

At that point, that's quite brave of you as a writer, coming from the world of literature — literary thinking, historical thinking.

[00:41:56]

I mean, coming from that world and bravely talking to what I assume were quite large egos in AI or in complexity and so on — how did you do it?

[00:42:12]

Like, where did you — I mean, I suppose they could be intimidated by you as well, because it's two different worlds.

[00:42:19]

I mean, I never picked up that anybody was intimidated by me. But how were you brave enough? Where did you find the guts? Just dumb, dumb luck.

[00:42:27]

I mean, this is an interesting rock to turn over.

[00:42:30]

I'm going to write a book about it. And, you know, people have enough patience with writers, if they think they're going to end up in a book, that they let you flail around and so on. But they also look at whether the writer has a sparkle in their eye, whether they get it.

[00:42:48]

Yeah, sure.

[00:42:49]

When were you at the Santa Fe Institute?

[00:42:52]

The time I'm talking about is 1990 — 1990, '91, '92. But then, because Joe was an external faculty member, we were in Santa Fe every summer. We bought a house there, and I didn't have that much to do with the institute anymore. I was writing my novels, doing whatever I was doing.

[00:43:17]

But I loved the institute, and I loved, again, the audacity of the ideas. That really appeals to me.

[00:43:31]

I think there's this feeling, much like in great institutes of neuroscience, for example, that they're in it for the long game of understanding something fundamental about reality and nature, and that's really exciting. So let's look a little bit more recently. AI, you know, is really popular today. How is this world — you mentioned the algorithmic world, but in general the spirit of the people, the kind of conversations you hear through the grapevine and so on — is that different from the roots that you remember?

[00:44:14]

You know, the same kind of excitement, the same kind of this is really going to make a difference in the world.

[00:44:21]

And it will, it has. You know, a lot of the folks, especially the young ones, 20 years old or something, think we've just found something special here, we're going to change the world tomorrow.

[00:44:37]

Do you have a sense of the time scale at which breakthroughs in A.I. happen?

[00:44:45]

I really don't, because — look at deep learning. Geoffrey Hinton came up with the algorithm in '86, but it took all these years for the technology to be good enough to actually be applicable.

[00:45:09]

So, no, I can't predict that at all. I can't — I wouldn't even try.

[00:45:15]

Well, let me ask you not to try to predict, but to speak to it. You know, I'm sure in the '60s, as it continues now, there were people who thought — we can call it the singularity — that there's a phase shift, some profound moment when we're all really surprised by what's able to be achieved. I'm sure those dreams were there; I remember reading quotes from the '60s, in those first 15 years.

[00:45:44]

How have your own views about the timeline to a singularity changed, if you look back? Well, I'm not a big fan of the singularity as

[00:46:01]

Ray Kurzweil has presented it. How would you define the Ray Kurzweil version — how do you think of the singularity in those terms?

[00:46:09]

If I understand Kurzweil's view, it's sort of: there's going to be this moment when machines are smarter than humans, and, you know, game over. However the game over is — I mean, do they put us on a reservation?

[00:46:25]

Do they, et cetera, et cetera. First of all, machines are smarter than humans in some ways all over the place, and they have been since adding machines were invented.

[00:46:39]

So it's not going to come like some great Oedipal crossroads, you know, where they meet each other and our offspring, Oedipus, slays his dad. Yeah. It's just not going to happen.

[00:46:55]

Yes. It's already game over with calculators. Right.

[00:46:59]

They already do much better at basic arithmetic than us. But, you know, there's human-like intelligence, and it's not the kind that destroys us, but, you know, somebody that you can have as a friend, that you can have deep connections with — that kind of passing of the Turing test and beyond, those kinds of ideas.

[00:47:22]

Have you dreamt of those? Oh, yes, yes, yes.

[00:47:26]

Of those possibilities? In a book I wrote with Ed Feigenbaum, there is a little story called The Geriatric Robot. How I came up with the geriatric robot is a story in itself, but here's what the geriatric robot does. It doesn't just clean you up and feed you and wheel you out into the sun. Its great advantage is that it listens. It says, tell me again about the great coup of '73. Tell me again about how awful or how wonderful your grandchildren are, and so on and so forth.

[00:48:09]

And it isn't hanging around to inherit your money. It isn't hanging around because it can't get any other job.

[00:48:17]

This is its job and so on and so forth.

[00:48:21]

Well, I would love something like that.

[00:48:26]

Yeah. I mean, for me, that deeply excites me. I think there's a lot of us like that. Actually, you've got to know, it was a joke.

[00:48:34]

I dreamed it up because I needed to talk to college students and I needed to give them some idea of what AI might be, and they were rolling in the aisles as I elaborated and elaborated and elaborated. When it went into the book,

[00:48:51]

they took my hide off in The New York Review of Books: this is just what we have thought about these people in A.I. — they're inhuman. Oh, come on, get over it.

[00:49:02]

Don't you think that's a good thing for the world, that AI could potentially do that? I do, absolutely.

[00:49:08]

And furthermore — you know, I'm pushing 80 now — by the time I need help like that, I also want it to roll itself into a corner and shut the fuck up.

[00:49:23]

Let me linger on that point — do you really, though? Yeah, I do. Here's where I wanted to push back a little bit. A little? I have watched my friends go through the whole issue around having help in the house, and some of them have been very lucky and had fabulous help, and some of them have had people in the house who want to keep the television going all day or want to talk on their phones all day.

[00:49:52]

No. So basically, just roll this stuff in the corner.

[00:49:55]

And unfortunately, we humans, when we're assistants — we care. Even when we're assisting others, we still care about ourselves more, of course.

[00:50:06]

And so you create more frustration and a robot assistant can really optimize the experience.

[00:50:15]

You actually bring up a very, very good point. I was speaking to the fact that we humans are a little complicated — that we don't necessarily want a perfect servant. Maybe you'd disagree with that, but I think there's a push and pull with humans, a little tension, a little mystery that, of course, is really difficult to get right. But I do sense, especially

[00:50:48]

today, with social media, that people are getting more and more lonely — even young folks, and sometimes especially young folks. In that loneliness there's a longing for connection, and AI can help alleviate some of that loneliness: just somebody who listens, so to speak. That, to me, is really exciting. But if we look at that level of intelligence, which is exceptionally difficult to achieve, actually — the singularity, or whatever we call it — that's the human-level bar that people have dreamt of.

[00:51:35]

Turing dreamt of it. He had a date, a timeline.

[00:51:40]

How has your own timeline evolved over the years? I don't even think about it. You don't even think about it? No — this

[00:51:52]

field has been so full of surprises for me that you just take it in and see. Yeah, that's right.

[00:51:59]

Yeah. I just can't — maybe that's because I've been around the field long enough to think, you know, don't go that way. Herb Simon was terrible about making these predictions of when this and that would happen.

[00:52:14]

And he was a sensible guy.

[00:52:16]

Yeah, his quotes are often used, right — as a legend. Do you have concerns about AI — the existential threats, as many people like Elon Musk and Sam Harris and others talk about? Oh, yeah. That takes up half a chapter in my book. I call it the male gaze.

[00:52:46]

Well, hear me out. The male gaze is actually a term from film criticism, and I'm blocking on the woman who dreamed it up.

[00:52:58]

But she pointed out how most movies were made from the male point of view: women were objects, not subjects; they didn't have any agency, so on and so forth.

[00:53:13]

So when Elon and his pals — Hawking and so on — said AI was going to eat our lunch and our dinner and our midnight snack too,

[00:53:23]

I thought, what? And I said to Ed Feigenbaum, oh, this is it — these guys have always been the smartest guys on the block, and here comes something that might be smarter. Ooh, let's stamp it out before it takes over. And Ed laughed.

[00:53:39]

He said, I hadn't thought about it that way — but I did, I did. And it is the male gaze. You know, OK, suppose these things do have agency. Well, let's wait and see what happens.

[00:53:55]

Can we imbue them with ethics? Can we imbue them with a sense of empathy? Or are they just going to be — well, you know, we've had centuries of guys like that.

[00:54:12]

That's interesting — that the ego, the male gaze, is immediately threatened, and so you can't think in a patient, calm way about how the tech could evolve.

[00:54:29]

Speaking of which, your 1996 book, The Futures of Women — I think at the time, and certainly now — I'm sorry, maybe not at the time, but certainly now I'm more cognizant of it — is extremely relevant. You and Nancy Ramsey talk about four possible futures of women in science and tech. So if we look at the decades before and after the book was released, can you tell the history of women in science and tech and how it has evolved?

[00:55:05]

How have things changed? Where do we stand? Not enough — they have not changed enough. The

[00:55:13]

way that women are ground down in computing is simply unbelievable. But what are the four possible futures for women in tech from the book?

[00:55:27]

What you're really looking at are various aspects of the present. So for each of those, you could say, oh yeah, we do have backlash — look at what's happening with abortion and so on and so forth. We have one step forward, one step back. The golden age of equality was the hardest chapter to write, and I used something from the Santa Fe Institute, which is the sandpile effect: you drop sand very slowly onto a pile, and it grows and it grows and it grows until suddenly it just breaks apart.

[00:56:07]

In a way, MeToo has done that. MeToo was the last drop of sand that broke everything apart; it was a perfect example of the sandpile effect, and that made me feel good.

[00:56:20]

It didn't change all of society, but it really woke a lot of people up.
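The sandpile she describes is the classic Bak–Tang–Wiesenfeld model of self-organized criticality, long studied at the Santa Fe Institute: grains land one at a time, and any cell that reaches four grains topples, sending one grain to each neighbor and sometimes triggering a chain-reaction avalanche. A minimal sketch, assuming only the standard model — the grid size, grain count, and function names here are illustrative, not from the conversation:

```python
import random

def topple(grid, size):
    """Relax the pile: any cell holding 4+ grains sheds one grain to each
    of its 4 neighbours (grains falling off the edge are lost).
    Returns the number of topplings, i.e. the avalanche size."""
    avalanche = 0
    unstable = [(r, c) for r in range(size) for c in range(size)
                if grid[r][c] >= 4]
    while unstable:
        r, c = unstable.pop()
        if grid[r][c] < 4:          # may have been relaxed already
            continue
        grid[r][c] -= 4
        avalanche += 1
        if grid[r][c] >= 4:         # tall columns may topple again
            unstable.append((r, c))
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < size and 0 <= nc < size:
                grid[nr][nc] += 1
                if grid[nr][nc] >= 4:
                    unstable.append((nr, nc))
    return avalanche

def drop_grains(size=20, grains=5000, seed=0):
    """Drop grains one at a time onto random cells, relaxing after each.
    Returns one avalanche size per grain dropped."""
    rng = random.Random(seed)
    grid = [[0] * size for _ in range(size)]
    sizes = []
    for _ in range(grains):
        grid[rng.randrange(size)][rng.randrange(size)] += 1
        sizes.append(topple(grid, size))
    return sizes

if __name__ == "__main__":
    sizes = drop_grains()
    quiet = sum(1 for s in sizes if s == 0)
    print(f"{quiet} quiet drops, largest avalanche: {max(sizes)} topplings")
```

Most drops cause no toppling at all; once the pile is near-critical, a single extra grain can sweep much of the grid — the "last drop of sand that broke everything apart."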

[00:56:25]

But are you in general optimistic, maybe, after MeToo? I mean, MeToo is about a very specific kind of thing — boy, solve that and you solve everything. But are you in general optimistic about the future?

[00:56:40]

Yes, I'm a congenital optimist. I can't help it.

[00:56:45]

What about A.I.? What are your thoughts?

[00:56:49]

About AI? I, of course, get asked, what do you worry about? And the one thing I worry about is the things that we can't anticipate.

[00:57:00]

You know, there's going to be something out of left field, and we will just say, we weren't prepared for that. I am generally optimistic. When I first took up being interested in AI,

[00:57:18]

like most people in the field, I thought more intelligence was like more virtue — you know, what could be bad, right? And

[00:57:27]

in a way, I still believe that, but I realize that my notion of intelligence has broadened. There are many kinds of intelligence, and we need to imbue machines with those many kinds.

[00:57:41]

So you've now just finished, or are in the process of finishing, the memoir you've been working on. How have you changed through the process of writing it? Looking back, what kind of stuff surprised you, looking at the entirety of it all? The biggest thing, and it really wasn't a surprise, is how lucky I was — oh, my.

[00:58:20]

To have access to the beginning of a scientific field that is going to change the world — how did I luck out?

[00:58:32]

Yes, of course, my view of things has widened a lot. If I can get back to the feminist part of our conversation: without knowing it — it really was subconscious — I wanted A.I. to succeed because I was so tired of hearing that intelligence was inside the male cranium.

[00:59:00]

And I thought, if there was something out there that wasn't a male, thinking and doing well, then that would put the lie to this whole notion that intelligence resides in the male cranium. I did not know that until one night when Harold Cohen and I were having a glass of wine, maybe two, and he said, what drew you to AI? And I said, oh, you know, the smartest people I knew, great project, blah, blah, blah.

[00:59:32]

And I said, and I wanted something besides male smarts.

[00:59:40]

And it just bubbled up out of me. Which was brilliant, actually.

[00:59:46]

So AI really humbles all of us, and humbles the people that need to be humbled the most. Oh, let's hope.

[00:59:56]

Oh, wow. That is so beautiful. Pamela, thank you so much for talking to us.

[01:00:01]

Great pleasure. Thank you.