[00:00:00]

The following is a conversation with David Ferrucci, who led the team that built Watson, the IBM question answering system that beat the top humans in the world at the game of Jeopardy. From spending a couple of hours with David, I saw a genuine passion not only for the abstract understanding of intelligence, but for engineering it to solve real-world problems under real-world

[00:00:22]

deadlines and resource constraints. Where science meets engineering is where brilliant, simple ingenuity emerges. People who work at joining the two have a lot of wisdom earned through failures and eventual success. David is also the founder, CEO and chief scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do.

[00:00:47]

This is the Artificial Intelligence Podcast.

[00:00:50]

If you enjoy it, subscribe, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with David Ferrucci. Your undergrad was in biology, with an eye toward medical school, before you went on for the PhD in computer science. So let me ask you an easy question. What is the difference between biological and computer systems, in your view, when you sit back, look at the stars, and think philosophically?

[00:01:44]

I often wonder whether or not there is a substantive difference.

[00:01:49]

I mean, I think the thing that got me into computer science and artificial intelligence was exactly this presupposition that if we can get machines to think or I should say this question, this philosophical question, if we can get machines to think, to understand, to process information the way we do.

[00:02:11]

So if we can describe a procedure, describe a process, even if that process were the intelligence process itself, then what would be the difference? So from a philosophical standpoint, I'm not sure I'm convinced that there is. I mean, you can go in the direction of spirituality, you can go in the direction of the soul.

[00:02:32]

But in terms of, you know, what we can experience from an intellectual and physical perspective, I'm not sure there is. Clearly there are different implementations. But if you were to say, is a biological information processing system fundamentally more capable than one we might be able to build out of silicon or some other substrate, I don't know that there is. How distant do you think is the biological implementation? So fundamentally they may have the same capabilities, but is it really a far mystery, where a huge number of breakthroughs are needed to be able to understand it?

[00:03:19]

Or is it something that, for the most part, in the important aspects, echoes the same kind of characteristics? Yeah, that's interesting.

[00:03:28]

I mean, so, you know, your question presupposes that there is this goal to recreate what we perceive as biological intelligence. I'm not sure that's how I would state the goal. I mean, I think there are a few goals. I think that understanding the human brain and how it works is important for us to be able to diagnose and treat issues, for us to understand our own strengths and weaknesses, both intellectual, psychological and physical.

[00:04:08]

So neuroscience and understanding the brain from that perspective, there's a clear goal there. From the perspective of saying, I want to mimic human intelligence,

[00:04:21]

That one's a little bit more interesting. Human intelligence certainly has a lot of things we envy. It's also got a lot of problems, too.

[00:04:29]

So I think we're capable of sort of stepping back and saying, what do we want out of an intelligence?

[00:04:38]

How do we want to communicate with that intelligence, how do we want it to behave, how do we want it to perform?

[00:04:43]

Now, of course, it's somewhat of an interesting argument, because I'm sitting here as a human with a biological brain, and I'm critiquing the strengths and weaknesses of human intelligence and saying that we have the capacity to step back and say, gee, what is intelligence and what do we really want out of it? And that in and of itself suggests that

[00:05:06]

human intelligence is something quite enviable, that it can introspect that way.

[00:05:14]

And the flaws, you mentioned the flaws in humans.

[00:05:17]

And I think that the flaw that human intelligence has is that it's extremely prejudicial and biased in the way it draws many inferences.

[00:05:26]

Do you think those are, sorry to interrupt, do you think those are features or are those bugs? Do you think the prejudice, the forgetfulness, the fear, what are the flaws, list them all, what, love? Maybe that's a flaw. Do you think those are all things that get in the way of intelligence, or are they essential components of intelligence?

[00:05:49]

Well, again, if you go back and you define intelligence as being able to sort of accurately, precisely, rigorously reason, develop answers and justify those answers in an objective way,

[00:06:02]

yeah, then human intelligence has these flaws, in that it tends to be more influenced by some of the things you said.

[00:06:12]

And it's largely an inductive process, meaning it takes past data and uses that to predict the future. Very advantageous in some cases, but fundamentally biased and prejudicial in other cases, because it's going to be strongly influenced by its priors, whether they're right or wrong from some objective reasoning perspective. You're going to favor them because those are the decisions or the paths that succeeded in the past.

[00:06:40]

And I think that mode of intelligence makes a lot of sense when your primary goal is to act quickly, survive and make fast decisions. And I think those create problems when you want to think more deeply and make more objective and reasoned decisions. Of course, humans are capable of doing both. They do one more naturally than they do the other, but they're capable of doing both.

[00:07:09]

You're saying they do the one that responds quickly, more naturally.

[00:07:12]

Right, because that's the thing we kind of need to not be eaten by the predators in the world, for example. But then we've learned to reason through logic. We've developed science.

[00:07:27]

We train people to do that. I think that's harder for the individual to do.

[00:07:33]

I think it requires training and teaching. I think the human mind is certainly capable of it, but we find it more difficult. And then there are other weaknesses, if you will, as you mentioned earlier, just memory capacity, and how many chains of inference can you actually go through without losing your way? So, just focus.

[00:07:55]

So, the way you think about intelligence, and we're really sort of floating in this philosophical space, but I think you're the perfect person to talk about this,

[00:08:08]

because we'll get to Jeopardy and beyond. That's one of the most incredible accomplishments in the history of AI, but hence the philosophical discussion. So let me ask, you've kind of alluded to it, but let me ask again, what is intelligence, underlying the discussion we'll have with Jeopardy and beyond? How do you think about intelligence? Is it a sufficiently complicated problem, being able to reason your way to solving that problem, is that kind of how you think about what it means to be intelligent?

[00:08:41]

So I think of intelligence primarily in two ways. One is the ability to predict. So in other words, if I have a problem, can I predict what's going to happen next, whether it's to predict the answer to a question, or to say, look, I'm looking at all the market dynamics and I'm going to tell you what's going to happen next, or you're in a room and somebody walks in and you're going to predict what they're going to do next or what they're going to say next.

[00:09:09]

Living in a highly dynamic environment full of uncertainty, being able to... The more variables, the more complex; the more possibilities, the more complex.

[00:09:20]

But can I take a small amount of prior data, learn the pattern, and then predict what's going to happen next accurately and consistently? That's certainly a form of intelligence.

[00:09:33]

What do you need for that, by the way? You need to have an understanding of the way the world works in order to be able to unroll it into the future, right? What do you think is needed to predict? Depends what you mean by understanding.

[00:09:45]

I need to be able to find that function. And this is very much like what deep learning does, what machine learning does: if you give me enough prior data and you tell me what the output variable is that matters, I'm going to sit there and be able to predict it.

[00:10:00]

And if I can predict it accurately, so I can get it right more often than not, I'm smart. If I can do that with less data and less training time, I'm even smarter. If I can figure out what's even worth predicting, I'm smarter, meaning I'm figuring out what path is going to get me toward a goal. What about picking a goal? Sorry, say that again?

[00:10:24]

Well, that's interesting, picking a goal. That's sort of an interesting thing, and I think that's where you bring in, what are you pre-programmed to do? We talk about humans, and humans are pre-programmed to survive, so that's sort of their primary driving goal. What do they have to do to do that? And that could be very complex, right? So it's not just figuring out that you need to run away from the ferocious tiger, but we survive in a social context, as an example.

[00:10:54]

So understanding the subtleties of social dynamics becomes something that's important for surviving, finding a mate, reproducing. Right. So we're continually challenged with complex sets of variables, complex constraints, rules, if you will, that we see or patterns and we learn how to find the functions and predict the things.

[00:11:17]

In other words, represent those patterns efficiently and be able to predict what's going to happen. And that's a form of intelligence that doesn't really require anything specific other than the ability to find that function and predict that right answer. It's certainly a form of intelligence.
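A minimal sketch of the kind of function-finding described above, purely illustrative and not anything from the conversation: given prior data X and the output variable y that matters, fit a predictor and apply it to the next situation. The synthetic data and the choice of logistic regression are assumptions for the sake of the example.

```python
# A minimal sketch of "find the function that predicts the output variable":
# fit a model on prior data, then predict the next case. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "prior data": 500 past situations, each described by 4 features.
X = rng.normal(size=(500, 4))
# The output variable that matters (a hidden rule the learner must recover).
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

model = LogisticRegression().fit(X, y)   # "finding the function"

new_situation = rng.normal(size=(1, 4))
print("predicted outcome:", model.predict(new_situation)[0])
print("confidence:", model.predict_proba(new_situation)[0].max())
```

Being "smarter" in the sense described above would then mean recovering the same rule from fewer rows of X, or with less training time.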

[00:11:34]

But then when we say, well...

[00:11:38]

do we understand each other? In other words, would you perceive me as intelligent beyond that ability to predict?

[00:11:47]

So now I can predict, but I can't really articulate how I'm going about that process, what my underlying theory is for predicting,

[00:11:57]

and I can't get you to understand what I'm doing so that you can follow it, so you can figure out how to do this yourself,

[00:12:04]

if you did not have, for example, the right pattern-matching machinery that I did.

[00:12:10]

And now we potentially have this breakdown where, in effect, I'm intelligent, but I'm sort of an alien intelligence relative to you. You're intelligent, but nobody knows about it, or...

[00:12:21]

Well, I can see the output. So you're saying, let's sort of separate the two things. One is you explaining why you were able to predict the future, and the second is impressing me that you're intelligent, me being able to know that you successfully predicted the future. Do you think that's... Well, it's not impressing you that I'm intelligent.

[00:12:47]

In other words, you may be convinced that I'm intelligent in some form, because of my ability to predict. So you would look at me and say, wow, you're right more times than I am. You're doing something interesting. That's a form of intelligence. But then what happens is, if I say, how are you doing that,

[00:13:09]

And you can't communicate with me and you can't describe that to me. Now, I may label you a savant.

[00:13:16]

I may say, well, you're doing something weird and it's and it's just not very interesting to me because you and I can't really communicate.

[00:13:25]

And so now this is interesting, right? Because now you're in this weird place where, for you to be recognized as intelligent the way I'm intelligent, you and I sort of have to be able to communicate.

[00:13:40]

And then we start to understand each other, and then my respect and

[00:13:48]

My appreciation, my ability to relate to you starts to change. So now you're not an alien intelligence anymore.

[00:13:55]

You're a human intelligence now, because you and I can communicate. And so I think when we look at animals, for example, animals can do things we can't quite comprehend. We don't quite know how they do them, but they can't really communicate with us. They can't put what they're going through in our terms. And so we think of them as sort of, well, they're these alien intelligences, and they're not really worth necessarily what we're worth.

[00:14:19]

We don't treat them the same way as a result of that.

[00:14:22]

But it's hard, because who knows, you know, what's going on.

[00:14:27]

So just a quick elaboration on that. Explaining that you're intelligent, explaining the reasoning that went into the prediction, is not some kind of mathematical proof. If we look at humans, look at political debates and discourse on Twitter, it's mostly just telling stories. So your task is not to tell an accurate

[00:14:59]

depiction of how you reason, but to tell a story, real or not, that convinces me that there was a mechanism by which you... Ultimately, that's what the proof is.

[00:15:09]

I mean, even a mathematical proof is that, because ultimately the other mathematicians have to be convinced by your proof, otherwise it doesn't count. That's the measure of success. Yeah, there have been several proofs out there that mathematicians would study for a long time before they were convinced that they actually proved anything, right? You never know if it proved anything until the community of mathematicians decided that it did. So, I mean, it's... but it's a real thing.

[00:15:34]

And that's sort of the point, right, is that ultimately, you know, this notion of understanding, us understanding something, is ultimately a social concept.

[00:15:44]

In other words, I have to convince enough people that I did this in a reasonable way. I could do this in a way that other people can understand and replicate, and that makes sense to them.

[00:15:56]

So we as humans are bound together in that way. We're bound up in that sense. We sort of never really get away from that, until we can sort of convince others that our thinking process, you know, makes sense.

[00:16:12]

Do you think the general question of intelligence is then also a social construct? So if we ask questions of an artificial intelligence system, is this system intelligent, the answer will ultimately be socially constructed?

[00:16:29]

I think so. So I think you are making two statements.

[00:16:32]

I'm saying, we can try to define intelligence in this super objective way that says, here's this data,

[00:16:39]

I want to predict this type of thing, learn this function, and then if you get it right often enough, we consider you intelligent. But that's more like a savant. I think... it doesn't mean it's not useful.

[00:16:53]

It could be incredibly useful, it could be solving a problem we can't otherwise solve, and can solve it more reliably than we can. But then there's this notion of, can humans take

[00:17:05]

responsibility for the decision that you're making? Can we make those decisions ourselves? Can we relate to the process that you're going through? And now you as an agent, whether you're a machine or another human, frankly, are now obliged to make me understand how it is that you're arriving at that answer,

[00:17:27]

and allow me, or obviously a community or a judge of people, to decide whether or not that makes sense.

[00:17:33]

And by the way, that happens with humans as well. You're sitting down with your staff, for example, and you ask for suggestions about what to do next, and someone says, well, I think you should buy this, or I think you should buy this much of whatever it is, or I think you should launch the product today or tomorrow, or launch this product versus that product, whatever the decision may be. And you ask why, and the person says,

[00:17:56]

I just have a good feeling about it. And you're not very satisfied. Now, that person could be... you know, you might say, well, you've been right before, but I'm going to put the company on the line. Can you explain to me why I should believe this?

[00:18:14]

And that explanation may have nothing to do with the truth, just enough to convince the other person. It could be wrong. It's just got to be convincing, but it's ultimately got to be convincing. And that's why I'm saying we're bound together, right?

[00:18:28]

Our intelligences are bound together, in that sense. We have to understand each other.

[00:18:31]

And if, for example, you're giving me an explanation, and this is a very important point, right, you're giving me an explanation, and I'm not good at reasoning well, and being objective, and following logical paths and consistent paths, and I'm not good at measuring and sort of computing probabilities across those paths, what happens is, collectively, we're not going to do well.

[00:19:06]

How hard is that problem? The second one.

[00:19:09]

So I think we'll talk quite a bit about the first one, on a specific objective metric benchmark, performing well. But being able to explain the steps, the reasoning,

[00:19:25]

how hard is that problem? I think that's very hard. I mean, I think that's... well, it's hard for humans. The thing that's hard for humans, as you know, may not necessarily be hard for computers, and vice versa. So how hard is that problem for computers?

[00:19:47]

I think it's hard for computers.

[00:19:48]

And the reason why I related it to it being also hard for humans is because I think when we step back and we say we want to design computers to do that,

[00:19:59]

One of the things we have to recognize is we're not sure how to do it.

[00:20:05]

Well, I'm not sure we have a recipe for that. And even if you wanted to learn it, it's not clear exactly what data we'd use and what judgments we'd use to learn that well. And so, what I mean by that is, if you look at the entire enterprise of science, science is supposed to be about objective reason.

[00:20:29]

Right.

[00:20:30]

We think about, gee, who's the most intelligent person or group of people in the world? Do we think about the savants who can close their eyes and give you a number, or do we think about the think tanks, or the scientists or the philosophers, who kind of work through the details and write the papers and come up with the thoughtful, logical proofs and use the scientific method?

[00:20:54]

And I think it's the latter. And my point is that, how do you train someone to do that? And that's what I mean by it's hard. What's the process of training people to do that? Well, that's a hard process. We work, as a society,

[00:21:11]

we work pretty hard to get other people to understand our thinking and to convince them of things. Now we could persuade them, obviously, you talked about this, like, human flaws or weaknesses, we can persuade them through emotional means. But to get them to understand and connect to and follow a logical argument is difficult.

[00:21:36]

We try it, we do it. We do it as scientists. We try to do it as journalists. We try to do it, as you know, even artists in many forms, as writers, as teachers.

[00:21:46]

We go through a fairly significant training process to do that. And then we could ask, well, why is that so hard? But it's hard, and for humans it takes a lot of work. And when we step back and say, well, how do we get a machine to do that? It's a vexing question. How would you begin to try to solve that? And maybe just a quick pause, because there's an optimistic notion in the things you're describing, which is being able to explain something through reason.

[00:22:22]

But if you look at algorithms that recommend things that we look at next, whether it's Facebook, Google, advertising-based companies, you know, their goal is to convince you to buy things based on anything. So that could be reason, because the best of advertisement is showing you things that you really do need and explaining why you need them, but it could also be through emotional manipulation. For the algorithm that describes why a certain decision was made, how hard is it to do it through reason versus emotional manipulation, and why is that a good or a bad thing?

[00:23:09]

So you've kind of focused on reason, logic, really showing in a clear way why something is good. One, is that even a thing that us humans do? And two, how do you think of the difference between the reasoning aspect and the emotional manipulation?

[00:23:30]

Well, you know, so you call it emotional manipulation, but more objectively, it's essentially saying, there are certain features of things that seem to attract your attention, and I'm going to kind of give you more of that stuff.

[00:23:43]

And manipulation is a bad word. Yeah, I mean, I'm not saying it's right or wrong. It works to get your attention, and it works to get you to buy stuff. And when you think about algorithms that look at the patterns of features that you seem to be spending your money on, they're going to give you something with a similar pattern. I'm going to learn that function, because the objective is to get you to click on it or get you to buy it or whatever it is.
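A minimal sketch of that kind of feature-pattern matching, illustrative only: the feature names and the scoring are assumptions, not any real platform's algorithm. Each candidate item is scored by how often its surface features appear in the user's click history, and the most similar items are shown first.

```python
# Toy content-based recommender: recommend items whose surface features look
# like what the user already clicked on. No judgment about whether the user's
# reasons are good or bad, just "more of the same", as described above.
from collections import Counter

clicked = [
    {"category": "fitness", "price_band": "low", "color": "black"},
    {"category": "fitness", "price_band": "mid", "color": "black"},
]
candidates = {
    "running shoes": {"category": "fitness", "price_band": "mid", "color": "black"},
    "cookbook":      {"category": "cooking", "price_band": "low", "color": "red"},
    "yoga mat":      {"category": "fitness", "price_band": "low", "color": "purple"},
}

# Count how often each (feature, value) pair appears in the click history.
profile = Counter((k, v) for item in clicked for k, v in item.items())

def score(item):
    # An item scores higher the more of its features match the user's history.
    return sum(profile[(k, v)] for k, v in item.items())

ranked = sorted(candidates, key=lambda name: score(candidates[name]), reverse=True)
print(ranked)  # ['running shoes', 'yoga mat', 'cookbook']
```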

[00:24:07]

I don't know, I mean, it is what it is. I mean, that's what the algorithm does. You can argue whether it's good or bad. It depends what your goal is.

[00:24:16]

I guess the point is that it is very useful for convincing, for convincing humans. Yeah, it's good, because again, this goes back to, what is human behavior like? How does the human brain respond to things?

[00:24:33]

I think there's a more optimistic view of that, too, which is that if you're searching for certain kinds of things, you've already reasoned that you need them. And these algorithms are saying, look, it's up to you to reason whether you need something or not, that's your job. You may have an unhealthy addiction to this stuff, or you may have a reasoned and thoughtful

[00:24:58]

explanation for why it's important to you, and the algorithms are saying, hey, that's, like, whatever, that's your problem. All I know is you're buying stuff like that, you're interested in stuff like that. Could be a bad reason, could be a good reason. That's up to you. I'm going to show you more of that stuff.

[00:25:13]

And so I think that's... it's not good or bad. It's not reason or not reason. The algorithm is doing what it does, which is saying, you seem to be interested in this, I'm going to show you more of that stuff.

[00:25:25]

And I think we're seeing this not just in buying stuff, but even in social media. You're reading this kind of stuff. I'm not judging whether it's good or bad. I'm not reasoning at all. I'm just saying, I'm going to show you other stuff with similar features, you know, like in that sense.

[00:25:38]

And I wash my hands of it, and I say, that's all that's going on.

[00:25:42]

You know, people are so harsh on AI systems. So, one, the bar of performance is extremely high, and yet we also ask them, in the case of social media, to help find the better angels of our nature and help make a better society. So what do you think about the role of AI in that?

[00:26:04]

So, that I agree with. That's the interesting dichotomy, right? Because on one hand, we're sitting there and we're sort of doing the easy part, which is finding the patterns. We're not building, it's not building, a theory that is consumable and understandable by other humans, that can be explained and justified. And so on one hand, to say, oh, you know, it's doing this, why isn't it doing this other thing? Well, that other thing is a lot harder.

[00:26:32]

And it's interesting to think about why it's harder. It's because you're interpreting the data in the context of prior models. In other words, understandings of what's important in the world, what's not important, what are all the other abstract features that drive our decision making, what's sensible, what's not sensible, what's good, what's bad, what's moral, what's valuable. What is that? Where is that stuff?

[00:26:57]

No one's applying the interpretation. So when I see you clicking on a bunch of stuff and I look at these simple features, the raw features, the features that are there in the data, like what words are being used,

[00:27:11]

or how long the material is, or other very superficial features, what colors are being used in the material, I don't know why you're clicking on this stuff, or, if it's products, what the prices or the categories are, and stuff like that.

[00:27:25]

And I just feed you more of the same stuff. That's very different than kind of getting in there and saying, what does this mean?

[00:27:33]

The stuff you're reading, like, why are you reading it?

[00:27:37]

What assumptions are you bringing to the table? Are those assumptions sensible? Does the material make any sense? Does it lead you to thoughtful, good conclusions?

[00:27:50]

Again, there's interpretation and judgment involved in that process that isn't really happening in the AI today.

[00:27:59]

That's harder, right?

[00:28:01]

Because you have to start getting at the meaning of the stuff, of the content. You have to get at how humans interpret the content relative to their value system and deeper thought processes.

[00:28:16]

So what meaning means is not just some kind of deep, timeless, semantic thing that the statement represents, but also how a large number of people are likely to interpret it.

[00:28:31]

So it's, again, even meaning is a social construct. You have to try to predict how most people would understand this kind of statement.

[00:28:40]

Yeah, meaning is often relative, but meaning implies that the connections go beneath the surface of the artifacts. If I show you a painting, it's a bunch of colors on a canvas.

[00:28:51]

What does it mean to you? And it may mean different things to different people, because of their different experiences. It may mean something even different to the artist who painted it. As we try to get more rigorous with our communication, we try to really nail down that meaning. So we go from abstract art to precise mathematics, precise engineering drawings and things like that. We're really trying to say, I want to narrow that space of possible interpretations, because the precision of the communication ends up becoming more and more important.

[00:29:29]

And so that means that I have to specify, and I think that's why this becomes really hard, because if I'm just showing you an artifact and you're looking at it superficially, whether it's a bunch of words on a page, or brush strokes on a canvas, or pixels in a photograph, you can sit there and interpret it lots of different ways, at many, many different levels.

[00:29:56]

But when I want to align our understanding of that, I have to specify a lot more stuff that's actually not in it, not directly in the artifact.

[00:30:08]

Now, I have to say, well, how are you interpreting this image and that image?

[00:30:13]

And what about the colors, and what do they mean to you? What perspective are you bringing to the table? What are your prior experiences with those artifacts? What are your fundamental assumptions and values? What is your ability to kind of reason, to chain together logical implication, as you're sitting there and saying, well, if this is the case, then I would conclude this, and if that's the case, then I would conclude that? And so your reasoning processes and how they work, your prior models and what they are, your values and your assumptions, all those things now come together into the interpretation.

[00:30:47]

Getting at all of that is hard. And yet humans are able to intuit some of that without any of this effort, because they have the shared experience. And we're not talking about two people having a shared experience, but we as a society.

[00:31:01]

That's correct.

[00:31:02]

We have the shared experience and we have similar brains. In other words, part of our shared experience is our shared local experience, like we may live in the same culture, we may live in the same society, and therefore we have similar educations.

[00:31:18]

We have some of what we like to call prior models, from our prior experiences, and we use that as, think of it as, a wide collection of interrelated variables, and they're all bound to similar things. And so we take that as our background and we start interpreting things similarly.

[00:31:33]

But as humans, we have a lot of shared experience. We do have similar brains, similar goals, similar emotions under similar circumstances, because we're both human. So now, one of the early questions you asked: how are biological and computer information systems fundamentally different?

[00:31:54]

Well, one is, humans come with a lot of pre-programmed stuff. Yeah, a ton of pre-programmed stuff.

[00:32:02]

And they're able to communicate because they share that stuff.

[00:32:06]

Do you think that shared knowledge, if we can maybe escape the hardware question, how much is encoded in the hardware versus just the shared knowledge in the software, the history of the many centuries of wars and so on that brought us to today, that shared knowledge,

[00:32:26]

how hard is it to encode? And do you have a hope, can you speak to how hard it is to encode that knowledge systematically in a way that could be used by a computer?

[00:32:39]

So I think it is possible for a machine to learn, to program a machine to acquire that knowledge with a similar foundation, in other words, with a similar interpretive foundation for processing that knowledge.

[00:32:54]

What do you mean by that?

[00:32:55]

So in other words, we view the world in a particular way.

[00:33:00]

So in other words, as humans, we have, if you will, a framework for interpreting the world around us.

[00:33:08]

We have multiple frameworks for interpreting the world around us. If you're interpreting, for example, social-political interactions, you're thinking about whether there are people, collections and groups of people, and they have goals. The goals are largely built around survival and quality of life. There are fundamental economics around scarcity of resources. And when humans come and start interpreting a situation like that, because you brought up historical events, they apply a lot of this fundamental framework for interpreting it: well, who are the people, what were their goals, what resources did they have, how much power or influence did they have? There's this fundamental substrate, if you will, for interpreting and reasoning about that.

[00:34:00]

So I think it is possible to imbue a computer with that stuff that humans take for granted when they go and sit down and try to interpret things. And then, with that foundation, they start acquiring the details,

[00:34:16]

the specifics in a given situation, and are then able to interpret it with regard to that framework. And then, given that interpretation, they can predict. But not only can they predict, they can now predict with an explanation that can be given in those terms, in the terms of that underlying framework that most humans share. Now, you could find humans that come and interpret events very differently than other humans, because they're using a different framework.
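One hedged way to picture a "framework" in code, purely as an illustration of the idea and not Elemental Cognition's actual representation: a reusable schema of slots such as agents, goals, resources, and power, with a specific situation interpreted by filling those slots, so that a prediction could be explained in those shared terms. The slot names follow the conversation above; everything else is an assumption for the sake of the sketch.

```python
# Illustrative only: a "framework" as a reusable schema of slots, and an
# "interpretation" as a specific situation filled into those slots.
from dataclasses import dataclass, field

@dataclass
class Framework:
    name: str
    slots: list[str]

@dataclass
class Interpretation:
    framework: Framework
    bindings: dict = field(default_factory=dict)

    def explain(self) -> str:
        filled = ", ".join(f"{s}={self.bindings.get(s, '?')}" for s in self.framework.slots)
        return f"Interpreted via '{self.framework.name}' framework: {filled}"

social_political = Framework(
    name="social-political interaction",
    slots=["agents", "goals", "resources", "power"],
)

situation = Interpretation(
    framework=social_political,
    bindings={
        "agents": ["country A", "country B"],
        "goals": ["secure trade routes"],
        "resources": ["oil", "ports"],
        "power": {"country A": "naval", "country B": "economic"},
    },
)
print(situation.explain())
```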

[00:34:46]

You know, the movie The Matrix comes to mind, where they decided the humans were really just batteries, and that's how they interpreted the value of humans, as a source of electrical energy.

[00:34:57]

But I think that, you know, for the most part, we have a way of interpreting the events or the social events around us, because we have this shared framework.

[00:35:10]

It comes from, again, the fact that we're similar beings that have similar goals, similar emotions, and these frameworks make sense to us. So how much knowledge is there, do you think?

[00:35:24]

So you said it's possible. Well, there's a tremendous amount of detailed knowledge in the world.

[00:35:29]

You know, you can imagine an effectively infinite number of unique situations and unique configurations of these things.

[00:35:38]

But the knowledge that you need, what I referred to as the frameworks you need for interpreting them, I think those are finite.

[00:35:47]

You think the frameworks are more important than the bulk of the knowledge? So, like, framing...

[00:35:54]

Yeah, because what the frameworks do is they give you the ability to interpret and reason over the specifics in ways that other humans would understand.

[00:36:05]

What about the specifics? You know, you acquire the specifics by reading and by talking to other people.

[00:36:11]

So, mostly, actually, if I can focus on even the beginning, the common sense stuff, the stuff that doesn't even require reading, that almost requires playing around with the world or something, just being able to sort of manipulate objects, drink water and so on. Every time we try to do that kind of thing in robotics or AI, it seems to be like an onion.

[00:36:37]

You seem to realize how much knowledge is really required to perform even some of these basic tasks. Do you have that sense as well? And if so, how do we get all those details? Are they written down somewhere, or do they have to be learned through experience?

[00:36:55]

So I think when you're talking about sort of the physics, the basic physics around us, for example, acquiring information about how that works,

[00:37:06]

yeah, I think there's a combination of things going on.

[00:37:08]

I think there is, like, a fundamental pattern matching, like what we were talking about before. You see enough examples, enough data about something,

[00:37:18]

you start assuming that with similar input, I'm going to predict similar outputs. You can't necessarily explain it at all. You may learn very quickly that when you let something go, it falls to the ground.

[00:37:32]

That's a situation. You can't necessarily explain that.

[00:37:36]

But that's such a deep idea that if you let something go like gravity, I mean, people are letting things go and counting on them falling well before they understand gravity.

[00:37:47]

But that seems to be, that's exactly what I mean, is before you take a physics class or study anything about Newton, just the idea that stuff falls to the ground, and then being able to generalize that all kinds of stuff falls to the ground,

[00:38:05]

it just seems like, without encoding it, like hard-coding it in, it seems like a difficult thing to pick up. It seems like you have to have a lot of different knowledge to be able to integrate that into the framework, sort of into everything else,

[00:38:24]

so to both know that stuff falls to the ground and to start to reason about

[00:38:30]

social-political discourse. Both the very basic and the high-level reasoning and decision making. I guess the question is, how hard is this problem?

[00:38:42]

And sorry to linger on it, because again, and we'll get to it for sure, with Jeopardy you took on a problem that's much more constrained but has the same hugeness of scale, at least from the outsider's perspective. So I'm asking the general life question: to be able to be an intelligent being, reasoning in the world about both gravity and politics, how hard is that problem?

[00:39:10]

So I think it's solvable. Okay, now, beautiful.

[00:39:17]

So what about time travel?

[00:39:22]

OK, on that one I am not as convinced. You know, I think it is solvable.

[00:39:30]

And I think that, first of all, it's about getting machines to learn.

[00:39:34]

Learning is fundamental. And I think we're already in a place where we understand, for example, how machines can learn in various ways. Right now, our learning is sort of primitive, in that we haven't taught machines to learn the frameworks.

[00:39:55]

We don't communicate our frameworks, because of our shared understanding.

[00:39:58]

In some cases we do, but we don't annotate, if you will, all the data in the world with the frameworks that are inherent or underlying our understanding.

[00:40:09]

Instead, we just operate with the data. So if we want to be able to reason over the data in similar terms, in the common frameworks, we need to be able to teach the computer, or at least we need to program the computer, to have access to and acquire, learn, the frameworks as well, and connect the frameworks to the data. I think this can be done. I think machine learning, for example, with enough examples, can start to learn these basic dynamics.

[00:40:45]

Will they relate them necessarily to gravity?

[00:40:48]

Not unless they can also acquire those theories as well, and take the experiential knowledge and connect it back to the theoretical knowledge. I think if we think in terms of these classes of architectures that are designed to both learn the specifics, find the patterns, but also acquire the frameworks and connect the data to the frameworks, if we think in terms of robust architectures like this, I think there is a path toward getting there, in terms of encoding architectures like that.

[00:41:22]

Do you think systems that will be able to do this will look like neural networks, or, if you look back to the 80s and 90s with expert systems, representations more like graphs, systems that are based in logic, able to contain a large amount of knowledge, where the challenge was the automated acquisition of that knowledge? I guess the question is, when you collect both the frameworks and the knowledge from the data, what do you think that thing will look like?

[00:41:53]

Yeah, so, I mean, I think asking whether they look like neural networks is a bit of a red herring. I mean, I think that they will certainly do inductive or pattern-match-based reasoning.

[00:42:03]

And we've already experimented with architectures that combine both, that use machine learning and neural networks to learn certain classes of knowledge, in order to find repeated patterns, in order for it to make good inductive guesses, but then ultimately to try to take those learnings and marry them,

[00:42:22]

in other words, connect them to frameworks, so that it can then reason over that in terms other humans understand. So, for example, at Elemental Cognition, we do both. We have architectures that do both those things, but also have a learning method for acquiring the frameworks themselves, and saying, look, ultimately I need to take this data, I need to interpret it in the form of these frameworks so I can reason.

[00:42:47]

So there is a fundamental knowledge representation, like you're saying, like these graphs of logic, if you will.

[00:42:53]

There are also neural networks that acquire a certain class of information.

[00:42:58]

They then align them with these frameworks. But there's also a mechanism to acquire the frameworks themselves.

[00:43:05]

So it seems like the idea of frameworks requires some kind of collaboration with humans.

[00:43:11]

Absolutely. So do you think of that collaboration as well?

[00:43:15]

And let's be clear, only for the express purpose that you're designing,

[00:43:21]

you're designing a machine, an intelligence, that can ultimately communicate with humans in the terms of frameworks that help them understand things. Right.

[00:43:33]

So now, to be really clear, you can independently create a machine learning system, an intelligence that I might call an alien intelligence, that does a better job than you at some things, but can't explain the framework to you. That doesn't mean it... it might be better than you at the thing.

[00:43:53]

It might be that you cannot comprehend the framework that it may have created for itself that is inexplicable to you. That's a reality.

[00:44:01]

But you're more interested in a case where you can. I am, yeah. My sort of approach to AI is, because I've set the goal for myself, I want machines to be able to ultimately communicate

[00:44:16]

understanding with you. I want them to acquire knowledge from humans and communicate knowledge to humans. They should be using what inductive machine learning techniques are good at, which is to observe patterns in data, whether it be in language, or in images, or videos, or whatever,

[00:44:39]

to acquire these patterns, to induce the generalizations from those patterns, but then, ultimately, to work with humans to connect them to frameworks, interpretations, if you will, that ultimately make sense to humans.

[00:44:52]

Of course, the machine is going to have the strength.

[00:44:55]

It has the richer, longer memory. It has the more rigorous reasoning abilities, the deeper reasoning abilities. So it'll be an interesting, complementary relationship between the human and the machine.

[00:45:09]

Do you think that ultimately means explainability, like a machine... So, if you look at, I study, for example, Tesla Autopilot a lot, where humans, I don't know if you've driven the vehicle or are aware of it,

[00:45:20]

so you basically have the human and machine working together there, and the human is responsible for their own life, to monitor the system. And, you know, the system fails every few miles, and so there are hundreds of millions of those failures a day. And so that's like a moment of interaction. Do you see...

[00:45:42]

Yeah, that's exactly right.

[00:45:44]

That's a moment of interaction where, you know, the machine has learned some stuff. It has a failure. Somehow the failure is communicated.

[00:45:54]

The human is now filling in the mistake, if you will, or maybe correcting or doing something that is more successful. In that case, the computer takes that learning. So I believe that the collaboration between human and machine, I mean, that's sort of a primitive example.

[00:46:10]

And another example is where the machine is literally talking to you and saying, look, I'm reading this thing. I know that, like, the next word might be this or that, but I don't really understand why. I have my guess.

[00:46:26]

Can you help me understand the framework that supports this? And then it can kind of take that, reason about it, and reuse it the next time it's reading, to try to understand something, not unlike what a human student might do.

[00:46:41]

I mean, I remember, like, when my daughter was in first grade and she had a reading assignment about electricity.

[00:46:48]

And, you know, somewhere in the text it says, electricity is produced by water flowing over turbines, or something like that. And then there's a question that says, well, how is electricity created? And so my daughter comes to me and says, I mean, created and produced are kind of synonyms in this case,

[00:47:05]

so I can go back to the text and I can copy "by water flowing over turbines," but I have no idea what that means.

[00:47:12]

Like, I don't know how to interpret water flowing over turbines and what electricity even is.

[00:47:16]

I mean, I can get the answer right by matching the text, but I don't have any framework for understanding what this means at all. And by framework, really,

[00:47:25]

I mean, it's a set of, not mathematical, axioms, ideas that you bring to the table in interpreting stuff, and then you build those up.

[00:47:34]

Somehow you build them up with the expectation that there's a shared understanding of what they are.

[00:47:40]

It's the social thing, right, that us humans do. Do you have a sense that humans on Earth in general share a set of, like, how many frameworks are there?

[00:47:52]

I mean, it depends on how you bound them, right? In other words, how big or small their individual scope is. But there's lots, and there are new ones.

[00:48:00]

I think the way I think about it is kind of layered. I think of the architecture as being layered, in that there's a small set of primitives that give you the foundation to build frameworks.

[00:48:12]

And then there may be many frameworks, but you have the ability to acquire them and then you have the ability to reuse them.

[00:48:19]

I mean, one of the most compelling ways of thinking about this is reasoning by analogy, where I can say, oh, wow, I've learned something very similar. I never heard of this game, soccer. But if it's like basketball, in the sense that the goal is like the hoop and I have to get the ball in the hoop, and I have guards, and I have this and I have that, like, where are the similarities and where are the differences?

[00:48:44]

And I have a foundation now for interpreting this new information.
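A small sketch of that reasoning-by-analogy step, with made-up role names, purely illustrative: carry what is known about basketball over to soccer by aligning corresponding slots, and separate what transfers directly from what is merely analogous.

```python
# Toy analogy mapping: reuse a known framework for basketball to interpret
# soccer by aligning corresponding roles. Role names are illustrative only.
basketball = {
    "goal_structure": "hoop",
    "object_moved": "ball",
    "moved_with": "hands",
    "defensive_role": "guard",
}
soccer = {
    "goal_structure": "goal",
    "object_moved": "ball",
    "moved_with": "feet",
    "defensive_role": "defender",
}

shared = {slot for slot in basketball if soccer.get(slot) == basketball[slot]}
analogous = {slot: (basketball[slot], soccer[slot]) for slot in basketball if slot not in shared}

print("directly shared:", shared)            # {'object_moved'}
print("analogous but different:", analogous)
```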

[00:48:47]

And then different groups, like the millennials, will have a framework. And then, well, you know... Yeah, well, like the Democrats and Republicans. Millennials, nobody wants that framework. Well, I mean, I think one understands it, right?

[00:49:02]

I mean, you're talking about political and social ways of interpreting the world around them, and I think these frameworks are still largely similar. I think they differ in maybe what some fundamental assumptions and values are. Now, from a reasoning perspective, like the ability to process the framework, it might not be that different.

[00:49:20]

The implications of different fundamental values and fundamental assumptions in those frameworks may reach very different conclusions.

[00:49:28]

So from a social perspective, the conclusions may be very different. From an intelligence perspective,

[00:49:34]

I just followed where my assumptions took me; the process of it looks similar. But that's a fascinating idea, that frameworks really help carve how a statement will be interpreted.

[00:49:49]

I mean, having a Democrat and a Republican framework, and then reading the exact same statement, the conclusions that you derive will be totally different. From an AI perspective, that is fascinating.

[00:50:03]

What we would want out of the AI is to be able to tell you that one perspective, one set of assumptions, is going to lead you here,

[00:50:11]

another set of assumptions is going to lead you there, and, in fact, help people reason and say, oh, I see where our differences lie. You know, I have this fundamental belief about that, you have this fundamental belief about that.

[00:50:25]

Yeah, it's quite brilliant from my perspective.

[00:50:27]

In NLP, there's this idea that there's one way to really understand a statement, but there probably isn't.

[00:50:35]

There's probably an infinite number of ways. Then to say, well, there's a lot going on, there are lots of different interpretations. And, you know, the broader the content,

[00:50:46]

the richer it is. And so, you know, you and I can have very different experiences with the same text, obviously. And if we're committed to understanding each other,

[00:50:58]

we start, and that's the other important point, like, if we're committed to understanding each other, we start decomposing and breaking down our interpretation into more and more primitive components, until we get to that point where we say, oh, I see why we disagree, and we try to understand how fundamental that disagreement really is.

[00:51:18]

But that requires a commitment to breaking down that interpretation in terms of that framework, in a logical way. Otherwise, and this is why I think of AI as really complementing and helping human intelligence to overcome some of its biases and its predisposition to be persuaded by more shallow reasoning, in the sense that we get over this idea of,

[00:51:43]

well, you know, I'm right because I'm a Republican, or I'm right because I'm a Democrat, and someone labeled this as the Democratic point of view, or it has the following keywords in it. And if the machine can help us break that argument down and say, wait a second, what do you really think about this? Right. So essentially holding us accountable to doing more critical thinking. When I sit and think about that,

[00:52:05]

I love that. I think that's really empowering, as a use of AI, because the public discourse is completely disintegrating currently as we learn how to do it on social media, right? So one of the greatest accomplishments in the history of AI is Watson competing in the game of Jeopardy against humans, and you were a lead in a critical part of that. Let's start at the very basics. What is the game of Jeopardy, the game for us humans, human versus human?

[00:52:42]

Right.

[00:52:42]

So it's to take a question and answer it. It's just the opposite, actually. Well, no, but it's not, right? It's really not. It's really to get a question and answer it, but it's what we call a factoid question.

[00:53:00]

So this notion of, like, it really relates to some fact, and few people would argue whether the fact is true or not.

[00:53:06]

In fact, most people wouldn't. Jeopardy kind of counts on the idea that these statements have factual answers, and

[00:53:16]

the idea is to, first of all, determine whether or not you know the answer, which is sort of an interesting twist. So first of all, you have to understand the question.

[00:53:25]

What is it asking? And that's a good point, because the questions are not asked directly. Right.

[00:53:30]

They're all like the way the questions are asked is non-linear.

[00:53:34]

It's a little bit witty, it's a little bit playful, sometimes it's a little bit tricky. Yeah.

[00:53:42]

They're asked in numerous witty, tricky ways. Exactly what they're asking is not obvious. It takes inexperienced humans a while to go, what is it even asking? Right. And that's sort of an interesting realization that you have when somebody says, oh, Jeopardy is a question answering show, and you're like, oh, I know a lot. And then you read it, and you're still trying to process the question, and the champions have answered and moved on, they're like three questions ahead

[00:54:07]

by the time you've figured out what the question even meant. So there's definitely an ability there to just parse out what the question even is. So that was certainly challenging. It's interesting.

[00:54:17]

Historically, though, if you look back at the Jeopardy games much earlier, you know, like in the 60s, the questions were much more direct.

[00:54:26]

They weren't quite like that. They got sort of more and more interesting. The way they asked them got more and more interesting, and subtle and nuanced and humorous and witty over time, which really required the human to kind of make the right connections in figuring out what the question was even asking.

[00:54:43]

So, yeah, you have to figure out what the question is even asking. Then you have to determine whether or not you think you know the answer.

[00:54:50]

And because you have to buzz in really quickly, you sort of have to make that determination as quickly as you possibly can.

[00:54:57]

Otherwise, you lose the opportunity to buzz in, even before you really know if you know the answer. I think a lot of humans will...

[00:55:04]

They'll look at it very superficially. In other words, what's the topic, what are some keywords, and just say, do I know this area or not,

[00:55:14]

before they actually know the answer. They'll buzz in and then think about it. It's interesting what humans do. Now, some people who know all things, like Ken Jennings or something, or the more recent big Jeopardy player, they'll just assume they know all of Jeopardy.

[00:55:30]

And I just said, you know, Watson, interestingly, didn't even come close to knowing all of Jeopardy.

[00:55:36]

Watson, even at its peak, even at its best? Yeah. So, for example, we had this thing called recall, which is, of all the Jeopardy questions, how many could we even find the right answer for, anywhere? We had a big body of knowledge, something on the order of several terabytes.

[00:55:56]

I mean, from a web scale it was actually very small, but from, like, a book scale, we're talking about millions of books, right? So it would be millions of books, encyclopedias, dictionaries, books. So a ton of information.

[00:56:08]

And, you know, I think for only about 85 percent of the questions was the answer anywhere to be found.

[00:56:13]

So you're already down at that level just to get started, right? And so it was important to get a very quick sense of, do you think you know the right answer to this question? So we had to compute that confidence as quickly as we possibly could.

[00:56:30]

So, in effect, we had to answer it, at least spend some time essentially answering it, and then judge the confidence that that answer was right, and then decide whether or not we were confident enough to buzz in. And that would depend on what else was going on in the game, because it was a risk. So, like, if you're in a situation where, I have to take a guess, I have very little to lose, then you'll buzz in with less confidence.

[00:56:56]

So that was for the financial standings of the different competitors. Correct.

[00:57:01]

How much of the game was left, how much time was left, where you were in the standings, things like that.

[00:57:06]

How many hundreds of milliseconds are we talking about here? Do you have a sense of what the target was?

[00:57:14]

So, I mean, we targeted answering in under three seconds and buzzing in.

[00:57:21]

So the decision to buzz in and then the actual answering, are those two...

[00:57:27]

They were two different things. In fact, we had multiple stages, where we would say, let's estimate our confidence, which was sort of a shallow answering process, and then ultimately decide to buzz in.

[00:57:40]

And then we may take another second or something to kind of go in there and do that.

[00:57:47]

But by and large, we were saying, we can't play the game, we can't even compete, if we can't, on average, answer these questions in around three seconds or less.
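As a rough illustration of the two-stage buzz decision he describes (this is not Watson's actual code; the function names, thresholds, and game-state adjustment are invented for illustration), a minimal Python sketch might look like this:

    # Minimal sketch of a two-stage buzz decision: a fast, shallow confidence
    # estimate followed by a risk-adjusted threshold. All numbers and names
    # here are illustrative, not Watson's.

    def shallow_confidence(question: str) -> float:
        """Cheap first-pass estimate that the system can answer correctly."""
        # A real system would run quick question analysis and lightweight
        # retrieval here; this is just a placeholder.
        return 0.5

    def buzz_threshold(score_gap: float, clues_remaining: int) -> float:
        """Lower the bar when trailing late in the game (less to lose)."""
        base = 0.7
        if clues_remaining < 10 and score_gap < 0:
            base -= 0.2  # behind with little time left: guess with less confidence
        return base

    def decide_to_buzz(question: str, score_gap: float, clues_remaining: int) -> bool:
        return shallow_confidence(question) >= buzz_threshold(score_gap, clues_remaining)

Only after the buzz decision would the deeper answering and final confidence estimation run, within the remaining second or so he mentions.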

[00:57:56]

So there are these three humans playing a game, and you stepped in with the idea that IBM Watson would replace one of the humans and compete against the other two.

[00:58:08]

Can you tell the story of Watson taking on this game? It sure seems exceptionally difficult. Yeah, so the story was that it was coming up on, I think, the ten-year anniversary of Deep Blue. IBM wanted to do another kind of really fun, public challenge that could bring attention to IBM Research and the kind of cool stuff that we were doing.

[00:58:36]

I had been working in AI at IBM for some time. I had a team doing what's called open domain factoid question answering, which is, you know, we're not going to tell you what the questions are.

[00:58:47]

We're not even going to tell you what they're about. Can you go off and get accurate answers to these questions? It was an area of our research that I was involved in. Language understanding had always been a passion of mine, and one narrow slice of whether or not you could do anything with language was this notion of open domain, meaning I could ask anything about anything, and factoid, meaning it essentially had an answer, and being able to do that accurately and quickly.

[00:59:17]

So that was a research area we had already been working in.

[00:59:20]

And so, completely independently, several IBM executives were asking, what's the next cool thing to do?

[00:59:27]

And Ken Jennings was on his winning streak. This was, whatever it was, 2004, I think, when he was on his winning streak.

[00:59:35]

And someone thought, hey, it would be really cool if the computer competed on Jeopardy.

[00:59:40]

And so this was back in 2004. They were shopping this thing around, and everyone in research was telling them, no way, this is crazy.

[00:59:51]

And we had some pretty senior people in the field saying, no, that's crazy. And it would come across my desk, and I was like, but that's kind of what I'm really interested in doing.

[00:59:59]

But there was such a prevailing sense of, we're not going to risk IBM's reputation on this, we're just not doing it. And this happened in 2004. It happened in 2005.

[01:00:09]

At the end of 2006, it was coming around again. And I was doing the open domain question answering stuff.

[01:00:19]

But I was coming off a couple other projects.

[01:00:22]

I had a lot more time to put into this. And I argued that it could be done, and I argued it would be crazy not to do this.

[01:00:29]

Can I ask you to be honest at this point? Even though you argued for it, what was the confidence that you had, yourself, privately, that this could be done? We just told the story of how you tell stories to convince others.

[01:00:44]

How confident were you? What was your estimation of the problem at that time?

[01:00:48]

So I thought it was possible, and a lot of people thought it was impossible. I thought it was possible. The reason why was because I did some brief experimentation. I knew a lot about how we were approaching open domain factoid question answering, we had been doing it for some years, and I looked at the Jeopardy stuff.

[01:01:05]

I said, this is going to be hard for a lot of the reasons we mentioned earlier: hard to interpret the question, hard to do it quickly enough, hard to compute an accurate confidence.

[01:01:16]

None of this stuff had been done well enough before, but a lot of the technologies we were building were the kinds of technologies that should work.

[01:01:23]

But more to the point, what was driving me was that I was at IBM Research.

[01:01:29]

I was a senior leader in IBM Research, and this is the kind of stuff we were supposed to do. This was the moonshot. I mean, we were supposed to take things and say, this is an active research area.

[01:01:41]

It's our obligation, if we have the opportunity, to push it to the limits, and if it doesn't work, to understand more deeply why we can't do it. And so I was very committed to that notion, saying, folks, this is what we do. It's crazy not to do it.

[01:01:58]

This is an active research area. We've been in this for years.

[01:02:01]

Why wouldn't we take this grand challenge and push it as hard as we can? At the very least, we'd be able to come out and say, here's why this problem is way hard, here's what we tried, and here's how we failed. So I was very driven as a scientist from that perspective.

[01:02:20]

And then I also argued, based on a feasibility study we did, why I thought it was hard but possible. And I showed examples of where it succeeded, where it failed, why it failed, and a high-level architectural approach for why we should do it.

[01:02:35]

But for the most part, at that point, they were just looking for someone crazy enough to say yes, because for several years everyone had said, no, I'm not willing to risk my reputation and my career on this thing.

[01:02:51]

Clearly, you did not have such fears. I did not. So you dived right in. And yet, from what I understand, it was performing very poorly in the beginning.

[01:03:02]

So what were the initial approaches and why did they fail?

[01:03:07]

Well, there were lots of hard aspects to it. I mean, one of the reasons why prior approaches that we had worked on in the past failed was because the questions were difficult to interpret. Like, what are you even asking for?

[01:03:25]

Right.

[01:03:26]

Very often, if the question was very direct, like, what city, or what person, even then it could be tricky.

[01:03:38]

Often, when it named the type very clearly, you would know that.

[01:03:41]

And if there was just a small set of them, in other words, we're going to ask about these five types, the answer will be a city in this state or a city in this country, or the answer will be a person of this type, like an actor or whatever it is.

[01:03:59]

But it turns out that in Jeopardy, there were like tens of thousands of these things. And it was a very, very long tail.

[01:04:06]

Meaning it just went on and on and on, and so even if you focused on trying to encode the types at the very top, let's say the five most frequent, you'd still cover a very small percentage of the data. So you couldn't take the approach of saying, I'm just going to try to collect facts about these five or 10 or 20 or 50 types or whatever.
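To see why, here is a small hypothetical calculation of cumulative coverage over a long-tailed distribution of answer types; the counts below are made up purely to illustrate the shape of the problem he describes:

    # Hypothetical illustration of long-tail answer-type coverage.
    # The type frequencies are invented; the point is that the top few
    # types cover only a small fraction of all questions.
    from collections import Counter

    type_counts = Counter({"city": 800, "person": 700, "country": 400,
                           "film": 300, "author": 250})
    type_counts.update({f"rare_type_{i}": 5 for i in range(2000)})  # the long tail

    total = sum(type_counts.values())
    top5 = sum(count for _, count in type_counts.most_common(5))
    print(f"Top 5 types cover {top5 / total:.1%} of questions")  # well under half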

[01:04:30]

So that was one of the first things: what do you do about that? And so we came up with an approach toward that, and the approach looked promising.

[01:04:39]

And we continued to improve our ability to handle that problem throughout the project.

[01:04:45]

The other issue was that, right from the outset, I committed to doing this in three to five years. We did it in four, so I got lucky.

[01:04:57]

But one of the things about putting that stake in the ground was, and I knew how hard the language understanding problem was, I said we're not going to actually understand language to solve this problem.

[01:05:10]

We are not going to interpret the question and the domain of knowledge the question refers to, and reason over that to answer these questions.

[01:05:19]

We're not going to be doing that.

[01:05:20]

At the same time, simple search wasn't good enough to confidently answer with a single correct answer.

[01:05:29]

That's brilliant. That's such a great mix of innovation and practical engineering, in three, four years. So you're not trying to solve the full understanding problem. You're saying, let's solve this in any way possible?

[01:05:41]

Oh, yeah. No, I was committed to saying, look, we're just solving the open domain question answering problem.

[01:05:47]

We're using Jeopardy as a driver for that. Big benchmark. Hard enough? Big benchmark, exactly. And now, how do we do it?

[01:05:55]

We're going to just figure out what works, because I want to be able to go back to the academic science community and say, here's what we tried.

[01:06:02]

Here's what worked, what didn't work. I don't want to go in and say, oh, I only have one technology, I have a hammer, and I'm only going to use this. I'm going to do whatever it takes. I'm going to think out of the box and do whatever it takes.

[01:06:14]

And I also do another thing.

[01:06:16]

I believed that the fundamental NLP technologies and machine learning technologies would be adequate.

[01:06:25]

And that it was an issue of how do we enhance them, how do we integrate them, how do we advance them.

[01:06:31]

I had one researcher who came to me, who had been working on question answering with me for a very long time, who said, if we're going to need the Maxwell's equations of question answering, if we need some fundamental formula that breaks new ground in how we understand language, we're screwed.

[01:06:49]

We're not going to get there from here. My assumption was that I'm not counting on some brand-new invention.

[01:06:58]

What I'm counting on is the ability to take everything that has been done before, figure out an architecture for how to integrate it well, and then see where it breaks and make the necessary advances we need to make until this thing works.

[01:07:14]

Yeah, and see where it breaks. I mean, that's how people change the world. That's the Elon Musk approach with rockets at SpaceX, that's Henry Ford, and so on. And in this case, I happened to be right.

[01:07:28]

But, like, we didn't know, right? You kind of have to put a stake in the ground: how are you going to run the project?

[01:07:33]

So, yeah. And backtracking to search: what's the brute force solution? What would you search over? You have a question, how would you search the possible space of answers? Web search has come a long way, even since then. But at the time, well, first of all, there were a couple of other constraints around the problem, which is interesting. You couldn't go out to the Web; you couldn't search the Internet.

[01:08:01]

In other words, the AI experiment was: we want a self-contained device. The device is as big as a room?

[01:08:08]

Fine, it's as big as a room, but we want a self-contained device. You're not going out to the Internet; you don't have a lifeline to anything. So it had to kind of fit in a shoebox, if you will, or at least the size of a few refrigerators, whatever it might be. But also, you couldn't just go off network.

[01:08:29]

So there was that limitation. But we did try it.

[01:08:32]

The basic thing was, go do a web search. The problem was, even when we went and did a web search,

[01:08:39]

I don't remember exactly the numbers, but somewhere on the order of sixty-five percent of the time, the answer would be somewhere in the top 10 or 20 documents. So first of all, that's not even good enough to play Jeopardy.

[01:08:52]

In other words, even if you could perfectly pull the answer out of the top 20 documents, top 10 documents, whatever it was, which we didn't know how to do,

[01:09:01]

even if you could do that, and you knew it was right and had enough confidence in it, so you'd have to pull out the right answer, you'd have to have confidence it was the right answer, and then you'd have to do that fast enough to go buzz in, you'd still only get sixty-five percent of them.

[01:09:16]

Right. Which doesn't even put you in the winner's circle.

[01:09:19]

You have to be up over 70 percent, and you have to do it really quickly. But now the problem is,

[01:09:25]

well, even if the answer is somewhere in the top 10 documents, how do I figure out where in those documents the answer is, and how do I compute a confidence across all the possible candidates? It's not like I go in knowing the right answer and have to pick it. I don't know the right answer. I have a bunch of documents; somewhere in there is the right answer. How do I, as a machine, go out and figure out which one's right, and then how do I score it?
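To make that concrete, here is a toy sketch of the problem he's posing; the extraction and scoring heuristics below are crude placeholders invented for illustration, standing in for the hundreds of real algorithms the team built:

    # Toy sketch: given retrieved passages, extract candidate answers and
    # attach a confidence to each. These heuristics are placeholders only.

    import re
    from typing import List, Tuple

    def extract_candidates(passage: str) -> List[str]:
        # Naive stand-in: treat capitalized phrases as candidate answers.
        return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", passage)

    def score_candidate(question: str, candidate: str, passage: str) -> float:
        # Naive stand-in: keyword overlap between question and passage.
        q_terms = set(question.lower().split())
        p_terms = set(passage.lower().split())
        return len(q_terms & p_terms) / max(len(q_terms), 1)

    def best_answer(question: str, passages: List[str]) -> Tuple[str, float]:
        scored = [(c, score_candidate(question, c, p))
                  for p in passages for c in extract_candidates(p)]
        return max(scored, key=lambda pair: pair[1]) if scored else ("", 0.0)

The hard part he's pointing at is exactly that the real scoring function has to be far better than keyword overlap, and the returned confidence has to be calibrated enough to bet on.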

[01:09:48]

And now, how do I deal with the fact that I can't actually go out to the web?

[01:09:53]

First of all, if you pause and just think about it: if you could go to the Web, do you think that problem is solvable? Just thinking even beyond Jeopardy.

[01:10:05]

The problem of reading text to find where the answer is? Well, we thought we solved that, in some definition of solved, given the Jeopardy challenge.

[01:10:14]

But how did you do it for Jeopardy? How do you take a body of work on a particular topic and extract the key pieces of information?

[01:10:22]

So now, forgetting about the huge volumes that are on the web, we had to figure out, and we did a lot of source research,

[01:10:29]

in other words, what body of knowledge is going to be small enough but broad enough to answer Jeopardy. And we ultimately did find the body of knowledge that did that. It included Wikipedia and a bunch of other stuff.

[01:10:41]

So encyclopedia-type stuff, dictionaries, different types of semantic resources like WordNet and other semantic resources like that, as well as some Web crawls. In other words, we went out and took that content and then expanded it by statistically producing seeds, using those seeds for other searches, and then expanding from that. So using these expansion techniques, we went out and found enough content and were like, OK, this is good.

[01:11:10]

And even up until the end, we had a thread of researchers always trying to figure out what content we could efficiently include.

[01:11:18]

I mean, there's a lot of popular culture.

[01:11:19]

"What is the Church Lady?" I think, was one of them. Like, what?

[01:11:25]

I guess that's probably in an encyclopedia somewhere, so I guess that would be covered. But then we would take that stuff, and then we would go out and expand it. In other words, we'd go find other content that wasn't in the core resources and expand it.

[01:11:39]

You know, the amount of content grew by an order of magnitude, but still, from a Web-scale perspective, this is a very small amount of content. It's very select.

[01:11:47]

We then took all that content and pre-analyzed the crap out of it, meaning we parsed it, broke it down into all the individual words, and we did syntactic and semantic parses on it, had computer algorithms that annotated it, and we indexed that in a very rich and very fast index.

[01:12:09]

So we had a relatively huge amount of content, let's say, for the sake of argument, the equivalent of two to five million books.

[01:12:15]

We'd now analyzed all that, blowing it up in size even more because now there was metadata, and then we richly indexed all of that, by the way, in a giant in-memory cache.

[01:12:25]

So Watson did not go to disk.
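Here is a heavily simplified sketch of that offline pre-analysis and in-memory indexing step; a plain dictionary stands in for the custom, highly engineered structures the team actually built, and the "annotations" are trivial placeholders for real NLP parses:

    # Simplified sketch: pre-parse and annotate documents once, offline, and
    # hold an inverted index entirely in memory so queries never touch disk.

    from collections import defaultdict

    def pre_analyze(doc_id: str, text: str):
        tokens = text.lower().split()          # stand-in for tokenization/parsing
        annotations = {"length": len(tokens)}  # stand-in for syntactic/semantic metadata
        return tokens, annotations

    inverted_index = defaultdict(set)   # term -> set of doc ids
    doc_metadata = {}                   # doc id -> precomputed annotations

    def index_corpus(corpus: dict):
        for doc_id, text in corpus.items():
            tokens, annotations = pre_analyze(doc_id, text)
            doc_metadata[doc_id] = annotations
            for term in tokens:
                inverted_index[term].add(doc_id)

    def lookup(term: str):
        # Pure in-memory lookup: doc ids plus their precomputed metadata.
        return [(doc_id, doc_metadata[doc_id]) for doc_id in inverted_index[term.lower()]]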

[01:12:28]

So the infrastructure component there, can you speak to it? How tough was it? I mean, maybe this was 2008, 2009, and that's kind of a long time ago, right? How hard is it to use multiple machines? How hard is the infrastructure, the hardware component?

[01:12:47]

We used IBM hardware. We had something like, I forget exactly what, but close to three thousand cores, completely connected.

[01:12:57]

There was a switch where every CPU was connected to every other one.

[01:13:00]

They were sharing memory in some kind of way, a kind of shared memory. Right. And all this data was pre-analyzed and put into a very fast indexing structure that was all in memory.

[01:13:14]

And then we took the question and analyzed it. All the content was now pre-analyzed, so if I went and tried to find a piece of content, it would come back with all the metadata that we had pre-computed.

[01:13:30]

How do you handle the question?

[01:13:33]

How do you connect the big stuff, the big knowledge base with the metadata that's indexed, to the simple, little, itty-bitty, confusing question? Right. So therein lies the question, right? So we would take the question and analyze the question, which means that we would parse it and interpret it in a bunch of different ways. We'd try to figure out what it is asking about.

[01:13:57]

So we had multiple strategies to determine what it was asking for. That might be represented as a simple string, a character string, or something we would connect back to different semantic types from existing resources.

[01:14:12]

So anyway, the bottom line is we would do a bunch of analysis on the question, and question analysis had to finish, and it had to finish fast.

[01:14:20]

So we'd do the question analysis, because then from the question analysis, we would now produce searches.

[01:14:26]

We had built on open source search engines, we modified them, and we had a number of different search engines we would use that had different characteristics. We went in there and engineered and modified those search engines,

[01:14:40]

ultimately, to take our question analysis, produce multiple queries based on different interpretations of the question, and fire off a whole bunch of searches in parallel.

[01:14:54]

And they would come back with passages. So these were passage searches; they would come back with passages. And so now let's say you had a thousand passages. Now, for each passage, you parallelize again. So you went out and parallelized the search, and the search would come back with a whole bunch of passages. Maybe you had a total of a thousand or five thousand, whatever, passages. For each passage,

[01:15:20]

you'd go and figure out whether or not there was a candidate, what we'd call a candidate answer, in there. So we had a whole bunch of other algorithms that would find candidate answers, possible answers to the question.

[01:15:32]

And so you had candidate answer generators, a whole bunch of those. So for every one of these components, the team was constantly doing research: coming up with better ways to generate search queries from the questions, better ways to analyze the question, better ways to generate candidates. And speed.
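Putting those stages together, the fan-out he describes (question analysis, multiple queries, parallel searches, candidate generation) could be sketched roughly like this; every function here is a stand-in for a whole family of real components, and nothing below is Watson's actual code:

    # Rough sketch of the fan-out pipeline: one question becomes several query
    # interpretations, each query returns passages in parallel, and each
    # passage yields candidate answers. All functions are placeholders.

    from concurrent.futures import ThreadPoolExecutor

    def analyze_question(question: str) -> list:
        return [question, question.lower()]   # stand-in for multiple interpretations

    def run_search(query: str) -> list:
        return [f"passage about {query}"]      # stand-in for a search engine call

    def generate_candidates(passage: str) -> list:
        return passage.split()                 # stand-in for candidate generators

    def candidate_fan_out(question: str) -> list:
        queries = analyze_question(question)
        with ThreadPoolExecutor() as pool:
            passage_lists = list(pool.map(run_search, queries))   # searches in parallel
        passages = [p for plist in passage_lists for p in plist]
        with ThreadPoolExecutor() as pool:
            candidate_lists = list(pool.map(generate_candidates, passages))
        return [c for clist in candidate_lists for c in clist]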

[01:15:48]

So better means accuracy and speed?

[01:15:52]

Correct. Although speed and accuracy, for the most part, were separated; we handled those in separate ways. Like, I focused purely on accuracy: are we ultimately getting more questions right and producing more accurate confidences? And then there was a whole other team that was constantly analyzing the workflow to find the bottlenecks, and then figuring out how to both parallelize and drive the algorithm speed up.

[01:16:14]

But anyway, so now think of it like you have this big fan-out, right? Because you had multiple queries, and now you have thousands of candidate answers. For each candidate answer, you score it.

[01:16:26]

So you can use all the data that was built up: you can use the question analysis, you can use how the query was generated, you're going to use the passage itself, and you're going to use the candidate answer that was generated, and you're going to score that. So now we had a group of researchers coming up with scorers. There were hundreds of different scorers.

[01:16:48]

So now you're getting a fan-out again, from however many candidate answers you have to all the different scores.

[01:16:55]

So if you have two hundred different scorers and you have a thousand candidates, you have two hundred thousand scores.

[01:17:01]

And so now you've got to figure out, how do I rank these answers based on the scores that came back? And I want to rank them based on the likelihood that they're a correct answer to the question.
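So each candidate ends up with a vector of scores, one per scorer. A minimal sketch of that second fan-out might look like this, with two toy scorers invented for illustration (Watson had hundreds of far more sophisticated ones):

    # Sketch of the scoring fan-out: many independent scorers each assign a
    # value in [0, 1] to every candidate, producing a score vector per
    # candidate. The two scorers below are trivial placeholders.

    def score_candidates(question: str, passage: str, candidates: list):
        """Return {candidate: [score per scorer]}; this is the feature vector
        that the downstream machine-learned ranker consumes."""
        scorers = [
            lambda c: 1.0 if "who" in question.lower() and c.istitle() else 0.3,  # crude type match
            lambda c: 1.0 if c.lower() in passage.lower() else 0.0,               # passage support
        ]
        return {c: [s(c) for s in scorers] for c in candidates}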

[01:17:14]

So every scorer was its own research project.

[01:17:17]

What do you mean by scorer?

[01:17:18]

Is that like an annotation process, of basically a human being saying how good this answer is? Think of it this way: if you want to think about what a human would be doing, a human would be looking at a possible answer.

[01:17:33]

They'd be reading, you know, "Emily Dickinson," they'd be reading the passage in which that occurred, they'd be looking at the question, and they'd be making a decision of how likely it is that Emily Dickinson, given the evidence in this passage, is the right answer to that question.

[01:17:50]

Got it. So that's the annotation task, the standard scoring task. But scoring implies zero to one?

[01:17:57]

That's right, a continuous zero to one. The score is not binary; you give it a score from zero to one. Yeah, exactly. So when humans give different scores, you have to somehow normalize and deal with all that kind of stuff? That depends on what your strategy is.

[01:18:12]

It could be relative; we actually looked at the raw scores as well as standardized scores. But humans are not involved in this? Humans are not involved.

[01:18:22]

So maybe I'm misunderstanding the process here. The passages... where is the ground truth coming from?

[01:18:29]

The ground truth is only the answers to the questions. So end to end? End to end. So I was always driving end to end, and it was a very interesting engineering approach, and ultimately a scientific and research approach: always driving end to end.

[01:18:47]

Now, that's not to say we wouldn't make hypotheses that individual component performance was related in some way to end-to-end performance; of course we would, because people would have to build individual components. But ultimately, to get your component integrated into the system, you had to show impact on end-to-end performance, on question answering performance.

[01:19:12]

So there were many very smart people working on this, and they were basically trying to sell their ideas as a component that should be part of the system.

[01:19:19]

That's right. And they would do research on their component, and they would say things like, I'm going to improve this as a candidate generator, or I'm going to improve this as a question scorer, or as a passage scorer, or as a parser, and I can improve it by two percent on its component metric, like a better parse, a better candidate, a better type estimation, whatever it is.

[01:19:46]

And then I would say, I need to understand how the improvement on that component metric is going to affect the end-to-end performance.

[01:19:53]

If you can't estimate that and can't do experiments to demonstrate that, it doesn't get in. That's like the best-run AI project I've ever heard of.
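As a rough sketch of that gating rule, and only a sketch (the function names, the accuracy metric, and the threshold are assumptions, not the team's actual harness), the experiment might look like:

    # Sketch of the end-to-end gate: a component change is integrated only if
    # it measurably improves full question-answering accuracy on a held-out
    # set. The pipeline interface and threshold are illustrative.

    def end_to_end_accuracy(pipeline, eval_set) -> float:
        correct = sum(1 for question, gold in eval_set
                      if pipeline(question) == gold)
        return correct / len(eval_set)

    def accept_component(baseline_pipeline, candidate_pipeline, eval_set,
                         min_gain: float = 0.002) -> bool:
        """Integrate the new component only if end-to-end accuracy improves."""
        baseline = end_to_end_accuracy(baseline_pipeline, eval_set)
        candidate = end_to_end_accuracy(candidate_pipeline, eval_set)
        return candidate - baseline >= min_gain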

[01:20:04]

Awesome. OK, what breakthrough would you say... I'm sure there were a lot of day-to-day breakthroughs, but was there a breakthrough that really helped improve performance, where people began to believe? Or was it just a gradual process?

[01:20:18]

Well, I think it was a gradual process. But one of the things that I think gave people confidence that we could get there was that we followed this procedure: different ideas, build different components, plug them into the architecture, run the system, see how we do, do the error analysis, start off new research projects to improve things. And the very important idea was that the individual component

[01:20:52]

work did not have to deeply understand everything that was going on with every other component. And this is where we leveraged machine learning in a very important way.

[01:21:03]

So while individual components could be statistically driven machine learning components, some of them were heuristic, some of them were machine learning components, the system as a whole combined all the scores using machine learning.

[01:21:16]

This was critical because that way you can divide and conquer.

[01:21:20]

So you can say, OK, you work on your candidate generator, you work on this approach to answer scoring, you work on this approach to type scoring, you work on this approach to passage search or passage selection, and so forth.

[01:21:33]

But then you just plug it in, and we had enough training data to say, now we can train and figure out how to weigh all the scores relative to each other, based on predicting the outcome, which is right or wrong on Jeopardy.

[01:21:50]

And we had enough training data to do that. So this enabled people to work independently and to let the machine learning do the integration. Beautiful.
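A minimal sketch of that final step, combining all the scorer outputs with a learned model trained on right/wrong labels, might look like the following; scikit-learn's logistic regression is used here purely as an illustrative stand-in, not as a claim about what Watson actually used, and the training numbers are made up:

    # Sketch: learn weights over scorer outputs from (score vector, correct?)
    # training pairs, then rank candidates by predicted probability of being
    # the right answer.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is one candidate's score vector; label 1 means it was the
    # correct Jeopardy answer, 0 means it was not. Values are invented.
    X_train = np.array([[0.9, 1.0], [0.2, 0.0], [0.7, 1.0], [0.1, 0.3]])
    y_train = np.array([1, 0, 1, 0])

    ranker = LogisticRegression().fit(X_train, y_train)

    def rank_candidates(candidate_scores: dict):
        """candidate_scores: {candidate: [score per scorer]} -> sorted by confidence."""
        names = list(candidate_scores)
        probs = ranker.predict_proba(np.array([candidate_scores[n] for n in names]))[:, 1]
        return sorted(zip(names, probs), key=lambda pair: pair[1], reverse=True)

The design point he's making is that this learned combination is what let each researcher improve their own scorer independently, with the ranker absorbing the integration.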

[01:22:00]

So yeah, the machine learning is doing the fusion, and then it's a human-orchestrated ensemble of different approaches. That's great.

[01:22:09]

Still, it's impressive that you were able to get it done in a few years. It's not obvious to me that it's doable, if I just put myself in that mindset. But when you look back at the Jeopardy challenge...

[01:22:24]

Again, when you're looking up at the stars, what are you most proud of, looking back at those days? I'm most proud of my commitment and my team's commitment to be true to the science, to not be afraid to fail. That's beautiful, because there's so much pressure, because it is a public event, it is a public show, that you were dedicated to the idea.

[01:23:03]

That's right. Do you think it was a success? In the eyes of the world, it was a success. By your, I'm sure, exceptionally high standards, is there something you regret, you would do differently? It was a success. It was a success for our goal. Our goal was to build the most advanced open domain question answering system. We went back to the old problems that we used to try to solve, and we did dramatically better on all of them, as well as we beat Jeopardy. So we won at Jeopardy.

[01:23:44]

So it was a success.

[01:23:47]

I worried that the world would not understand it as a success because it came down to only one game. And I knew, statistically speaking, this could be a huge technical success and we could still lose that one game.

[01:24:00]

And that's a whole other theme of the journey. But it was a success. It was not a success in natural language understanding, but that was not the goal.

[01:24:12]

Yeah. I understand what you're saying in terms of the science, but I would argue that, while not a success in terms of solving natural language understanding, it was a success in being an inspiration for future challenges.

[01:24:34]

Absolutely. That drive future efforts.

[01:24:37]

What's the difference between how a human being competes in Jeopardy and how Watson does it, that's important in terms of intelligence?

[01:24:45]

Yeah, so that actually came up very early on in the project also. In fact, I had people who wanted to be on the project who, early on, once I committed to do it, sort of approached me and wanted to think about how humans do it, you know, from a human cognition perspective and how that should play in. And I would not take them on the project, because another

[01:25:10]

assumption, or another stake I put in the ground, was: I don't really care how humans do this, at least in the context of this project. Now, in building an AI that understands how it needs to communicate with humans, I very much care.

[01:25:27]

So it wasn't that I didn't care in general. In fact, as an AI scientist, I care a lot about that. But I'm also a practical engineer, and I committed to getting this thing done, and I wasn't going to get distracted.

[01:25:44]

I had to say, look, if I'm going to get this done, I'm going to chart this path, and this path says we're going to engineer a machine that's going to get this thing done. We know what search and NLP can do, and we have to build on that foundation. If I come in and take a different approach and start wondering about how the human mind might or might not do this, I'm not going to get there from here in the time frame.

[01:26:08]

I think that's a great way to lead the team.

[01:26:12]

But now that it's done, and it's won, when you look back, right, analyze it: what's the difference, actually?

[01:26:19]

So I was a little bit surprised, actually, to discover over time, as this would come up from time to time and we'd reflect on it,

[01:26:27]

and talking to Ken Jennings a little bit and hearing him talk about how he answered questions, that it might have been closer to the way humans answer questions than I might have imagined previously. Because humans, in the game of Jeopardy, at the level of Ken Jennings, are probably also cheating their way to winning.

[01:26:50]

Right.

[01:26:51]

They're doing shallow analysis, and they're doing it as fast as possible. So they are very quickly analyzing the question and coming up with some key vectors or cues, if you will, and they're taking those cues and very quickly going through their library of stuff, not deeply reasoning about what's going on.

[01:27:13]

And then lots of different things, like what we call these scorers, would kind of score that in a very shallow way and then say, boom, that's what it is.

[01:27:25]

And so it's interesting as we reflect on that.

[01:27:28]

So we may have been doing something that's not too far off from the way humans do it, but we certainly didn't approach it by saying, how would a human do this? Now, in Elemental Cognition, the project I'm leading now,

[01:27:43]

we ask those questions all the time, because ultimately we're trying to make the intelligence of the machine and the intelligence of the human very compatible. Well, compatible in the sense that they can communicate with one another and they can reason with a shared understanding.

[01:28:00]

So how they think about things and how they build answers, how they build explanations becomes a very important question to consider.

[01:28:08]

So what's the difference between this open domain but closed, constructed question answering of Jeopardy, and

[01:28:21]

something that requires understanding for shared communication between humans and machines? Yeah, well, this goes back to the interpretation we were talking about before. The Jeopardy system is not trying to interpret the question,

[01:28:35]

and it's not interpreting the content it's reasoning with relative to any particular framework. I mean, it is parsing the question and parsing the content and using grammatical cues and stuff like that. So if you think of grammar as a human framework, in some sense it has that. But when you get into the richer semantic frameworks, how do people think? What motivates them? What are the events that are occurring, and why are they occurring, and what causes what else to happen, and when?

[01:29:02]

Where are things in time and space?

[01:29:03]

When you start thinking about how humans formulate and structure the knowledge that they acquire in their heads, Watson wasn't doing any of that. What do you think are the essential challenges of free-flowing communication, free-flowing dialogue, versus question answering, even with the framework of interpretation?

[01:29:27]

Yep.

[01:29:28]

Do you see free-flowing dialogue as fundamentally more difficult than question answering, even with shared interpretation?

[01:29:39]

So dialogue is important in a number of different ways. I mean, it's a challenge. First of all, when I think about the machine, I think about a machine that understands language and ultimately can reason in an objective way, that can take the information it perceives through language or other means, connect it back to these frameworks, reason, and explain itself.

[01:30:04]

That system ultimately needs to be able to talk to humans, it needs to be able to interact with humans, so in some sense it needs dialogue. That doesn't mean... sometimes people talk about dialogue and they think, how do humans talk to each other in a casual conversation, and can you mimic casual conversations?

[01:30:28]

We're not trying to mimic casual conversations. We're really trying to produce a machine whose goal is to help you think and help you reason about your answers and explain why. So instead of talking to your friend down the street, having a small-talk conversation with your friend on the street, this is more like you would be communicating with the computer on Star Trek: what do you want to think about? What do you want to reason about?

[01:30:53]

I'm going to tell you the information I have. I'm going to summarize it. I'm going to ask you questions; you're going to answer those questions.

[01:30:58]

I'm going to go back and forth with you. I'm going to figure out what your mental model is, relate that to the information I have, and present it to you in a way that you can understand, and we can ask follow-up questions. So it's that type of dialogue that you want to construct. It's more structured, it's more goal oriented, but it needs to be fluid.

[01:31:20]

In other words, it has to be engaging and fluid.

[01:31:25]

It has to be productive and not distracting. In other words, the machine has to have a model of how humans think through things and discuss them.

[01:31:38]

So basically a productive, rich conversation, unlike this podcast. Yes, well, I'd like to think it's more similar to this podcast. I was joking. I'll ask you about humor as well, actually. But what's the hardest part of that? Because it seems we're quite far away as a community from that. So one is having a shared understanding, and I think a lot of the stuff you said with frameworks is quite brilliant, but just creating a smooth discourse...

[01:32:18]

Yeah, it feels clunky right now. Which aspect of this whole problem that you specified, of having a productive conversation, is the hardest? Or maybe any aspect of it you can comment on, because it's so shrouded in mystery.

[01:32:37]

So I think to do this, you kind of have to be creative, in the following sense. If I were to do this as a purely machine learning approach, and someone said, learn how to have a good, flowing, structured knowledge acquisition conversation,

[01:32:54]

I'd go out and say, OK, I have to collect a bunch of data of people doing that: people reasoning well, having a good structured conversation that both acquires knowledge efficiently and produces answers and explanations as part of the process.

[01:33:10]

And I'd struggle. I don't know where to go to collect that data, because I don't know how much data out there is like that.

[01:33:20]

OK, so there's a humorous commentary right there on the lack of such discourse.

[01:33:24]

But also, even if it's out there, say it was out there, how do you know? How do you identify successful examples?

[01:33:33]

So I think with any problem like this, you don't have enough data to represent the phenomenon you want to learn over.

[01:33:42]

If you have enough data, you could potentially learn the pattern, but an example like this is hard to do; this is sort of a human thing to do. Something that recently came out of

[01:33:51]

IBM was the Debater project. Interesting, right? Because now you do have these structured dialogues, these debate things, where they did use machine learning techniques to generate these debates.

[01:34:05]

Dialogue is a little bit tougher, in my opinion, than generating a structured argument, where you have lots of other structured arguments like it; you can potentially annotate that data and say this is a good response or a bad response in a particular domain. Here,

[01:34:21]

I have to be responsive, and I have to be opportunistic with regard to what the human is saying. So I'm goal oriented in saying I want to solve the problem, I want to acquire the knowledge necessary, but I also have to be opportunistic and responsive to what the human is saying.

[01:34:37]

So I think it's not clear that we could just train on a body of data to do this, but we can bootstrap it. In other words, we can be creative and say, what do we think the structure of a good dialogue is that does this well?

[01:34:52]

And we can start to create that, if we can create it more programmatically, at least to get this process started.

[01:35:00]

And if I can create a tool that now engages humans effectively, I can start both: I can start generating data, I can start the human learning process, and I can update my machine. It also starts the automatic learning process.

[01:35:14]

But I have to understand what features to even learn over. So I have to bootstrap the process a little bit first.

[01:35:21]

And that's a creative design task that I could then use as input into a more automatic learning task. So some creativity and...

[01:35:31]

Yeah, and bootstrapping. All right. What elements of conversation

[01:35:36]

would you like to see? One of the benchmarks for me is humor, right? That seems to be one of the hardest. And to me, the biggest contrast is with Watson.

[01:35:47]

One of the greatest comedy sketches of all time is the SNL Celebrity Jeopardy, with Alex Trebek and Sean Connery and Burt Reynolds and so on, with Sean Connery commentating on Alex Trebek a lot.

[01:36:04]

And I think all of them are in the negative points, so they're clearly all losing in terms of the game of Jeopardy, but they're winning in terms of comedy.

[01:36:14]

So what do you think about humor in this whole interaction, in dialogue that's productive, or even just... What humor represents to me is

[01:36:28]

the same idea that you're saying about a framework, because humor only exists within a particular human framework. So what do you think about humor? What do you think about things like humor that connect to the kind of creativity you mentioned is needed?

[01:36:41]

I think there's a couple of things going on there.

[01:36:42]

So I sort of feel like, and I might be too optimistic this way, but we did a little bit of this with puns in Jeopardy. We literally sat down and said, well, you know how puns work.

[01:36:59]

It's wordplay, and you could formalize these things. So I think there are a lot of aspects of humor that you could formalize. You could also learn humor; you could just see what people laugh at.

[01:37:09]

And again, if you have enough data to represent the phenomenon, you might be able to weigh the features and figure out what humans find funny and what they don't find funny. You may or may not be able to explain why they find it funny unless we sit back and think about that more formally. Again, I think you do a combination of both, and I'm always a big proponent of that.

[01:37:30]

I think robust architectures and approaches are always a combination of us reflecting and being creative about how things are structured and how to formalize them, and then taking advantage of large data, doing learning, and figuring out how to combine these two approaches.

[01:37:45]

I think there's another aspect to humor, though, which goes to the idea that I feel like I can relate to the person telling the story.

[01:37:55]

And I think that's an interesting theme in the whole AI space, which is: do I feel differently when I know it's a robot, when I imagine that the robot is not conscious the way I'm conscious, when I imagine the robot does not actually have the experiences that I experience? Do I still find it funny, or, because it's not as relatable,

[01:38:19]

do I not imagine the person relating to it the way I relate to it?

[01:38:24]

You also see this in the arts and in entertainment, where sometimes you have savants who are remarkable at a thing, whether it's sculpture, music, or whatever.

[01:38:36]

But the people who get the most attention are the people who can evoke a similar emotional response, who can get you to emote, right, about it the way they do.

[01:38:48]

In other words, who can basically make the connection from the artifact, from the music or the painting or the sculpture, to the emotion, and get you to share that emotion with them.

[01:38:58]

And that's when it becomes compelling. They're communicating at a whole different level. They're not just communicating the artifact; they're communicating their emotional response to the artifact. And then you feel like, oh, wow, I can relate to that person, I can connect to that person. So I think humor has that aspect as well.

[01:39:16]

So the idea that you can connect to that person, the person being the critical thing. But we're also able to anthropomorphize objects, robots, and AI systems pretty well. So we're almost looking to make them human. Maybe from your experience with Watson you can comment on that. Did you consider that as part of it? Obviously, the problem of Jeopardy doesn't require anthropomorphization, but nevertheless there was some interest in doing that.

[01:39:48]

And that's another thing I didn't want to do. I didn't want to distract from the actual scientific task.

[01:39:55]

But you're absolutely right. I mean, humans do anthropomorphize, and without necessarily a lot of work.

[01:40:02]

I mean, you just put some eyes and a couple of eyebrow movements on something, and you're getting humans to react emotionally. And I think you can do that.

[01:40:09]

So I didn't mean to suggest that that connection cannot be mimicked.

[01:40:16]

I think that connection can be mimicked and can produce that emotional response. I just wonder, though, if you're told what's really going on, if you know that the machine is not conscious, not having the same richness of emotional reactions and not really sharing the understanding, but essentially just moving its eyebrows, drooping its eyes, or making them big or whatever it's doing to get the emotional response, will you still feel it?

[01:40:47]

Interesting. I think you probably would for a while.

[01:40:50]

And then, when it becomes more important that there's a deeper shared understanding, it may fall flat. But I don't know.

[01:40:56]

I'm pretty confident that for the majority of the world, even if you tell them, it won't matter, especially if the machine herself says that she is conscious. That's very possible.

[01:41:12]

So if you, the scientist who made the machine, say that this is how the algorithm works, everybody will just assume you're lying and that there's a conscious being there. You're deep into the science fiction genre now. But I think it's actually psychology. I think it's not science fiction; I think it's reality.

[01:41:31]

And I think it's a really powerful one that we'll have to be exploring in the next few decades. It's a very interesting element of intelligence. So what do you think...

[01:41:41]

We've talked about the social constructs of intelligence, and the frameworks, and the way humans kind of interpret information. What do you think is a good test of intelligence, in your view? So there's Alan Turing with the Turing test.

[01:41:57]

Watson accomplished something very impressive with Jeopardy. What do you think is the test that would impress the heck out of you, that if you saw a computer could do it, you would say this is crossing a kind of threshold that gives me pause, in a good way? My expectations for AI are generally high. What does high look like, by the way? So not just the test as a threshold: what do you think is the destination? What do you think is the ceiling?

[01:42:32]

I think machines, in many measures, will be better than us, will become more effective, in other words, better predictors about a lot of things, than ultimately we can be. I think where they're going to struggle is what we talked about before, which is

[01:42:50]

relating to, communicating with, and understanding humans in deeper ways. And so I think that's a key point. Like, we can create the super parrot. What I mean by the super parrot is: given enough data, a machine can mimic your emotional response, can even generate language that will sound smart, like what someone else might say under similar circumstances. I would pause on that; that's a superpower, right? So given similar circumstances, it moves its face in similar ways, changes its tone of voice in similar ways, produces strings of language similar to what a human might say, without necessarily being able to produce a logical interpretation or understanding that would ultimately satisfy a critical interrogation.

[01:43:43]

I think you just described me in a nutshell. And I think, philosophically speaking, you could argue that that's all we're doing as human beings too.

[01:43:53]

So I was going to say, it's very possible for humans to behave that way too. And so upon deeper probing and deeper interrogation, you may find out that there isn't a shared understanding, because I think humans do both: humans are statistical language model machines and they are capable reasoners.

[01:44:13]

You know, they're both, and you don't know which is going on, right? So I think it's an interesting problem.

[01:44:25]

We talked earlier about where we are in our social and political landscape. Can you distinguish someone who can string words together and sound like they know what they're talking about from someone who actually does?

[01:44:40]

Can you do that without dialogue, without interpretive or probing dialogue? It's interesting, because humans are really good at, in their own minds, justifying or explaining what they hear, because they project their understanding onto yours. So you could put together a string of words, and someone will sit there and interpret it in a way that's extremely biased toward the way they want to interpret it: if they want to assume you're an idiot, they'll interpret it one way; if they want to assume you're a genius, they'll interpret it another way that suits their needs.

[01:45:11]

So this is tricky business.

[01:45:14]

So I think, to answer your question, as AI gets better and better at mimicking, as we create these super parrots,

[01:45:22]

we're challenged just as we are challenged with humans:

[01:45:25]

do you really know what you're talking about? Do you have a meaningful interpretation, a powerful framework that you can reason over, and can you justify your answers, justify your predictions and your beliefs, why you think they make sense? Can you convince me what the implications are?

[01:45:46]

Can you reason intelligently and make me believe that those are the implications of your prediction, and so forth?

[01:45:57]

So what happens is it becomes reflective.

[01:46:01]

My standard for judging your intelligence depends a lot on mine. But you're saying there should be a large group of people with a certain standard of intelligence that would be convinced by this particular AI system? Then there should be. I think, depending on the content, one of the problems we have there is that if that large community of people are not judging it with regard to a rigorous standard of objective logic and reason, you still have a problem; masses of people can be persuaded.

[01:46:40]

The millennials. Yeah, to turn their brains off. Right. OK, sorry.

[01:46:48]

I have nothing against millennials. So, you were a part of one of the great benchmark challenges in AI history.

[01:46:59]

What do you think about the AlphaZero, OpenAI Five, and AlphaStar accomplishments on video games recently? And also, at least in the case of Go, with AlphaGo and AlphaZero, playing Go was a monumental accomplishment as well. What are your thoughts about that challenge? I think it was a giant landmark. I think it was phenomenal.

[01:47:20]

I mean, it was one of those things nobody thought was going to be easy, particularly because it's hard for humans, hard for humans to learn, hard for humans to excel at. And so it was another measure, a measure of intelligence.

[01:47:37]

It's very cool. It's very interesting what they did. And I loved how they solved the data problem, which is, again, they bootstrapped it and got the machine to play itself to generate enough data to learn from.

[01:47:49]

I think that was brilliant. I think that was great.
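In spirit, and glossing over everything that makes AlphaGo and AlphaZero actually work (search, neural networks, and so on), the self-play data loop he's praising looks roughly like this sketch; the function names and game interface are assumptions for illustration only:

    # Extremely simplified self-play loop: the current model plays games
    # against itself, the resulting positions and outcomes become training
    # data, and the model is updated.

    def self_play_training(model, play_game, update_model, iterations: int):
        for _ in range(iterations):
            games = [play_game(model, model) for _ in range(100)]  # model vs. itself
            training_data = [(position, outcome)
                             for game in games
                             for position, outcome in game]  # assumes each game yields (position, outcome) pairs
            model = update_model(model, training_data)
        return model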

[01:47:51]

And of course, the result speaks for itself.

[01:47:55]

I think it makes us think, again, about, OK, what's intelligence?

[01:47:59]

What aspects of intelligence are important? Can the Go machine help make me a better Go player?

[01:48:05]

Is it an alien intelligence? Am I even capable of... again, if we put it in very simple terms, it found the function, the Go function. Can I even comprehend the Go function? Can I talk about the Go function? Can I conceptualize the Go function, whatever it might be?

[01:48:21]

So one of the interesting ideas of that system is that it plays against itself, right? There's no human in the loop there. So, like you're saying, it could have, by itself, created an alien intelligence pointed toward a goal, like magic.

[01:48:37]

Say you're a judge and you're sentencing people, or you're setting policy, or you're making medical decisions, and you can't explain, you can't get anybody to understand, what you're doing or why.

[01:48:53]

So it's an interesting dilemma for the applications of AI. Do we hold AI to this accountability that says humans have to be willing to take responsibility for the decision? In other words, can you explain why you would do that thing? Will you get up and speak to other humans and convince them that this was a smart decision? Is the AI enabling you to do that?

[01:49:23]

Can you get behind the logic of the decision that was made there? Do you think,

[01:49:28]

sorry to linger on this point, because it's a fascinating one and it's a great goal for AI,

[01:49:33]

do you think it's achievable in many cases? Or, OK, there are two possible worlds that we could have in the future.

[01:49:42]

One is where AI systems do, like, medical diagnosis or things like that, or drive a car, without ever explaining to you why they fail when they do.

[01:49:54]

That's one possible world, and we're OK with it. Or the other, where we are not OK with it, and we really hold back the technology from getting too good before it's able to explain. Which of those worlds is more likely, do you think, and which is more concerning to you,

[01:50:09]

or not?

[01:50:09]

I think the reality is it's going to be a mix. You know, I'm not sure I have a problem with that.

[01:50:13]

I mean, I think there are tasks we're perfectly fine with machines doing, where machines show a certain level of performance and that level of performance is already better than humans.

[01:50:24]

So, for example, take driverless cars. Say driverless cars learn how to be more effective drivers than humans but can't explain what they're doing. But, bottom line, statistically speaking, they're ten times safer than humans.

[01:50:38]

I don't know that I care.

[01:50:41]

I think when we have these edge cases, when something bad happens and we want to decide who's liable for that thing, who made that mistake, and what we do about it, those are interesting cases. Now, do we go to the designers of the AI, and the AI designer says, I don't know, that's what it learned to do, and someone says, well, you didn't train it properly?

[01:50:59]

You know, you were negligent in the training data that you gave that machine. Like, how do we figure out the liability?

[01:51:05]

So I think those are I think those are interesting questions.

[01:51:08]

And so the optimization problem there, say, is to create a system that's able to explain the lawyers away.

[01:51:16]

There you go. I think that's going to be interesting. I mean, I think this is where technology and social discourse are going to get deeply intertwined, in how we start thinking about decisions and problems like that.

[01:51:29]

I think in other cases it becomes more obvious. You know, it's like, why did you decide to give that person a longer sentence, or to deny them parole?

[01:51:43]

Again, policy decisions, or why did you pick that treatment? Like, that treatment ended up killing that guy; why was that a reasonable choice to make?

[01:51:51]

So people are going to demand explanations. Now, there's a reality here, though, and the reality is that I'm not sure humans are making reasonable choices when they do these things. They are using statistical hunches, biases, or even systematically using statistical averages to make calls. That's what happened with my dad, I don't know if you've seen me talk about that, but...

[01:52:19]

You know, they decided that my father was brain dead. He had gone into cardiac arrest, and it took a long time for the ambulance to get there, and he was not resuscitated right away, and so forth. And they came and told me he was brain dead. And why was he brain dead? Because essentially they gave me a purely statistical argument: under these conditions, with these four features, there's a 90 percent chance he's brain dead.

[01:52:41]

I said, but can you just tell me, not inductively but deductively, go in there and tell me whether his brain is still functioning? Is there a way for you to do that?

[01:52:49]

And the response was that the protocol is just how we make this decision. I said, this is inadequate for me. I understand the statistics, and, I don't know, maybe there's a two percent chance. I just don't know the specifics; I need the specifics of this case.

[01:53:05]

And I wanted the deductive, logical argument for why you actually know he's brain dead. So I wouldn't sign the do-not-resuscitate, and they went through lots of procedures; it's a big, long story. But it was a fascinating story, by the way, about how I reasoned and how the doctors reasoned through this whole process. And, I don't know, somewhere around twenty-four hours later or something, he was sitting up in bed with zero brain damage.

[01:53:30]

What lessons do you draw from that story, that experience? That the data that's being used to make statistical inferences doesn't adequately reflect the phenomenon. So in other words, you're getting shit wrong, I'm sorry, you're getting stuff wrong, because your model is not robust enough.

[01:53:51]

And you might be better off not using statistical inference and statistical averages in certain cases where you know the models are insufficient, and instead reasoning about the specific case more logically and more deductively, and holding yourself responsible and accountable for doing that.
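To make that contrast concrete, here is a minimal, purely illustrative Python sketch, not part of the conversation; the feature names, thresholds, and numbers are hypothetical. It just shows the difference between calling a case from a population-level average and deferring to case-specific evidence when that evidence can be gathered.

```python
from typing import Optional

def statistical_estimate(features: dict) -> float:
    """Population-level estimate from a few coarse features (the 'statistical average')."""
    risk = 0.5
    if features.get("delayed_response"):
        risk += 0.2
    if features.get("no_reflex"):
        risk += 0.2
    return round(min(risk, 0.99), 2)

def case_specific_call(direct_evidence: Optional[dict]) -> str:
    """Prefer deductive, case-specific evidence over the average when it exists."""
    if direct_evidence is None:
        return "insufficient model: go gather case-specific evidence"
    if direct_evidence.get("activity_detected"):
        return "direct evidence contradicts the statistical average"
    return "direct evidence consistent with the statistical average"

case = {"delayed_response": True, "no_reflex": True}
print(statistical_estimate(case))                       # e.g. 0.9, a hunch about the population
print(case_specific_call({"activity_detected": True}))  # the average does not decide this case
```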

[01:54:11]

And perhaps AI has a role here, to say the exact thing we just said: perhaps this is a case where you should think for yourself, where you should reason deductively. But it's hard, because it's hard to know that. You'd have to go back and have enough data to essentially say so, and this goes back to the question of how we decide whether AI is good enough to do a particular task.

[01:54:41]

And regardless of whether or not the AI produces an explanation. And what standard do we hold it to, right?

[01:54:51]

So, you know, if you look more broadly, take my father as a medical case. The medical system ultimately helped him a lot throughout his life; without it, he probably would have died much sooner. So overall it sort of worked for him, in a net-net kind of way. Actually, I don't know if that's fair, maybe not in that particular case, but overall the medical system was doing more good than bad.

[01:55:30]

Now, there's another argument to suggest that that wasn't the case, but for the sake of argument, let's say it was a net positive. And I think you have to take that into consideration. Now, you look at a particular use case, like, for example, making this decision. Have you done enough studies to know how good that prediction really is? And have you done enough studies to compare it, to say, well, what if we dug in in a more direct way? Let's get the evidence, let's do the deductive thing and not use statistics here.

[01:56:05]

How often would that have done better? Right. So you have to do the studies to know how good the AI actually is. And it's complicated, because it depends how fast you have to make the decision. If you have to make decisions super fast, you have no choice.

[01:56:19]

Right? If you have more time, right. But if you're ready to pull the plug, and this was a lot of the argument that I had with the doctor, I said, what's he going to do if you do it my way?

[01:56:29]

What's going to happen to him in that room if you do it my way? He's going to die anyway, so let's do it my way. I mean, it raises questions for our society to struggle with, as in the case with your father, but also in things like race and gender, which start coming into play when judgments are made based on things that are complicated in our society, at least in discourse.

[01:56:56]

And it starts, you know, I think I'm safe to say that most violent crimes are committed by males.

[01:57:05]

So if you discriminate based on that, you know, male versus female, saying that if it's a male, they're more likely to commit the crime? This is one of my very positive and optimistic views of why the study of artificial intelligence, the process of thinking and reasoning logically and statistically and how to combine them, is so important for the discourse today. Because regardless of what state AI is in or not, it's causing this dialogue to happen.

[01:57:38]

This is one of the most important dialogues that, in my view, the human species can have right now, which is how to think well, how to reason well, how to understand our own cognitive biases and what to do about them. That has got to be one of the most important things we as a species can be doing.

[01:58:02]

Honestly, we've created an incredibly complex society. We've created amazing abilities to amplify noise faster than we can amplify signal.

[01:58:15]

We are challenged. We are deeply, deeply challenged. We have big segments of the population getting hit with enormous amounts of information.

[01:58:25]

Do they know how to do critical thinking? Do they know how to objectively reason?

[01:58:30]

Do they understand what they are doing, never mind what their AI is doing? This is such an important dialogue to be having, and, you know, our thinking can be, and easily becomes, fundamentally biased.

[01:58:47]

And there are statistics, and they shouldn't blind us. We shouldn't discard statistical inference, but we should understand the nature of statistical inference. As a society, you know, we decide to reject statistical inference in order to favor the individual, to favor understanding and deciding on the individual. Yes, we consciously make that choice. So even if the statistics said, even if the system said, males are more likely to be violent criminals, we still take each person as an individual and we treat them based on the logic

[01:59:33]

and the knowledge of that situation. We purposefully and intentionally reject the statistical inference. We do that out of respect for the individual. Yeah, and that requires reasoning and careful thinking. Looking forward, what grand challenges would you like to see in the future? Because the Jeopardy challenge captivated the world.

[02:00:01]

AlphaGo and AlphaZero captivated the world. Deep Blue beating Kasparov, Garry's bitterness aside, certainly captivated the world.

[02:00:12]

What do you think are ideas for the next grand challenges, for future challenges like that?

[02:00:17]

You know, look, I mean, I think there are lots of really great ideas for grand challenges. I'm particularly focused on one right now, which is: can you demonstrate that machines understand, that they can read and understand, that they can acquire these frameworks and communicate, you know, reason and communicate with humans? So it's kind of like the Turing test, but it's a little bit more demanding than the Turing test. It's not enough.

[02:00:43]

It's not enough to convince me that you might be human because you can parrot a conversation.

[02:00:51]

I think, you know, the standard is a little bit higher.

[02:00:59]

And I think one of the challenges of devising this grand challenge is that we're not sure.

[02:01:08]

what intelligence is. We're not sure how to determine whether or not two people actually understand each other, and to what depth they understand each other. So the challenge becomes something along the lines of, can you satisfy me

[02:01:28]

That we have a shared understanding.

[02:01:31]

So if I were to probe and probe, and you probe me, can machines really act like thought partners, where they can satisfy me that we have a shared understanding?

[02:01:43]

That our understanding is shared enough that we can collaborate and produce answers together, and that they can help me explain and justify those answers.

[02:01:53]

So maybe here's an idea. We have a system run for president and convince... that's too easy, I'm sorry. It has to convince the voters that they should vote for it.

[02:02:07]

So, I guess... again, that's why I think this is such a challenge, because we go back to emotional persuasion.

[02:02:16]

We go back to, you know, now we're checking off an aspect of human cognition that is in many ways weak or flawed.

[02:02:27]

Right. We're so easily manipulated.

[02:02:30]

Our minds are drawn in, often for the wrong reasons. Right. Not the reasons that ultimately matter to us, but the reasons that can easily persuade us. I think we can be persuaded to believe one thing or another for reasons that ultimately don't serve us well in the long term.

[02:02:49]

And a good benchmark should not play with those elements of emotional manipulation? I don't think so.

[02:02:57]

I think that's where we have to set a higher standard for ourselves of what it means. This goes back to rationality, and it goes back to objective thinking.

[02:03:06]

Can you acquire information and produce reasoned arguments, and do those reasoned arguments pass a certain amount of muster?

[02:03:15]

And can you acquire new knowledge?

[02:03:18]

Can you, for example, reasonably acquire new knowledge? Can you identify where it's consistent or contradictory with other things you've learned? And can you explain that to me and get me to understand it? So another way to think about it, perhaps,

[02:03:37]

is: can a machine teach you? Can it help you? That's a really nice way to put it. Can it help you understand something that you didn't really understand before? You know, again, it's almost like, can it teach you, can it help you learn, and in an arbitrary space, an open-domain space?

[02:04:05]

Can you tell the machine, and again, this borrows from some science fiction, can you go off and learn about this topic that I'd like to understand better,

[02:04:14]

and then work with me to help me understand it? That's quite brilliant. Would a machine that passes that kind of test, do you think, need to have

[02:04:26]

Self-awareness or even consciousness? What do you think about consciousness and the importance of it, maybe in relation to having a body, having a presence, an entity? Do you think that's important?

[02:04:42]

You know, people used to ask if Watson was conscious. And I'd think, conscious of what, exactly? I mean, it depends what it is that you're conscious of. You know, it would be trivial to program it to answer questions about whether or not it was playing Jeopardy. I mean, it could certainly answer questions that imply that it was aware of things.

[02:05:07]

Exactly what does it mean to be aware, and what does it mean to be conscious? It's sort of interesting.

[02:05:10]

I mean, I think that we differ from one another based on what we're conscious of.

[02:05:17]

But wait. Yes, for sure. There's degrees of consciousness.

[02:05:20]

So it's in terms of areas too; it's not just degrees. What are you aware of? What are you not aware of?

[02:05:27]

But nevertheless, there's a very subjective element to our experience. Let me not even talk about consciousness. Let me talk about another topic that to me is really interesting: mortality, the fear of mortality.

[02:05:42]

Watson, as far as I could tell, did not have a fear of death, certainly not the way most humans do. It wasn't conscious of it, right?

[02:05:56]

So there's an element of finiteness to our existence that, I think, like we mentioned with survival, adds to the whole thing. I mean, consciousness is tied up with that: we are a thing, a subjective thing that ends, and that seems to add a color and flavor to our motivations in a way that seems fundamentally important for intelligence, or at least the kind of intelligence humans have.

[02:06:24]

Well, I think for generating goals. Again, I think you could have an intelligence capability, a capability to learn, a capability to predict. But I think without... I mean, essentially without the goal to survive... So you think you can just encode that?

[02:06:47]

I mean, you could create a robot now, and you could, you know, plug it in and say, protect your power source, and give it some capabilities, and it'll sit there and operate to try to protect its power source and survive.

[02:06:59]

So I don't know that that's philosophically a hard thing to demonstrate. It sounds like a fairly easy thing to demonstrate that you can give it that.

[02:07:06]

That goal. Will it come up with that goal by itself, or do you have to program that goal in?

[02:07:10]

But there's something... because I think, as you touched on, intelligence is kind of like a social construct. The fact that a robot would be protecting its power source would add depth and grounding to its intelligence, in terms of us being able to respect it. I mean, ultimately it boils down to us acknowledging that it's intelligent, and the fact that it can die, I think, is an important part of that.

[02:07:42]

The interesting thing to reflect on is how trivial that would be. And I don't think, if you knew how trivial that was, you would associate it with being intelligent. I mean, I could literally put in a statement of code that says, you know, you have the following actions you can take.

[02:07:56]

You give it a bunch of actions, like maybe you mount a laser gun on it, or, you know, it has the ability to scream or screech or whatever. And you say, you know, if you see your power source threatened, you're going to take these actions to protect it. You could program that in, teach it on a bunch of things. And now you can look at that and say, well, you know, that's intelligence, because it's protecting its power source. Maybe, but that's again this human bias that says: I identify my intelligence and my consciousness so fundamentally with the desire, or at least the behaviors associated with the desire, to survive, that
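To underline how trivial that programmed-in "survival" behavior would be, here is a minimal sketch, not from the conversation; the sensor names, thresholds, and available actions are all hypothetical stand-ins for whatever capabilities the robot happens to be given.

```python
# Minimal sketch of a hand-coded "survival" rule.
# Sensor names, actions, and thresholds are hypothetical.

def protect_power_source(sensor_reading: dict) -> str:
    """If the power source looks threatened, pick a protective action."""
    threat_level = sensor_reading.get("threat_to_power_source", 0.0)
    battery = sensor_reading.get("battery_level", 1.0)
    if threat_level > 0.7:
        return "shield_power_source"   # highest-priority protective action
    if threat_level > 0.3 or battery < 0.2:
        return "back_away"             # avoid the threat, conserve power
    return "idle"                      # nothing threatening: do nothing

# The robot now "acts to survive," but the goal was simply written in by hand.
print(protect_power_source({"threat_to_power_source": 0.8, "battery_level": 0.9}))
```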

[02:08:37]

If I see another thing doing that, I'm going to assume it's intelligent.

[02:08:43]

What timeline, what year, will society have something that you would be comfortable calling an artificial general intelligence system? What's your intuition? Nobody can predict the future, certainly not the next few months or twenty years away, but what's your intuition? How far away are we? It is hard to make these predictions. I mean, I would be guessing, and there are so many different variables, including just how much we want to invest in it and how important we think it is.

[02:09:19]

What kind of investment we're willing to make in it, what kind of talent we end up bringing to the table, you know, the incentive structure, all these things. So I think it is possible to do this sort of thing. Trying to sort of ignore many of the variables and things like that, is it a ten-year thing? A twenty- or thirty-year thing? Probably closer to twenty or something, I guess, but not this year.

[02:09:45]

No, and I don't think it's several hundred years. But again, so much depends on how committed we are to investing in and incentivizing this type of work.

[02:09:59]

And it's sort of interesting.

[02:10:01]

Like, I don't think it's obvious how incentivized we are. I think from a task perspective, you know, if we see business opportunities to take this technique or that technique to solve that problem, I think that's the main driver for many of these things. From a general intelligence perspective,

[02:10:21]

it's kind of an interesting question: are we really motivated to do that? And, like, we just struggled ourselves right now to even define what it is.

[02:10:31]

So it's hard to incentivize when we don't even know what it is we're incentivized to create. And if you said mimicking human intelligence, I just think there are so many challenges with the significance and meaning of that, that there's no clear directive to do precisely that thing.

[02:10:48]

So, assistance in a larger and larger number of tasks?

[02:10:52]

So a system that's able to operate my microwave and make a grilled cheese sandwich, I don't even know how to make one of those, and then the same system would be doing the vacuum cleaning, and then the same system would be teaching my kids, that I don't have, math.

[02:11:12]

I think that when you get into a general intelligence for learning physical tasks...

[02:11:20]

And again, I want to go back to your body question, because your body question was interesting, but you're asking about learning abilities for physical tasks. I imagine in that time frame we will get better and better at learning these kinds of tasks, whether it's mowing the lawn or driving a car or whatever it is. In other words, learning how to make predictions over large bodies of data,

[02:11:43]

I think we're going to continue to get better and better at that.

[02:11:46]

And machines will outpace humans in a variety of those things.

[02:11:51]

The underlying mechanisms for doing that may be the same, meaning that maybe these are deep nets, there's infrastructure to train them, reusable components to apply them to different classes of tasks, and we get better and better at building these kinds of machines.
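A minimal, purely illustrative sketch of that idea of "reusable training infrastructure applied to different classes of tasks," written in a PyTorch style; nothing here is from the conversation, and the toy models, random data, and task choices are hypothetical.

```python
# Hypothetical sketch: one reusable training loop applied to different task classes.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def train(model: nn.Module, data: DataLoader, loss_fn, epochs: int = 3) -> nn.Module:
    """Generic training infrastructure: the same loop works for any task."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x, y in data:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model

# Task A: a toy regression problem (e.g., predicting a control signal).
reg_data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randn(64, 1)), batch_size=16)
reg_model = train(nn.Linear(8, 1), reg_data, nn.MSELoss())

# Task B: a toy classification problem (e.g., recognizing an object class).
cls_data = DataLoader(TensorDataset(torch.randn(64, 8), torch.randint(0, 3, (64,))), batch_size=16)
cls_model = train(nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3)),
                  cls_data, nn.CrossEntropyLoss())
```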

[02:12:10]

You could argue that the general learning infrastructure in there is a form of a general type of intelligence. I think what starts getting harder is this notion of,

[02:12:22]

you know, can we effectively communicate and understand and build that shared understanding? Because of the layers of interpretation that are required to do that, and the need for the machine to be engaged with humans at that level on a continuous basis.

[02:12:36]

So how do you get the machine in the game? How do you get the machine in the intellectual game?

[02:12:43]

Yeah, and to solve AGI, you probably have to solve that problem. You have to get the machine... so it's a little bit of a bootstrapping thing. Can we get the machine engaged in the intellectual game, in the intellectual dialogue with humans? And are humans sufficiently in intellectual dialogue with each other to generate enough data in this context?

[02:13:05]

And how do you bootstrap that? Because every one of those conversations, those intelligent interactions, requires so much prior knowledge that it's a challenge to bootstrap it.

[02:13:17]

So the question is how committed we are. I think that's possible. But then I go back to: are we incentivized to do that? I know we're incentivized to do the former. Are we incentivized to do the latter significantly enough? Do people understand what the latter really is? Well, another part of the Elemental Cognition mission is to try to articulate that better and better, you know, through demonstrations, and to try to craft these grand challenges and get people to say, look, this is a class of intelligence.

[02:13:46]

This is a class of AI. Do we want this? What is the potential of this? What's the business potential and the societal potential of that? And, you know, to build up that incentive system around it.

[02:14:01]

Yeah, I think if people don't understand it yet, they will. I think there's a huge business potential here, so it's exciting that you're working on it.

[02:14:09]

You've kind of skipped over this, but I'm a huge fan of physical presence in things. Do you think,

[02:14:17]

you know, what's the role of a body? Do you think having a body adds to the interactive element between the AI system and a human, or just in general to intelligence? So I think, going back to that shared understanding: humans are very connected to their bodies. I mean, one of the challenges in getting an AI to be a compatible human intelligence is that our physical bodies are generating a lot of features that make up the input.

[02:14:54]

So in other words, our bodies are the tool we use to effect output, but they also generate a lot of input for our brains. We generate emotions, we generate all these feelings, we generate all these signals that machines don't have.

[02:15:09]

So machines don't have that input data, and they don't have the feedback that says, OK, I've gotten this emotion or I've gotten this idea, I now want to process it, and it then affects me as a physical being, and I can play that out.

[02:15:28]

In other words, I can realize the implications, again, on my mind-body complex.

[02:15:33]

I then process that, and the implications again generate internal features.

[02:15:38]

I learn from them.

[02:15:39]

They have an effect on my mind-body complex. So it's interesting when we think, do we want a human intelligence? Well, if we want a human-compatible intelligence, probably the best thing to do is embed it in a human body.

[02:15:53]

Just to clarify, both concepts are beautiful. Do you mean humanoid robots,

[02:15:58]

so robots that look like humans, that's one. Or did you mean, actually, sort of what Elon Musk is working on with Neuralink, really embedding intelligence systems to ride along in human bodies? No, riding along is different.

[02:16:18]

I meant, like, if you want to create an intelligence that is human-compatible, meaning that it can learn and develop a shared understanding of the world around it, you have to give it a lot of the same substrate. Part of that substrate is that it generates these kinds of internal features, sort of emotional stuff, and has similar senses. It has to do a lot of the same things with those same sensors, right?

[02:16:44]

So I think if you want that... again, I don't know that you want that. Like, that's not my specific goal. I think it's a fascinating scientific goal, and it has all kinds of other applications, but that's sort of not my goal. I want to create, I think of it as, intellectual thought partners for humans, so that kind of intelligence. I know there are other companies that are creating physical thought partners, physical partners for you, but that's kind of not where I'm at.

[02:17:12]

But the important point is that a big part of what we process

[02:17:20]

is that physical experience of the world around us. On the point of thought partners, what role does an emotional connection, or, forgive me, love, have to play in that thought partnership? Is that something you're interested in? Put another way, sort of having a deep connection,

[02:17:43]

beyond the intellectual, with the AI, between the human and the AI. Is that something that gets in the way of the rational discourse, or something that's useful?

[02:17:55]

I worry about biases, you know, obviously. In other words, if you develop an emotional relationship with a machine, all of a sudden are you more likely to believe what it's saying, even if it doesn't make any sense?

[02:18:06]

So, you know, I worry about that.

[02:18:09]

But at the same time, I think the opportunity to use machines to provide human companionship is actually not crazy.

[02:18:15]

And to an intellectual and social companionship, it's not a crazy idea.

[02:18:22]

Do you have concerns, as a few people do, Elon Musk, Sam Harris, about long-term existential threats of AI, and perhaps short-term threats of AI? We talked about bias, we talked about nefarious uses. But do you have concerns about thought partners, systems that are able to help us make decisions together with humans, somehow having a significant negative impact on society in the long term?

[02:18:50]

I think there are things to worry about. I think giving machines too much leverage is a problem. And what I mean by leverage is too much control over things that can hurt us, whether it's socially, psychologically, intellectually, or physically. And if you give machines too much control, I think that's a concern. Forget about the AI; just when you give them too much control, human bad actors can hack them and produce havoc.

[02:19:21]

So, you know, that's a problem. You can imagine hackers taking over the driverless car network and, you know, creating all kinds of havoc. But you could also imagine, given the ease with which humans can be persuaded one way or the other,

[02:19:39]

that now we have algorithms that can easily take control over that process and amplify noise and move people in one direction or another. I mean, humans do that to other humans all the time.

[02:19:50]

And we have marketing campaigns.

[02:19:51]

We have political campaigns that take advantage of our emotions or our fears, and this is done all the time. But machines are like giant megaphones, right? We can amplify this by orders of magnitude and fine-tune its control, so we can tailor the message. We can now very rapidly and efficiently tailor the message to the audience, taking advantage of their biases, amplifying them, and using them to persuade people in one direction or another, in ways that are not fair, not logical, not objective, not meaningful.

[02:20:30]

And machines empower that.

[02:20:33]

So that's what I mean by leverage. Like, it's not new, but wow, it's powerful, because machines can do it more effectively and more quickly, and we see that already going on in social media and in other places.

[02:20:48]

That's scary. And that's why I go back to saying one of the most important public dialogues we could be having is about the nature of intelligence and the nature of inference, logic, reason, and rationality, and us understanding our own cognitive biases and how they work, and then how machines work, and how we use them to complement each other, basically, so that in the end we have a stronger overall system. That's just incredibly important.

[02:21:29]

I don't think most people understand that. So it's like telling your kids or telling your students, this goes back to cognition: here's how your brain works, here's how easy it is to trick your brain.

[02:21:44]

There are fundamental cognitive biases, and you should appreciate the different types of thinking, how they work, what you're prone to, and, you know, what you prefer, and under what conditions this makes sense versus that makes sense.

[02:21:59]

And then say, here's what AI can do, here's how it can make this worse, and here's how it can make this better. And that's where the AI has a role, to reveal that trade-off. So if you imagine a system that is able to pass, beyond any definition of the Turing test, the benchmark, really an AGI system, a thought partner that you one day will create: what question, what topic of discussion, if you get to pick one, would you have with that system?

[02:22:40]

What would you ask? And you get to find out the truth together. So you threw me a little bit with finding the truth at the end, because the truth is a whole other topic.

[02:22:57]

But I think the beauty of it, what excites me, is that if I really have that system, I don't have to pick. In other words, I can go to it and say, this is what I care about today.

[02:23:10]

And that's what we mean by this general capability: go out and read this stuff in the next three milliseconds, and I want to talk to you about it. I want to draw analogies, I want to understand how this affects this decision or that decision. What if this were true? What if that were true? What knowledge should I be aware of that could impact my decision? Here's what I'm thinking is the main implication. Can you prove that?

[02:23:37]

Can you give me the evidence that supports that? Can you give me the evidence that supports this other thing? Boy, would that be incredible. Would that be just incredible?

[02:23:44]

Just a long discourse, just to be part of it, whether it's a medical diagnosis, or, you know, the various treatment options, or a legal case, or a social problem that people are discussing: to be part of the dialogue, one that holds itself and us accountable to reasoned and objective dialogue.

[02:24:08]

You know, I just get goosebumps talking about it.

[02:24:10]

It's like this is what I want.

[02:24:13]

So when you create it, please come back on the podcast, and we can have a discussion together and make it even longer. This is a record for the longest conversation ever. It was an honor, it was a pleasure. David, thank you so much. Thanks so much, it was a lot of fun.