[00:00:00]

The following is a conversation with Gary Marcus. He is a professor emeritus at NYU, founder of Robust A.I. and Geometric Intelligence. The latter is a machine learning company that was acquired by Uber in 2016. He's the author of several books on natural and artificial intelligence, including his new book, Rebooting A.I.: Building Machines We Can Trust. Gary has been a critical voice highlighting the limits of deep learning and A.I. in general, and discussing the challenges before our A.I. community that must be solved in order to achieve artificial general intelligence.

[00:00:38]

As I'm having these conversations, I try to find paths toward insight, towards new ideas. I try to have no ego in the process; it gets in the way. I'll often continuously try on several hats, several roles. One, for example, is the role of a three year old who understands very little about anything and asks big what and why questions. The other might be the role of a devil's advocate who presents counter ideas with the goal of arriving at greater understanding through debate.

[00:01:08]

Hopefully, both are useful, interesting and even entertaining at times. I ask for your patience as I learn to have better conversations. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe, give it five stars on iTunes, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. And now, here's my conversation with Gary Marcus.

[00:01:53]

Do you think human civilization will one day have to face a technological singularity that will, in a societal way, modify our place in the food chain of intelligent living beings on this planet?

[00:02:06]

I think our place in the food chain has already changed.

[00:02:11]

So there are lots of things people used to do by hand that they now do with machines. If you think of a singularity as one single moment, which is, I guess, what the word suggests, I don't know if it'll be like that. But I think that there's a lot of gradual change, and A.I. is getting better and better.

[00:02:25]

I mean, I'm here to tell you why I think it's not nearly as good as people think. But, you know, the overall trend is clear. Maybe Ray Kurzweil thinks it's exponential and I think it's linear; in some cases it's close to zero right now. But it's all going to happen. I mean, we are going to get to human-level intelligence, or whatever you want to call it, artificial general intelligence, at some point.

[00:02:46]

And that's certainly going to change our place in the food chain, because a lot of the tedious things that we do now, we're going to have machines do, and a lot of the dangerous things that we do now, we're going to have machines do, too. And I think our whole lives are going to change, from people finding their meaning through their work to people finding their meaning through creative expression.

[00:03:05]

So the singularity, in fact, in a way that removes the meaning of the word singularity, will be a very gradual transformation, in your view?

[00:03:17]

I think that it'll be somewhere in between. And I guess it depends what you mean by gradual and sudden. I don't think it'll be one day. I think it's important to realize that intelligence is a multidimensional variable.

[00:03:28]

So, you know, people sort of write this stuff as if, like, IQ were one number, and, you know, the day that you hit two hundred and sixty two or whatever, you displace the human beings. And really, there are lots of facets to intelligence. So there's verbal intelligence and there's motor intelligence and there's mathematical intelligence and so forth.

[00:03:48]

Machines, in their mathematical intelligence, far exceed most people already. In their ability to play games,

[00:03:54]

they far exceed most people already. In their ability to understand language, they lag behind my five year old, far behind my five year old. So there are some facets of intelligence that machines have grasped and some that they haven't. And, you know, we have a lot of work left to do to get them to, say, understand natural language or to understand how to flexibly approach some, you know, kind of novel MacGyver problem-solving kind of situation.

[00:04:19]

And I don't know that all of these things will come at once. I think there are certain vital prerequisites that we're missing now. So, for example, machines don't really have common sense now. They don't understand that bottles contain water and that people drink water to quench their thirst and that they don't want to dehydrate. They don't know these basic facts about human beings, and I think that that's a rate-limiting step for many things. It's a rate-limiting step for reading, for example, because stories depend on things like, oh my God, that person's running out of water.

[00:04:48]

That's why they did this thing. Or, you know, if only they had water, they could put out the fire. So, you know, you watch a movie and your knowledge about how things work matters, and a computer can't understand that movie if it doesn't have that background knowledge.

[00:05:02]

Same thing if you read a book. And so there are lots of places where, if we had a good machine-interpretable set of common sense, many things would accelerate relatively quickly. But I don't think even that is like a single point. There are many different aspects of knowledge, and we might, for example, find that we make a lot of progress on physical reasoning, getting machines to understand, for example, how keys fit into locks, or, you know, that kind of stuff, or how this gadget here works and so forth.

[00:05:33]

And so machines might do that long before they do really good psychological reasoning, because it's easier to get kind of labeled data, or to do direct experimentation, on a microphone stand than it is to do direct experimentation on human beings to understand the levers that guide them.

[00:05:51]

That's a really interesting point, actually, whether it's easier to gain common sense knowledge or psychological knowledge. I would say that common sense knowledge includes both physical knowledge and psychological knowledge.

[00:06:03]

And the argument I was making... Physical versus psychological? Physical versus psychological. And the argument I was making is that physical knowledge might be more accessible, because you could have a robot, for example, lift a bottle, try putting a bottle cap on it, see that, you know, it falls off if it does this, and see that it could turn it upside down. So the robot could do some experimentation. We do some of our psychological reasoning by looking at our own minds.

[00:06:25]

So I can sort of guess how you might react to something based on how I think I would react to it, and robots don't have that intuition. They also can't do experiments on people in the same way; we'd probably shut them down.

[00:06:37]

So, you know, if we wanted to have robots figure out how I respond to pain by pinching me in different ways, like, that's probably not going to make it past the human subjects board, and, you know, companies are going to get sued or whatever.

[00:06:49]

So, like, there are certain kinds of practical experience that are off limits to robots. And that's a really interesting point. What is more difficult to gain a grounding in? Because, to play devil's advocate, I would say that, you know, human behavior is easily expressed in data, in digital form.

[00:07:13]

And so when you look at Facebook algorithms, they get to observe human behavior. So you get to study, and even manipulate, human behavior in a way that you perhaps cannot study or manipulate the physical world.

[00:07:26]

So it's true, what you said about pain is about physical pain, but that's, again, the physical world.

[00:07:32]

Emotional pain might be much easier to experiment with, perhaps unethical, but nevertheless, some would argue it's already going on.

[00:07:41]

I think that you're right, for example, that Facebook does a lot of experimentation in psychological reasoning.

[00:07:49]

In fact, Zuckerberg talked about A.I. at a talk that he gave at NIPS. I wasn't there.

[00:07:55]

The conference has since been renamed NeurIPS, but it was called NIPS when he gave the talk, and he talked about Facebook basically having a gigantic theory of mind. So I think it is certainly possible. I mean, Facebook does some of that. I think they have a really good idea of how to addict people to things. They understand what draws people back to things. I think they exploit it in ways that I'm not very comfortable with. But even so, I think that there are only some slices of human experience that they can access through the kind of interface they have.

[00:08:23]

And of course, they're doing all kinds of VR stuff and maybe that'll change and they'll expand their data. And, you know, I'm sure that that's part of their goal. So it is an interesting question.

[00:08:33]

I think love, fear, insecurity, all the things that I would say are some of the deepest things about human nature and the human mind, could be explored

[00:08:45]

in digital form. You're actually the first person, just now, that brought this up. I wonder what is more difficult, because I think the folks who are, and we'll talk a lot about deep learning, but the people who are thinking beyond deep learning, are thinking about the physical world.

[00:09:03]

You're starting to think about robotics in the home. How do we make robots manipulate objects? That requires an understanding of the physical world, that requires common sense reasoning, and that has felt like the next step for common sense reasoning. But you've now brought up the idea that there's also the emotional part, and it's interesting whether that's hard or easy.

[00:09:23]

I think some parts of it are and some aren't. So, my company that I recently founded with Rod Brooks, who was at MIT for many years, and so forth, we're interested in both.

[00:09:33]

We're interested in physical reasoning and psychological reasoning, among many other things. And, you know, there are pieces of each of these that are accessible. So if you want a robot to figure out whether it can fit under a table, that's a relatively accessible piece of physical reasoning. You know, if you know the height of the table and you know the height of the robot, it's not that hard. If you wanted to do physical reasoning about Jenga, it gets a little bit more complicated, and you have to have, you know, higher resolution data in order to do it. With psychological reasoning,

[00:10:03]

it's not that hard to know, for example, that people have goals and that they like to act on those goals, but it's really hard to know exactly what those goals are. Or ideas of frustration.

[00:10:13]

I mean, you could argue it's extremely difficult to understand the sources of human frustration as they're playing Jenga with you. Or not.

[00:10:22]

Yeah, or that is very accessible. There are some things that are going to be obvious and some not.

[00:10:27]

So, like, I don't think anybody really can do this well yet, but I think it's not inconceivable to imagine machines in the not so distant future being able to understand that when people lose a game, they don't like that.

[00:10:42]

Right. Right. You know, that's not such a hard thing to program, and it's pretty consistent across people. Most people don't enjoy losing, and so, you know, that makes it relatively easy to code. On the other hand, if you wanted to capture everything about frustration, well, people get frustrated for a lot of different reasons. They might get sexually frustrated, they might get frustrated because they can't get their promotion at work, all kinds of different things.

[00:11:03]

And the more you expand the scope, the harder it is for anything like the existing techniques to really do that.

[00:11:09]

So I'm talking to Garry Kasparov next week, and he seemed pretty frustrated with his game against Deep Blue. So, yeah, well, I'm frustrated with my game against him last year, because I played him. I had two excuses; I'll give you my excuses up front, but they won't mitigate the outcome. I was jet lagged, and I hadn't played in 25 or 30 years. But the outcome is, he completely destroyed me, and it wasn't even close.

[00:11:30]

Have you ever been beaten in any board game by a machine? I have. I actually played the predecessor to Deep Blue, Deep Thought, I believe it was called, and that, too, crushed me.

[00:11:48]

And that was it. After that, you realize it's over for us. There's no point in my playing Deep Blue.

[00:11:53]

I mean, it would be a waste of Deep Blue's computation. I mean, I played Kasparov because we both gave lectures at the same event, and he was playing 30 people. I forgot to mention that not only did he crush me, but he crushed the 29 other people at the same time.

[00:12:07]

I mean, but the actual philosophical and emotional experience of being beaten by a machine, I imagine, to you, who thinks about these things, may be a profound experience.

[00:12:20]

No, it was a simple mathematical experience. Yeah, I think with a game like chess, particularly, where, you know, you have perfect information, it's two-player, closed-ended, and there's more computation for the computer, it's no surprise the machine wins. I mean, I'm not sad when a computer calculates a cube root faster than me. Like, I know I can't win that game. I'm not going to try.

[00:12:45]

Well, with a system like AlphaGo or AlphaZero, do you see a little bit more magic in a system like that, even though it's simply playing a board game, because there's a strong learning component?

[00:12:56]

You know, funny you should mention that in the context of this conversation, because Kasparov and I are working on an article that's going to be called A.I. Is Not Magic. And neither one of us thinks that it's magic, and part of the point of this article is that A.I. is actually a grab bag of different techniques, and they each have their own unique strengths and weaknesses.

[00:13:16]

So, you know, you read media accounts and it's like, oh, A.I., it must be magical, or it can solve any problem. Well, no, some problems are really accessible, like chess and Go.

[00:13:27]

Other problems, like reading, are completely outside the current technology. And it's not like you can take the technology that drives AlphaGo and apply it to reading and get anywhere. You know, DeepMind has tried that a bit. They have all kinds of resources. You know, they built AlphaGo, and, you know, I wrote a piece recently about their losses; you can argue about the word loss, but they spent five hundred thirty million dollars more than they made last year.

[00:13:51]

So they're making huge investments, a very large budget, and they have applied the same kinds of techniques to reading, or to language, and it's just much less productive there, because it's a fundamentally different kind of problem. Chess and Go and so forth are closed-end problems. The rules haven't changed in 2500 years. There are only so many moves you can make. You can talk about the exponential number of combinations of moves, but fundamentally the Go board has 361 intersections.

[00:14:17]

That's it. Those intersections are the only places that you can place your stone. Whereas when you're reading, the next sentence could be anything. You know, it's completely up to the writer what they're going to do next.

[00:14:31]

That's fascinating.

[00:14:31]

You're clearly a brilliant mind who points out that the emperor has no clothes, but I'll play the role of a person who says, you know, put clothes on the emperor, who romanticizes the notion of the emperor, with the suggestion that the clothes don't even matter. OK, so.

[00:14:49]

It's really interesting that you're talking about language. So there's the physical world, being able to move about the world, making an omelet and coffee and so on. There's language, where you first understand what's being written, and then, maybe even more complicated than that, having a natural dialogue.

[00:15:08]

And then there is the game of Go, and chess. I would argue that language is much closer to Go than it is to the physical world. Like, it is still very constrained. When you say the possible number of sentences that could come next, it is huge, but it nevertheless is much more constrained

[00:15:25]

than, maybe I'm wrong, the possibilities that the physical world brings us. There's something to what you say, and there are some ways in which I disagree.

[00:15:34]

So, one interesting thing about language is that it abstracts away. This bottle, I don't know if it will be in the field of view, is on this table, and I use the word on here, and I can use the word on here.

[00:15:47]

Maybe not here, but that one word encompasses, you know, in analog space, a sort of infinite number of possibilities. So there is a way in which language filters down the variation of the world. And there are other ways.

[00:16:03]

So, you know, we have a grammar, and more or less you have to follow the rules of that grammar.

[00:16:08]

You can break them a little bit, but by and large we follow the rules of grammar, and so that's a constraint on language. So there are ways in which language is a constrained system. On the other hand, there are good arguments that there's an infinite number of possible sentences, and you can establish that by just stacking them up. So: I think there's water on the table. You think that I think there's water on the table.

[00:16:28]

Your mother thinks that you think that I think the water is on the table. Your brother thinks that maybe your mom is wrong to think that you think that I think... You know, we can make sentences of infinite length, or we can stack up adjectives: this is a very silly example, a very, very silly example, a very, very, very, very silly example. So there are good arguments that there's an infinite range of sentences.
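To make that stacking argument concrete, here is a tiny sketch in Python (my own illustration, not something from the conversation; the sentence frames are just examples): a single recursive rule produces a new, longer grammatical sentence at every level of embedding, with no upper bound.

```python
# A tiny illustration of the "stacking" argument: one recursive rule
# yields arbitrarily long, and arbitrarily many, grammatical sentences.
def stacked_sentence(depth):
    sentence = "there is water on the table"
    clauses = ["I think that", "you think that",
               "your mother thinks that", "your brother thinks that"]
    for i in range(depth):
        # Each pass embeds the current sentence one level deeper.
        sentence = clauses[i % len(clauses)] + " " + sentence
    return sentence

for d in range(5):
    print(stacked_sentence(d))
# Every extra level of embedding is a new, longer sentence, which is the
# sense in which the set of possible sentences has no upper bound.
```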

[00:16:49]

In any case, it's vast by any reasonable measure. And, for example, almost anything in the physical world we can talk about in the language world. And interestingly, many of the sentences that we understand, we can only understand if we have a very rich model of the physical world.

[00:17:04]

So I don't ultimately want to adjudicate the debate that I think you just set up.

[00:17:08]

But I find it interesting, you know, maybe the physical world is even more complicated than language. I think that's fair. But you think that language is really, really hard? It's really, really hard.

[00:17:20]

Well, it's really, really hard for machines, for linguists, for people trying to understand it. It's not that hard for children, though, and that's part of what's driven my whole career. I was a student of Steven Pinker's, and we were trying to figure out why kids could learn language when machines couldn't. I think we're going to get into language, we're going to get into communication, intelligence, and neural networks and so on. But let me return to the high-level, futuristic questions for a brief moment.

[00:17:49]

So you've written in your book, your new book, that it would be arrogant to suppose that we could forecast where A.I. will be, the impact it will have, in a thousand years, or even five hundred years. So let me ask you to be arrogant.

[00:18:04]

What do A.I. systems, with or without physical bodies, look like one hundred years from now? If you would, just, you can't predict, but if you were to try to imagine... Do I first get to justify the arrogance before you try to push me beyond it? Sure. I mean, there are examples. Like, you know, people figured out how electricity worked; they had no idea that that was going to lead to cell phones. Right? I mean, things can move awfully fast when new technologies are perfected.

[00:18:34]

Even when they made transistors, they weren't really thinking that cell phones would lead to social networking.

[00:18:40]

There are nevertheless predictions of the future which are statistically unlikely to come to be, but nevertheless... You're asking me to be wrong. I'm asking you to be wrong, yes.

[00:18:49]

How would I like to be wrong? Pick the least unlikely-to-be-wrong thing, even though it's still very likely to be wrong.

[00:18:56]

I mean, here are some things that we can safely predict, I suppose. Sure. We can predict that A.I. will be faster than it is now.

[00:19:03]

It will be cheaper than it is now. It will be better in the sense of being more general and applicable in more places. It will be pervasive. I mean, these are easy predictions. I'm sort of modeling them in my head on Jeff Bezos's famous predictions. He says, I can't predict the future, not in every way, I'm paraphrasing, but I can predict that people will never want to pay more money for their stuff.

[00:19:29]

They're never going to want it to take longer to get there. You know, like, you can't predict everything, but you can predict some things. Sure, of course it's going to be faster and better.

[00:19:38]

What we can't really predict is the full scope of where A.I. will be in a certain period. I mean, I think it's safe to say that, although I'm very skeptical about current A.I., it's possible to do much better. You know, there's no in-principle argument that says A.I. is an unsolvable problem, that there's magic inside our brains that will never be captured. I mean, I've heard people make those kinds of arguments. I don't think they're very good.

[00:20:05]

So A.I. is going to come, and probably five hundred years is plenty of time to get there. And then, once it's here, it really will change everything.

[00:20:15]

So when you say A.I. is going to come, are you talking about human-level intelligence?

[00:20:20]

So maybe I like the term general intelligence.

[00:20:23]

So I don't think that the ultimate A.I., if there is such a thing, is going to look just like humans.

[00:20:28]

I think it's going to do some things that humans do better than current machines, like reason flexibly and understand language and so forth. But that doesn't mean it has to be identical to humans. So, for example, humans have terrible memory, and they suffer from what some people call motivated reasoning. So they like arguments that seem to support them, and they dismiss arguments that they don't like. There's no reason that a machine should ever do that.

[00:20:55]

So you see the limitations of memory as a bug, not a feature? Absolutely. I'll say two things about that.

[00:21:03]

One is, I was on a panel with Danny Kahneman, the Nobel Prize winner, last night, and we were talking about this stuff. And I think what we converged on is that humans are a low bar to exceed. They may be outside of our reach right now, but as A.I. programmers,

[00:21:19]

eventually A.I. will exceed it. So we're not talking about human level here, we're talking about general intelligence that can do all kinds of different things and do them without some of the flaws that human beings have. The other thing I'll say is, I wrote a whole book, actually, about the flaws of humans. It's actually a nice bookend, the counterpoint, to the current book. So I wrote a book called Kluge, which was about the limits of the human mind.

[00:21:40]

The current book is kind of about those few things that humans do a lot better than machines.

[00:21:45]

Do you think it's possible that the flaws of the human mind, the limits of memory, our mortality, our biases, are a strength, not a weakness, that that is the thing from which motivation springs and meaning springs?

[00:22:04]

I've heard a lot of arguments like this. I've never found them that convincing. I think that there's a lot of making lemonade out of lemons.

[00:22:11]

So we, for example, do a lot of free association where one idea just leads to the next and they're not really that well connected. And we enjoy that and we make poetry out of it and we make kind of movies with free associations and it's fun and whatever. I don't think that's really a virtue of the system.

[00:22:29]

I think that the limitations in human reasoning actually get us in a lot of trouble. Like, for example, politically we can't see eye to eye because we have the motivated reasoning I was talking about, and something related called confirmation bias. So we have all of these problems that actually make for a rougher society, because we can't get along, because we can't interpret the data in shared ways. And then we do some nice stuff with that. So my free associations are different from yours, and you're kind of amused by them, and that's great, and hence poetry.

[00:22:59]

So there are lots of ways in which we take a lousy situation and make it good. Another example would be our memories are terrible.

[00:23:07]

So we play games like Concentration, where you flip over two cards and try to find a pair. You imagine a computer playing it; the computer is like, this is the easiest game in the world, I know where all the cards are, I saw them once.

[00:23:16]

I know where everything is. What are you even talking about? And we make a fun game out of having this terrible memory. So we are imperfect in discovering and optimizing some kind of utility function. But you think, in general, there is a utility function, there's an objective function, that's better than others?

[00:23:35]

I didn't say that; that's the presumption in what you say. But I do think you could design a better memory system. You could argue about utility functions and how you want to think about that.

[00:23:48]

But objectively, it would be really nice to do some of the following things: to get rid of memories that are no longer useful. Like, objectively, that would just be good, and we're not that good at it.

[00:24:00]

So when you park in the same lot every day, you confuse where you parked today with where you parked yesterday and where you parked the day before, and so forth. You blur together a series of memories. There's just no way that that's optimal.

[00:24:12]

I mean, I've heard all kinds of wacky arguments from people trying to defend that, but at the end of the day, I don't think any of them hold water.

[00:24:17]

Memories of traumatic events, it could possibly be a very nice feature to be able to get rid of those.

[00:24:23]

It'd be great if you could just be like, I'm going to wipe this sector. You know, I'm done with that. I didn't have fun last night. I don't want to think about it anymore. Bye. Gone. But we can't.

[00:24:34]

Do you think it's possible to build a system... So, you said human-level intelligence is a weird concept. Well, I'm just saying I prefer general intelligence. I mean, human-level intelligence is a real thing, and you could try to make a machine that matches people or something like that. I'm saying that shouldn't be the objective, but rather that we should learn from humans the things they do well and incorporate that into our A.I., just as we incorporate the things that machines do well that people do terribly.

[00:24:59]

So, I mean, it's great that A.I. systems can do all this brute force computation that people can't. And one of the reasons I work on this stuff is because I would like to see machines solve problems that people can't, problems that, in order to be solved, combine the strength of machines in doing all this computation with the ability, let's say, of people to read. So, you know, I'd like machines that can read the entire medical literature in a day. Seven thousand papers, or whatever the number is, come out every day.

[00:25:28]

There's no way for, you know, any doctor ever to read them all.

[00:25:32]

A machine that could read would be a brilliant thing, and that would be the strength of brute force computation combined with the kind of subtlety in understanding medicine that, you know, a good doctor or scientist has. If we can linger a little bit on the idea of general intelligence: so, Yann LeCun believes that human intelligence isn't general at all, it's very narrow. What do you think?

[00:25:53]

I don't think that makes sense. We have lots of narrow intelligences for specific problems. But the fact is, like, anybody can walk into, let's say, a Hollywood movie and reason about the content of almost anything that goes on there. So you can reason about what happens in a bank robbery, or what happens when someone is infertile and wants to, you know, go to IVF to try to have a child. The list is essentially endless.

[00:26:22]

And, you know, not everybody understands every scene in the movie, but there's a huge range of things that pretty much any ordinary adult can understand.

[00:26:31]

His argument is that actually the set of things seems large to us humans because we're very limited in considering the kinds of experiences that are possible. But in fact, the space of experiences that are possible is infinitely larger than that.

[00:26:49]

I mean, if you want to make an argument that humans are constrained in what they can understand, I have no issue with that.

[00:26:57]

I think that's right. But it's still not the same thing at all as saying, here's a system that can play Go, it's been trained on five million games, and then I say, can it play on a rectangular board rather than a square board? And you say, well, if I retrain it from scratch on another five million games, it can. That's really, really narrow. And that's where we are. We don't even have a system that could play Go and then, without further retraining, play on a rectangular board, which any good human player could do with very little problem.

[00:27:29]

So that's what I mean by narrow.

[00:27:31]

So it's just wordplay, semantics, then? It's just words. Yeah. You mean general in the sense that you can flexibly handle all kinds of Go board shapes? Well, that would be like a first step in the right direction, but obviously that's not really what I mean, you're kidding. What I mean by general is that you could transfer the knowledge you learn in one domain to another. So if you learn about bank robberies in movies, and there are chase scenes, then you can understand that amazing scene in Breaking Bad

[00:28:04]

when Walter White has a car chase scene with only one person, he's the only one in it, and you can reflect on how that car chase scene is like all the other car chase scenes you've ever seen, and totally different, and why that's cool.

[00:28:17]

And the fact that the number of domains you can do that with is finite doesn't make it less general. So the idea of general is just that you can transfer knowledge across a lot of domains.

[00:28:26]

Yeah, I mean, I'm not saying humans are infinitely general or that humans are perfect. Like I just said a minute ago, it's a low bar. But, you know, right now the bar is here and we're there, and eventually we'll get way past it.

[00:28:39]

So, speaking of low bars, you've highlighted, in your new book as well, but a couple of years ago you wrote a paper titled Deep Learning: A Critical Appraisal, which lists ten challenges faced by current deep learning systems. Let me summarize them as: data efficiency, transfer learning, hierarchical knowledge, open-ended inference, explainability, integrating prior knowledge, causal reasoning, modeling an unstable world, robustness, adversarial examples, and so on. And then my favorite, probably, is reliability and the engineering of real-world systems.

[00:29:15]

Whether or not people can read the paper, they should definitely read the paper, and they should read your book. But which of these challenges, if solved, in your view, would have the biggest impact on the A.I. community?

[00:29:27]

That's a very good question. And I'm going to be evasive because I think that they go together a lot, so, you know, some of them might be solved independently of others. But I think a good solution to AI starts by having real what I would call cognitive models of what's going on.

[00:29:45]

So right now, we have an approach that's dominant where you take statistical approximations of things, but you don't really understand them. Yes.

[00:29:52]

So you know that, you know, bottles are correlated in your data with bottle caps, but you don't really understand that

[00:29:58]

there's a thread on the bottle cap that fits with the thread on the bottle, and that's what tightens it, and if I tighten it enough there's a seal and the water can't come out. Like, there's no machine that understands that. And having a good cognitive model of that kind of everyday phenomenon is what we call common sense. And if you had that, then a lot of these other things start to fall into at least a little bit better place. Right now,

[00:30:20]

you're, like, learning correlations between pixels when you play a video game or something like that, and it doesn't work very well. It works when the video game is just the way that you studied it, and then you alter the video game in small ways, like you move the paddle in Breakout a few pixels, and the system falls apart, because it doesn't understand, it doesn't have a representation of a paddle, a ball, the wall, the set of bricks and so forth.

[00:30:40]

And so it's reasoning at the wrong level.

[00:30:43]

So the idea of common sense, you've worked on it, but it's nevertheless full of mystery and full of promise. What does common sense mean? What does knowledge mean? Because the way you've been discussing it now is very intuitive. It makes a lot of sense that that is something we should have and that's something deep learning systems don't have.

[00:31:02]

But the argument could be that we're oversimplifying it, that we're oversimplifying the notion of common sense, because that's how it feels like we as humans, at the cognitive level, approach problems.

[00:31:15]

So, a lot of people aren't actually going to read my book, but if they did read the book, one of the things that might come as a surprise to them is that we actually say common sense is really hard and really complicated. So, you know, my critics know that I like common sense.

[00:31:31]

But, you know, that chapter actually starts by beating up not on deep learning, but kind of on our own home team, as it were. So Ernie and I are first and foremost people who believe in at least some of what good old-fashioned A.I. tried to do.

[00:31:45]

So we believe in symbols and logic and programming.

[00:31:49]

Things like that are important and we go through why even those tools that we hold fairly dear aren't really enough.

[00:31:56]

So we talk about why common sense is actually many things, and some of them fit really well with those classical sets of tools. So, things like taxonomy: I know that a bottle is an object, or it's a vessel, let's say, and I know a vessel is an object, and objects are material things in the physical world.

[00:32:14]

So I can make some inferences.

[00:32:17]

If I know that vessels need to, you know, not have holes in them in order to carry their contents, then I can infer that the bottle shouldn't have a hole in it in order to carry its contents. So you can do hierarchical inference and so forth. And we say, that's great, but it's only a tiny piece of what you need for common sense.
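To make the taxonomy point concrete, here is a minimal sketch in Python (my own illustration, not code from the book; the categories and facts are made up) of the kind of hierarchical inference that classical symbolic tools handle well:

```python
# Minimal sketch of taxonomy-style (is-a) inference with property inheritance.
is_a = {
    "bottle": "vessel",
    "vessel": "object",
    "object": "material thing",
}

properties = {
    # A property attached to a category is inherited by everything below it.
    "vessel": {"must not have holes in order to carry contents"},
    "object": {"occupies space in the physical world"},
}

def ancestors(category):
    """Walk up the is-a hierarchy from a category."""
    while category in is_a:
        category = is_a[category]
        yield category

def inherited_properties(category):
    """Collect properties from the category and everything above it."""
    props = set(properties.get(category, set()))
    for parent in ancestors(category):
        props |= properties.get(parent, set())
    return props

# A bottle inherits the vessel and object properties (set order may vary).
print(inherited_properties("bottle"))
```

This is exactly the "tiny piece" the passage describes: the inference follows mechanically from the hierarchy, but nothing in it captures the richer knowledge discussed next.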

[00:32:36]

We give lots of examples that don't fit into that. So another one that we talk about is a cheese grater. You've got holes in a cheese grater.

[00:32:44]

You've got a handle on top. You can build a model, in the game-engine sense of a model, so that you could have a little cartoon character flying around through the holes of the grater. But we don't have a system yet,

[00:32:56]

and taxonomy doesn't help us that much here, a system that really understands why the handle is on top and what you do with the handle, or why all of those circles are sharp, or how you'd hold the cheese with respect to the grater in order to make it actually work.

[00:33:08]

Do you think these ideas are just abstractions that could emerge on a system like a very large, deep neural network?

[00:33:16]

I'm a skeptic that that kind of emergence, per se, can work. So I think that deep learning might play a role in systems that do what I want systems to do, but it won't do it by itself.

[00:33:26]

I've never seen a deep learning system really extract an abstract concept. What they do... there are principled reasons for that, stemming from how back propagation works and how the architectures are set up.

[00:33:39]

One example is that deep learning people actually all build in something called convolution, which Yann LeCun is famous for, which is an abstraction; they don't have their systems learn it. So the abstraction is that an object looks the same if it appears in different places. And what LeCun figured out, and, you know, essentially why he was a co-winner of the Turing Award, was that if you program this in innately, then your system will be a whole lot more efficient.

[00:34:07]

In principle, this should be learnable, but people don't have systems that kind of reify things that make them more abstract.

[00:34:14]

And so what you'd really wind up with, if you don't program that in advance, is a system that kind of realizes that this is the same thing as this, but then I take your little clock there and I move it over, and it doesn't realize that the same thing applies to the clock.
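As a rough illustration of the convolution point (a sketch of my own, with made-up numbers, not anything from the conversation): the same small set of shared weights is applied at every position, so a pattern produces the same response wherever it appears. That translation invariance is the abstraction that gets programmed in rather than learned.

```python
# Sketch of weight sharing in convolution: one filter, slid across every
# position, responds identically to a pattern no matter where it sits.
import numpy as np

def conv1d(signal, kernel):
    """Valid 1-D convolution (cross-correlation): apply one shared set of
    weights at every position of the input."""
    k = len(kernel)
    return np.array([np.dot(signal[i:i + k], kernel)
                     for i in range(len(signal) - k + 1)])

pattern = np.array([1.0, -1.0, 1.0])     # the "object" we want to detect
kernel = np.array([1.0, -1.0, 1.0])      # shared weights, fixed in advance

left = np.concatenate([pattern, np.zeros(5)])    # pattern near the left edge
right = np.concatenate([np.zeros(5), pattern])   # same pattern shifted right

# The peak response is identical in both cases; only its location moves.
print(conv1d(left, kernel).max(), conv1d(right, kernel).max())  # 3.0 3.0
```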

[00:34:27]

So, you're right that convolution is just one of those things. Like, it's an innate feature that's programmed in by the human expert.

[00:34:35]

But we need more of those, not less. But the nice feature is, it feels like that requires coming up with a brilliant idea, which can get you a Turing Award, but it requires less effort than encoding, and, something we'll talk about, expert systems,

[00:34:53]

so, encoding a lot of knowledge by hand. So it feels like, one, there's a huge amount of limitations with deep learning, which you clearly outline. But the nice feature of deep learning is that, whatever it is able to accomplish,

[00:35:08]

it does a lot of that stuff automatically, without human intervention.

[00:35:11]

Well, and that's part of why people love it. Right.

[00:35:13]

But I always think of this quote from Bertrand Russell, which is that it has all the advantages of theft over honest toil. It's really hard to program into a machine a notion of causality or, you know, even how a bottle works or what containers are. Ernie Davis and I wrote a, I don't know, 45-page academic paper trying just to understand what a container is, which I don't think anybody ever read.

[00:35:37]

But it's a very detailed analysis of all the things, well, even some of the things, you need to do in order to understand a container. It would be a whole lot nicer if we didn't have to do that kind of work by hand.

[00:35:46]

And, you know, I'm a co-author on the paper. I made it a little bit better, but Ernie did the hard work for that particular paper and it took him like three months to get the logical statements correct.

[00:35:57]

And maybe that's not the right way to do it. It's one way to do it. But with that way of doing it, it's really hard work to do something as simple as understanding containers, and nobody wants to do that hard work. Even Ernie didn't want to do that hard work.

[00:36:12]

Everybody would rather just, like, feed their system with a bunch of videos of a bunch of containers and have the system infer how containers work. It would be so much less effort: let the machine do the work. And so I understand the impulse, I understand why people want to do that. I just don't think that it works. I've never seen anybody build a system that, in a robust way, can actually watch videos and predict exactly which containers would leak and which ones wouldn't.

[00:36:38]

And it's something like, I know someone's going to go out and do that

[00:36:41]

since I said it. You know, I look forward to seeing it, but getting these things to work robustly is really, really hard.

[00:36:49]

So Yann LeCun, who was my colleague at NYU for many years, thinks that the hard work should go into defining an unsupervised learning algorithm that will watch videos and use the next frame, basically, in order to tell it what's going on. And he thinks that's the royal road, and he's willing to put in the work of devising that algorithm. Then he wants the machine to do the rest. And again, I understand the impulse. My intuition,

[00:37:16]

based on years of watching this stuff, and making predictions 20 years ago that still hold even though there's a lot more computation and so forth, is that we actually have to do a different kind of hard work, which is more like building a design specification for what we want the system to do, doing hard engineering work to figure out how we do things like what Yann did for convolution, in order to figure out how to encode complex knowledge into the systems. The current systems don't have that much knowledge other than convolution, which is, again, you know, objects being in different places and having the same perception, I guess I'll say.

[00:37:53]

People don't want to do that work; they don't see how to naturally fit one with the other. I think that's... yes, absolutely. But also, on the system side, there's a temptation to go too far the other way, so it's just having an expert sort of sit down and encode the description, the framework, for what a container is, and then having the system reason the rest. From my view, like, one really exciting possibility is that of active learning, where there's continuous interaction between a human and machine, where there's a kind of deep-learning-type extraction of information from data, patterns and so on,

[00:38:26]

but the human is also guiding the learning procedures, guiding both the process and the framework of how the machine learns. Right, and I was with you with almost everything you said except the phrase deep learning. What I think you really want there is a new form of machine learning. So, let's remember, deep learning is a particular way of doing machine learning. Most often it's done with supervised data for perceptual categories.

[00:38:55]

There are other things you can do with deep learning, some of them quite technical.

[00:38:59]

But the standard use of deep learning is I have a lot of examples and I have labels for them. So here are pictures. This one's the Eiffel Tower. This one's the Sears Tower. This one's the Empire State Building. This one's a cat, this one's a pig and so forth.

[00:39:11]

They just get, you know, millions of examples, millions of labels. And deep learning is extremely good at that. It's better than any other solution that anybody has devised. But it is not good at representing abstract knowledge.
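As a minimal sketch of that standard use (assuming PyTorch is available; the data here is synthetic and just stands in for labeled images, nothing in it comes from the conversation): lots of examples, lots of labels, and a network trained to map one to the other.

```python
# Sketch of the standard supervised deep learning setup: examples plus
# labels, a network fit by gradient descent. It classifies; it represents
# no abstract knowledge about what the labeled things are.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 64)            # stand-ins for image features
y = torch.randint(0, 5, (1000,))     # stand-ins for labels (cat, pig, tower, ...)

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 5))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):                 # full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

print("training accuracy:",
      (model(X).argmax(dim=1) == y).float().mean().item())
```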

[00:39:24]

It's not good at representing things like, bottles contain liquid and, you know, have tops on them.

[00:39:30]

And so it's not very good at learning or representing that kind of knowledge. It is an example of having a machine learn something, right? But it's a machine that learns a particular kind of thing, which is object classification. It's not a particularly good algorithm for learning about the abstractions that govern our world. There may be such a thing.

[00:39:49]

Part of what we counsel in the book is maybe people should be working on devising such things.

[00:39:53]

So one possibility, and I just wonder what you think about it, is: deep neural networks do form abstractions, but they're not accessible to us humans, in terms of, we can't... There's some truth in that.

[00:40:07]

So is it possible that either current or future neural networks form very high-level abstractions which are as powerful as our human abstractions of common sense? We just can't get a hold of them, and so the problem is essentially that we need to make them explainable.

[00:40:25]

This is an astute question, but I think the answer is at least partly no.

[00:40:29]

One of the kinds of classical neural network architectures is what we call an auto-associator. It just tries to take an input, goes through a set of hidden layers, and comes out with an output, and it's supposed to learn essentially the identity function: that your input is the same as your output. So you think of this in binary numbers: you've got, like, the one, the two, the four, the eight, the sixteen and so forth, and so if you want to input twenty-four, you turn on the sixteen and you turn on the eight.

[00:40:52]

It's like binary: one, one, and then a bunch of zeroes.

[00:40:55]

So I did some experiments in 1998 with the precursors of contemporary deep learning.

[00:41:03]

And what I showed was that you could train these networks on all the even numbers and they would never generalize to the odd numbers. A lot of people thought that I was, I don't know, an idiot, or faking the experiment, or it wasn't true, or whatever.

[00:41:16]

But it is true that, with this class of networks that we had in that day, they would never, ever make this generalization. And it's not that the networks were stupid; it's that they see the world in a different way than we do.

[00:41:30]

They were basically concerned with, what is the probability that the rightmost output node is going to be one? And as far as they were concerned, in everything they'd ever been trained on, it was a zero. That node had never been turned on.

[00:41:43]

And so they figured, why turn it on now? Whereas a person would look at the same problem and say, well, it's obvious, we're just doing the thing that corresponds; the Latin for it is mutatis mutandis, we change what needs to be changed.

[00:41:54]

And we do this; this is what algebra is. So I can do f(x) = x + 2, and I can do it for a couple of values. I can tell you, if x is three, then y is five, and if x is four, y is six. And now I can do it with some totally different number, like a million, and you can say, well, obviously it's a million and two, because you have an algebraic operation that you're applying to a variable. And deep learning systems kind of emulate that, but they don't actually do it.

[00:42:19]

For that particular example, you can fudge a solution to that particular problem. But the general form of the problem remains: what they learn is really correlations between different input and output nodes. They're complex correlations, with multiple nodes involved and so forth, but ultimately they're correlative. They're not structured around these operations over variables.
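Here is a rough reconstruction of that experiment as I understand it from the description above (not Marcus's original code; the network size, learning rate, and test number are my own choices): train a small network to copy its input, show it only even numbers, then ask about an odd one and look at the rightmost bit.

```python
# Rough sketch of the even/odd auto-associator experiment described above:
# the network learns the identity mapping on even numbers, but the rightmost
# (ones) output node has never been on, so it fails to turn on for odd inputs.
import torch
import torch.nn as nn

def to_bits(n, width=8):
    """Encode an integer as a vector of 0/1 bits, most significant bit first."""
    return torch.tensor([float(b) for b in format(n, f"0{width}b")])

torch.manual_seed(0)
evens = torch.stack([to_bits(n) for n in range(0, 256, 2)])  # training set: evens only

model = nn.Sequential(nn.Linear(8, 16), nn.Sigmoid(),
                      nn.Linear(16, 8), nn.Sigmoid())
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(2000):                       # learn input == output on the evens
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(evens), evens)
    loss.backward()
    opt.step()

odd = to_bits(77)                           # an odd number the net has never seen
print(model(odd.unsqueeze(0)).round())      # rightmost bit typically comes out 0,
                                            # because that output node was never on
```

A person treats "copy the input" as an operation over a variable, so the parity of the test number is irrelevant; the network only ever learned correlations over the training distribution.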

[00:42:39]

Now, someday people may do a new form of deep learning that incorporates that stuff, and I think it will help a lot. And there's some tentative work on things like differentiable programming right now that falls into that category. But the sort of classic stuff, like people use for ImageNet, doesn't have it. And you have people like Hinton going around saying symbol manipulation, like what Marcus, what I, advocate, is like the gasoline engine, it's obsolete; we should just use this cool electric power that we've got with deep learning.

[00:43:07]

And that's really destructive, because we really do need to have the gasoline engine stuff that represents... I mean, I don't think it's a good analogy, but we really do need to have the stuff that represents symbols.

[00:43:20]

Yeah, and Hinton as well

[00:43:22]

says that we do need to throw out everything and start over. So there is... you know, Hinton said that to Axios, and I had a friend interview him and try to pin him down on what exactly we need to throw out, and he was very evasive. Well, of course, because if he knew that, he'd throw it out himself. But, I mean, you can't have it both ways. You can't be like, I don't know what to throw out, but I am going to throw out the symbols.

[00:43:46]

I mean, and not just the symbols, but the variables and the operations over variables. Don't forget the operations over variables, the stuff that I'm endorsing, and which, you know, John McCarthy did when he founded A.I. That stuff is the stuff that we build most computers out of. There are people now who say we don't need computer programmers anymore, not quite looking at the statistics of how much computer programmers actually get paid.

[00:44:08]

Right now, we need lots of computer programmers, and most of them, you know, do a little bit of machine learning, but they still write a lot of code, code where it's like, you know, if the value of x is greater than the value of y, then do this, that kind of thing: conditionals and comparison operations over variables. There's this fantasy that you can machine-learn anything. There are some things you would never want to machine-learn. I would not use a phone operating system

[00:44:31]

that was machine-learned, like, you made a bunch of phone calls and you recorded which packets were transmitted and you just machine-learned it. That would be insane. Or to build a web browser by taking logs of keystrokes and images, screenshots, and then trying to learn the relation between them. Nobody, no rational person, would ever try to build a browser that way. They would use symbol manipulation,

[00:44:54]

the stuff that I think A.I. needs to avail itself of in addition to deep learning.

[00:44:58]

Can you describe your view of symbol manipulation in its early days? Can you describe expert systems, and where do you think they hit a wall, or a set of challenges? Sure.

[00:45:10]

So, I mean, first I just want to clarify: I'm not endorsing expert systems per se. You've been kind of contrasting them. There is a contrast, but that's not the thing that I'm endorsing.

[00:45:19]

Yes. So expert systems try to capture things like medical knowledge with a large set of rules. So if the patient has this symptom and this other symptom, then it is likely that they have this disease.

[00:45:32]

So they're logical rules, and they were symbol-manipulating rules of just the sort that I'm talking about, and then people encoded a set of knowledge that the experts put in very explicitly.

[00:45:42]

So you'd have somebody interview an expert and then try to turn that stuff into rules. And at some level, I'm arguing for rules.

[00:45:50]

But the difference is, what those guys did in the 80s was almost entirely rules, almost entirely handwritten, with no machine learning. What a lot of people are doing now is almost entirely one species of machine learning with no rules. And what I'm counseling is actually a hybrid. I'm saying that both of these things have their advantages. So if you're talking about perceptual classification, how do I recognize a bottle, deep learning is the best tool we've got right now. If you're talking about making inferences about what a bottle does, something closer to the expert systems is probably still the best available alternative.

[00:46:23]

And probably we want something that is better able to handle quantitative and statistical information than those classical systems typically were. So we need new technologies that are going to draw on some of the strengths of both the expert systems and deep learning, but are going to find new ways to synthesize them.
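As a toy sketch of what such a hybrid might look like (entirely my own illustration; the function names, rules, and probabilities are made up): a learned perceptual module supplies class probabilities, and hand-written rules do the inference about what the recognized object affords.

```python
# Toy hybrid: statistical perception feeding symbolic, rule-based inference.
def perceive(image):
    """Stand-in for a deep learning classifier; in a real system this would
    be a trained network. Here it just returns made-up class probabilities."""
    return {"bottle": 0.92, "cheese grater": 0.05, "cup": 0.03}

RULES = {
    # Hand-written knowledge of the sort an expert system would encode.
    "bottle": ["can hold liquid", "pours when tilted", "cap must come off first"],
    "cup": ["can hold liquid", "pours when tilted"],
    "cheese grater": ["has sharp holes", "shreds soft solids"],
}

def infer_affordances(image, threshold=0.5):
    """Combine the perceptual probabilities with rule-based inference."""
    beliefs = perceive(image)
    label, p = max(beliefs.items(), key=lambda kv: kv[1])
    if p < threshold:
        return ["unsure what this object is"]
    return RULES.get(label, [])

print(infer_affordances(image=None))
# ['can hold liquid', 'pours when tilted', 'cap must come off first']
```

The interesting research questions in the passage are about doing this synthesis in a principled way, including handling the quantitative and statistical information that the classical rule systems handled poorly.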

[00:46:40]

How hard do you think it is to add knowledge at the low level? So, mining human intellect to add extra information to symbol-manipulating systems. In some domains,

[00:46:53]

it's not that hard, but it's often really hard, partly because a lot of the things that are important, people wouldn't bother to tell you.

[00:47:02]

So if you pay someone on Amazon Mechanical Turk to tell you stuff about bottles, they probably won't even bother to tell you some of the basic-level stuff that's just so obvious to a human being and yet so hard to capture in machines.

[00:47:21]

They're going to tell you more exotic things, and, like, those are all well and good, but they're not getting to the root of the problem.

[00:47:29]

So untutored humans aren't very good at knowing, and why should they be, what kind of knowledge the computer system developers actually need.

[00:47:40]

I don't think that that's an irremediable problem. I think it's historically been a problem. People have had crowdsourcing efforts and they don't work that well. There's one at MIT. We're recording this at MIT.

[00:47:51]

called Virtual Home. We talk about this in the book; you can find the exact example there. But people were asked to do things like describe an exercise routine, and the things that the people describe are very low level and don't really capture what's going on.

[00:48:06]

So, like: go to the room with the television and the weights, turn on the television, or press the remote to turn on the television, lift weights, put weights down. It's very micro level.

[00:48:20]

And it's not telling you what an exercise routine is really about, which is like I want to fit a certain number of exercises in a certain time period. I want to emphasize these muscles. I mean, you want some kind of abstract description, the fact that you happen to press the remote control in this room when you, you know, watch this television isn't really the essence of the exercise routine.

[00:48:39]

But if you just ask people what they did, then they give you this fine grain, and so it takes a level of expertise about how the A.I. works in order to craft the right kind of knowledge. So there's this ocean of knowledge that we all operate on, some of which may not even be conscious, or at least we're not able to communicate it effectively.

[00:48:59]

And most of it we would recognize, if somebody said it, as true or not, but we wouldn't think to say it ourselves.

[00:49:06]

It's a really interesting mathematical property. This ocean has the property that, for every piece of knowledge in it, we will recognize it as true if we're told, but we're unlikely to retrieve it in the reverse direction. So with that interesting property, I would say there's a huge ocean of that knowledge. What's your intuition? Is it accessible to A.I. systems somehow? Can we... I mean, most of it is not.

[00:49:35]

I'll give you an asterisk on this in a second, but most of it has never been encoded in machine-interpretable form. And so, I mean, if you say accessible, there are two meanings of that. One is, like, could you build it into a machine? The other is, like, is there some database that we could go, you know, download and stick into the machine? On the first, you know, could we? We could,

[00:49:58]

I think; it just hasn't been done right.

[00:50:01]

You know, the closest, and this is the asterisk, is the CYC system, which tried to do this.

[00:50:07]

A lot of logicians worked for Doug Lenat for 30 years on this project. I think they stuck too closely to logic, didn't represent enough about probabilities, tried to hand-code it; there were various issues, and it hasn't been that successful.

[00:50:21]

That is the closest existing system to trying to encode this.

[00:50:27]

Why do you think there's not more excitement and money behind this idea currently?

[00:50:32]

Well, people view that project as a failure. I think that they confuse the failure of a specific instance that was conceived 30 years ago with the failure of an approach, which is something they don't do for deep learning.

[00:50:44]

So, you know, in 2010, people had the same attitude towards deep learning. They were like, this stuff doesn't really work, and all these other algorithms work better, and so forth. And then certain key technical advances were made, but mostly it was the advent of graphics processing units that changed that. It wasn't even anything foundational in the techniques. There were some new tricks, but mostly it was just more compute and more data, things like ImageNet that didn't exist before, that allowed deep learning

[00:51:15]

to work.

[00:51:17]

It could be that, you know, CYC just needs a few more things, or something like that. But the widespread view is that it just doesn't work, and people are reasoning from a single example. They don't do that with deep learning. They don't say, nothing that existed in 2010, and there were many, many efforts in deep learning, was really worth anything. Right? I mean, really, there's no model from 2010 in deep learning, none of those deep learning processes, that is of any commercial value whatsoever

[00:51:45]

at this point. They're all failures. But that doesn't mean that there wasn't anything there. I have a friend, and as I was getting to know him, I was talking about how I had a new company, and he said, I had a company, too, and it failed. And I said, well, what did you do?

[00:52:00]

And he said, deep learning. And the problem was he did it in 1986, or 1990, something like that, and we didn't have the tools then.

[00:52:07]

We didn't have the tools then — not the algorithms. His algorithms weren't that different from modern algorithms, but he didn't have the GPUs to run them fast enough, he didn't have the data, and so it failed. It could be that, you know, symbol manipulation per se, with modern amounts of data and maybe some advances in compute for that kind of computation, might be great.

[00:52:31]

My perspective on it is not that we want to resuscitate that stuff per se, but that we want to borrow lessons from it and bring them together with other things that we've learned.

[00:52:39]

And it might have an ImageNet moment, where it will spark the world's imagination and there will be an explosion of symbol-manipulation efforts. Yeah. I think the people at the Allen Institute are trying to — they're trying to build data sets that — well, they're not doing it for quite that reason, they'd say, but they're trying to build data sets that at least spark interest in common sense reasoning, to create benchmarks for common sense. That's a large part of what AI2, allenai.org, is working on right now.

[00:53:08]

So, speaking of compute, Rich Sutton wrote a blog post titled The Bitter Lesson. I don't know if you've read it, but he said that the biggest lesson that can be read from 70 years of A.I. research is that general methods that leverage computation are ultimately the most effective.

[00:53:23]

Do you think — most effective at what? Right. So they have been most effective for perceptual classification problems and for some reinforcement learning problems. He works on reinforcement learning.

[00:53:36]

No, let me push back on that. You're absolutely right.

[00:53:39]

But I would also say they've been the most effective generally, because, of everything we've done up to this point — why would you argue against that? — to me, deep learning is the first thing that has been successful at anything in A.I. And you're pointing out that this success is very limited.

[00:54:03]

But has there been something truly successful before, deep learning?

[00:54:08]

Sure. I mean, yeah — I want to make a larger point, but on the narrower point, you know, classical A.I. is used, for example, in doing navigation instructions.

[00:54:20]

You know, it's very successful. Everybody on the planet uses it now, like, multiple times a day.

[00:54:26]

That's a measure of success, right? So I don't think classical A.I. was wildly successful, but there are cases like that where it is used all the time, and nobody even notices because it's so pervasive. So there are some successes for classical A.I. I think deep learning has been more successful, but my usual line about this — and I didn't invent it, but I like it a lot — is: just because you can build a better ladder doesn't mean you can build a ladder to the moon.

[00:54:53]

So the bitter lesson is, if you have a perceptual classification problem, throwing a lot of data at it is better than anything else. But that has not given us any material progress in natural language understanding, common sense reasoning — the kind a robot would need to navigate a home — problems like that.

[00:55:13]

There is no actual progress there.

[00:55:16]

So the flip side of that — if we remove data from the picture — another bitter lesson is that you just have a very simple algorithm and you wait for compute to scale.

[00:55:29]

Does it have to be learning? It doesn't have to be deep learning, it doesn't have to be data driven — you just wait for the compute. So my question for you: do you think compute can unlock some of these things, with either deep learning or symbol manipulation?

[00:55:42]

Sure, but I'll put a proviso on that. Like, more compute is always better — nobody's going to argue with more compute. It's like having more money: there are diminishing returns on more money, but nobody's going to argue if you want to give them more money. Right? Except maybe the people who signed the Giving Pledge, and some of them have a problem: they've promised to give away more money than they're able to.

[00:56:06]

But the rest of us, you know — if you want to give me more money, fine. They say more money, more problems.

[00:56:10]

But OK, that's true. What I would say to you is: your brain uses, like, 20 watts, and it does a lot of things that deep learning doesn't do, or that symbol manipulation doesn't do, that A.I. just hasn't figured out how to do. So it's an existence proof that you don't need, you know, server resources at Google scale in order to have an intelligence. You know, I built, with a lot of help from my wife, two intelligences that are 20 watts each and far exceed anything that anybody else has built out of silicon.

[00:56:43]

Speaking of those two robots, what have you learned about A.I. from having them?

[00:56:49]

Well, they're not robots — intelligent agents. Let's say intelligent agents. I've learned a lot by watching my two intelligent agents.

[00:56:59]

I think that what's fundamentally interesting — well, one of the many things that's fundamentally interesting about them — is the way that they set their own problems to solve. So my two kids are a year and a half apart; they're five and six and a half. They play together all the time, and they're constantly creating new challenges. Like, that's what they do: they make up games. They're like, well, what if this, or what if that, or what if I had this superpower, or what if you could walk through this wall.

[00:57:27]

So they're doing these what if scenarios all the time, and that's how they learn something about the world and grow their minds.

[00:57:35]

And the machines don't really do that. So that's interesting. And you've talked about this, you've written about it, you've thought about it: nature versus nurture.

[00:57:45]

So what innate knowledge do you think we're born with, and what do we learn along the way in those early months and years? Can I just say how much I like that question? You phrased it just right, and almost nobody ever does — which is, what is the innate knowledge and what's learned along the way?

[00:58:05]

So many people dichotomize it, and they think it's nature versus nurture, when it obviously has to be nature and nurture. They have to work together. You can't learn the stuff along the way unless you have some innate stuff. But just because you have the innate stuff doesn't mean you don't learn anything. And so many people get that wrong, including in the field. Like, people think,

[00:58:26]

if I work on the learning side in machine learning, I must not be allowed to work on the innate side, or it would be cheating. Exactly. People have said that to me, and that's just absurd. So thank you.

[00:58:40]

But, you know, you could break that apart more. I've talked to folks who study the development of the brain — I mean the growth of the brain — in the first few days, in the first few months, in the womb. All of that — you know, is that innate? So that process of development, from a stem cell to the growth of the central nervous system and so on, to the information that's encoded through the long arc of evolution.

[00:59:08]

So all of that comes into play, and it's unclear. It's not just whether it's a dichotomy or not; it's where most of the knowledge is encoded. So what's your intuition about the innate knowledge — the power of it, what's contained in it, what we can learn from it?

[00:59:27]

One of my earlier books was actually trying to get at the biology of this. The book was called The Birth of the Mind: like, how is it that genes even build innate knowledge?

[00:59:35]

And from the perspective of the conversation we're having today, there's actually two questions.

[00:59:40]

One is what innate knowledge or mechanisms or what have you people or other animals might be endowed with?

[00:59:47]

I always like showing this video of a baby ibex climbing down a mountain. That baby ibex, you know, a few hours after its birth, knows how to climb down a mountain. That means that it knows — not consciously — something about its own body, and physics, and 3D geometry, and all of this kind of stuff.

[01:00:04]

So there's one question about, like, what does biology give its creatures — you know, what has evolved in our brains, how is that represented in our brains — the question I thought about in the book The Birth of the Mind. And then there's a question of what A.I. should have, and they don't have to be the same. But I would say that, you know, it's a pretty interesting set of things that we are equipped with that allows us to do a lot of interesting things.

[01:00:27]

So I would argue, or guess, based on my reading of the developmental psychology literature, which I've also participated in, that children are born with a notion of space, time, other agents, places, and also this kind of mental algebra that I was describing before. And a notion of causation, if I didn't just say that.

[01:00:49]

So at least those kinds of things — they're like frameworks for learning the other things. So are they disjoint, in your view, or is it somehow all connected? You've talked a lot about language.

[01:01:01]

Is it all kind of connected in some mesh — to language, to understanding concepts altogether? I don't think we know, for people, how they're represented, and machines just don't really do this yet.

[01:01:14]

So I think it's an interesting open question both for science and for engineering.

[01:01:20]

Some of it has to be at least interrelated, in the way that, like, the interfaces of a software package have to be able to talk to one another. So, you know, the systems that represent space and time can't be totally disjoint, because a lot of the things that we reason about are relations between space and time and cause. So, you know, I put this on, and I have expectations about what's going to happen with the bottle cap on top of the bottle, and those span space and time.

[01:01:48]

You know, if the cap is over here, I get a different outcome. If the timing is different — if I put this here after I move that — then, you know, I get a different outcome that relates to causality. So obviously these mechanisms, whatever they are, can certainly communicate with each other.

[01:02:06]

So I think evolution had a significant role to play in the development of this whole kluge.

[01:02:12]

Right. How efficient do you think evolution is? Oh, it's terribly inefficient, except that — well, can we do better? Well, we'll come to that. Sure, it's inefficient, except that once it gets a good idea, it runs with it.

[01:02:27]

So it took, I guess, roughly a billion years to evolve to a vertebrate brain plan. And once that vertebrate brain plan evolved, it spread everywhere — so fish have it and dogs have it, we have it, we have adaptations of it and specializations of it.

[01:02:50]

And the same thing with the primate brain plan: monkeys have it, apes have it, and we have it.

[01:02:57]

Then there are additional innovations, like color vision, and those spread really rapidly. So it takes evolution a long time to get a good idea.

[01:03:05]

But — being anthropomorphic and not literal here — once it has that good idea, so to speak, which cashes out into some set of genes in the genome, those genes spread very rapidly. And they're like subroutines or libraries — I guess the word people might be more familiar with nowadays is libraries. They get used over and over again. So once you have the library for building something with multiple digits, you can use it for a hand, but you can also use it for a foot.

[01:03:32]

You just kind of reuse the library with slightly different parameters. Evolution does a lot of that, which means that the speed over time picks up — evolution can happen faster because you have bigger and bigger libraries. And what I think has happened in attempts at evolutionary computation is that people start with libraries that are very, very minimal, like almost nothing, and then, you know, progress is slow, and it's hard for someone to get a good Ph.D. thesis out of it, and they give up.
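To make the library metaphor concrete — purely as a toy illustration, with made-up names and numbers rather than anything from biology or from the conversation — here is a minimal sketch of the reuse-with-different-parameters idea in Python:

```python
# A toy sketch of the "library reuse" metaphor above: one routine is written
# once and then re-parameterized, rather than re-invented from scratch.
# The names and numbers are made up for illustration only.
from dataclasses import dataclass

@dataclass
class Limb:
    kind: str
    digit_count: int
    digit_length_cm: float

def build_limb(kind: str, digit_count: int, digit_length_cm: float) -> Limb:
    """The shared 'library routine' that gets reused with different parameters."""
    return Limb(kind, digit_count, digit_length_cm)

# Same routine, slightly different parameters -- a hand and a foot.
hand = build_limb("hand", digit_count=5, digit_length_cm=7.0)
foot = build_limb("foot", digit_count=5, digit_length_cm=4.0)
print(hand)
print(foot)
```

The only point of the sketch is that the routine exists once and later uses just vary its parameters; that is the sense in which accumulated libraries let subsequent search move faster.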

[01:04:04]

If we had richer libraries to begin with — if you were evolving from systems that had more original structure to begin with — then things might speed up, or more students might stick with it.

[01:04:15]

If the evolutionary process indeed, so to speak, runs away with good ideas —

[01:04:20]

you need to have a lot of ideas, a pool of ideas, in order for it to discover one that it can run away with — and students representing individual ideas as well?

[01:04:29]

Yeah, I mean, you could throw a billion Ph.D. students at it. Yeah — the monkeys at typewriters writing Shakespeare. Yeah.

[01:04:36]

Well, I mean, those aren't cumulative, right? That's just random. And part of the point that I'm making is that evolution is cumulative. So if you have a billion monkeys working independently, you don't really get anywhere.

[01:04:49]

But if you have a billion monkeys — and I think Dawkins made this point, there are probably other people, but Dawkins made it very nicely in either The Selfish Gene or The Blind Watchmaker — if there is some sort of fitness function that can drive you towards something, I guess that's Dawkins's point. And my point, which is a variation on that, is that if the evolution is cumulative — the related point — then you can start going faster.

[01:05:12]

Do you think something like the process of evolution is required to build intelligent systems?

[01:05:16]

Not logically, no. All the stuff that evolution did, a good engineer might be able to do. So, for example, evolution made quadrupeds, which distribute the load across a horizontal surface.

[01:05:30]

A good engineer could come up with that idea. I mean, sometimes good engineers come up with ideas by looking at biology. There are lots of ways to get your ideas.

[01:05:38]

Part of what I'm suggesting is we should look at biology a lot more. We should look at the biology of thought and understanding, and the biology by which creatures intuitively reason about physics, or other agents — like, how do dogs reason about people? They're actually pretty good at it. If we could understand — at my college we joked about "dog cognition" — if we could understand cognition well and how it was implemented, that might help us with our A.I.

[01:06:06]

So do you think it's possible that the kind of timescale that evolution took is the kind of timescale that will be needed to build intelligent systems, or can we significantly accelerate that process inside a computer?

[01:06:22]

I mean, I think the way that we accelerate that process is we borrow from biology —

[01:06:27]

not slavishly, but I think we look at how biology solves problems and we say, does that inspire any engineering solutions here? Try to mimic biological systems and then therefore have a shortcut?

[01:06:38]

Yeah — I mean, there's a field called biomimicry, and people do that for, like, materials science all the time.

[01:06:45]

We should be doing the analog of that for A.I., and the analog of that for A.I. is to look at cognitive science, or the cognitive sciences — which is psychology, maybe neuroscience, linguistics, and so forth — and look to those for insight.

[01:07:00]

What do you think is a good test of intelligence in your view?

[01:07:02]

So I don't think there's one good test. In fact, I've tried to organize a movement toward something called a Turing Olympics, and my hope is that François is actually going to take it over — I think he's interested; I just don't have a place for it in my life at this moment. But the notion is that there would be many tests, and not just one, because intelligence is multifaceted. There can't really be a single measure of it, because it isn't a single thing. You know, just at the crudest level,

[01:07:33]

the SAT has a verbal component and a math component, because they're not identical. And Howard Gardner has talked about multiple intelligences, like kinesthetic intelligence and verbal intelligence and so forth. There are a lot of things that go into intelligence, and, you know, people can get good at one or the other. I mean, in some sense, like, every expert has developed a very specific kind of intelligence. And then there are people that are generalists. And, you know, like, I think of myself as a generalist with respect to cognitive science, which doesn't mean I know anything about quantum mechanics, but I know a lot about the different facets of the mind.

[01:08:05]

And, you know, there's a kind of intelligence to thinking about intelligence.

[01:08:09]

I like to think that I have some of that. But social intelligence — I'm just OK; you know, there are people that are much better at that than I am, for sure.

[01:08:17]

But what would be really impressive to you?

[01:08:20]

You know, I think the idea of a Turing Olympics is really interesting, especially with somebody like François running it.

[01:08:26]

But to you in general — not as a benchmark — if you saw an A.I. system being able to accomplish something that would impress the heck out of you, what would that thing be? Would it be natural language conversation? For me personally,

[01:08:43]

I would like to see a kind of comprehension that relates to what you just said. So I wrote a piece in The New Yorker — in, I think, 2015 — right after Eugene Goostman, which was a software package, won a version of the Turing test.

[01:08:59]

And the way that it did this is — well, the way you win the Turing test, such as it is — you know, the Turing test is you fool a person into thinking that a machine is a person — is you are evasive, you pretend to have limitations so you don't have to answer certain questions, and so forth. So this particular system pretended to be a 13-year-old boy from Odessa who didn't understand English and was kind of sarcastic and wouldn't answer your questions and so forth.

[01:09:26]

And so judges got fooled, with very little exposure, into briefly thinking they were talking to a 13-year-old boy.

[01:09:31]

And it ducked all the questions Turing was actually interested in, which is like, how do you make the machine actually intelligent?

[01:09:37]

So that test itself is not that good. And so in The New Yorker I proposed an alternative — alternatives, I guess. And the one that I proposed there was a comprehension test.

[01:09:46]

And I must like Breaking Bad, because I've already given you one Breaking Bad example; in that article I have one as well, which was something like: you should be able to watch an episode of Breaking Bad — or maybe you have to watch the whole series — and answer the question, if Walter White took a hit out on Jesse, why did he do that? So if you could answer kind of arbitrary questions about characters' motivations, I would be really impressed with that.

[01:10:09]

I mean, if you built software to do that, that could watch a film — or there are different versions. And so ultimately I wrote this up, with Praveen Paritosh, in a special issue of a magazine that basically was about the Turing Olympics — there were like 14 tests proposed — and the one that I was pushing was a comprehension challenge. And Praveen, who's at Google, was trying to figure out, like, how we would actually run it, and so we wrote a paper together.

[01:10:32]

And you could have a text version, too, you know; you could have an auditory podcast version; you could have a written version. But the point is that you win at this test if you can do, let's say, at human level or better than humans at answering kind of arbitrary questions: why did this person pick up the stone? What were they thinking when they picked up the stone? Were they trying to knock down the glass? And ideally these wouldn't be multiple choice, either, because multiple choice is pretty easily gamed.

[01:10:57]

So if you could have relatively open-ended questions and you can answer why people are doing this stuff, I would be very impressed. Of course, humans can do this, right?

[01:11:06]

If you watch a well constructed movie and somebody picks up a rock, everybody watching the movie knows why they picked up the rock, right?

[01:11:16]

They all know, oh, my gosh, he's going to, you know, hit this character or whatever. We have an example in the book about when a whole bunch of people say, "I am Spartacus" — you know, this famous scene. The viewers understand, first of all, that everybody — or everybody minus one — has to be lying. They can't all be Spartacus. We have enough common sense knowledge to know they couldn't all have the same name.

[01:11:40]

We know that they're lying, and we can infer why they're lying, right? They're lying to protect someone and to protect things they believe in. You get a machine that can do that — that can say, this is why these guys all got up and said, "I am Spartacus" — and I will sit down and say A.I., as you know it, has really achieved a lot. Thank you. Without cheating in any part of the system? Yeah.

[01:12:00]

I mean, if you do it, there are lots of ways to cheat. Like, you could build a Spartacus machine that works on that film.

[01:12:06]

Like, that's not what I'm talking about. I'm talking about being able to do this with essentially arbitrary films. Sort of, you know, drawn from a lot of films — because it's possible such a system would discover that the number of narrative arcs in film is limited, like the famous thing about the classic seven plots or whatever.

[01:12:23]

I don't care if you want to build that into the system. You know, boy meets girl, boy loses girl, boy finds girl — that's fine. I don't mind having some knowledge built in.

[01:12:31]

OK, good. I mean, you could build it in internally, or you could have your system watch a lot of films — again, if you can do this at all.

[01:12:39]

But with a wide range of films — not just one film in one genre. Though even if you could do it for all Westerns, I'd be reasonably impressed. Yeah.

[01:12:48]

So, in terms of being impressed — just for the fun of it, because you've put so many interesting ideas out there in your book, challenging the community with further steps — is it possible, on the deep learning front, that you're wrong about its limitations? That deep learning will unlock this — that Yann LeCun next year

[01:13:10]

will publish a paper that achieves this comprehension? So do you think that way often, as a scientist — do you consider that your intuition might be wrong, that deep learning could actually run away with it? I'm more worried about rebranding, as a kind of political thing. So, I mean, what's going to happen, I think, is that deep learning is going to start to encompass symbol manipulation. So I think Hinton's just wrong — you know, Hinton says we don't want hybrids. I think people will work towards hybrids, and they will relabel their hybrids as deep learning.

[01:13:41]

We've already seen some of that. So AlphaGo is often described as a deep learning system, but it's more correctly described as a system that has deep learning but also Monte Carlo tree search, which is a classical A.I. technique. And people will start to blur the lines in the way that IBM blurred Watson: first Watson meant this particular system, and then it was just anything that IBM built in their cognitive division.

[01:14:00]

For sure, that's a branding question, and that's a giant mess. But purely — let me ask — purely a single neural network being able to accomplish this? I don't stay up at night worrying that that's going to happen. And I'll just give you two examples. One is, a guy at DeepMind thought he had finally outfoxed me —

[01:14:21]

"Surgey Lord," I think, is his Twitter handle.

[01:14:24]

And he specifically made an example — Marcus said that such and such — and he fed it into GPT-2, which is the A.I. system that is so smart that OpenAI couldn't release it because it would destroy the world, right? You remember that, three months ago. So he typed it in.

[01:14:42]

And my example was something like: a rose is a rose, a tulip is a tulip, a lily is a blank. And he got it to actually do that, which was a little bit impressive.

[01:14:50]

And I wrote back and said, that's impressive — but can I ask you a few questions?

[01:14:54]

I said, was that just one example?

[01:14:56]

Can it do it generally, and can it do it with novel words — which is part of what I was talking about in 1998 when I first raised the example? So, a dax is a dax, right? And he sheepishly wrote back about 20 minutes later, and the answer was, well, it had some problems with those. So, you know, I made some predictions

[01:15:13]

twenty-one years ago that still hold. In the world of computer science, that's amazing, right? Because, you know, there's a thousand or a million times more memory, and, you know, a million times more computation — a million times more operations per second, spread across a cluster.

[01:15:31]

And there have been advances in, you know, replacing sigmoids with other functions and so forth — all kinds of advances. But the fundamental architecture hasn't changed, and the fundamental limits haven't changed, and what I said then is kind of still true. Then here's a second example. I recently had a piece in Wired that's adapted from the book — the book went to press before GPT-2 came out — but we describe this children's story and all the inferences that you make in this story about a boy finding a lost wallet.

[01:16:04]

And for fun, in the Wired piece, we ran it through GPT-2, through something called Talk to Transformer (talktotransformer.com), and your viewers can try this experiment themselves.
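For readers who would rather run this kind of probe locally than through that site, here is a minimal sketch — assuming the open-source GPT-2 weights and the Hugging Face transformers library; the prompts, including the nonsense word "dax," are illustrative stand-ins rather than the exact strings from the exchanges described above:

```python
# Minimal sketch of probing GPT-2 for the kind of generalization at issue:
# does the pattern "an X is an X" carry over to a novel word like "dax"?
# Requires: pip install transformers torch
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(0)  # make the sampled continuations repeatable

prompts = [
    "A rose is a rose. A tulip is a tulip. A lily is a",  # familiar words
    "A rose is a rose. A tulip is a tulip. A dax is a",   # novel nonsense word
]

for prompt in prompts:
    results = generator(
        prompt,
        max_new_tokens=5,        # only a few tokens of continuation are needed
        num_return_sequences=3,  # sample several continuations per prompt
        do_sample=True,
    )
    print(prompt)
    for result in results:
        # Show only the model's continuation, not the echoed prompt.
        print("  ->", result["generated_text"][len(prompt):].strip())
```

The interesting comparison is whether the continuations respect the "an X is an X" pattern for the novel word, not just for the familiar ones.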

[01:16:14]

Go to the Wired piece — it has the link and it has the story. And the system made perfectly fluent text that was totally inconsistent with the conceptual underpinnings of the story. Right? This is what, you know, again, I predicted in 1998; for that matter, Chomsky and Miller made the same prediction in 1963.

[01:16:33]

I was just updating their claim for a slightly newer technology. So those particular architectures that don't have any built-in knowledge — they're basically just a bunch of layers doing correlational stuff.

[01:16:45]

They're not going to solve these problems. So, 20 years ago you said the emperor has no clothes; today the emperor has no clothes — the lighting's better, though. The lighting is better.

[01:16:55]

And I think you yourself are also — I mean, we have found out some things to do with naked emperors. I mean, it's not that this stuff is worthless. I mean, they're not really naked. It's more like they're in their briefs, and everybody thinks that. And so, like, I mean, they are great at speech recognition. But the problems that I said were hard — because I didn't literally say the emperor has no clothes; I said, this is a set of problems that humans are really good at.

[01:17:18]

And it wasn't couched as A.I.; it was just cognitive science.

[01:17:21]

But I said, if you want to build a neural model of how humans do certain class of things, you're going to have to change the architecture. And I stand by those claims.

[01:17:30]

So — and I think people should understand this — you're quite entertaining in your cynicism, but you're also very optimistic and a dreamer about the future of A.I., too.

[01:17:40]

So you're both. It's just — there's a famous saying about, you know, people overselling technology in the short run and underselling it in the long run.

[01:17:50]

And so I actually end the book — or Ernie Davis and I end our book — with an optimistic chapter, which kind of killed Ernie, because he's even more pessimistic than I am. He describes me as a contrarian and him as a pessimist.

[01:18:04]

But I persuaded him that we should end the book with a look at what would happen if A.I. really did incorporate, for example, the common sense reasoning and the nativism and so forth — the things that we counsel for.

[01:18:16]

And we wrote it. It's an optimistic chapter, saying that A.I., suitably reconstructed so that we could trust it — which we can't now — could really be world-changing.

[01:18:26]

So, on that point: if you look at the future trajectories of A.I., people have worries about negative effects of A.I., whether at the large existential scale or at the smaller, shorter-term scale of negative impact on society. So you write about trustworthy A.I. How can we build A.I. systems that align with our values, that make for a better world, that we can interact with, that we can trust?

[01:18:51]

First thing we have to do is to replace deep learning with deep understanding. So you can't have alignment with a system that traffics only in correlations and doesn't understand concepts like bottles or harm.

[01:19:04]

So, you know, Asimov talked about these famous laws, and the first one was, first, do no harm. And you can quibble about the details of Asimov's laws.

[01:19:13]

But if we're going to build real robots in the real world, we have to have something like that, and that means we have to program in a notion that's at least something like harm. That means we have to have these more abstract ideas, which deep learning is not particularly good at, and they have to be in the mix somewhere. I mean, you could do statistical analysis about probabilities of given harms or whatever, but you have to know what a harm is, in the same way that you have to understand that a bottle isn't just a collection of pixels.

[01:19:37]

And also — you're implying that you need to also be able to communicate that to humans, so that the A.I. systems would be able to prove to humans that they understand, that they know what harm means.

[01:19:52]

I might run it in the reverse direction, but roughly speaking, I agree with you. So we probably need to have committees of wise people, ethicists and so forth.

[01:20:02]

think about what these rules ought to be, and we shouldn't just leave it to software engineers.

[01:20:06]

It shouldn't just be software engineers, and it shouldn't just be, you know, people who own large mega-corporations that are good at technology. Ethicists and so forth should be involved, but —

[01:20:18]

You know, there should be some assembly of wise people, as I was putting it, that tries to figure out what the rules ought to be and those have to get translated into code.

[01:20:29]

You can argue whether it's code or neural networks or something else, but they have to be translated into something that machines can work with, and that means there has to be a way of doing the translation. And right now we don't have a way. So let's say you and I were the committee, and we decided that Asimov's first law is actually right — and let's say it's not just two white guys, which would be kind of unfortunate; say we have a broad and representative sample of the world, or however we want to do this.

[01:20:54]

And the committee decides, eventually, OK, Asimov's first law is actually pretty good. There are these exceptions to it; we want to program in these exceptions. But let's start with just the first one, and then we'll get to the exceptions. The first one is: first, do no harm. Well, somebody has to now actually turn that into a computer program or a neural network or something. And one way of taking the whole book, the whole argument that I'm making, is that we just don't know how to do that yet, and we're fooling ourselves

[01:21:20]

if we think that we can build trustworthy A.I. when we can't even specify it in any kind of — you know, we can't do it in Python and we can't do it in TensorFlow. We're fooling ourselves in thinking that we can make trustworthy A.I. if we can't translate harm into something that we can execute. And if we can't, then we should be thinking really hard: how could we ever do such a thing? Because if we're going to use A.I. in the ways that we want to use it — to make job interviews or to do surveillance, not that I personally want to do that — I mean, we're going to use A.I. in ways that have practical impact on people's lives, or in medicine,

[01:21:54]

It's got to be able to understand stuff like that.

[01:21:57]

So one of the things your book highlights is that, you know, a lot of people in the deep learning community, but also the general public, politicians — just people in all groups and walks of life — have different levels of misunderstanding of A.I.

[01:22:14]

So when you talk about committees, what's your advice to our society?

[01:22:22]

How do we grow, how do we learn about A.I., such that such committees could emerge, where large groups of people could have a productive discourse about how to build successful A.I. systems?

[01:22:34]

Part of the reason we wrote the book was to try to inform those committees. And part of the reason we wrote the book was to inspire a future generation of students to solve what we think are the important problems. So a lot of the book is trying to pinpoint what we think are the hard problems, where we think effort would most be rewarded. And part of it is to try to train people

[01:22:54]

who talk about A.I. but aren't experts in the field to understand what's realistic and what's not. One of my favorite parts of the book is the six questions you should ask any time you read a media account. So, like, number one is: if somebody talks about something, look for the demo. If there's no demo, don't believe it — like, a demo that you can try. If you can't try it at home, maybe it doesn't really work that well yet.

[01:23:15]

So — we don't have this example in the book, but if Sundar Pichai says we have this thing that allows it to sound like human beings in conversation, you should ask, can I try it? And you should ask how general it is. And it turns out, at that time — I'm alluding to Google Duplex — when it was announced, it only worked on calling hairdressers and restaurants and finding opening hours. That's not very general.

[01:23:37]

That's narrow. And I'm not going to ask your thoughts about Sophia. Yeah, I understand. That's a really good question to ask of any kind of hyped-up idea.

[01:23:47]

She has very good material written for her, but she doesn't understand the things that she's saying.

[01:23:52]

So a while ago you wrote a book on the science of learning, which I think is fascinating, built around a case study of learning to play guitar — it's called Guitar Zero. I love guitar myself; I've been playing my whole life. So let me ask a very important question.

[01:24:06]

What is your favorite song, rock song to listen to or try to play?

[01:24:12]

Well, those would be different, but I'll say that my favorite rock song to listen to is probably All Along the Watchtower, the Jimi Hendrix version. The Jimi Hendrix version feels like magic to me. I've actually recently learned that song. I've been trying to put it on YouTube myself — singing is the scary part.

[01:24:27]

If you could party with a rock star for a weekend, living or dead, who would you choose?

[01:24:34]

And pick their mind — it's not necessarily about the party. Thanks for the clarification. I guess John Lennon. He's such an intriguing person, and —

[01:24:44]

I mean, a troubled person, but an intriguing one. Beautiful. Well, Imagine is one of my favorite songs. Also one of my favorite songs. That's a beautiful way to end it. Gary, thank you so much for talking today.

[01:24:56]

Thanks so much for having me.