[00:00:00]

The following is a conversation with Jürgen Schmidhuber. He's the co-director of a Swiss AI lab and a co-creator of long short-term memory networks. LSTMs are used in billions of devices today for speech recognition, translation and much more.

[00:00:17]

Over 30 years, he has proposed a lot of interesting, out-of-the-box ideas in meta-learning, adversarial networks, computer vision and even a formal theory of, quote, creativity, curiosity and fun. This conversation is part of the MIT course on artificial general intelligence and the Artificial Intelligence Podcast.

[00:00:38]

If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now here's my conversation with Jürgen Schmidhuber.

[00:01:09]

Early on, you dreamed of a system that self-improves recursively. When was that dream born? When I was a baby. No, that's not true. When I was a teenager. And what was the catalyst for that birth? What was the thing that first inspired you? When I was a boy, I was thinking about what to do in my life, and then I thought the most exciting thing is to solve the riddle of the universe. And that means you have to become a physicist. However,

[00:01:47]

then I realized that there's something even grander: you can try to build a machine that isn't really a machine any longer, that learns to become a much better physicist than I could ever hope to be. And that's how I thought maybe I can multiply my tiny little bit of creativity into infinity.

[00:02:10]

But ultimately that creativity will be multiplied to understand the universe around us. That's the curiosity for that mystery that drove you.

[00:02:21]

Yes. So if you can build a machine that learns to solve more and more complex problems, a more and more general problem solver, then you basically have solved all the problems, at least all the solvable problems.

[00:02:41]

So how do you think... what does the mechanism for that kind of general solver look like? Obviously, we don't quite yet have one, or know how to build one. We have ideas.

[00:02:53]

And you have had throughout your career several ideas about it.

[00:02:56]

So how do you think about that mechanism? So in the 80s, I thought about how to build this machine that learns to solve all these problems that I cannot solve myself.

[00:03:10]

And I thought it is clear it has to be a machine that not only learns to solve this problem here and this problem here, but it also has to learn to improve the learning algorithm itself.

[00:03:24]

Right.

[00:03:25]

So it has to have the learning algorithm in a representation that allows it to inspect it and modify it, so that it can come up with a better learning algorithm. So I called that meta-learning: learning to learn, and recursive self-improvement.

[00:03:43]

That is really the pinnacle of that, where you not only learn how to improve on this problem and on that one, but you also improve the way the machine improves, and you also improve the way it improves the way it improves itself.

[00:04:01]

And that was my 1987 diploma thesis, which was all about that hierarchy of meta-learners that have no computational limits except for the well-known limits that Gödel identified in 1931, and for the limits of physics. In recent years, meta-learning has gained popularity in a specific kind of form.

[00:04:28]

You've talked about how that's not really meta-learning with neural networks.

[00:04:33]

That's more basic transfer learning. Can you talk about the difference between the big, general meta-learning and the more narrow sense of meta-learning the way it's used today, the way it's talked about today?

[00:04:46]

Let's take the example of a deep neural network that has learned to classify images, and maybe you have trained that network on 100 different databases of images. And now a new database comes along, and you want to quickly learn the new thing as well.

[00:05:09]

So one simple way of doing that is you take the network, which already knows 100 types of databases, and then you just take the top layer of that and retrain it using the new labeled data that you have in the new image database. And then it turns out that it can really quickly learn the new task, one-shot basically, because from the first 100 datasets it already has learned so much about computer vision that it can reuse that.

[00:05:45]

And that is then almost good enough to solve the new task, except you need a little bit of adjustment on the top.

[00:05:54]

So that is transfer learning, and it has been done in principle for many decades; people have done similar things for decades.
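To make the retraining recipe above concrete, here is a minimal sketch in PyTorch: freeze a pretrained backbone and retrain only the top layer. The choice of backbone and the class count are illustrative assumptions, not anything from the conversation.

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pretrained on earlier image databases (here: ImageNet).
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze everything the network already knows.
for param in model.parameters():
    param.requires_grad = False

# Replace the top layer for the new database and retrain only that.
num_new_classes = 10  # hypothetical new image database
model.fc = nn.Linear(model.fc.in_features, num_new_classes)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()   # gradients flow only into the new top layer
    optimizer.step()
    return loss.item()
```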

[00:06:04]

Meta-learning, true meta-learning, is about having the learning algorithm itself open to introspection by the system that is using it, and also open to modification, such that the learning system has an opportunity to modify any part of the learning algorithm, and then evaluate the consequences of that modification, and then learn from that to create a better learning algorithm, and so on recursively.
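The recursive idea is hard to compress into running code, but the outermost loop can be sketched. Below is a toy generate-and-test meta-learner in Python; the "learning algorithm" being modified is just a single learning rate, a deliberately trivial stand-in, not the Gödel-machine construction.

```python
import copy
import random

class Learner:
    """A toy learner whose whole 'learning algorithm' is one parameter."""
    def __init__(self, lr=0.1):
        self.params = {"lr": lr}

    def learn(self, target):
        # Toy task: approach `target` from 0 in 20 steps; higher is better.
        x = 0.0
        for _ in range(20):
            x += self.params["lr"] * (target - x)
        return -abs(target - x)

def meta_learn(learner, tasks, rounds=200):
    best = sum(learner.learn(t) for t in tasks)
    for _ in range(rounds):
        candidate = copy.deepcopy(learner)
        # Modify part of the learning algorithm, then evaluate consequences.
        candidate.params["lr"] *= random.uniform(0.5, 2.0)
        score = sum(candidate.learn(t) for t in tasks)
        if score > best:  # keep only modifications that learn better
            learner, best = candidate, score
    return learner

print(meta_learn(Learner(), tasks=[1.0, 3.0, -2.0]).params)
```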

[00:06:41]

So that's a very different animal where you are opening the space of possible learning algorithms to the learning system itself. Right.

[00:06:52]

So, like in the 2004 paper, you described Gödel machines, programs that rewrite themselves? Yeah, right. Philosophically, and even in your paper mathematically, these are really compelling ideas.

[00:07:05]

But practically, do you see these self-referential programs being successful in the near term, having an impact that sort of demonstrates to the world that this direction is a good one to pursue?

[00:07:24]

Yes. We had these two different types of fundamental research on how to build a universal problem solver: one basically exploiting

[00:07:37]

proof search and things like that, which you need to come up with asymptotically optimal, theoretically optimal self-improvers and problem solvers. However, one has to admit that through this proof search comes an additive constant, an overhead, an additive overhead that vanishes in comparison to what you have to do to solve large problems. However, for many of the small problems that we want to solve in our everyday life, we cannot ignore this constant overhead.

[00:08:19]

And that's why we also have been doing other things, non-universal things, such as recurrent neural networks, which are trained by gradient descent, and local search techniques, which aren't universal at all, which aren't provably optimal at all,

[00:08:37]

like the other stuff that we did, but which are much more practical, as long as we only want to solve the small problems that we are typically trying to solve in this environment here. These universal problem solvers, like the Gödel machine, but also Marcus Hutter's fastest way of solving all possible problems, which he developed around 2002 in my lab,

[00:09:04]

they are associated with these costs and overheads of proof search, which guarantees that the thing that you're doing is optimal.

[00:09:13]

For example, there is this fastest way of solving all problems with a computable solution, which is due to Marcus Hutter.

[00:09:24]

And to explain what's going on there, let's take traveling salesman problems. With traveling salesman problems, you have a number of cities, and you're trying to find the shortest path through all these cities without visiting any city twice.

[00:09:45]

And nobody knows what the fastest way of solving traveling salesman problems, TSPs, is.

[00:09:54]

But let's assume there is a method of solving them within n^5 operations, where n is the number of cities.

[00:10:06]

Then the universal method of Marcus Hutter is going to solve the same traveling salesman problem also within n^5 steps, plus O(1), plus a constant number of steps that you need for the proof searcher, which you need to show that this particular class of problems, the traveling salesman problems, can be solved within a certain time bound, within order n^5 steps, basically.

[00:10:40]

And this additive constant doesn't depend on n, which means that as n is getting larger and larger, as you have more and more cities, the constant overhead pales in comparison. And that means that almost all large problems are solved in the best possible way already today. We already have a universal problem solver like that.
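In symbols, the claim is roughly the following, where c_1 n^5 stands for the cost of the hypothetical best TSP method and c_2 for the additive proof-search overhead (the constants are illustrative, not from the conversation):

```latex
\[
T_{\text{universal}}(n) \;\le\; c_1 n^{5} + c_2,
\qquad
\lim_{n \to \infty} \frac{c_2}{c_1 n^{5}} = 0 .
\]
% c_2 is independent of n, so for large n the overhead becomes
% negligible -- but it may be astronomical for small n.
```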

[00:11:06]

However, it's not practical, because the constant overhead is so large that it doesn't pay off for the smaller kinds of problems that we want to solve in this little biosphere.

[00:11:20]

By the way, when you say small, you're talking about things that fall within the constraints of our computational systems, but they can seem quite large to us mere humans, right?

[00:11:30]

That's right. Yeah. So they seem large, and even unsolvable in a practical sense today, but they are still small compared to almost all possible problems, because almost all possible problems are large problems, which are much larger than any constant. Do you find it useful, as a person who has dreamed of creating a general learning system, has worked on creating one, and has had a lot of interesting ideas there, to think about P versus NP, this formalization of how hard problems are, how they scale, this kind of worst-case analysis type of thinking?

[00:12:11]

Do you find that useful, or is it just a set of mathematical techniques to give you intuition about what's good and bad?

[00:12:21]

So P versus NP, that's super interesting from a theoretical point of view. And in fact, as you are thinking about that problem, you can also get inspiration for better practical problem solvers.

[00:12:37]

On the other hand, we have to admit that at the moment, the best practical problem solvers for all kinds of problems that we are now solving through what is called AI, they are not of the kind that is inspired by these questions.

[00:12:54]

You know, we are using general-purpose computers, such as recurrent neural networks.

[00:13:00]

But we have a search technique, which is just local search, gradient descent, to try to find a program that is running on these networks such that it can solve some interesting problems, such as speech recognition or machine translation and something like that.

[00:13:19]

And there is very little theory behind the best solutions that we have at the moment that can do that.

[00:13:26]

Do you think that needs to change? Do you think that will change? Or can we

[00:13:30]

create general intelligence systems without ever really proving that the system is intelligent in some kind of mathematical way, solving machine translation perfectly or something like that, within some kind of syntactic definition of a language? Or can we just be super impressed by the thing working extremely well, and that's sufficient?

[00:13:50]

There's an old saying, and I don't know who brought it up first, which says: there's nothing more practical than a good theory.

[00:14:02]

Yeah. And a good theory of problem solving under limited resources, like here in this universe or on this little planet, has to take into account these limited resources. And so probably that is lacking:

[00:14:23]

a theory which is related to the one we already have, this asymptotically optimal problem solver theory, which tells us what we need in addition to come up with a practically optimal problem solver.

[00:14:38]

So I believe we will have something like that, and maybe just a few little tiny twists are necessary to change what we already have, to come up with that as well.

[00:14:52]

As long as we don't have that, we admit that we are taking suboptimal ways, such as long short-term memory networks,

[00:15:03]

equipped with local search techniques, and we are happy that it works better than any competing method. But that doesn't mean that we think we are done.

[00:15:16]

You've said that an AGI system will ultimately be a simple one. A general intelligence system will ultimately be a simple one; maybe a few lines of pseudocode would be able to describe it. Can you

[00:15:29]

talk through your intuition behind this idea, why you feel that at its core intelligence is a simple algorithm? Experience tells us that the stuff that works best is really simple. So the asymptotically optimal ways of solving problems, if you look at them, are just a few lines of code. It's really true: although they have these amazing properties, it's just a few lines of code. Then the most promising and most useful practical things maybe don't have this proof of optimality associated with them.

[00:16:13]

However, they are also just a few lines of code. The most successful neural networks, you can write them down in five lines of pseudocode.

[00:16:24]

That's a beautiful, almost poetic idea. But what you're describing there is that the lines of pseudocode are sitting on top of layers and layers of abstractions, in a sense.

[00:16:38]

So you're saying at the very top it'll be a beautifully written sort of algorithm, but do you think that there's many layers of abstraction we have to first learn to construct?

[00:16:52]

Yeah, of course. We are building on all these great abstractions that people have invented over the millennia, such as matrix multiplications and real numbers and

[00:17:10]

basic arithmetic and calculus and derivatives of error functions and stuff like that.

[00:17:21]

So without that language, that greatly simplifies our way of thinking about these problems, we couldn't do anything.

[00:17:30]

So in that sense, as always, we are standing on the shoulders of the giants who, in the past, simplified the problem of problem solving so much that now we have a chance to do the final step. So the final step will be a simple one.

[00:17:49]

If we take a step back through all of human civilization and just the universe in general, how do you think about evolution? What if creating a universe is required to achieve this final step? What if going through the very painful and inefficient process of evolution is needed to come up with the set of abstractions that ultimately lead to intelligence?

[00:18:13]

Do you think there's a shortcut, or do you think we have to create something like our universe in order to create something like human-level intelligence?

[00:18:25]

So far, the only example we have is this one, this universe.

[00:18:30]

And maybe not, but we are part of this whole process. So apparently, it might be the case that the code that runs the universe is really, really simple. Everything points to that possibility, because gravity and other basic forces are really simple laws that can be easily described, also in just a few lines of code, basically.

[00:19:02]

And then there are these other events, the apparently random events in the history of the universe, which, as far as we know at the moment, don't have a compact code.

[00:19:16]

But who knows, maybe somebody in the near future is going to figure out the pseudo-random generator which is computing whether the measurement of that spin-up-or-down thing here is going to be positive or negative. Underlying quantum mechanics. Yes. Do you ultimately think quantum mechanics is a pseudo-random number generator? So, all deterministic, there's no randomness in our universe?

[00:19:46]

Does God play dice? So, a couple of years ago, a famous quantum physicist, Anton Zeilinger, wrote an essay in Nature, and it started more or less like that: one of the fundamental insights of the 20th century was that the universe is fundamentally random on the quantum level, and that whenever you measure spin up or down or something like that, a new bit of information enters the history of the universe. And while I was reading that, I was already typing the response, and they had to publish it because I was right: there is no evidence, no physical evidence for that.

[00:20:43]

So there's an alternative explanation where everything that we consider random is actually pseudo-random, such as the decimal expansion of pi, 3.141 and so on, which looks random but isn't, because every sequence of three digits appears roughly one in a thousand times, and every sequence of five digits appears roughly one in a hundred thousand times, just what you would expect if it were truly random. But there's a very short algorithm, a short program, that computes all of that.
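That statistical claim is easy to check empirically. A small sketch using the mpmath library; the 100,000-digit cutoff is an arbitrary choice for illustration:

```python
from collections import Counter
from mpmath import mp

mp.dps = 100_002                      # ask mpmath for ~100k digits of pi
digits = str(mp.pi).replace(".", "")[:100_000]

# Count every overlapping 3-digit block; each should occur ~once per 1000.
counts = Counter(digits[i:i + 3] for i in range(len(digits) - 2))
print(min(counts.values()), max(counts.values()))  # both near the expected ~100
```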

[00:21:27]

So it's extremely compressible, and who knows, maybe tomorrow some grad student at CERN goes back over all these data points, beta decay and whatever, and figures out: oh, it's the second billion digits of pi, or something like that. We don't have any fundamental reason at the moment to believe that this is truly random and not just a deterministic video game. If it were a deterministic video game, it would be much more beautiful, because beauty is simplicity, and many of the basic laws of the universe, like gravity and the other basic forces, are very simple.

[00:22:09]

And so very short programs can explain what these are doing.

[00:22:15]

And it would be awful and ugly otherwise.

[00:22:19]

The universe would be ugly. The history of the universe would be ugly if, for the extra things, the random, seemingly random data points that we get all the time, we really needed a huge number of extra bits to describe all these extra bits of information.

[00:22:40]

So as long as we don't have evidence that there is no short program that computes the entire history of the entire universe, we are, as scientists, compelled to keep looking for that shortest program.

[00:22:59]

Your intuition says there exists a short program that can backtrack to the creation of the universe.

[00:23:07]

Yes, a short program that computes the entire history of the universe since the creation, including all the entanglement things and all the spin-up-and-down measurements that have been taking place since 13.8 billion years ago. And so, yeah, we don't have a proof that it is random.

[00:23:31]

We don't have a proof that it is compressible to a short program. But as long as we don't have that proof, we are obliged as scientists to keep looking for that simple explanation. Absolutely. And you've said:

[00:23:44]

simplicity is beautiful, or beauty is simple; either one works. But you also work on curiosity, discovery, you know, the romantic notion of randomness, of serendipity, of being surprised by things that are around you. In our kind of poetic notion of reality, we think as humans require randomness. So you don't find randomness beautiful? You find simple determinism beautiful? Yeah. OK, so why? Because the explanation becomes shorter. A universe

[00:24:30]

that is compressible to a short program is much more elegant and much more beautiful than another one, which needs an almost infinite number of bits to be described, as far as we know.

[00:24:47]

Many things that are happening in this universe are really simple in terms of short programs that compute gravity and the interaction between elementary particles and so on. So all of that seems to be very, very simple. Every electron seems to reuse the same subprogram all the time as it is interacting with other elementary particles.

[00:25:14]

If we now required an extra oracle injecting new bits of information all the time for these extra things which are currently not understood, such as beta decay, then the whole description length of the data that we can observe, of the history of the universe, would become much longer, and therefore uglier. Again, simplicity is elegant and beautiful.

[00:25:54]

All the history of science is a history of compression progress.

[00:25:58]

Yeah.

[00:25:58]

So you've described, sort of, as we build up abstractions, you talk about the idea of compression. How do you see this:

[00:26:10]

The history of science, the history of humanity, our civilization and life on Earth as some kind of path towards greater and greater compression? What do you mean by that?

[00:26:20]

How do you think that indeed the history of science is a history of compression progress? What does that mean? Hundreds of years ago, there was an astronomer whose name was Kepler, and he looked at the data points that he got by watching planets move.

[00:26:41]

And then he had all these data points, and suddenly it turned out that he could greatly compress the data by

[00:26:49]

predicting it through an ellipse law. So it turns out that all these data points are more or less on ellipses around the sun. And another guy came along whose name was Newton, and before him Hooke, and they said: the same thing that is making these planets move like that is what makes the apples fall down.

[00:27:17]

And it also holds for stones and for.

[00:27:24]

all kinds of other objects. And suddenly, many, many of these observations became much more compressible, because as long as you can predict the next thing, given what you have seen so far, you can compress it; you don't have to store that data extra. This is called predictive coding.
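A minimal sketch of predictive coding in Python: once a model predicts each value from the previous ones, only the (mostly zero) residuals need to be stored, and decoding loses nothing. The data and the constant-slope predictor are toy assumptions.

```python
values = [2, 4, 6, 8, 10, 12, 15, 16, 18]   # nearly linear observations

def predict(prev, before):                  # toy model: constant slope
    return prev + (prev - before)

# Encode: keep the first two values, then store only prediction errors.
residuals = values[:2] + [
    v - predict(values[i - 1], values[i - 2])
    for i, v in enumerate(values) if i >= 2
]
print(residuals)  # [2, 4, 0, 0, 0, 0, 1, -2, 1] -- highly compressible

# Decode: rerun the predictor and add the residuals back.
decoded = residuals[:2]
for r in residuals[2:]:
    decoded.append(predict(decoded[-1], decoded[-2]) + r)
assert decoded == values
```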

[00:27:45]

And then there was still something wrong with that theory of the universe: there were deviations from the predictions of the theory. And 300 years later, another guy came along whose name was Einstein, and he was able to explain away all these deviations from the predictions of the old theory through a new theory, which was called the general theory of relativity, which at first glance looks a little bit more complicated.

[00:28:16]

And you have to warp space and time, but you can phrase it within one single sentence, which is:

[00:28:23]

No matter how fast you accelerate and how fast or hard you decelerate, and no matter what is the gravity in your local framework, light speed always looks the same.

[00:28:37]

And from that, you can calculate all the consequences.

[00:28:40]

So it's a very simple thing, and it allows you to further compress all the observations, because suddenly there are hardly any deviations any longer that you can measure from the predictions of this new theory.

[00:28:55]

So all of science is a history of compression progress.

[00:29:00]

You will never arrive immediately at the shortest explanation of the data, but you're making progress, and whenever you are making progress, you have an insight. You see: first I needed so many bits of information to describe the data, to describe my falling apples, my video of falling apples; so many pixels have to be stored. But then suddenly I realize, no, there is a very simple way of predicting the third frame in the video from the first two. And maybe not every little detail can be predicted, but more or less, most of these blobs that are coming down, they accelerate in the same way, which means that I can greatly compress the video. And the amount of compression

[00:29:47]

progress, that is the depth of the insight that you have at that moment. That's the fun that you have, the scientific fun, the fun in that discovery. And we can build artificial systems that do the same thing: they measure the depth of their insights as they are looking at the data, which is coming in through their own experiments, and we give them a reward, an intrinsic reward, in proportion to this depth of insight.

[00:30:17]

And since they are trying to maximize the rewards they get, they are certainly motivated to come up with new action sequences, with new experiments, that have the property that the data coming in as a consequence of these experiments contains a pattern they can learn, which they hadn't seen yet before.
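A toy sketch of that intrinsic reward in Python: the reward is the drop in prediction error caused by learning from the data an experiment produced, so already-understood data quickly becomes boring. The running-mean "world model" is a deliberately trivial assumption, not the architecture from the conversation.

```python
class MeanModel:
    """Toy world model: predicts every observation as the running mean."""
    def __init__(self):
        self.mean, self.n = 0.0, 0

    def prediction_error(self, data):
        return sum((x - self.mean) ** 2 for x in data) / len(data)

    def learn(self, data):
        for x in data:
            self.n += 1
            self.mean += (x - self.mean) / self.n

def intrinsic_reward(model, experiment_data):
    error_before = model.prediction_error(experiment_data)
    model.learn(experiment_data)                   # update the world model
    error_after = model.prediction_error(experiment_data)
    return error_before - error_after              # depth of insight

m = MeanModel()
print(intrinsic_reward(m, [5.0, 5.1, 4.9]))  # large: a new pattern was found
print(intrinsic_reward(m, [5.0, 5.1, 4.9]))  # near zero: already boring
```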

[00:30:44]

So there's an idea of PowerPlay. You've described training a general problem solver in this kind of way of looking for the unsolved problems. Can you describe that idea a little further? Yeah, it's another very simple idea.

[00:30:58]

So normally, what you do in computer science: you have some guy who gives you a problem, and then there is a huge search space of potential solution candidates, and you somehow try them out, and you have more or less sophisticated ways of moving around in that search space until you finally find a solution which you consider satisfactory.

[00:31:28]

That's what most of computer science is about. PowerPlay just goes one little step further and says: let's not only search for solutions to a given problem, but let's search through pairs of problems and their solutions, where the system itself has the opportunity to phrase its own problem. So we are looking simultaneously at pairs of new problems and modifications of the problem solver that is supposed to generate a solution to that new problem.

[00:32:06]

And this additional

[00:32:10]

degree of freedom allows us to build curious systems that are like scientists, in the sense that they not only try to find answers to existing questions; no, they are also free to pose their own questions. So if you want to build an artificial scientist, you have to give it that freedom, and PowerPlay is exactly doing that.
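The search pattern can be sketched in a deliberately trivial Python domain: "problems" are input-output pairs, a "solver" is a lookup table, and a "modification" teaches one new fact. Only the PowerPlay-style loop matters here, always adding the simplest new problem whose solution doesn't break any old one; everything else is an illustrative assumption.

```python
def solves(solver, problem):
    x, y = problem
    return solver.get(x) == y

def power_play(candidate_problems, steps=4):
    solver, solved = {}, []
    for _ in range(steps):
        # Simplest-first: the easiest problem beyond current knowledge.
        for problem in sorted(candidate_problems, key=lambda p: abs(p[0])):
            if solves(solver, problem):
                continue                         # not new, skip it
            new_solver = dict(solver)
            new_solver[problem[0]] = problem[1]  # self-invented modification
            if all(solves(new_solver, p) for p in solved):  # no forgetting
                solver, solved = new_solver, solved + [problem]
                break
    return solved

print(power_play([(3, 9), (1, 1), (2, 4), (5, 25)]))
# [(1, 1), (2, 4), (3, 9), (5, 25)] -- easiest new problem added each round
```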

[00:32:35]

So that's a dimension of freedom that's important to have. But how hard do you think that is? How multidimensional and difficult is the space of coming up with your own questions? It's one of the things that as human beings we consider to be the thing that makes us special, the intelligence that makes us special: that brilliant insight that can create something totally new. Yes.

[00:33:04]

So now let's look at the extreme case. Let's look at the set of all possible problems that you can formally describe, which is infinite. Which should be the next problem that a scientist or a PowerPlay is going to solve? Well, it should be the easiest problem that goes beyond what you already know. So it should be the simplest problem that the current problem solver that you have, which can already solve 100 problems, cannot solve yet just by generalizing.

[00:33:46]

So it has to be new. So it has to require a modification of the problem solver such that the new problem solver can solve this new thing, but the old problem solver cannot do it. And in addition to that, we have to make sure that the problem solver doesn't forget any of the previous solutions. Right.

[00:34:07]

And so, by definition, PowerPlay is always searching this set of pairs of new problems and problem-solver modifications for the combination that minimizes the time to achieve these criteria.

[00:34:23]

So it's always trying to find the problem which is easiest to add to the repertoire.

[00:34:30]

So just like grad students and academics and researchers can spend their whole career in a local minimum, stuck trying to come up with interesting questions but ultimately doing very little.

[00:34:43]

Do you think it's easy, in this approach of looking for the simplest unsolvable problem, to get stuck in a local minimum, never really discovering anything new, never really jumping outside of the problems everyone has solved, in a genuinely creative way?

[00:35:03]

No, because that's the nature of PowerPlay: it's always trying to break its current generalization abilities by coming up with a new problem which is beyond the current horizon, just shifting the horizon of knowledge a little bit out there, breaking the existing rules, such that the new thing becomes solvable but wasn't solvable by the old thing.

[00:35:29]

It's like adding a new axiom, like what Gödel did when he came up with these new sentences, new theorems that didn't have a proof in the formal system, which means you can add them to the repertoire, hoping that they are not going to damage the consistency of the whole thing.

[00:35:51]

So in the paper with the amazing title, Formal Theory of Creativity, Fun, and Intrinsic Motivation, you talk about discovery as intrinsic reward. So if you view humans as intelligent agents, what do you think is the purpose and meaning of life for us humans?

[00:36:12]

You've talked about this discovery; do you see humans as instances of PowerPlay agents?

[00:36:20]

Yeah.

[00:36:20]

So humans are curious and that means they behave like scientists, not only the official scientists, but even the babies behave like scientists and they play around with their toys to figure out how the world works and how it is responding to their actions.

[00:36:39]

And that's how they learn about gravity and everything.

[00:36:43]

And yeah, in 1990 we had the first systems like that, which would just try to play around with the environment and come up with situations that go beyond what they knew at that time, and then get a reward for creating these situations, and then become more general problem solvers, able to understand more of the world.

[00:37:04]

So, yeah, I think in principle that curiosity strategy, or more sophisticated versions of what I just described, they are what we have built in as well, because evolution discovered that's a good way of exploring the unknown world. And a guy who explores the unknown world has a higher chance of solving problems that he needs to survive in this world. On the other hand, those guys who were too curious, they were weeded out as well.

[00:37:41]

So you have to find this trade-off, and evolution found a certain one. Apparently, in our society there is a certain percentage of extremely explorative guys, and it doesn't matter if they die, because many of the others are more conservative. And so, yeah, it would be surprising to me if that principle of artificial curiosity weren't present, in almost exactly the same form, here in our brains. So you're a bit of a musician and an artist.

[00:38:18]

So continuing on this topic of creativity: what do you think is the role of creativity in intelligence? You've kind of implied that it's essential for intelligence, if you think of intelligence as a problem-solving system, its ability to solve problems. But do you think it's essential, this idea of creativity?

[00:38:44]

We never had a program called creativity or something; it's just a side effect of what our problem solvers do. They are searching a space of candidates, of solution candidates, until they hopefully find a solution to a given problem. But then there are these two types of creativity, and both of them are now present in our machines.

[00:39:10]

The first one has been around for a long time, which is: a human gives a problem to a machine, and the machine tries to find a solution to that.

[00:39:19]

And this has been happening for many decades and for many decades. Machines have found creative solutions to interesting problems where humans were not aware of these particularly creative solutions, but then appreciated that the machine found that.

[00:39:37]

What I just mentioned I would call the applied creativity, like applied art, where somebody tells you: now make a nice picture of this pope, and you will get money for that. OK, so here is the artist, and he makes a convincing picture of the pope, and the pope likes it and gives him the money.

[00:40:01]

And then there is the pure creativity, which is more like the PowerPlay and artificial curiosity thing, where you have the freedom to select your own problem, like a scientist who defines his own question to study.

[00:40:19]

So that is the pure creativity, if you will, as opposed to the applied creativity, which serves another. In that distinction there are almost echoes of narrow AI versus general AI.

[00:40:35]

So this kind of constrained painting of a pope seems like the approaches of what people are calling narrow AI, and pure creativity seems to be... maybe I'm just biased as a human, but it seems to be an essential element of human-level intelligence. Is that what you're implying? To a degree. If you zoom back a little bit and you just look at a general problem-solving machine which is trying to solve arbitrary problems, then this machine will figure out in the course of solving problems that it's good to be curious.

[00:41:16]

So all of what I said just now about this pre-wired curiosity and the will to invent new problems that the system doesn't know how to solve yet should be just a byproduct of the general search. However, apparently evolution has built it into us because it turned out to be so successful: a pre-wiring, a bias, a very successful exploratory bias that we are born with.

[00:41:49]

And you've also said that consciousness, in the same kind of way, may be a byproduct of problem solving. Do you find it an interesting byproduct?

[00:42:00]

Do you think it's a useful byproduct? What are your thoughts on consciousness in general? Or is it simply a byproduct of greater and greater capabilities of problem solving, similar to creativity in that sense? Yeah, we never had a procedure called consciousness in our machines; however, we get, as side effects of what these machines are doing, things that seem to be closely related to what people call consciousness. So, for example, in 1990 we had simple systems which were basically recurrent networks, and therefore universal computers, trying to map incoming data into actions that lead to success: maximizing reward in a given environment, always finding the charging station in time whenever the battery is low and negative signals are coming from the battery, without bumping against painful obstacles on the way.

[00:43:06]

So complicated things, but very easily motivated.

[00:43:10]

And then we give these little guys a separate recurrent network, which is just predicting what's happening: if I do that, what will happen as a consequence of these actions that I'm executing? And it's trained on the long history of interactions with the world. So it becomes a predictive model of the world, basically, and therefore also a compressor of the observations after a while, because whatever you can predict, you don't have to store extra. So compression is a side effect of prediction.

[00:43:46]

And how does this recurrent network compress? Well, it's inventing little programs, little sub-networks, that stand for everything that frequently appears in the environment, like bottles and microphones and faces. Maybe there are lots of faces in my environment, so I'm learning to create something like a prototype face, and when a new face comes along, all I have to encode are the deviations from the prototype. So it's compressing all the time the stuff that frequently appears. And there's one thing that appears all the time, that is present all the time when the agent is interacting with its environment, which is the agent itself.

[00:44:27]

So just for data compression reasons, it is extremely natural for this network to come up with little sub-networks that stand for the properties of the agent: the hand, the other actuators, and all the stuff that you need to better encode the data which is influenced by the actions of the agent. So just as a side effect of data compression during problem solving, you have internal self-models. Now you can use this model of the world to plan your future, and that's what we have done since 1990.

[00:45:09]

So the recurrent network, which is the controller, which is trying to maximize reward, can use this model network, this predictive model of the world, to plan ahead and say: let's not do this action sequence, let's do this action sequence instead, because it leads to more predicted reward. And whenever it's waking up these little sub-networks that stand for itself, it's thinking about itself. It's thinking about itself, and it's

[00:45:42]

exploring mentally the consequences of its own actions. And now you tell me, what is still missing? The gap to consciousness? Yeah, there isn't one. That's a really beautiful idea: that if life is a collection of data, and life is a process of compressing that data to act efficiently, then in that data you yourself appear very often.

[00:46:13]

So it's useful to form compressed representations of yourself.

[00:46:16]

And it's a really beautiful formulation of what consciousness is as a necessary side effect.

[00:46:21]

It's actually quite compelling to me. You've described RNNs and developed LSTMs, long short-term memory networks, the type of recurrent neural networks that have gotten a lot of success recently. So these are networks that model the temporal aspects in the data, temporal patterns in the data.

[00:46:46]

And you've called them the deepest of the neural networks.

[00:46:50]

Right. So what do you think is the value of depth in the models that we use to learn? Yes, since you mentioned the long short-term memory, I have to mention the names of the brilliant students who made it possible. First of all, my first student ever, Sepp Hochreiter, who had fundamental insights already in his diploma thesis; then Felix Gers, who had additional important contributions; Alex Graves, a guy from Scotland, who is mostly responsible for this CTC algorithm, which is now often used to train the LSTM to do the speech recognition on all the Google Android phones and whatever, and Siri and so on.

[00:47:37]

Without these guys, I would be nothing.

[00:47:42]

It's a lot of incredible work. What, then, is the importance of depth? Well, most problems in the real world are deep in the sense that the current input doesn't tell you all you need to know about the environment. So instead, you have to have a memory of what happened in the past, and often important parts of that memory are dated; they are pretty old. So when you're doing speech recognition, for example, and somebody says "eleven",

[00:48:20]

then that's about half a second or something like that, which means it's already 50 time steps. And another guy, or the same guy, says "seven". So the ending is the same, "-even", but now the system has to see the distinction between "seven" and "eleven", and the only way it can see the difference is that it has to store that 50 steps ago there was an "el" or an "s". So there you have already a problem of depth, because for each time step you have something like a virtual layer in the expanded, unrolled version of this recurrent network which is doing the speech recognition.

[00:49:04]

So these long time lags translate into problem depth. And most problems in this world are such that you really have to look far back in time to understand what the problem is and to solve it.

[00:49:22]

But just like with LSTMs, when you look back in time, you don't necessarily need to remember every aspect; you only need to remember the important aspects.

[00:49:30]

That's right. The network has to learn to put the important stuff into memory and to ignore the unimportant noise.
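The "seven" versus "eleven" point can be sketched in PyTorch. The two inputs below differ only at the first of 50 time steps, so any difference in the final-step outputs must have been carried across the whole sequence by the LSTM's memory; the network is untrained, since only the structure of the problem is being illustrated.

```python
import torch
import torch.nn as nn

T = 50                                   # the ~50 time steps from the example

def make_seq(first_symbol):
    x = torch.zeros(1, T, 2)             # (batch, time, features)
    x[0, 0, first_symbol] = 1.0          # one-hot "el" vs "s" at step 0 only
    return x

lstm = nn.LSTM(input_size=2, hidden_size=8, batch_first=True)
head = nn.Linear(8, 2)                   # classify: "eleven" vs "seven"

def classify(x):
    out, _ = lstm(x)
    return head(out[:, -1])              # decision at the last time step

# The last 49 inputs are identical, so this difference in logits was
# carried through the LSTM's memory cells from the very first step.
print(classify(make_seq(0)) - classify(make_seq(1)))
```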

[00:49:39]

So in that sense, is deeper and deeper better, or is there a limitation? I mean, the LSTM is one of the great examples of architectures that do something beyond just deeper and deeper networks: there are clever mechanisms for filtering data, for remembering and forgetting. Do you think that kind of thinking is necessary?

[00:50:07]

If you think about LSTMs as a big leap forward over traditional vanilla RNNs, what do you think is the next leap within this context? The LSTM was a very clever improvement, but LSTMs still don't have the same kind of ability to see far back into the past as we humans do: the credit assignment problem across way back, not just 50 time steps or hundreds or thousands, but millions and billions.

[00:50:40]

It's not clear what the practical limits of the LSTM are when it comes to looking back. Already in 2006, I think, we had examples where it not only looked back tens of thousands of steps, but really millions of steps. And Juan Antonio Pérez-Ortiz, in my lab, I think, was the first author of a paper, in 2006 or something, where we had examples

[00:51:07]

where it learned to look back for more than 10 million steps.

[00:51:13]

So for most problems of speech recognition, it's not necessary to look that far back, but there are examples where you do. Now, looking back, that's rather easy, because there's only one past, but there are many possible futures. And so a reinforcement learning system, which is trying to maximize its future expected reward and doesn't know yet which of these many possible futures it should select, given there's one single past, is facing problems that the LSTM by itself cannot solve. So the LSTM is good for coming up with a compact representation of the history so far, of the observations and actions so far. But now, how do you plan in an efficient and good way among

[00:52:09]

all these? How do you select one of these many possible action sequences that a reinforcement learning system has to consider to maximize reward in this unknown future? So again, we have this basic setup where you have one recurrent network, which gets in the video and the speech and whatever, and is executing the actions, and is trying to maximize reward, so there is no teacher who tells it what to do at which point in time. And then there's the other network, which is

[00:52:45]

just predicting what's going to happen if I do this and that. And that could be an LSTM network, and it learns to look back all the way to make better predictions of the next time step.

[00:52:57]

So essentially, although it's predicting only the next time step, it is motivated to learn to put into memory something that happened maybe a million steps ago, because it's important to memorize that if you want to predict the next time step, the next event. Now, how can a model of the world like that, a predictive model of the world, be used by the first guy? Let's call them the controller and the model. How can the model be used by the controller to efficiently select among these many possible futures?

[00:53:35]

One way, which we had about 30 years ago, was: let's just use the model of the world as a stand-in, as a simulation of the world, and millisecond by millisecond we plan the future. And that means we have to roll it out really in detail, and it will work only if the model is really good, and it will still be inefficient, because we have to look at all these possible futures, and there are so many of them. So instead, what we do now, since 2015, in our CM systems, controller-model systems, is we give the controller the opportunity to learn by itself how to use the potentially relevant parts of the model network to solve new problems more quickly.

[00:54:22]

And if it wants to, it can learn to ignore them.

[00:54:25]

And sometimes it's a good idea to ignore the model, because it's really bad, a bad predictor in this particular situation of life where the controller is currently trying to maximize reward. However, it can also learn to address and exploit some of the subprograms that came about in the model network through compressing the data by predicting it. So it now has an opportunity to reuse that code, the algorithmic information in the model network, to reduce its own search space, such that it can solve a new problem more quickly than without the model.
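The older "use the model as a stand-in simulation" strategy mentioned above can be sketched in Python. The toy model is an assumption for illustration; the point is the exhaustive rollout, whose cost grows exponentially with the planning horizon, which is exactly the inefficiency being described.

```python
import itertools

def toy_model(state, action):
    # Hypothetical learned model: walk on a line, reward for standing at +3.
    next_state = state + action
    return next_state, (1.0 if next_state == 3 else 0.0)

def plan(state, actions=(-1, +1), horizon=5):
    best_return, best_plan = float("-inf"), None
    # Roll every candidate action sequence through the model in detail.
    for seq in itertools.product(actions, repeat=horizon):
        s, total = state, 0.0
        for a in seq:
            s, r = toy_model(s, a)
            total += r
        if total > best_return:
            best_return, best_plan = total, seq
    return best_plan

print(plan(state=0))  # e.g. (1, 1, 1, -1, 1): reach +3 as often as possible
```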

[00:55:09]

So you're ultimately optimistic and excited about the power of reinforcement learning in the context of real systems? Absolutely, yeah. So you see RL as potentially having a huge impact beyond just the sort of applications that supervised learning methods are often developed for. You see RL as a way to approach self-driving cars, or any kind of applied side of robotics, as the correct, interesting direction for research, in your view?

[00:55:50]

I do think so. We have a company called NNAISENSE, which has applied reinforcement learning to little Audis, which learn to park without a teacher. The same principles were used, of course. These little Audis, they are small, maybe like that, so much smaller than the real Audis, but they have all the sensors that you find in the real Audis: you find the cameras, the lidar sensors. They go up to 120 kilometers an hour if they want to.

[00:56:24]

And they have pain sensors, basically, and they don't want to bump against obstacles and other Audis and so on. They must learn like little babies to park: take the raw visual input and translate that into actions that lead to successful parking behavior, which is a rewarding thing.

[00:56:46]

And yes, they learn that. They learn. So we have examples like that, and it's only the beginning, you know; it's just the tip of the iceberg. And I believe the next wave of AI

[00:56:58]

is going to be all about that. At the moment, the current wave of AI is about passive pattern observation and prediction, and that's what you have on your smartphone, and what the major companies on the Pacific Rim are using to sell you ads, to do marketing. That's the current source of profits in AI, and it's only one or two percent of the real economy, which is big enough to make these companies pretty much the most valuable companies in the world.

[00:57:31]

But there's a much, much bigger fraction of the economy going to be affected by the next wave, which is really about machines that shape the data through their own actions. Do you think simulation is ultimately the biggest way that those methods will be successful in the next 10, 20 years? We're not talking about 100 years from now; we're talking about sort of the near-term impact overall. Do you think really good simulation is required, or are there other techniques, like imitation learning, you know, observing other humans operating in the real world?

[00:58:09]

Where do you think the success will come from?

[00:58:13]

So at the moment, we have a tendency of using physics simulations to learn behavior for machines that learn to solve problems that humans also do not know how to solve. However, this is not the future, because the future is in what little babies do. They don't use a physics engine to simulate the world; no, they learn a predictive model of the world, which maybe sometimes is wrong in many ways, but captures all kinds of important abstract, high-level predictions which are really important to be successful.

[00:58:54]

And that's what was the future 30 years ago, when we started that type of research, but it's still the future, and we still don't know much about how to move forward and really make working systems based on that, where you have a learning world model, a model of the world that learns to predict what's going to happen if I do this and that,

[00:59:17]

and then the controller uses that model to more quickly learn successful action sequences. And then, of course, there is always this curiosity thing: in the beginning, the model is stupid, so the controller should be motivated to come up with experiments, with action sequences, that lead to data that improve the model.

[00:59:40]

Do you think improving the model, constructing an understanding of the world in this connectionist way...

[00:59:47]

The popular approaches that have been successful are, you know, grounded in ideas of neural networks.

[00:59:54]

But in the 80s, with expert systems, there were symbolic AI approaches, which to us humans are more intuitive, in the sense that it makes sense that you build up knowledge in this kind of knowledge representation. What kind of lessons can we draw into our current approaches from expert systems, from symbolic AI?

[01:00:16]

So I became aware of all of that in the 80s, and back then, logic programming was a huge thing. Was it inspiring to you yourself? Did you find it compelling? Because a lot of your work was not so much in that realm; it was more in the learning systems. Yes and no.

[01:00:35]

But we did all of that. So my first publication ever, actually, in 1987, was the implementation of a genetic programming system in Prolog. That's what you learned back then: Prolog, which is a logic programming language. And the Japanese had this huge Fifth Generation AI project, which was mostly about logic programming back then, although neural networks existed and were well known back then, and deep learning had existed since 1965, since this guy in the Ukraine, Ivakhnenko, started it.

[01:01:18]

But the Japanese, and many other people, they focused really on this logic programming. And I was influenced to the extent that I said, OK, let's take these biologically inspired algorithms, like evolution programs, and implement them in the language which I knew, which was Prolog, for example.

[01:01:39]

And in many ways, that came back later, because the Gödel machine, for example, has a proof searcher on board.

[01:01:49]

And without that, it would not be optimal. Likewise, Marcus Hutter's universal algorithm for solving all well-defined problems has a proof searcher on board. That is very much logic programming; without that, it would not be asymptotically optimal. But then, on the other hand, because we are very pragmatic guys,

[01:02:10]

we also focused on recurrent neural networks, and that's why we worked on suboptimal stuff, such as gradient-based search in program space, rather than provably optimal things.

[01:02:25]

So logic programming certainly has a usefulness when you're trying to construct something provably optimal, or provably good, or something like that. But is it useful for practical problems? It's really useful for theorem proving. The best theorem provers today are not neural networks.

[01:02:44]

They are logic programming systems, and they are much better theorem provers than most math students in their first or second semester. But for reasoning, for playing games of Go or chess, or for robots, autonomous vehicles that operate in the real world, or object manipulation...

[01:03:05]

Yeah. You think learning is the way? Yeah.

[01:03:07]

As long as the problems have little to do with theorem proving themselves, as long as that is not the case, you just want to have better pattern recognition. So to build a self-driving car, you want to have better pattern recognition, and pedestrian recognition, and all these things, and you want to minimize the number of false positives, which is currently slowing down self-driving cars in many ways. And all of that has very little to do with logic programming.

[01:03:42]

Yeah. What are you most excited about, in terms of directions of artificial intelligence, at this moment, in the next few years, in your own research and in the broader community? So I think in the not-so-distant future we will have, for the first time, little robots that learn like kids. And I will be able to say to the robot: look here, robot, we are going to assemble a smartphone. Let's take this slab of plastic and the screwdriver, and let's screw in the screw like that.

[01:04:24]

No, not like that. Like that. Not like that. Like that. And I don't have a data glove or something; he will see me and he will hear me, and he will try to do something with his own actuators, which will be really different from mine, but he will understand the difference, and he will learn to imitate me. But not in the supervised way, where a teacher is giving target signals for all his muscles all the time.

[01:04:57]

No, by doing this high-level imitation, where he first has to learn to imitate me, and then to interpret these additional noises coming from my mouth as helpful signals to do that better. And then he will, on his own, come up with faster ways and more efficient ways of doing the same thing. And finally, I stop his learning algorithm, and make a million copies, and sell it. At the moment, this is not possible, but we already see how we are going to get there.

[01:05:33]

And you can imagine, to the extent that this works economically and cheaply, it's going to change everything. Almost all of production is going to be affected by that. A much bigger wave is coming than the one that we are currently witnessing, which is mostly about passive pattern recognition on your smartphone.

[01:05:57]

This is about active machines that shape the data through the actions they are executing, and they learn to do that in a good way. So many of the traditional industries are going to be affected by that.

[01:06:12]

All the companies that are building machines will equip these machines with cameras and other sensors, and they are going to learn to solve all kinds of problems through interaction with humans, but also a lot on their own, to improve what they already can do. And lots of old economy is going to be affected by that, and in recent years I have seen that old economy is actually waking up and realizing this.

[01:06:47]

And are you optimistic about that future, or are you concerned? There are a lot of people concerned in the near term about the transformation of the nature of work; the kind of ideas you just suggested would have a significant impact on what kinds of things could be automated. Are you optimistic about that future, or are you nervous about it? And, looking a little bit farther into the future,

[01:07:14]

there are people like Elon Musk and Stuart Russell concerned about the existential threats of that future. So, in the near term, job loss; in the long term, existential threat. Are these concerning to you, or are you ultimately optimistic? So let's first address the near future. We have had predictions of job losses for many decades. For example, when industrial robots came along, many, many people predicted that lots of jobs are going to get lost. And in a sense, they were right, because back then there were car factories with hundreds of people

[01:08:04]

in these factories assembling cars, and today the same car factories have hundreds of robots and maybe three guys watching the robots.

[01:08:14]

On the other hand, those countries that have lots of robots per capita, Japan, Korea, Germany, Switzerland, and a couple of other countries, they have really low unemployment rates. Somehow, all kinds of new jobs were created; back then, nobody anticipated those jobs.

[01:08:38]

And decades ago, I always said: it's really easy to say which jobs are going to get lost, but it's really hard to predict the new ones. Thirty years ago, who would have predicted all these people making money as YouTube bloggers, for example?

[01:09:01]

200 years ago, 60 percent of all people used to work in agriculture. Today, maybe one percent. But still, there's only, I don't know, five percent unemployment. Lots of new jobs were created, and Homo Ludens, the playing man,

[01:09:23]

is inventing new jobs all the time. Most of these jobs are not existentially necessary for the survival of our species.

[01:09:35]

There are only very few existentially necessary jobs, such as farming and building houses and warming up the houses, but less than 10 percent of the population is doing that. And most of these newly invented jobs are about

[01:09:52]

interacting with other people in new ways, through new media and so on, getting new types of kudos, in forms of likes and whatever, and even making money through that. So Homo Ludens, the playing man, doesn't want to be unemployed, and that's why he's inventing new jobs all the time.

[01:10:12]

And he keeps considering these jobs as really important, and is investing a lot of energy and hours of work into those new jobs. That's quite beautifully put. So we're maybe nervous about the future because we can't predict what kind of new jobs will be created, but you're ultimately optimistic that we humans are so restless that we create and give meaning to newer and newer jobs, totally new things that get likes on Facebook, or whatever the social platform is.

[01:10:48]

So what about the long-term existential threat of AI, where our whole civilization may be swallowed up by these ultra-superintelligent systems? Maybe it's not going to be swallowed up, but

[01:11:05]

I'd be surprised if we humans were the last step in the evolution of the universe. And you've actually had this beautiful, quite insightful comment somewhere that I've seen, saying that artificial general intelligence systems, just like us humans, will likely not want to interact with humans.

[01:11:32]

They'll just interact amongst themselves, just like ants interact amongst themselves and only tangentially interact with humans. And it's quite an interesting idea that once we create AGI, they will lose interest in humans and will compete for their own Facebook likes on their own social platforms. So within that quite elegant idea: how do we know, in a hypothetical sense, that there are not already intelligent systems out there? How do you think broadly of general intelligence greater than us? How do we know it's out there?

[01:12:12]

How do we know it's not already around us?

[01:12:17]

I'd be surprised if, within the next few decades or something like that, we won't have AIs that are really smart in every single way, and better problem solvers in almost every single important way.

[01:12:34]

And I'd be surprised if they didn't realize what we realized a long time ago, which is that almost all physical resources are not here in this biosphere, but further out. The rest of the solar system gets two billion times more solar energy than our little planet. There's lots of material out there that you can use to build robots and self-replicating robot factories and all that stuff. And they are going to do that, and they will be scientists, and curious, and they will explore what they can do.

[01:13:15]

And in the beginning, they will be fascinated by life, and by their own origins, and our civilization. They will want to understand that completely, just like people today would like to understand how life works, and

[01:13:32]

also the history of our own existence and civilization, and also the physical laws that created all of that. So in the beginning, they will be fascinated by life. But once they understand it, they'll lose interest, like anybody who loses interest in things he understands. And then, as you said, the most interesting sources of information for them will be others of their own kind. So, at least in the long run, there seems to be some sort of protection through lack of interest on the other side.

[01:14:27]

And now it seems also clear, as far as we understand physics, that you need matter and energy to compute and to build more robots and infrastructure and more AI civilization, an AI ecology consisting of trillions of different types of AIs. And so it seems inconceivable to me that this thing is not going to expand: some AI ecology, not controlled by one AI, but by trillions of different types of AIs competing in all kinds of quickly evolving and disappearing ecological niches, in ways that we cannot fathom at the moment.

[01:15:08]

But it's going to expand, limited by light speed and physics, but it's going to expand. And now we realize that the universe is still young; it's only 13.8 billion years old, and it's going to be a thousand times older than that. So there's plenty of time to conquer the entire universe and to fill it with intelligence, and senders and receivers, such that AIs can travel the way they are traveling in our labs today, which is by radio from sender to receiver.

[01:15:47]

And let's call the current age of the universe one eon, one aeon. Now, it will take just a few eons from now, and the entire visible universe is going to be full of that stuff. And if we look ahead to a time when the universe is going to be one thousand times older than it is now, they will look back and they will say: look, almost immediately after the Big Bang, only a few eons later, the entire universe started to become intelligent.

[01:16:19]

Now, to your question: how do we see whether anything like that has already happened, or is already in a more advanced stage in some other part of the visible universe? We are trying to look out there, and nothing like that has happened so far. Or is that true? Do you think we would recognize it? How do we know it's not among us? How do we know planets aren't in themselves intelligent beings? How do we know ants, seen as a collective, are not a much greater intelligence than our own? These kinds of ideas.

[01:16:58]

When I was a boy, I was thinking about these things, and I thought maybe it has already happened, because back then I learned from popular physics books that the large-scale structure of the universe is not homogeneous: you have these clusters of galaxies, and then in between there are these huge empty spaces. And I thought, maybe they aren't really empty; it's just that in the middle of that, some AI civilization already has expanded and has covered a bubble of a billion light years, using all the energy of all the stars within that bubble for its own unfathomable purposes.

[01:17:45]

So maybe it has already happened, and we just failed to interpret the signs. But then I learned that gravity by itself explains the large-scale structure of the universe, and that this is not a convincing explanation. And then I thought, maybe it's the dark matter, because as far as we know today, 80 percent of the measurable matter is invisible, and we know that because otherwise our galaxy and other galaxies would fly apart; they are rotating too quickly.

[01:18:25]

And then the idea was that maybe all these AI civilizations are already out there; they

[01:18:36]

are just invisible, because they are really efficient in using the energies of their own local systems, and that's why they appear dark to us. But this is also not a convincing explanation, because then the question becomes: why are there still any visible stars left in our own galaxy, which also must have a lot of dark matter? So that is also not a convincing thing. And today, I think it's quite plausible that maybe we are the first, at least in our local neighborhood, within

[01:19:17]

the few hundreds of millions of light years that we can reliably observe. Is that exciting to you, that we might be the first? It would make us much more important, because if we mess it up through a nuclear war, then maybe this will have an effect on the development of the entire universe. So let's not mess it up. Let's not mess it up. Thank you so much for talking today.

[01:19:51]

I really appreciate it. It's my pleasure.