Transcript
[00:00:08]

Welcome to the Knowledge Project. I'm your host, Shane Parrish, the curator behind Farnam Street, an online intellectual hub covering topics like human misjudgment, decision making, strategy, and philosophy.

[00:00:23]

But today we're going to be talking about artificial intelligence and machine learning. The Knowledge Project allows me to interview amazing people from around the world to deconstruct why they're good at what they do. More conversation than prescription. It's about them, not about me. On this episode, I'm so happy to have Pedro Domingos, who's a professor at the University of Washington. He's a leading researcher in machine learning and recently wrote an amazing book called The Master Algorithm. I was fortunate enough to have a long and fascinating conversation with him over dinner one night, which I hoped would never end.

[00:00:58]

But that ended up leading to this episode, which I think you will love. We're going to explore the sources of knowledge, the five major schools of thought in machine learning, why white collar jobs are easier to replace than blue collar jobs, machine wars, self-driving cars and so much more. I hope you enjoy this conversation as much as I did.

[00:01:24]

Before I get started, here's a quick word from our sponsor. This podcast is supported by Slack, a messaging app bringing all your team's communication into one place so you can spend less time answering emails and attending meetings and spend more time being productive.

[00:01:39]

Visit slack.com to create your team and get one hundred dollars in credits that you can use if you decide to switch to a paid plan.

[00:01:46]

Hey, thanks for coming on the show today, Pedro. I'm so excited to have you. I read your book, The Master Algorithm, and I've been thinking about it ever since. Thanks for having me. So maybe, just for the sake of our audience, can you give an overview of what artificial intelligence is? Sure. Artificial intelligence, or AI, is the field of computer science that deals with getting computers to do the things that require human intelligence to do, as opposed to just routine processing.

[00:02:17]

So things like reasoning, common sense knowledge, understanding language, vision, manipulating things, navigating in the world, and learning. These are all fields of AI. And if you add them all together, what you have is an intelligent entity, which in this case would be artificial instead of natural. Normally we're used to natural. And this kind of begs the question that we talked about over dinner, which is: where does knowledge come from?

[00:02:45]

Yeah, so the knowledge that we human beings have, that makes us so intelligent, comes from a number of different sources. The first one, which people often don't realize, is just evolution. We actually have a lot of knowledge encoded in our DNA that makes us what we are. That is the result of a very long process of weeding out the things that don't work and building on the things that do work. And then there is knowledge that just comes from experience.

[00:03:11]

That's the knowledge that you and I acquired by living in the world, and that's encoded in our neurons. And then, equally important, there's the kind of knowledge that only human beings have, which is the knowledge that comes from culture, from talking with other people, from reading books and so on. So these are the sources of knowledge in natural intelligence. The thing that's exciting today is that there's actually a new source of knowledge on the planet, and that's computers: computers discovering knowledge from data.

[00:03:41]

And I think this emergence of computers as a source of knowledge is going to be every bit as momentous as the previous three were. And also notice that each one of these sources of knowledge produces far greater quantities of knowledge, far faster, than all the previous ones. So, for example, you learn a lot faster from experience than you do from evolution, and so on. And it's going to be the same thing with computers. So in the not too distant future, the vast majority of the knowledge on Earth will be discovered by computers and stored in computers.

[00:04:12]

That's fascinating to think about in the sense of knowledge and application. Do you think that computers will be applying that knowledge or they'll be discovering it on their own? They will be both discovering it and applying it. And in fact, both of those things will generally be done in collaboration with human beings. In some cases, it will be the computers doing it all by themselves. So, for example, these days there are hedge funds that are completely run by machine learning algorithms.

[00:04:39]

For the most part, hedge funds will use machine learning as just one of their inputs. But there are some that are run entirely by machine learning: the algorithms look at the data, they make predictions, and then they make buy and sell decisions based on those predictions. So there's going to be the full spectrum. So we've delegated the actual decision making to the algorithm? Yes, in many cases. For example, there's a venture fund that recently announced that one of its directors is now going to be an algorithm: there are seven directors on the board and one of them is an algorithm.

[00:05:15]

So the algorithm doesn't decide anything all by itself, but it does have a vote, as much as any one of the humans. And of course, you can easily imagine: if this turns out to work well, then maybe tomorrow there'll be two votes that are algorithms, and maybe then they'll be a majority, and maybe eventually it'll be all algorithms, or there'll be some mix of the two.

[00:05:32]

So machine learning is fundamentally different from traditional computer science, where you have an input, you give it to an algorithm, which generates an output. And now we have the output and the data going into the algorithm, which is creating another algorithm. Or am I misunderstanding that? Exactly. So what happens in traditional computer science — and really everything that we know about the information age was created that way — is that somebody has to write down an algorithm that turns the input into the desired output.

[00:06:04]

So, for example, if I want to, I don't know, diagnose x-rays of people's chests to decide whether they have lung cancer or not, I have to write an algorithm that takes in the pixels of that image and outputs a prediction, say, here's where the tumor is, or there's no tumor. And this is very, very hard to do. And in fact, for some things, we don't even know how to teach the computer to do them.

[00:06:26]

The difference with machine learning is that the computer doesn't have to be programmed by us anymore. The computer actually programs itself. You give it examples of the input and the output — for example, a lot of pairs of here's the x-ray, here's the diagnosis; here's the x-ray, here's the diagnosis. And by looking at that data, the computer figures out what is the algorithm that would turn one into the other. The thing that's amazing is that often, just by taking a basic machine learning algorithm and applying it to a database of, for example, x-rays and diagnoses, you actually wind up with something that is better at, for example, pathology than a highly trained human being would be.
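To make the input/output idea concrete, here is a minimal sketch of supervised learning in Python. It is illustrative only: the synthetic data stands in for the x-ray pixels and diagnoses discussed here.

```python
# The "program" (model) is induced from input/output pairs rather than
# written by hand. Random vectors stand in for x-ray pixels; the labels
# follow a hidden rule the learner must discover from examples alone.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 64))                     # 1000 "images", 64 pixels each
y = (X[:, :8].mean(axis=1) > 0.5).astype(int)  # hidden rule behind the labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)    # the computer "programs itself" from examples
print("accuracy on unseen cases:", model.score(X_test, y_test))
```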

[00:07:04]

Another thing that's remarkable about machine learning is that in traditional computer science, you need to write down a different algorithm for everything that you want to do. So if you want the computer to do diagnosis, you need to explain to it what the rules of that diagnosis are. If you want it to play chess, you need to write a completely different program, and if you want it to drive a car or invest in the stock market, you need to write a completely different program again. With machine learning, it's the same learning algorithm.

[00:07:32]

A single learning algorithm can learn to do all of these different things, just depending on the data that you give it. If the data is chess games, it learns to play chess. If the data is x-rays, it learns to do diagnosis. If the data is stocks, it learns to predict fluctuations, and so on. Is that the master algorithm? Yes. So in essence, every major machine learning algorithm is a master algorithm, in the same sense that a master key is a key that opens all doors: a master algorithm is an algorithm that works for all different problems.

[00:08:03]

And that is very much the goal of all machine learning: to develop such master algorithms. And there are several major learning algorithms today that have mathematical proofs that if you give them enough data, they can learn any function. Now, of course, the whole question is, can you do it with realistic amounts of data and computing power? And then different algorithms tend to be better for some things than others. But what I and others believe is that we can develop a true master algorithm, meaning an algorithm that is able to solve all the different kinds of learning problems that these different algorithms can — in some sense, a grand unified theory of machine learning.

[00:08:44]

In the same way that the standard model is a grand unified theory of physics, or the central dogma is a grand unified theory of biology. So before we get into the different ways of machine learning — I believe there are a couple of different schools of thought there that I really want to home in on and get some information from you on — I want to better understand how you see this propagating, in the sense that we don't understand how the algorithms are working anymore.

[00:09:12]

At some point they're just self-evolving. Or are they not?

[00:09:16]

Well, it's different from traditional programs, right? With traditional programs, we understand every little detail of how they work, because we created them and then we debugged them until they did exactly what we wanted. With machine learning, things are very different, because to some extent we don't fully understand what the algorithm is doing — and in some way that's its power, right? It can actually know way more than any of us could. Having said that, we, the machine learning researchers and the data scientists, actually have a good understanding of how the learning algorithm itself works.

[00:09:50]

What is it that it does to learn, and how could you make it learn better? And then there's a different issue, which is understanding what the algorithm produces. Right? If the algorithm has produced a model of how tumor cells work in cancer, can I understand what that model is doing? And depending on the type of machine learning, this might be harder or easier. With some types of machine learning — like, for example, neural networks and deep learning — it's very opaque.

[00:10:19]

What is learned is this big jumble of lots of parameters and functions, and nobody really understands what's going on, which in fact often precludes it from being used. But then there are other types of machine learning where what the algorithm produces is very easy to understand. It's a bunch of rules, or it's a decision tree, or it's some kind of graph connecting variables that you can actually look at and understand. So there's a spectrum of degrees to which the results of learning are understandable or not.
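As a rough illustration of that spectrum, here is a minimal sketch — using a standard scikit-learn dataset, not any system mentioned in the conversation — in which the learned model can be printed as human-readable rules, something a deep network does not offer.

```python
# A decision tree is at the transparent end of the spectrum: the learned
# model can be dumped as if/then rules that a person can read and audit.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# The whole model prints as a handful of readable rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```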

[00:10:46]

Would you say that the information we learn from machine learning is more uncertain than the knowledge that we gain from, say, evolution or experience or culture? Or would you put them all in the same kind of category? Well, it's certainly quite uncertain. Any knowledge that you induce from data is necessarily uncertain, because you never know if you generalized correctly or not. But sometimes machine learning can actually produce knowledge that is quite certain, because if you know well how the data was generated and you've seen enough data, you can say that with very high probability, the knowledge that you've extracted is correct.

[00:11:24]

Conversely, a lot of the knowledge that we have from evolution and from experience and from culture, we often tend to think of as much more certain than it really is. We have this great tendency, well studied by psychologists, to be overconfident in our knowledge, and a lot of the things that we take for granted, it turns out, just ain't so. So evolution could have evolved into a local optimum when there's actually a much better one a little bit farther away, and you might have learned something from your mom who told you to do things this way, but it turns out that that's wrong, or it's outdated and there's a better way to do it.

[00:11:59]

So there's uncertainty on all sides, and it could be more or less depending on the problem. Do you think the inputs we use to make decisions change — in the sense that the quantity of data and the reliability of the observations we're making from machine learning would be higher? Yes, all of those things are factors. Where machine learning has a big advantage over human intelligence is that it can take in vastly larger quantities of data, and as a result it can learn more.

[00:12:32]

And it can also be more certain, if that data is very consistent with this piece of knowledge. Where it has the disadvantage is that machine learning today is very good at learning about one thing at a time. The thing that humans have is that they can bring to bear knowledge from all sorts of directions. So, take, for example, the stock market, and the traditional machine learning algorithms; people started using neural networks to do this in the 80s.

[00:12:59]

They just learned to predict the time series from the stock itself, and maybe other related time series, in a way that human beings couldn't. But human beings could know that, oh, today a war began between Russia and Ukraine, and they can try to factor this in, whereas the algorithms couldn't. Or, you know, the Fed just said that it's going to raise interest rates, or something like that. So human beings can bring a lot of knowledge to bear that the algorithms don't have.

[00:13:28]

Having said that, what we see, even from the 80s to now, is that the machine learning algorithms are starting to do a lot of these things. So, for example, there are hedge funds that trade based on things like what's being said on Twitter — maybe certain themes on Twitter are a sign that something is going to happen or has happened, or that a recession has become more likely, or whatever. And they can learn from things that wouldn't occur to people. For example, I know there's one company that uses real-time traffic data, and they use satellite photos —

[00:14:01]

I'm not kidding — of parking lots to estimate how many people are shopping at Walmart, let's say, and other stores, to decide whether their business is getting better or worse. So I think as time goes by, machine learning will get better at using a broad spectrum of information. I think for a long time there will still be types of common sense knowledge that people have. So I don't think that for most things the human element is going to become unnecessary very quickly, but maybe ultimately it will.

[00:14:32]

That's a good segue into what the different ways of machine learning are, because I know there are multiple. Yeah, so there are five main ones, all of them quite interesting, because each one has its origins in a different field of science. So one that is very popular today is learning by emulating the brain. The greatest learning algorithm on Earth is the one inside your skull: by definition, it has learned everything that you know and everything that you remember.

[00:15:04]

So we can take inspiration from neuroscience, see how brain circuits work, how neurons work, how they're put together, how they learn — which is by strengthening synapses — and then develop algorithms that try to do the same thing in a simplified form. And indeed, there are some very successful applications of this type of learning today, from, for example, speech recognition on Android phones, and the kind of simultaneous translation that Skype can do, where you speak in English and somebody might hear you in Chinese and vice versa, to things like image recognition and whatnot.
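For readers who want to see the connectionist idea in code, here is a minimal sketch — an illustrative toy, not the Android or Skype systems mentioned here.

```python
# A small neural network trained by backpropagation: learning happens by
# strengthening and weakening connection weights, loosely like synapses.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)    # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
net.fit(X_train, y_train)              # gradient descent adjusts the weights
print("digit recognition accuracy:", net.score(X_test, y_test))
```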

[00:15:40]

So this is one approach. Another approach is to emulate not the brain, but evolution. Your brain is great, but if you think about it, evolution made the brain in the first place, and the bodies of all creatures on Earth. So that's a heck of a learning algorithm. So maybe what we can do is simulate evolution on the computer, and instead of evolving animals or plants, evolve programs — but in the same general way: you have a population of individuals, you try them at the task, and then the fittest ones get to reproduce and cross over and mutate and produce the next generation.
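Here is a minimal sketch of that evolutionary loop — population, selection of the fittest, crossover, mutation — on a deliberately trivial task (maximizing the number of 1-bits), not the circuit-design systems described next.

```python
# Evolutionary learning in miniature: keep the fittest individuals, breed
# them by crossover, mutate the children, repeat for many generations.
import random

random.seed(0)
LENGTH = 20

def fitness(ind):
    return sum(ind)                       # toy task: maximize the 1-bits

pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for generation in range(50):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]                  # the fittest get to reproduce
    children = []
    while len(children) < 20:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(LENGTH)
        child = a[:cut] + b[cut:]         # crossover
        child[random.randrange(LENGTH)] ^= 1   # mutation
        children.append(child)
    pop = survivors + children

print("best fitness after 50 generations:", fitness(max(pop, key=fitness)))
```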

[00:16:16]

So that's another approach. And again, it has had many amazing successes. People have developed new types of radios and amplifiers and electronic circuits using this type of machine learning, and they've actually gotten patents for them. So they work better than the ones that were developed by human engineers. They're typically completely different — the kinds of things that no one would ever think of — but they're good enough that the US Patent Office actually granted patents for them.

[00:16:48]

So that's another approach. Both of these are inspired by biology in one way or another. Most machine learning researchers actually think that taking inspiration from biology is not a great idea, even if it's superficially appealing, because biology is just random, and who knows if it's actually doing the best thing. So most machine learning researchers believe in doing things more from first principles. And one way of doing things from first principles, which gets back to this theme of uncertainty,

[00:17:18]

is Bayesian learning. So the idea in Bayesian learning is that you start out with a large number of hypotheses, and they're always uncertain. So you quantify how much you believe in each hypothesis using probability. In the beginning, you have what's called your prior probability, which is how much you believe in each hypothesis before you see any evidence. And then, as you see more evidence, your belief in each hypothesis evolves. So the hypotheses that are consistent with the evidence that you're seeing become more likely, and the ones that are inconsistent become less likely.
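A worked toy example of that updating process: a prior over two made-up hypotheses about a coin, multiplied by the likelihood of each observation and renormalized.

```python
# Bayesian updating: hypotheses consistent with the evidence gain
# probability; inconsistent ones lose it.
hypotheses = {"fair coin": 0.5, "biased coin": 0.8}   # P(heads | hypothesis)
belief = {"fair coin": 0.5, "biased coin": 0.5}       # prior, before evidence

for flip in ["H", "H", "T", "H", "H"]:                # evidence arrives
    for h, p_heads in hypotheses.items():
        likelihood = p_heads if flip == "H" else 1 - p_heads
        belief[h] *= likelihood
    total = sum(belief.values())
    belief = {h: p / total for h, p in belief.items()}  # renormalize

print(belief)   # the posterior now favors the hypothesis the data supports
```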

[00:17:48]

And hopefully, at the end of the day, one or a few hypotheses shine through. But even if they don't, you're always in a position to make decisions by letting those hypotheses vote, with a weight that's proportional to how probable they are. So this is Bayesian learning. Another, more first-principles approach is symbolic learning. The idea in symbolic learning is to learn in the same way that scientists and mathematicians and logicians do induction. So you have data, you look at the data, you formulate hypotheses to explain the data.

[00:18:23]

And then you test them on your data, and then you either throw out those hypotheses or refine them, and you keep going like that. So it's very much the way the scientific method works, except it's being done by machines instead of by human scientists, and therefore it's much, much faster and can discover a lot more knowledge. And in fact, one of the applications of this that's quite exciting is a complete robot scientist that's been developed in the UK.

[00:18:53]

It's a robot biologist. It actually does this whole process, including carrying out the experiments, using microarrays and gene sequencers and whatnot. It starts out with basic knowledge of molecular biology — DNA, proteins, gene regulation and so on — and then it develops models of the cells that it's looking at. And in fact, a couple of years ago, the robot — it's called Eve; there was a previous one called Adam — actually discovered a new malaria drug.

[00:19:22]

Oh, yeah. And once you have one robot scientist like this, there's nothing keeping you from making millions, and then science will progress correspondingly faster. And then finally, the last major school of machine learning is inspired by several fields, but probably most importantly by psychology. And this is the idea of learning by analogy. There's a lot of evidence for this, and to most people it's actually quite intuitive that we do a lot of learning and reasoning by analogy.

[00:19:49]

When we're faced with a new situation, what we do is retrieve from memory similar situations that we experienced in the past, and then we try to extrapolate from one to the other: the solution that applied in the previous situation, we apply it, or transform it to apply to the new one. So, for example, if you want to do medical diagnosis this way, when you have a new patient to diagnose, you look for the patient in your files with the most similar symptoms, and then you assume that the diagnosis will be the same.

[00:20:17]

This sounds very naive, but it's actually quite powerful. Again, you can actually prove that if you give an approach like this enough data, it can learn anything. So those are the five main schools of machine learning.
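The medical-diagnosis version of learning by analogy is essentially nearest-neighbor classification. A minimal sketch, with made-up symptom vectors and diagnoses:

```python
# 1-nearest-neighbor: retrieve the most similar past case and assume the
# new case gets the same diagnosis.
from sklearn.neighbors import KNeighborsClassifier

past_symptoms = [[1, 0, 1, 0],   # e.g. fever, no cough, headache, no rash
                 [0, 1, 0, 0],
                 [1, 1, 1, 0],
                 [0, 0, 0, 1]]
past_diagnoses = ["flu", "cold", "flu", "allergy"]

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(past_symptoms, past_diagnoses)
print(knn.predict([[1, 0, 1, 1]]))   # diagnosis copied from the closest case
```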

[00:20:28]

And then the master algorithm is one view of where they all come together. Exactly.

[00:20:34]

So each of these schools has its own master algorithm. For example, the master algorithm of the connectionists is called backpropagation, because it's based on propagating errors from the output back to the input. The Bayesians' is called probabilistic inference. The evolutionaries' is genetic programming. The symbolists have inverse deduction, and the analogizers have what are called kernel machines. The master algorithm would actually be a single algorithm that unifies all of these into one. Again, think of the analogy with physics.

[00:21:07]

Maxwell unified electricity and magnetism and light into one set of equations, and now the standard model has actually unified those with the strong and weak nuclear forces. So the idea is we should be able to have a single machine learning algorithm that can actually do what each of these five can. And are they working together to kind of combine them, or is it going to be like a sixth one that is created, in your mind, that kind of supersedes these?

[00:21:35]

Well, there are certainly a lot of people working on this. So there are a lot of people, for example, working on combining two of these paradigms. There's a lot of work on combining symbolic learning with Bayesian learning. There's a lot of work on combining connectionist learning and Bayesian learning, or connectionist and evolutionary. In essence, all of these combinations are things that people are working on. And these days people have gone as far as unifying three, four, maybe even all five of them.

[00:22:02]

So some people believe that we will solve the problem this way, and that, in fact, we're very close to solving it this way. Others say that, no, none of these really has everything that it takes; it's going to take some new ideas, maybe some entirely new paradigm. And my gut feeling is that it's actually more the latter. I do believe that we have made a lot of progress, but I think we are still missing some important ideas.

[00:22:27]

In fact, part of my goal in writing the book was to try to get people from outside the field interested in these problems, because in some sense they are more likely, ironically, to have these new ideas than the people who are already professional machine learning researchers and are thinking along a specific track — and then it's hard to jump out of that track.

[00:22:47]

I want to come back, before we move on, to one of the things you said, where there was a patent generated as the result of a machine learning algorithm. Yeah, not just a patent, but a whole series of them. I think they have dozens or more of patents at this point, typically for things like electronic devices.

[00:23:05]

What do you see as the implications of algorithms being able to patent the things that they have effectively created, almost without human intervention or human understanding?

[00:23:16]

There are a couple of implications. One of them is great: well, now, because of that, we can have better radios and better amplifiers and better filters and whatnot. The other side of this is that, well, maybe we don't need all those engineers as much as we did before. Which touches on an interesting aspect of all this: as much as there's a huge shortage of computer scientists and engineers and so on today, in the long run things like this are easier to automate than things that are more from the humanities and social sciences and so on.

[00:23:52]

So people often think that the easiest jobs to automate are the blue collar ones. But actually our experience is that it's more the opposite: it's often white collar jobs that are easier to automate — for example, things like engineering, and lawyers, doctors, et cetera. We've already talked about medical diagnosis as an example. Whereas something like construction work is very hard to automate, because that type of work takes advantage of abilities that evolution took five hundred million years to develop.

[00:24:26]

These seem easy because we take them for granted. But things like being a doctor or an engineer or a lawyer, that you have to go to college to do — well, you have to go to college precisely because they do not come naturally to human beings. But machines don't have that type of difficulty. So in some ways, the jobs that are easier to automate are different from the ones that people often think are.

[00:24:47]

I think we're at the point — we were talking about this at dinner — where machines can actually do a better job at identifying things based on x-rays than people can, or with fewer errors. That's right. Yes. So machines are remarkably better than human doctors at doing all types of medical diagnosis, not just from x-rays, but from symptoms: you have a patient, you have the symptoms, what is the diagnosis? And even very simple machine learning algorithms, running on fairly small databases of patients — maybe with only hundreds or thousands of patients — typically do better than human doctors.

[00:25:27]

And part of the reason is that the algorithms are very consistent, whereas human beings are very inconsistent. They might be given the same patient in the morning and the afternoon and give different diagnoses, just because they're in a better mood or they forgot something. So human beings are very noisy in that regard. And, you know, if you're the patient, that's actually not a good thing. So I think for things like this, machine learning is a very desirable thing to use.

[00:25:50]

In the particular case of medicine, it's not used more already because, of course, the doctors are also the gatekeepers of the system, and they're not very interested in replacing the parts of their job that they like best with machines. But eventually it is going to happen. It is starting to happen, for example, in situations where doctors are not available, and so nurses can use this, or for patients that need constant monitoring, or in low-resource situations where people can't afford the doctors, and so on.

[00:26:18]

There's a concept of freestyle chess — I think you're familiar with it — where we're blending the machines and the humans, and the combination of the two actually makes a better decision than either one on their own. Is that sort of thing happening, in your experience, in the medical field? Exactly. This is true in the medical field, and I think in most fields. And as you mentioned, chess is a great example, because it's not like, you know, when Deep Blue beat Kasparov —

[00:26:49]

Well, now the world chess champion is a computer, and ever since then, computers have been the world chess champions. Actually, that's not the case. The best chess players in the world today are what are called centaurs in the community: a team of a human and a computer. A human and a computer together can actually beat the computer, precisely because the human and the computer have complementary strengths and weaknesses.

[00:27:17]

And the same thing that is true of chess, I think, is true of medical diagnosis; it's true of a lot of other things. For example, there's more, of course, to being a doctor than just doing diagnosis. Right? There's interacting with the person; there's reading how they're feeling from how they interact with you. All of these things computers are not yet able to do today. Maybe they will in the future. And certainly the boundary between what is best done by the machines and what is best done by humans will keep changing.

[00:27:43]

But I think for the foreseeable future, in most jobs, it will be a combination of human and computer that works best. It seems like everybody's getting into artificial intelligence and machine learning, from Facebook and IBM to Amazon and Google. Do you envision a world, say 10 years out — I have a bit of a mischievous mindset — where people are trying to feed other people's algorithms false signals to change what the machines learn? Or is that just crazy?

[00:28:16]

Oh, this is already happening, and it's going to happen even more in the future. What happens whenever you deploy a machine learning system is that the people who are being modeled change their behavior in response to the system — sometimes in benign ways, but sometimes in adversarial ways. A classic example of this is spam filters. The first spam filters were extremely successful. They were 99 percent accurate. They were very good at tagging an email as being spam or being a legitimate email.

[00:28:44]

But then guess what? Once those spam filters were deployed, the spammers figured out ways around them. They figured out how to exploit the weaknesses of the spam filters and do things that would get through. And there's been this ongoing arms race ever since, where the spammers come up with new tricks, and the machine learning algorithms, together with the scientists, come up with ways to defeat those tricks, and this just keeps going. And I think the same thing is going to be true in many other areas. In fact, there are other areas where you can already see things like this very much happening.
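For a sense of the mechanics, here is a minimal sketch of the kind of word-count text classifier behind early spam filters, with a hint of the arms race: an obfuscated message whose words the filter never saw can slip past. The data is toy data, not any real filter.

```python
# A naive Bayes spam filter over word counts. Spammers respond by
# obfuscating the very words the filter learned to distrust.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = ["cheap meds buy now", "win money fast", "meeting at noon",
          "project update attached", "buy cheap watches now", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

filt = make_pipeline(CountVectorizer(), MultinomialNB())
filt.fit(emails, labels)

print(filt.predict(["buy cheap meds now"]))   # caught: known spammy words
print(filt.predict(["b.u.y m.e.d.s n.o.w"]))  # obfuscation leaves no known
                                              # words, so this can slip through
```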

[00:29:16]

One of them is actually the stock market. The stock market is largely a bunch of algorithms trading against each other, and in fact, what these algorithms are doing, whether or not they know it, is modeling each other. What typically happens when somebody deploys a neural network to predict a certain stock — for example, you might have three thousand networks, each predicting one stock in the Russell 3000 — is that it works for a few weeks, and then it stops working, gradually, because somebody else has started to model what those algorithms were doing.

[00:29:48]

And therefore those people are making the money. And this is never-ending. In fact, what some of these people are doing now is combining connectionist learning with evolutionary learning, in order to be able to learn faster and more broadly than the neural networks can. Another area where you already see this happening is online ads. The whole online ad market is these auctions among advertisers to put the ad in front of you when you see a page of results from Google or when you go to a web page. Google has models of what people will click on; the advertisers have models — there are companies like Rocket Fuel who basically work for the advertisers to model the users for them.

[00:30:34]

The content providers have models of what will be clicked on, and whatnot. So basically everybody is modeling everybody, and all of these models are evolving in tandem. So I think we are already starting to see this, but we will see even more in the future. Another example is fraud detection, obviously. Another example is things like law enforcement and counterterrorism. The examples are legion.

[00:30:58]

What do you think the implications of that are on a country-to-country basis? Is the advantage that you would gain from being first in machine learning cumulative, or is it transitory, in that it only lasts for a certain while? Is this something where you can build up such a big lead that it would be almost impossible for anybody to compete with you? Or is it something where it opens competition to everybody?

[00:31:23]

I think some of the advantage is permanent, in the sense that there's this network effect of data: if you have a good product and people start using it, then you have a lot of users. This is, for example, how Google has built up such an unassailable position in search. People use their search engine, therefore Google has a lot of data to learn from, therefore the search gets better, therefore more people use the search engine.

[00:31:47]

And so you have even more data to learn from. So someone coming in from scratch, trying to learn from initially no data or very little data, will have a very hard time competing with Google or Bing. So I think in some aspects this first mover advantage is extremely important, because you have more data, and also because you're farther along: you have the data scientists, you've developed the algorithms. There is a race to develop better machine learning algorithms, and there is certainly an advantage to coming first.

[00:32:15]

Having said that, there are also lots of opportunities for those who are just starting. For example, you could develop something that is enough of a major innovation that it outweighs those things, in the same way that Google had enough of an innovation with PageRank and so on that, even though they were just starting, they actually did a lot better than the dominant search engines of the time — like, for example, AltaVista. So this is one aspect.

[00:32:42]

And the other one is that, precisely because machine learning is something that can be used just about everywhere — in every single industry, in every single part of what a company does — so far it's only been used for a small fraction of the things that it could be used for. So you could do a startup that comes in and does machine learning for X, where nobody has really done machine learning for X before, and they could just run away with it, even if initially their learning

[00:33:06]

algorithms are not the most advanced ones and are just picking the low-hanging fruit. You could actually get a lot of mileage that way. It's amazing to think about the implications, not only on people, but on how we go about making decisions. And often we're the gatekeepers, in a way, right? Like the doctors, who are the gatekeepers to bringing it inside an organization. And I think that'll be an interesting way to bring it in: you bring it in at a low level,

[00:33:31]

and then it demonstrates competence, and then it almost gets promoted, just like people. Yeah. And I think this is part of why people need to become aware of machine learning: we should actually be our own gatekeepers. Ideally, we wouldn't rely on third parties to be our gatekeepers. And often the way a lot of change comes about is because people take on that role. So, for example, doctors initially were not very interested in the Web or in computers.

[00:33:57]

These days they have to be, because patients will come to them and say, well, you said A, but I actually looked on the Web, and the Web says B. So which is it? And this forces the doctors to start looking at the Web. And once these machine learning systems become more widely available, as they are becoming, people will start using them and the doctors will be forced to catch up. The same thing happens in a lot of large companies.

[00:34:18]

Right. The IT department says, no, we're going to use A, but then everybody starts using B, and after a while, you know, the IT department just has to face reality and start using B as well. So I think the same kind of thing can and will happen with machine learning.

[00:34:32]

What are the limitations on the decisions that we'll let machines make, do you think? Will we get to a place soon where there are no pilots in airplanes, where a machine can sentence someone to death, where boardroom mergers and acquisitions are made solely based on algorithms? Yeah, that's a very interesting question.

[00:34:56]

And it's really not so much a technological question as a sociological one. I think over time we will see more and more things being done by machines, and as we get comfortable with it, we will hand control to machines. Take airplanes as an example: every commercial airliner is actually a drone. It's flying itself. And in fact, it would be safer if it was completely flown by a computer.

[00:35:21]

You know, pilots tend to take the controls at landing and takeoff, which are actually the more dangerous moments, and they make more errors than the computers do. But people feel comfortable having a pilot in the cockpit. We already have two people in the cockpit instead of three, and then we'll have one, and eventually we'll have zero. So there are a lot of decisions that we will gradually become more comfortable with. It's partly a matter of just psychologically adapting ourselves to this notion that the machines are making these calls — trusting that they are making the right calls, and that they will do what we would do if we were making the calls ourselves.

[00:35:57]

I think at the end of the day, there will be some things that we will always reserve the right to make our own decisions about, and those are the highest-level decisions. The decisions on how to accomplish our goals can be taken by machines. I want to get from here to New York — I made that decision. But how I get flown there? Sure, I'm perfectly OK with the plane being flown by an algorithm, or maybe the car that drives me to the airport also being one.

[00:36:31]

And maybe I decided to go to New York because of something that some computer advised me about. It said, oh, there's this great thing that you should do in New York — there's going to be this festival that you should attend, or there are these people that you need to meet. But that decision, even though it was partly made on recommendations from a computer, I probably will always want to make myself. I'm not just going to go to New York because the computer told me to.

[00:36:54]

So I think what we see today is this very intricate mesh of what's decided by humans and by computers. You know, somebody wants to find a date — well, they may have a dating site to help them find the date, but then they decide to go to dinner with them; that's their decision. But then maybe they use Yelp to decide where to go to dinner, and then they drive the car to dinner.

[00:37:18]

But it's the GPS that's telling them where to turn, although it's still them driving. Right? So it's a very intricate mesh of the human and the machine, and I think it's only going to get more intricate in the future. But ultimately, I think most things will be done by machines, except the really key decisions that people will always want to retain, even though they make them with advice from the machines. What is the singularity?

[00:37:43]

Yes. So the singularity is this notion that machines can learn — think of what we've been talking about. You have a machine learning algorithm that makes another algorithm, say an algorithm to do medical diagnosis or play chess or whatever. But by the same token, we can actually have a learning algorithm make another learning algorithm. And if the learning algorithm is able to make a better learning algorithm, then that learning algorithm makes an even better learning algorithm, potentially.

[00:38:12]

So what happens is that we start with machines that are not very intelligent, but if each one of them can produce a machine that's more intelligent than the previous one, then maybe the intelligence will just take off and leave human intelligence in the dust. The first people to speculate about this were John von Neumann, who's one of the founders of computer science, and I. J. Good, a statistician who worked with Turing on the Enigma project and so on.

[00:38:37]

They first conceived of this notion of the technology just getting better and better until it completely leaves human capabilities in the dust. The person who actually coined the term singularity to refer to this was Vernor Vinge, a scientist and science fiction writer, back in the 80s. And then the person who really popularized it was Ray Kurzweil, who wrote a series of books about how this is going to happen. Now, the basic evidence that people like Kurzweil use in support of this is they show all these curves of exponential progress.

[00:39:12]

You see the progress just getting faster and faster, and you extrapolate into the future, and it just goes to... A singularity, mathematically, is a point at which a function goes to infinity. And that's their argument. I think that argument is actually very dubious, because in reality no exponential goes on forever; there's always a limit, because the world is finite. So actually, what happens with all of these technology curves is that in the beginning they look like exponentials,

[00:39:39]

but then they flatten out. They're what are called S-curves: technology growth curves are always S-curves. The first part of an S-curve is actually mathematically indistinguishable from an exponential, so it's easy to look at that part and say, this is exponential growth, we're headed for a singularity. Actually, what we have is one of these S-curves, and we're headed for a phase transition. Once the transition is done and things flatten out again, things could be very, very different from what they are now.
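A small numeric illustration of that point: early on, a logistic (S-shaped) curve is nearly indistinguishable from an exponential, but it saturates instead of running off to infinity. The growth rate and capacity here are arbitrary.

```python
# Compare pure exponential growth with a capacity-limited logistic curve.
import math

CAPACITY = 1000
for t in [0, 1, 2, 3, 5, 10, 20]:
    exponential = math.exp(t)
    logistic = CAPACITY / (1 + (CAPACITY - 1) * math.exp(-t))
    print(f"t={t:2d}  exponential={exponential:14.1f}  logistic={logistic:8.1f}")

# At small t the two are nearly identical; by t=20 the exponential has
# exploded while the logistic has flattened out near its limit of 1000.
```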

[00:40:05]

And I think that will be the case with AI. But I don't think we're going to see this infinite growth that goes completely beyond what humans can imagine. You think we'll just have a new baseline and then we'll adapt? Yes. So, again, just extrapolating from the past, what happens is that there are these transitions, and the transitions build on each other. One capability makes other capabilities possible. So electricity makes computers possible, and then computers make AI possible, and whatnot.

[00:40:36]

And these things do build on each other. But when they happen and how large they are is extremely hard to predict. So no one knows exactly, for example, how long the current surge of progress in AI is going to last before it flattens out. Will it be 10 years? Will it be 20? How far will it go? We can't assume that it will just run away from here without any more plateaus or interruptions. That would be very unusual, actually.

[00:41:02]

But just to be clear, do we have algorithms that are spinning out better versions of algorithms that are generating algorithms? Or is that a futuristic kind of statement?

[00:41:13]

So we do have, for example, an area of machine learning called metalearning, which is precisely learning algorithms learning to make better learning algorithms. And this type of metalearning, in a certain basic form, is actually already widely used today. For example, Netflix uses this type of thing to recommend movies: it doesn't just use one learning algorithm, it uses a whole bunch of them, and then another algorithm on top of that that is learning how to use their results.
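A minimal sketch of that "algorithm on top of algorithms" idea, using stacking from scikit-learn — illustrative only, not Netflix's or Watson's actual system:

```python
# Stacking, a basic form of metalearning: base learners make predictions,
# and a learner on top learns how to combine their outputs.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

stack = StackingClassifier(
    estimators=[("forest", RandomForestClassifier(random_state=0)),
                ("bayes", GaussianNB())],
    final_estimator=LogisticRegression(max_iter=1000),  # learns the combination
)
stack.fit(X_train, y_train)
print("stacked accuracy:", stack.score(X_test, y_test))
```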

[00:41:38]

And, for example, the way IBM's Watson won at Jeopardy! was by using this type of learning. Having said that, this is still quite limited in what it can do. We don't have enough at this point for this thing to set up the loop where it just keeps getting better and better. That hasn't happened yet, but it'll be very interesting to see if we can make it happen. For all I know, some kid in a garage today has actually invented that algorithm.

[00:42:02]

But we don't know. When we make decisions as humans, we need to simplify. There might be a million variables that affect a decision, and we try to determine, you know, the five that carry most of the weight. Computers don't have that limitation; they can take into account all of these variables. How do you think that'll impact the way that we try to process things in the future? We're trying to simplify, but machines are going to be more complex, and they're OK with that complexity.

[00:42:36]

And do you think that instills trust in them, or do you think it sows the seeds of distrust? What do you make of that? Well, that is exactly the key advantage of machines: they can take an unlimited number of variables into account, very much unlike humans, who are much more limited. Our brains are very good at things like vision, where we do take millions of variables into account, and motion and whatnot.

[00:43:01]

But for other problems, we are very, very limited, and the machines aren't. So what's going to happen is that the machines are going to be able to learn much more complex models of phenomena than human beings ever could. And this is good, right? Because with those better models, we can make better decisions. With a better model of the cell, we can cure cancer, and so on and so forth. Having said that, it'll still be important for people to trust what the computers are saying.

[00:43:27]

And if they don't understand it, they won't trust it. I think what's going to happen is that, partly, the learning algorithms are going to have to get better at explaining to people what they're doing. And again, some of them are better at this than others. A lot of today's learning algorithms are black boxes, and to some extent we're going to have to learn to live with that. But there's no reason why they all have to be black boxes.

[00:43:49]

There's actually no reason why we shouldn't be able to say to the Amazon recommender system, why did you recommend that book to me? Or, I just bought a watch, so please don't recommend more watches, because a watch is now the last thing I want to buy. So you should be able to have this type of richer interaction with a learning algorithm. And in fact, it's what we already do with other people. When your brain decides to do something — let's say you're the doctor and you tell someone, you probably should get surgery because of such and such — and then somebody asks why, you are able to tell them.

[00:44:20]

You tell them in natural language. What you tell them is not a complete explanation of what went on in your brain — you don't even know the complete explanation — but it's enough for people to understand and trust the result. And you can do the same type of thing with machines. So to some degree, the machines are going to have to become better at doing this. To some extent, we humans are going to have to get more comfortable with the notion that even though we don't completely understand what's going on, it's actually good that the machines have a handle on complex phenomena that without them we wouldn't. Let's switch gears a little bit and talk about IBM's Watson, and chess, and then Go.

[00:44:59]

So I think it was ninety-seven, ninety-eight when Watson beat Kasparov to claim the chess championship. You mean Deep Blue. Yeah, Deep Blue, sorry. And then we fast forward to twenty-sixteen, and we have AlphaGo, which won at Go. Those are two completely, vastly different games, and the approaches that were taken to build both of those systems are different. Maybe you can walk us through the implications and the differences, and why one feat was better than the other.

[00:45:32]

Sure. So Deep Blue was very much classic AI. There was no machine learning involved. Deep Blue essentially was just doing a very clever and very extensive search for the best moves to make. The way it works — and this is the way classic game-playing programs work — is that they look at a position and evaluate how good it is. And in chess and most other games, this is not that hard to do. For example, in chess, you know that having pieces is good, and roughly how much each piece is worth, and that having control of the center is good.

[00:46:09]

And so what the evaluation is trying to do is combine features like this, with weights for them. And then, based on those weights, the program decides what the best positions are and makes the moves that will lead the game into those best positions, taking into account that the human opponent is doing the same, but in the opposite direction. So this is how chess works. Unfortunately, this type of approach does not work for Go, for a couple of reasons.
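A minimal sketch of that classic weighted-feature evaluation idea just described — the features and weights here are toy values, not Deep Blue's:

```python
# Score a chess position as a weighted sum of hand-chosen features:
# material balance plus a small bonus for center control.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """position: piece counts for each side plus a center-control feature."""
    material = sum(PIECE_VALUES[p] * (position["mine"][p] - position["theirs"][p])
                   for p in PIECE_VALUES)
    return material + 0.1 * position["center_control"]

position = {"mine":   {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1},
            "theirs": {"P": 8, "N": 2, "B": 1, "R": 2, "Q": 1},
            "center_control": 2}
print(evaluate(position))   # 3.2: up a bishop, slightly better center control
```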

[00:46:33]

One is that the number of choices that you have at any moment of a Go match is way bigger than in chess. Maybe in chess, at any given point, there are a few dozen different moves that you could make; but in Go there are hundreds, and for each of those moves there are hundreds of others that you could make in the next step. So the search algorithms would be really ineffective? Exactly. It's completely infeasible to do that type of search for Go.

[00:47:02]

And for a long time, people actually had no idea how to solve Go. When you look at how human experts play Go, it's almost like what they're doing, instead of using a few explicit features of the board, is this kind of visual pattern recognition: they look at the board, and, again, they have a hard time explaining what it is that made them make that move, but somehow the visual system in their brain figured out what was the right thing to do.

[00:47:26]

And so what DeepMind did with AlphaGo was to actually combine some of the classic AI game search with deep learning — with this type of neural network approach — to do the evaluation. Since the evaluation of board positions is effectively a bit like a visual pattern recognition problem, and deep learning is very good at visual recognition — in fact, that's what it's best at — then maybe you can use deep learning as the pattern recognizer and then put that into the game search.

[00:48:00]

And this is, in essence, what AlphaGo did. And it worked amazingly well. And is that light years ahead of Deep Blue, or is it just different? Yes. So, number one, as I said, Deep Blue used no machine learning; AlphaGo used machine learning extensively. The first thing that AlphaGo did was learn from the entire existing database of matches played by human masters.

[00:48:27]

It learned from 30 million moves, or something like that — that's the figure I think I heard. And then the next thing that it did was learn by playing against itself: two versions of AlphaGo would play, and then AlphaGo would learn from which one had won. This is actually a very, very old idea in machine learning. It's one of the oldest ideas. It goes all the way back to the fifties,

[00:48:50]

and a researcher at IBM called Arthur Samuel, who actually wrote the first machine learning system to learn to play a game; the system learned to play checkers by playing against itself. So AlphaGo was doing the same thing. So, number one, AlphaGo used machine learning, whereas Deep Blue didn't — and without machine learning, I don't think we would ever have figured out how to win. And the second difference is that the amount of computing power that was used by AlphaGo completely dwarfs the amount of computing power that was used by Deep Blue.

[00:49:27]

First of all, that type of computing power was not available back then. But DeepMind used enormous amounts of computing, not just while it was playing live against Lee Sedol, but in the months that it spent learning by playing against itself. I don't know exactly how many servers Google used, but I've heard that it was a very large number, continuously playing for months on end. Do you think we'll ever get to a point where we as a culture sit down and watch Google's AI play Facebook's AI at Go?

[00:50:01]

I think we will, if we find it interesting. You know, just like there are these robot contests, and these days they're starting to have drone contests, I could imagine Facebook's AI challenging AlphaGo to a game. I'm not sure that people will be interested in that. I think more likely what will happen is that people will still be playing each other, even though the best players are computers — in the same way that, you know, race cars go way faster than people,

[00:50:33]

but we still have people running in the Olympics. What you have is people competing against each other to see who can run the hundred meters faster, or the marathon fastest. And people are interested in race cars because they are driven by human pilots. Right. If those race cars were self-driving, we probably would be less interested. But that kind of begs the question: why haven't we had a self-driving race car in, like, the Indy 500?

[00:51:01]

Well, I think we could at this point, actually, and it might actually win. I think that in the past the technology wasn't ready, and then once the technology is ready, people have to let it happen. Right? So the Indy 500 would have to let a self-driving car compete. I actually wouldn't be surprised if that happened in the next few or several years. I think we're at the point where it could. But, you know, sometimes people don't want the computers or the machines to be competing with them on a field like this.

[00:51:33]

There are actually games where the humans refuse to play against the computers. It used to be that humans wouldn't play computers because the humans were sure to win; there are also now areas where the humans won't play the computers because the computers are sure to win. So it depends. With self-driving cars, are we there now, do you think, from a technological point of view, just not a psychological one? Or do you think we're a couple of years away from that?

[00:52:00]

It depends. So here's the crucial question: how uncontrolled can the environment in which you're driving be? Why is it that the first thing we had was self-flying planes — autopilots — long before there were self-driving cars? It's because in the air there's very little unexpected that can happen, so you don't even need AI; just classic control systems and software engineering will actually do that for you. And then the next step is, well, what about driving a car on the freeway?

[00:52:31]

Driving a car on the freeway is something that I think the technology to do is there. A freeway is less controlled than the air, but it's much more controlled than driving in the city. Once you're driving in the city, boy, all sorts of stuff can happen: people will cross, people will make strange maneuvers. So that's harder, and I think this is where the frontier is right now. The cars are starting to drive in the city, and I think we are going to get to the point where they are widely deployed — not necessarily because they have become as good as people at dealing with unexpected situations, but because we will also adapt to them.

[00:53:05]

I think what's going to happen is that self-driving cars will have to be clearly identified as self-driving cars, and we human beings will know how to deal with them differently than we do with cars driven by people. Again, we will expect different errors, and also different ways in which they're reliable: self-driving cars probably won't do stupid things that people do because they're drunk, but they might do stupid things because they just don't recognize something that is happening in front of them.

[00:53:33]

But I think the whole system will evolve towards accommodating self-driving cars. So it's going to be an adaptation on both sides. And then I think eventually the self-driving cars will be good enough that they can drive in any environment, and they will eventually replace everybody. How far away do you think that is? Yeah, exactly, that part is hard to predict, because again it involves so many unpredictable things. Again, the cars don't have to be perfect before we start using them instead of people.

[00:54:02]

They just have to get better than people. When that will happen is hard to predict, but my guess is that five years from now there will be a lot of self-driving cars around, and maybe 10 or 15 years from now most cars sold will be self-driving. The other thing, ironically, is that if we took everybody off the roads today and it was all just self-driving cars, it would be much easier for the self-driving cars.

[00:54:31]

They have no trouble at all dealing with each other. In fact, they can coordinate much better than we human beings can, as a result of which we can put many more of them on the road: the spacing can be smaller, we can have fewer traffic jams, and so on and so forth. What really makes life hard for self-driving cars is us, the humans. So it'll be interesting to see when this happens. Like the first time the Google car got into an accident, it was because it stopped at a red light and it got rear-ended.

[00:54:58]

Right. So it was actually doing the right thing, except that nobody ever stopped at that red light. And those are the things that the cars have a little bit of trouble dealing with. So Tesla and Google are taking two radically different approaches when it comes to self-driving cars. Maybe I'm wrong, but it seems like Tesla is taking the incremental approach, where we'll have human drivers and eventually phase into self-driving, whereas Google has effectively removed the steering wheel from their cars. Or am I off?

[00:55:28]

That's right. And in fact, it's not just Tesla. If you look at the big automakers like Toyota and BMW and whatnot, I think they're all following this more incremental approach. And I think it makes sense for them, because they're selling cars today and they're not ready to deploy a fully self-driving car. So what they're going to do is introduce these things one by one. However, I think Google also has an important point, which is born of their experience: this notion of mixed control between the human and the car is actually very problematic. If someone is not driving the car,

[00:56:04]

and then suddenly the computer says, like, oh shoot, I'm confused, take over, then the person will not be very well able to take over. In fact, they might well crash the car, because they are not in the frame of mind to drive it. So Google's reaction to this was to say, no, we just have to go all the way and have the car be completely self-driving. And as it becomes less and less novel,

[00:56:25]

we start to pay less and less attention to things. Yeah, exactly. And, you know, if you're not driving the car, you can't take over in a fraction of a second, because you need to build context by driving the car over a certain amount of time. And this happens with pilots: if you're not piloting very often, you start to get rusty, and then when the time comes for you to take over, you're actually not as good at doing it as you used to be,

[00:56:51]

because it was a computer doing the driving or the piloting most of the time. But I think that in the short term, the approach of the Teslas and the Toyotas and whatnot will be the prevalent one; in the long run, I think it will be the Google approach that prevails. Right. There's another interesting thing in all this, which is: what is the business model of each of these companies? It's one thing to be the companies that are selling cars today.

[00:57:14]

It's another thing to be Google, which actually doesn't have a business model for cars. Right? Maybe they're going to sell their software to car companies. It's yet another thing to be, for example, Uber, a company that has also invested heavily in self-driving cars. And they, of course, have a very different and very clear business model, which is they just want to take what they're doing and have the cars not need human drivers anymore.

[00:57:37]

And all of these different models imply different ways to go about doing it. This is fascinating. I could go on for hours, but I know we're coming up to our time limit here, so I just have two questions left. Switching subjects a little bit here: what books would you say had the most impact on you in your life, in terms of ones you keep coming back to or that changed the way you see the world?

[00:58:00]

Well, one book that influenced me a lot was just the first A.I. textbook that I ever saw. You know, I saw a book in the bookstore called Artificial Intelligence, and I was very intrigued by what that might be. And reading that book is actually what set me on the path to A.I. Another book that I've read that is related to A.I., and that I know has influenced a lot of people into becoming researchers, though in my case I was already on that path before I read it,

[00:58:25]

is Hofstadter's Gödel, Escher, Bach. It's an amazing book, very thought-provoking, and it speculates about all sorts of things that have to do with A.I. and computers, including things that we've been talking about. So that has been another very influential book. Another book that I've read more recently, and that I think is really amazing and really important, is Jared Diamond's Guns, Germs, and Steel. I think it gives this large picture of human history that is very absent from most of how history is done today.

[00:59:01]

And I think it's very important to have books like that. And I could go on, but I think those are three of the main ones. What's on your nightstand right now? What is on my nightstand right now? Right now I'm reading another book by Hofstadter, which is a book about analogy. We were talking about the analogizers in machine learning; he has actually written this book about how it's really all just analogy. Right.

[00:59:22]

So Douglas is the ultimate analogizer, in fact; he coined the term. I have actually not completely read the book to date, so that's one book that I'm reading. I'm also reading a book on symmetry group theory, because I think this is something that has not been exploited in machine learning and might be the origin of that sixth paradigm. It's something that is used a lot in physics and in mathematics. In fact, some people say physics and mathematics are really just symmetry group theory and a few other things.
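(For the curious: in machine-learning terms, the symmetry idea being pointed at here is usually written as invariance or equivariance under a group G acting on the inputs. The statement below is standard notation, not something spelled out in the conversation.)

```latex
% G is a symmetry group acting on inputs x (e.g., translations of an image).
\begin{align}
  f(g \cdot x)    &= f(x)            &&\forall g \in G \quad \text{(invariance)} \\
  \Phi(g \cdot x) &= g \cdot \Phi(x) &&\forall g \in G \quad \text{(equivariance)}
\end{align}
% Example: convolution is translation-equivariant, which is one reason
% convolutional networks generalize across positions in an image.
```

Convolutional networks already exploit the translation case; richer groups are presumably the kind of unexploited structure he has in mind.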

[00:59:51]

But that's at the center of it. So that's another thing that I'm reading right now. And final question: if you could close your eyes and kind of just envision the next five years, what do you see happening in terms of our interaction with machines and machine learning? Yeah, I think one of the things that will happen is that we will go from what we have today, where every company has its own little recommender system for you, right, so Amazon has one for books and Netflix has one for movies and so on, to where all of the data is pooled.

[01:00:22]

And there is this recommender system that has a three-hundred-and-sixty-degree view of you and helps you with every decision that you make at every step of your life. And in fact, you can already see the companies competing to create this type of virtual assistant, you know, the Cortanas and the Siris and Google Now and Facebook and the Amazon Echo and whatnot. So I think five years from now we will all have these things and we will run our lives using them.
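(To make the pooled-data idea concrete, here is a minimal, hypothetical sketch of a cross-domain recommender. The items, ratings, and the nearest-neighbour heuristic are all invented for illustration; a real assistant would be vastly more sophisticated.)

```python
# Toy "pooled data" recommender: books and movies live in ONE rating
# matrix, so taste in one domain can inform predictions in another.
import numpy as np

items = ["book:GEB", "book:GunsGermsSteel", "movie:Her", "movie:ExMachina"]
ratings = np.array([          # rows = users, 0 means unrated
    [5, 4, 0, 5],
    [4, 0, 5, 4],
    [0, 5, 4, 0],
], dtype=float)

def cosine(a, b):
    """Cosine similarity between item columns, ignoring unrated entries."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    a, b = a[mask], b[mask]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def predict(user, item):
    """Similarity-weighted average of the user's ratings on ALL domains."""
    sims, vals = [], []
    for j, r in enumerate(ratings[user]):
        if j != item and r > 0:
            sims.append(cosine(ratings[:, item], ratings[:, j]))
            vals.append(r)
    sims, vals = np.array(sims), np.array(vals)
    return float(sims @ vals / (np.abs(sims).sum() + 1e-9))

print(items[3], "->", round(predict(user=2, item=3), 2))
```

The point of the toy is just that once book and movie ratings sit in one matrix, a rating in one domain feeds a prediction in another, which is what a pooled, 360-degree recommender would do at scale.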

[01:00:51]

They'll be even more indispensable than smartphones are right now. We will really not know how to live, how to deal with information overload and with all the choices, without them. So I think this is one big change that we're going to see. Another one is that we're going to see a lot more things automated than we see today. So we've been talking about self-driving cars, but we're going to see self-driving X, where X is just about anything that you can imagine.

[01:01:16]

There'll be a lot more autonomy. There will be a lot more things that adapt themselves to you, like, for example, the way the Nest thermostat adapts itself to your schedule. But that's just one variable about your house and about your life, and I think we're going to see the same thing with more and more things. That's amazing. Pedro, thank you so much. This has been fascinating. Really appreciate you taking the time. Yeah, thanks.

[01:01:40]

This was great.

[01:01:45]

Hey, guys, this is Shane again, just a few more things before we wrap up. You can find show notes at farnamstreetblog.com/podcast. That's F-A-R-N-A-M S-T-R-E-E-T blog dot com slash podcast. You can also find information there on how to get a transcript.

[01:02:05]

And if you'd like to receive a weekly email from me filled with all sorts of brain food, go to farnamstreetblog.com/newsletter. It's all the good stuff I've found on the Web that week, things I've read and shared with close friends, books I'm reading, and so much more. Thank you for listening.