[00:00:00]

You yourself are not going to solve the obesity epidemic. You yourself are not going to create world peace. You yourself are not going to solve the climate issue. Your brain just isn't going to be big enough. Collections of people, by combining their ensembles of models, can make it larger, and some of them will actually have a hope of addressing these problems.

[00:00:25]

Hello and welcome. I'm Shane Parrish, and this is another episode of the Knowledge Project, which is a podcast exploring the ideas, methods and mental models that help you learn from the best of what other people have already figured out.

[00:00:38]

You can learn more and stay up to date at fs.blog/podcast. Before we get to today's guest: I get emails all the time from people saying, "I never knew you had a newsletter." We do. It's called Brain Food, and it comes out every Sunday morning, usually at 5:30 a.m. Eastern time. It's short, and it contains recommendations for articles we found online, books, quotes and more. It's become one of the most popular things we've ever done.

[00:01:02]

There are hundreds of thousands of subscribers, and it's free. You can learn more at fs.blog/newsletter. That's f-s dot blog slash newsletter. Most of the guests on this podcast, The Knowledge Project, are subscribers to the email, so make sure you check it out. On today's show is Scott Page, professor of complex systems, political science and economics at the University of Michigan.

[00:01:26]

I reached out to Scott because over Christmas I read a book that he wrote called The Model Thinker, which is all about how mental models can help you think better. And as you can imagine, this podcast is a deep dive into mental models, thinking tools and developing your cognitive ability. It's time to listen and learn.

[00:01:48]

Before we get started, here's a quick word from our sponsor. Farnam Street is sponsored by MetaLab. For a decade, MetaLab has helped some of the world's top companies and entrepreneurs build products that millions of people use every day. You probably didn't realize it at the time, but odds are you've used an app that they've helped design or build: apps like Slack, Coinbase, Facebook Messenger, Oculus, Lonely Planet and so many more. MetaLab wants to bring its unique design philosophy to your project.

[00:02:16]

Let them take your brainstorm and turn it into the next billion-dollar app, from an idea sketched on the back of a napkin to a final shipped product. Check them out at metalab.co. That's metalab dot co. And when you get in touch, tell them Shane sent you. Scott, I'm so happy to have you on the show. It's great to be on. It's a thrill for me.

[00:02:39]

You just wrote a book called The Model Thinker, and I want to explore that with you. What are mental models?

[00:02:46]

So what a mental model is, really, is just a framework that you use to make sense of the world. The Model Thinker, the book, contains really three things. One is a general philosophy of models. One is a collection of models that you can play with and understand. And the third is a set of examples of how, in practice, one would apply a variety of models to a problem.

[00:03:12]

So when I think about a mental model, as opposed to maybe a standard mathematical model: with a mental model, what you have to do is map reality to the mathematics. If I say, well, you should use a linear model here to decide who to hire, take your data and just put it in a linear model, the thing is, you have to decide what the variables are.

[00:03:37]

So a linear model might contain things like grade point average, work experience, a personality test score. You have to think about what the variables are that you use to attach reality to the mathematical frameworks that exist out there. What I try to do in the book, and also in my work, is think about the fact that the mathematics is beautiful because it's logical, it's right, but reality is kind of messy and confusing and complex.

[00:04:09]

And so what I see mental models as doing, in some sense, is mapping reality to the clean, logical structures of mathematics.
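
As a rough illustration, a linear hiring model might look like the following sketch. The weights and candidate numbers here are hypothetical; the modeling act is choosing which variables count.

```python
# A minimal sketch of mapping a hiring decision onto a linear model.
# The variables (GPA, experience, personality score) come from the
# example above; the weights are hypothetical, for illustration only.

def hiring_score(gpa, years_experience, personality,
                 weights=(0.5, 0.3, 0.2)):
    """Linear model: score = w1*GPA + w2*experience + w3*personality.

    Deciding that these three things, and not, say, writing ability,
    are the variables is the 'map reality to mathematics' step.
    """
    w1, w2, w3 = weights
    return w1 * gpa + w2 * years_experience + w3 * personality

# Two hypothetical candidates, described only by the chosen variables.
print(hiring_score(gpa=3.8, years_experience=2, personality=7.5))
print(hiring_score(gpa=3.2, years_experience=6, personality=6.0))
```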

[00:04:17]

And we all have mental models, whether we're conscious of it or not. How did you land on this approach?

[00:04:24]

So the way I landed on the approach is this. When I was trained in school, even starting in sixth, seventh, eighth grade, you learn a bunch of very simple models, like force equals mass times acceleration, or PV equals k in physics. In economics, you learn things like S equals D, supply equals demand. These models are very simple, and the whole idea was that I can explain patterns in the real world, or make sense of the variation we see in the real world, using a single simple equation.

[00:04:57]

Then, sometime in the nineteen-nineties, I went and visited the Santa Fe Institute, which is a think tank on complexity. My advisors at the time were very good game theorists: Roger Myerson, who won the Nobel Prize in game theory, was among them, and Stan Reiter was in that group as well. They were people who studied rational choice,

[00:05:23]

and how people optimize in social situations. And the Santa Fe Institute was all about the fact that the world was so complex it was going to be hard to optimize. I wouldn't say I had some sort of intellectual crisis; it was more that it was intellectually fascinating. There was this disconnect, and the disconnect is that I'm trying to make sense of an extremely complex world using very simple models. What social science has typically done is say the world is really complex,

[00:05:54]

here's my model, and I can explain 30 percent of the variation, or I can explain 10 percent of the variation, or I can explain why these stocks went up in value. But that means you're missing the other 70 percent or 90 percent. So a bunch of us have kind of happened onto this notion of collective intelligence, the idea that one way you can make sense of complexity is by throwing ensembles of models at it.

[00:06:19]

So one of them may explain 20 percent, another 15 percent. It's not that they add up to one hundred percent and explain everything; in fact, there's overlap, and there are even sometimes contradictions among them. But by looking at the world through an ensemble of logically coherent lenses, you can actually make sense of the complex world. And what's fascinating about this to me is that there's a group of people, some philosophers, some economists, some statisticians, some biologists, kind of playing in this space of collective intelligence.

[00:06:52]

You might ask what biologists were doing in this space. Well, think of ants. Each individual ant has a mental model: a map of the terrain, of where the food sources are. And they can aggregate that collectively within the nest. Bees can do the same thing within the hive by doing these things called waggle dances, which explain where the food is. So bees come back and dance to say, look, I think there's food here, and another bee will come back and dance, I think there's food here, and they can aggregate their maps of the world. At the same time that people were thinking about collective intelligence

[00:07:23]

from a purely theoretical perspective, there was a set of people in computer science who were creating things like random forest algorithms, these giant artificial intelligence algorithms that were also constructing collective intelligence by combining all sorts of very simple filters. And I think there's a growing consensus that our heads aren't big enough. No individual's head is big enough to make sense of the complexity of the world.
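
To make the ensemble idea concrete, here is a toy simulation with invented numbers. It also illustrates what Page calls the diversity prediction theorem: the crowd's squared error equals the average individual squared error minus the diversity of the predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

truth = 100.0        # the quantity everyone is trying to predict
n_models = 5

# Each "model" sees part of the picture: its prediction is the truth
# plus its own idiosyncratic error (simulated over 1,000 problems).
predictions = truth + rng.normal(0.0, 10.0, size=(n_models, 1000))

crowd = predictions.mean(axis=0)              # the ensemble's prediction

avg_individual_error = np.mean((predictions - truth) ** 2)
crowd_error = np.mean((crowd - truth) ** 2)
diversity = np.mean((predictions - crowd) ** 2)

print(round(avg_individual_error, 1))          # ~100.0
print(round(crowd_error, 1))                   # ~20.0, far smaller
# Diversity prediction theorem: this equals the crowd error exactly.
print(round(avg_individual_error - diversity, 1))
```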

[00:07:52]

So you're going to have a set of models of how you think the world works. I'm going to have a set of models of how I think the world works. But any one of us individually is too small to make sense of the craziness, the complexity, the sheer dimensionality of the world that sits in front of us. Collectively, though, we can kind of make sense of it. So let's take something outside of finance for a second.

[00:08:12]

Let's look at the obesity epidemic. You could blame that on infrastructure. You could blame it on food. You could blame it on the bacteria in our gut. You could blame changes in work-life balance, the lack of physical work, all sorts of things. And to understand any one of the dimensions that contributes to obesity, you probably need more than graduate study; you might take five or ten years of study just to understand one piece of it.

[00:08:39]

But if you tried to fix the obesity epidemic by just changing that one piece, by just climbing that one little hill, you're not going to get very far, because there are probably going to be several systemic feedbacks. So there's going to be no silver bullet that fixes it. What you can do, by having a collection of people who each know different parts, whose knowledge overlaps, and who have different models of how things work,

[00:09:01]

is get a much deeper understanding, and you might be able to take a more holistic approach. And we can talk about this later; I think it leads to a different way of thinking about policy, when you go at these problems from a multiple-model perspective.

[00:09:14]

So to what extent is it fair to say that cognitive diversity is then a group of people who have different models in their head about how the world works?

[00:09:25]

It is. This is where I occupy kind of a strange space. The book I wrote before this was called The Diversity Bonus, and that book talks about the value of having diverse people in the room. The reason you want diverse people in the room is because different people bring different basic assumptions about how the world works. They construct different mental models of how the world works, and they're going to see different parts of a problem.

[00:09:51]

So if you look at the fluctuations in the stock market, or if you look at the valuation of any particular company, there are so many dimensions to a company like Amazon or Disney that there's no way any one person can understand it. What you want is cognitive diversity, and what cognitive diversity means is people who have literally different sets of models or different information. One of the things that leads off the book, and that I use a lot when I teach this to undergraduates or a general audience, is something called the wisdom hierarchy.

[00:10:24]

At the bottom, there's all this data: call it a firehose of data or a hairball of data, choose your favorite metaphor. It's all just floating out there. On top of the data is information. Information is how we structure the world. So you may say unemployment is up. What you're doing is taking tons and tons of data about people having jobs and putting it into a single number.

[00:10:49]

You categorize: unemployment's up, inflation's up, and you're using those as your variables, where someone else might have a very geographic view and say, boy, Los Angeles is doing well, Texas is doing well, but in the Midwest the economy's not doing so well. Then on top of information sits knowledge, and knowledge is understanding either correlative or causal relationships between those pieces of information. So if one piece of information is mass and another piece of information is acceleration, the knowledge is that force equals mass times

[00:11:21]

acceleration. If a piece of information is unemployment and a piece of information is inflation, then you might understand that when unemployment is very, very low, you often get wage inflation; that's a piece of knowledge. Wisdom is understanding which knowledge to bring to bear on a particular problem. Sometimes that can be selecting among the knowledge. Other times it can be a case where what you're doing is combining and coalescing the knowledge.

[00:11:52]

Here's an example from finance, one of my favorite stories in the book. My college roommate, Eric Ball, was treasurer at Oracle. Someone comes into his office and says, Iceland just collapsed. And two models came to mind. One is that you can think of the international financial system as a network of loans and deposits across banks and across countries. Another model is just a simple supply and demand model.

[00:12:18]

Munger has this wonderful quote about how you want to array your experiences on a latticework of models. In that situation, those were Eric's two models: a complicated network of loans and promises to pay, and simple supply and demand. He looked at the person who worked in his office and said, Iceland is smaller than Fresno; go back to work. Through the supply and demand lens, it's a tiny country, and it's not going to matter. Whereas if the person who worked in his office had said BlackRock just failed,

[00:12:47]

he would have said, oh my goodness, I'm not going to use the supply and demand model; I'm going to use this network of contracts and promises to pay. So what you want to think about is you as an individual. One of the fabulous things about your site, Farnam Street, is that it's all about all these mental models, all these ways people have of making sense of the world.

[00:13:09]

One of the reasons people go to your site, one of the reasons people read business books, one of the reasons we gather, is to accumulate knowledge in the form of ways of taking information and understanding the relationships within it. And what we hope to gain is wisdom, by having more knowledge to draw from. But the core philosophy of The Model Thinker is that even if you do the best you can, even if you're a lifelong learner, even if you're constantly amassing models, you're still not going to be up to the task of solving any one of these problems alone.

[00:13:38]

You yourself are not going to solve the obesity epidemic. You yourself are not going to create world peace. You yourself are not going to solve climate issues. Your brain just isn't going to be big enough. But collections of people, by having different ensembles of models, can make it larger, and some of them will actually have a hope of addressing these problems.

[00:14:05]

OK, there's so much I want to dive into there. Let's start with the hierarchy. From data to information to knowledge to wisdom, it sounds like we're applying mental models at the knowledge stage, and then wisdom is discerning which models are more relevant than others. Is that an accurate view? And if not, correct me. I puzzle over this a lot. Every time I think I have an accurate view of it, I then reframe how I think about it.

[00:14:35]

So, I gave a talk the other day, and someone said, I think the real place mental models come in is in this move, which is very subtle, between data and information. Which is true, right? Because think about how, if I visit a city for the first time and somebody says, tell me about Stockholm, I immediately start putting it in categories. I might say, well, it's a lot like London, or I might say, well, the people are friendly but reserved, or something.

[00:15:05]

And so, again, you're taking all of these experiences and putting them in boxes. There is a sense in which just the act of going from your raw experiences to information is almost leaning on the models you already know. And this is a thing I've been puzzling over the last few weeks, which has been fun to think about: if I have a set of models in my head, that knowledge space, does that then bias how I filter the data into information?

[00:15:28]

It probably does, of course, because the models are helping you pick out which variables you think will be more relevant and how those variables will interact with one another.

[00:15:42]

Right. So here's a really great example of that. There's this phenomenon called the wisdom of crowds; there's a classic book on it about how groups of people can make accurate predictions. The reality is that sometimes groups will be successful and sometimes they won't, and one of the reasons we write down models is to figure out what types of diversity are useful and which aren't. There's been work by a number of people, at HP Labs and at Michigan, where they compared the following: suppose there's lots of actual data out there,

[00:16:13]

and I run a linear regression to try to predict something, and I have that compete against people. What you find is the regression does a lot better than any one person, because the regression can include and weight a lot more data, and it doesn't suffer from biases, all sorts of stuff. But oftentimes when you have groups of people compete against the linear models, the groups of people can beat the linear models. And when they do, where they beat them is when the person constructing the linear model doesn't have a way of including something in the model.

[00:16:49]

One example involves a consumer product, a printer. The linear model said this printer is going to sell, let's say, four hundred thousand units, and when they used the crowd, the crowd said it's going to sell more like two hundred thousand. So it's a huge difference. And they went back and interrogated people in the crowd, asking, why do you think this is not going to sell? It handles all sorts of paper, the print quality is good,

[00:17:12]

the toner cartridge is easy to change, all the attributes that would seem to go along with success. And the first words out of one person's mouth were butt ugly. That is a butt-ugly printer. But there was no butt-ugly variable in the regression, because that's a design feature, and this one wasn't very attractive. The difficulty with data in those situations, in the form of the linear model, is that it only looks backwards. It can only look at what's happened in the past, whereas people, in constructing models, are often forward looking: how are people going to respond to this new design?

[00:17:49]

Now, what's going to work best, ironically, in all these situations is a combination of the linear model and the people. And this gets to the step from knowledge to wisdom that I find really fun. You could say, oh, so what you should do is average the linear model and the people. That seems to not be true. What you should do instead is this: if the linear model and the people are close in their predictions, you probably should go with the linear model, because it's really well calibrated.

[00:18:16]

Right. It's probably going to be better. But if they're far apart, if the linear model and the humans are giving very different predictions, then you want to go talk to them. Talk to the people, and talk to the linear model. Now, you could say, how do you talk to a linear model? Well, you look at it and ask: what variables are in there? What variables are the people using that the linear model is not? What do the coefficients look like?

[00:18:40]

Has the environment changed? That's the key thing: the linear model is assuming stationarity. And what's funny, MIT just started this new school, a new college of data science and computing, the first new college it has started in decades, and raised a billion dollars for it. One of the things they've said is they want people who are bilingual, who can communicate between these really sophisticated artificial intelligence models and the real world, because people are afraid of just throwing all this information into a giant black box that spits something out.

[00:19:16]

If you're using relatively simple models, you actually can be bilingual. It's easy. It's easy to look deeply at whatever model you're using and ask, why is the model saying that?
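
Here's a small sketch of that reconcile-or-investigate heuristic, with a hypothetical tolerance and the printer numbers from the story:

```python
# A sketch of the knowledge-to-wisdom step described above: trust the
# well-calibrated linear model when it and the crowd agree, but treat a
# large gap as a signal that the model may be missing a variable (the
# "butt ugly" problem) or that the environment is no longer stationary.

def reconcile(model_forecast, crowd_forecast, tolerance=0.25):
    """Return (forecast, action). The tolerance is hypothetical."""
    gap = abs(model_forecast - crowd_forecast)
    relative_gap = gap / max(abs(model_forecast), 1e-9)
    if relative_gap <= tolerance:
        # Close together: go with the calibrated statistical model.
        return model_forecast, "use model"
    # Far apart: don't average blindly; go talk to both sides.
    return None, "investigate: compare variables, check stationarity"

# The printer example: model says 400k units, the crowd says 200k.
print(reconcile(400_000, 200_000))   # large gap -> investigate
print(reconcile(400_000, 380_000))   # small gap -> use the model
```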

[00:19:28]

I want to explore something with you. When we talk about models and how we apply them, whether at the data-to-information filtering stage, the knowledge stage, or the knowledge-to-wisdom stage, it seems to me we can probably agree that having more models is a good thing in general, but only to the point where they're relevant for the specific problem that you're facing.

[00:19:56]

Having extra models, if they're not useful, is not good. But the more tools you have in your toolbox, the more likely you can accommodate a wide variety of jobs.

[00:20:04]

I think that's right. But it also becomes interesting when you think about building teams, and about building your own career. In your interview with Atul Gawande, he made this fabulous point that his method of making a contribution to the world was being able to communicate across different types of people in different areas. He'd been raised by doctors, his parents were doctors, so he'd absorbed what the medical profession was all about.

[00:20:33]

But at the same time, he had this deep interest in science, and an interest in political philosophy and literature and public policy. And that enabled him to fill what Ron Burt calls a structural hole: there's a network of people studying medicine, there's a network of people studying politics and public policy, and he can stand between those two things and make sense of both. One of the things I talk about, in both The Diversity Bonus and The Model Thinker, is that you can think of yourself as a toolbox, and you've got some capacity to accumulate tools: mental models, ways of thinking.

[00:21:08]

And what you could decide to do is go really deep. You could be the world's expert, or one of the world's experts, on random forest models or Lyapunov functions. Or you could be one of the world's leading practitioners of signaling models in economics. Alternatively, you could go deep on a handful of models, three or four things. Or you could be someone, and I think a lot of people who are really successful in the financial space, like my friend Bill Miller, work this way, who has an awareness of a whole bunch of models, twenty models at your disposal that you can think about.

[00:21:47]

And then when you realize this one may be important, you dig a little bit deeper. That variety of models gives you, I think, two things. One is it gives you a robustness, in a portfolio sense, so that you're less likely to make a big mistake. But it also can give you this incredible bonus, in the sense that two models, rather than giving you the average of the performance of the two, often give you much, much better than the average.

[00:22:13]

Right. You get a bonus from thinking with a variety of models. And what was super fun about the book, and what's been really rewarding about it, is that I lay out this philosophy: this is why, to confront the complexity of the modern world, you need a variety of models, using the data, information, knowledge, wisdom hierarchy. Then I take what I think are the 30 most important models that you might know: Markov models, linear models, Colonel Blotto models, Lyapunov functions, systems dynamics models, simple signaling models, just a whole variety.

[00:22:47]

And it was just a great exercise: how do you write each one of these in seven to twelve pages in a way that everybody can understand it and then use it? That's a real challenge. My editor at Basic Books, who challenged me on the book and wordsmithed it with me, would sometimes just say, no one is going to understand this.

[00:23:14]

And that was a fun thing to do. Take Markov models, for example. These are models of states of the world and the transitions between those states. In the book it's ten or twelve pages, and I think most people can understand it, and all the math is in a box. Whereas if you go to the Wikipedia page and type in Markov, you just go, wow, I'd need a Ph.D. in statistics;

[00:23:38]

that's your only hope of understanding it. So what I'm trying to do in the book, in some sense, is to say: here's a way to get knee-deep in these things, to understand where they work. No one is going to master all 30 models in this book, because you could write a Ph.D. thesis on each one of them; you could do twenty Ph.D.s on each one.

[00:24:04]

But the thing is, the awareness is really useful, because you might say, this Colonel Blotto game, or these Markov models, or these signaling models, or these power-law distributions, this is really interesting to me, and then you can go a little bit deeper. So I think it's really meant to be kind of a reference, in a way, but also an awareness document, where you can say, hey, I was just looking, and here's a model

[00:24:32]

I'd never even heard of. It's really fun to think about that.

[00:24:34]

So I want to dive into some of those in a little bit.

[00:24:37]

But before we get there, I want to talk a little bit more about acquiring mental models. How do you pick which models to learn if you're working in an organization or you're a student? How would you go about having a conversation with somebody about which mental models to prioritize, and why?

[00:24:58]

Yes. I think one of the first things you want to ask is, who are the relevant actors? Is this a single actor who's just making a decision, or is this a strategic situation where someone is taking an action and they've really got to take into account what action someone else will take, what strategy somebody else is going to follow? If I'm thinking about what action to take in a soccer game, or what position to take in investing, I've got to think a lot about what others will do.

[00:25:24]

You might want to ask, is it the case that I'm taking some action while embedded in a much larger structure where things are moving, and I'm taking cues from that larger social system? Stop and think about which book you buy, or which album. You can think of that as just making a decision, but it's not, because you're really making it within a much larger social, cultural, economic milieu, where you're maybe not even aware of the fact that you're drawing signals from what other people are doing.

[00:25:50]

So you want to ask: am I modeling a person making an individual, isolated choice, am I modeling a strategic situation, or am I modeling something that's much more social and ecological? The second thing you want to ask is how rational the person making the decision is, with the alternative being not irrational, but rule-of-thumb based. There's a guy, Gigerenzer, a German social scientist, who with Peter Todd has this body of work on adaptive toolboxes, which is this idea:

[00:26:21]

I've just got a set of rules or tools, and I apply them: oh, this rule may apply here, that one probably applies there. There's a lot of stuff, like where I'm going to do my laundry or which coffee shop I'm going to go to, that I probably don't sit around and rationally think about. I just follow some sort of routine, and maybe I adapt that routine slowly.

[00:26:44]

Maybe I learn a little bit, but for the most part I just follow it. So you want to ask how people are deciding, and then you can ask yourself, is my logic correct? Colin Camerer, Richard Thaler, people who study behavioral economics, would say that if the decision is repeated a lot, that should move you a little more toward assuming rational behavior, because people should learn. And if the stakes are huge, that should move you towards rationality. If it's being made by an organization versus an individual, that could move you either way: if it's a big decision by an organization, you could imagine, OK, this is going to be done rationally, right?

[00:27:20]

You have a committee of people thinking about it in a very careful way. But if it's some standard operating procedure within a large organization, then it could be way over at the other extreme: this is just how we do it, this is how we've always done it. So you have to think about how to model the actor: as following a rule, as optimizing, or maybe as suffering from some sort of human bias, if it's a human.

[00:27:46]

And then: is it just a decision, is it a game, or is it some sort of social process? Those are, in some sense, the two main questions: in what context is the action taking place, and who's making it? And then I think the real challenge, if it's not a decision, if it's a game or some sort of social process, is making yourself really aware of the challenges of aggregation, in the sense that oftentimes things don't add up the way they're supposed to, or there can be a fundamental paradox in the assumptions you're making.

[00:28:22]

So, for example, Barabási has a new book out called The Formula. In it, he talks about lessons for success drawn from looking at tons and tons of data. Some of it is about things we were talking about before: you want to make sure you use your network really well, you want to seize opportunities, those sorts of things. But there's a circular reasoning in there, in the sense that if everybody followed that formula,

[00:28:47]

it's not clear that everybody would be successful. Right? The thing is, oftentimes these systems contain feedbacks within them that make them logically inconsistent at the level of the whole. And his book, again, is a fabulous book. I'm not denigrating it, because I think he's right, and I think if you read his book, you'll be able to be more successful. But if everybody read his book, the advice would stop working the same way.

[00:29:10]

So when you think about constructing a model, it depends on the context you want to think through. When you asked me the first question, what is a mental model, I think of it, and I'm a real outlier here, as a way of saying: I'm going to use this mathematical model to make sense of reality.

[00:29:32]

One of the great examples of the value of the formal mathematics is the Markov model. In a Markov model there's a set of states, so you could be happy or sad, or a market could be volatile or not volatile, and there are transitions between those states. If those transition probabilities are fixed, and if it's possible to get from any state to any other state, then the system goes to an equilibrium, and it's a unique equilibrium.

[00:30:02]

So what that means is history doesn't matter, right? One-time interventions don't matter. There's just this vortex drawing the system to one place. What that model forces you to do, then, if you want to argue that the world is path dependent, or that a policy intervention is going to make a difference in some way, is to say either I'm creating a new state that didn't exist before, or I'm fundamentally changing these transition probabilities.
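
A minimal numerical sketch of that claim, with made-up transition probabilities: two very different starting histories converge to the same unique equilibrium.

```python
import numpy as np

# Two states (say a market that is "volatile" or "calm"), with fixed,
# made-up transition probabilities; each state is reachable from the other.
P = np.array([[0.8, 0.2],   # P[i, j] = probability of moving from i to j
              [0.4, 0.6]])

def long_run(start, steps=60):
    dist = np.array(start, dtype=float)
    for _ in range(steps):
        dist = dist @ P      # one period of transitions
    return dist.round(4)

# Radically different starting points, i.e. different histories...
print(long_run([1.0, 0.0]))  # started fully volatile -> [0.6667 0.3333]
print(long_run([0.0, 1.0]))  # started fully calm     -> [0.6667 0.3333]

# ...converge to the same unique equilibrium: history washes out.
# To argue history matters, you must add a state or change P itself.
```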

[00:30:29]

So when somebody is constructing a model, oftentimes they'll say, well, I'm a systems thinker. And then if you have them write down their model, you'll say, wait a minute, that's a Markov model; it has a unique equilibrium. Do you think your system has a unique equilibrium? No, no, it's very contingent, very path dependent. And then you say, well, then your model has got to be wrong, right?

[00:30:50]

I mean, you're missing something; there's got to be some way of changing those transition probabilities. So I view constructing a model as a very deliberative process within yourself. First you ask, what is the general class of model: is it a system, a decision, a game? Then, once you write it down, you can go to the mathematics, and the mathematics tells you, given your assumptions, what must be true about the world.

[00:31:13]

And if that's not what you see, you then have to go back and say, well, let me rethink my model.

[00:31:18]

So do models become a way of surfacing assumptions?

[00:31:22]

Oh, absolutely. Models force you to get the logic right. They force you to say what really matters here in terms of driving people's behaviours or firms' behaviours, how those behaviours interact, how they aggregate, and then how people should respond to that. There's a famous quote attributed to Murray Gell-Mann: imagine how hard physics would be if electrons could think. I'd written it in a paper and attributed it to Gell-Mann, and someone said, I don't think he's ever said that.

[00:31:54]

So I went to Murray and asked, and he said, I've never said this. Then he read it and said, imagine how difficult physics would be if electrons could think. There, now I've said it. But one of the things about modelling, especially in the space a lot of your listeners are in, because they're people who in some sense define the world, is: imagine how difficult physics would be if electrons could think, and if electrons could decide on their own laws of physics.

[00:32:20]

If you're running a large organization, or you're Secretary of the Treasury, or you're in any sort of policy position, you get to decide the laws of physics. You get to decide what's legal, what's not legal, what the strategy space is. So when someone constructs a model and decides on their assumptions, one of the things to keep in mind is that one reason people construct models is to build things: to build buildings, to build policies, to build strategies.

[00:32:46]

When you do that, you're defining, in some sense, the states; you're defining reality. If you tell your traders, we're looking at these ratios, you're defining the game for them. So I think the design aspect of models is often overlooked, underappreciated. Within the fields I'm interested in, economics, political science, business, because there's so much data, there's been this huge shift toward empirical research.

[00:33:17]

If you count the number of papers in the leading journals that are empirical versus theoretical, there's a massive shift towards empirical research, which in some sense is to be applauded. The work is much better, there's much more data, we can get at causality; I'm a huge fan of it. But I think there's a cost to that, because what a lot of that work is doing is really nailing down exactly: what's the size of this effect? What's the slope of that line?

[00:33:41]

What's the size of the coefficient? How significant is it? So we can suss out whether improving teacher quality matters more than reducing class size, and by exactly how much. That's great; I'm all for it. However, that's taking the world as it is. One of the really cool things about models, and I was trained by people who did mechanism design, is thinking about whether we can, based on our understanding of how people act, construct mechanisms and institutions that work better.

[00:34:08]

If you look at the American government at the moment, it's kind of a mess: everything from gerrymandering to the fact that we have this Electoral College, which made a lot of sense when states were all roughly equal sized. Now some states are tiny and still have the same number of senators as states with 50 times as many people. But even how we vote on things, what falls under the purview of Congress, why we have separate institutions for some things, like the financial system:

[00:34:36]

think of the Federal Open Market Committee and the Federal Reserve System, which are quasi-governmental, and the FDIC, which is quasi-governmental, while NASA and the NIH are not. There's a deep question about which institutions we use where, and that is underappreciated. Some colleagues of mine, including J.J. Prescott, are running a thing in February at Michigan called Markets, Hierarchies, Democracies, Algorithms and Ecologies, which says: look at all this stuff we have to do. We used to think, OK, we can use markets, hierarchies, democracies,

[00:35:11]

or we can kind of just let it go and see what happens, like with the roads: we just let it go, you decide to go somewhere, I decide to go somewhere, and it's a total mess for the most part. But when we made these decisions about where we have markets, hierarchies and democracies, that was in a world where there was no data and no information technology, where we were exchanging beads as opposed to sending bits.

[00:35:33]

But now there are these algorithms. A lot of things can be done by algorithms as opposed to markets, hierarchies and democracies. And there's a question, given the cost of changing these institutions, of whether we should be reallocating problems across these different institutional forms. That's a question you can't really touch by running regressions, other than to identify the places where things are not working. But you can use models to help you think through: what if we made this a market? What if we made this a democracy?

[00:36:07]

What if we handed this to an algorithm?

[00:36:09]

Yeah, it sounds like we're using multiple models to construct a more accurate view of reality. We might never be able to understand reality completely, but the better we understand it, the better we know what to do. And yet it strikes me as odd that one of the ways we learn to apply models, unconsciously, is through school, and it's usually a single model, right? You're reading a chapter in your grade-ten physics book on gravity, and then you get gravity problems, and you know you will apply this equation to this problem.

[00:36:45]

It's almost an algorithm, right? I know what the variables are, the school is going to give me the variables, and I'm just going to apply this. We're taught with this one-model view of the world. Why are we taught that way, and why is that wrong?

[00:37:01]

I think it was right when we had a much simpler world. Take the context of a business decision. You might think, OK, here's how you make a business decision: you figure out what the cost is going to be, then you think about the net inflow of profits, and you ask whether the profits outweigh the costs, whether the revenues outweigh the costs,

[00:37:24]

whether it's going to be a positive cash flow. Now, when you make a business decision, there's a recognition that there's environmental impact. There's an understanding that it's going to affect your ability to attract talent. There's a question of how it positions you strategically in the long run, a question of what it does to your capacity, a question of what it does for your brand. These decisions are just so much more complex than they were, and there's an increased awareness of the complexity of all these decisions, such that no single model is going to work.

[00:37:57]

So when you're in seventh grade, we're teaching you very simple things, and we're trying to teach you that there's some structure to the world. We want to say, look, here's the power of these physics models. They not only explain things that you see every day, like why objects fall to the floor; they also explain things that you wouldn't have predicted beforehand, like the fact that two things of different weight fall for the exact same time, and they can predict things like the existence of the planet Neptune,

[00:38:26]

which they didn't know was out there. So I think we teach people simple models because, as in Plato's famous quote about carving nature at its joints, there was a belief that we could carve nature at its joints, and that for each one of those little pieces you could apply a single model. And some people will sometimes say, oh, many-model thinking, it's like the blind men and the parts of the elephant.

[00:38:52]

And I'm like, no, that's almost exactly wrong. There is a sense in which different models look at different parts, but you need the overlap, because you can't carve nature at its joints. That's what we've learned over the last fifty or hundred years: the world is a complex place. So the challenge, to become a more nimble thinker, is to be able to move across these models.

[00:39:24]

But at the same time, if that's just not your style, that doesn't mean there's no place for you in the modern economy. On the contrary, it means maybe you should be one of those people who goes deep and specializes. So you need this weird balance of specialists, super-generalists, quasi-specialist generalists. There are even people who describe their human capital as being in the shape of a T, in the sense that there's a whole bunch of things they know a decent amount about,

[00:39:50]

and then one thing they know deeply. Other people describe themselves as shaped like the symbol for pi, where there are two things they know pretty deeply, not as deeply as the T person, and then a range of things that connect those two areas of knowledge, plus a little bit out to each side. And I think it's worth having a discussion with yourself, and I mean your listeners here, to think: OK, what are my capacities? Am I someone who's able to learn things really, really deeply, or someone who can learn a lot of stuff? And then think about a strategy for what sort of human capital to develop. Because you can't make a difference in the world,

[00:40:26]

you can't go out there and do good, you can't take this knowledge and this wisdom and make the world a better place, unless you've acquired a set of tools that are useful not only individually but also collectively. You could learn 50 different models that are disconnected, applied in different cases, that never connect in any role, and that might make it hard for you to make a contribution. Or you could say, I'm going to be someone who learns 30 different models.

[00:40:54]

But if you're not someone who's nimble and able to move across them, that may be more frustrating for you.

[00:41:00]

As we're talking, one of the things that strikes me is that if you're going to prioritize which models to learn, obviously the common ones in your domain or discipline are good to have an understanding of, and then the general models that apply across disciplines, because other people are less likely to bring those to the table. So you can become, in a way, and not to the extent that a group of other people would, your own cognitive diversity machine, almost, if you will.

[00:41:32]

How do you go about iterating through these models once you have them? How do you put them into use? Are you recommending a checklist sort of approach? How do you mentally store them, walk through them, pick out which ones are relevant and which are not?

[00:41:50]

So this gets back to the question you pressed earlier, which was: how do you know how to model something, and how do you think about what assumptions to make? When you think about which models to use and how to play them off one another, you want to ask, what is the nature of the thing I'm looking at? And then, not so much as a checklist, you page through the book, or page through your collection of models, and think about which ones might be relevant.

[00:42:17]

Let me give an example that my students love to sit around and play with. There are two models that have to do with competition between high-dimensional things. One of them is a spatial model. In a spatial model, there's an ideal point. Suppose you have your ideal burrito: it weighs about a pound and a half, it's got this proportion of rice, and it's hot, but not so hot that you've got to have a giant cup of water right next to you.

[00:42:47]

So you can think of that as a four-dimensional space: there's a size, there's a heat, there's an amount of beef, an amount of rice. That's your perfect burrito. Then you can imagine all the burritos that are for sale in Toronto or in Ann Arbor. You could put each of those in the same space, and then you're going to choose the burrito that's closest to your ideal point. So you're going to say, oh, this is the best burrito in Chicago.

[00:43:11]

But if my ideal point is different from your ideal point, then I may think a different burrito is best. That same model, which is actually the workhorse model of political science, you can use for thinking about which candidate to vote for. And then when we aggregate them, nobody's happy.

[00:43:25]

Yeah, nobody might be happy. But then there's another model, called the Colonel Blotto game. In the Colonel Blotto game there's a whole set of fronts. You can think of those as dimensions, but instead of each being a spatial characteristic, it's hedonic, in the sense that more is better. Think about buying a car: more miles per gallon is better, more leg room is better, higher crash test scores are better, less environmental damage is better, and so on.

[00:43:53]

So when I compare two things, instead of just saying which one is closer to me, this burrito is better because it's near my ideal point, I can go across all these different dimensions and ask which one wins on more of them. What's cool about both those models is that if there's a big set of people deciding, in the first one a whole bunch of people with different ideal points, there's generally no winner, no single best thing.
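
Here is a toy version of the two decision rules; the burrito attributes and numbers are invented. The same pair of options can produce different winners under the two models.

```python
import math

# My hypothetical ideal point: size (lbs), heat, beef, rice (arbitrary units).
ideal = {"size": 1.5, "heat": 6, "beef": 4, "rice": 3}

burrito_a = {"size": 1.4, "heat": 5, "beef": 5, "rice": 3}
burrito_b = {"size": 2.0, "heat": 9, "beef": 6, "rice": 5}

def spatial_winner(x, y, ideal):
    """Spatial model: whichever option is closer to my ideal point wins."""
    dist = lambda b: math.dist(list(b.values()), list(ideal.values()))
    return "A" if dist(x) < dist(y) else "B"

def hedonic_winner(x, y):
    """Blotto-style hedonic model: more of each attribute is treated as
    better, and whoever wins on more fronts (dimensions) wins overall."""
    fronts_a = sum(x[k] > y[k] for k in x)
    fronts_b = sum(y[k] > x[k] for k in y)
    return "A" if fronts_a > fronts_b else "B"

print(spatial_winner(burrito_a, burrito_b, ideal))  # A: nearer my ideal
print(hedonic_winner(burrito_a, burrito_b))         # B: wins more fronts
```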

[00:44:17]

So you think, OK, I'm going to go to Michigan, or Northwestern, or Western Ontario and get a degree, and then I'm going to apply for something where I'm competing against seven other people, or maybe I'm up for a role, and I think, how did I not win this? I'm so great. It may be that it's a spatial model, and I just wasn't what people were looking for.

[00:44:38]

Or you could think it's hedonic, like Colonel Blotto: somebody just happened to beat me on some collection of fronts. One of the nice things those models tell us is that there's kind of no best answer, because you win relative to how someone else is positioned. It's more like a game; it's strategic. And there's no best thing you can do unless you happen to know where the other person is.

[00:45:02]

And so the nice thing that comes out of that, there's this calming sensation for my undergrads: if you don't get a job, if you don't get a scholarship, if you don't get into grad school, it's not necessarily because somebody was better; it just happens to be how they were positioned. And that's fine, right? It's just going to happen. But typically, when you think about maximizing your chance of getting one of those things, you need to ask: is this a spatial situation, where I want to make sure I've got the characteristics they're looking for, or is it hedonic, where I want to beat my competitors on as many dimensions as possible?

[00:45:40]

Do I have the most undergraduate research done? Do I have the strongest letters? A lot of things are some combination of the two. What's really useful is having both those models in your head when you're thinking about situations like these: if I'm an advertising firm pitching a campaign, or if I'm trying to be a supplier to a large auto company, it's multidimensional competition. What you'd like to do is have both these models in your head and say, let's think about this as a spatial model,

[00:46:07]

and then let's think about this as a purely hedonic competitive model, and think about how we would position ourselves and where our competitors are. I think it's calming, in a way, because it gives you a way to structure your thinking. And it also lets you know that if you lose, it's not necessarily because you're worse, and if you win, it's not necessarily because you're better. So it's calming, but it's also humbling.

[00:46:32]

It's easy to think otherwise. One of the things I deal with a lot in trying to present the value of diversity is that people who are successful have, by definition, always won, and they're in power. And they think, I'm here because I'm good. And they typically are good; they have a lot of ability, which means they've acquired a lot of tools. But the point is getting them to recognize that for the group to be better, right,

[00:47:03]

you want people in the door with other tools. So it's tricky, because successful people will think they've won because they're good, when in fact maybe they've won because they just happened to have the right combination of talents at the time.

[00:47:19]

I kind of think of that in an evolutionary sense. Consider a gene mutation today that might be selected as valuable; a million years ago, the same gene mutation might have been negatively selected, or filtered, if you will, because the environment was different. The situation has changed, and we apply stories to these random gyrations. That's not to say that success is completely random, but there is an element of luck to everything.

[00:47:49]

But how that's weighted varies depending on the circumstances. So you get into this really complicated view of the world, and I find that really interesting when we're thinking about how to learn models and how to make better decisions.

[00:48:05]

How do you teach your kids about complexity? Not necessarily university students, but them as well. How do you teach them, hey, the world isn't really this simple place, and here are some general thinking concepts that you need to learn about? How do you go about instilling that in children?

[00:48:25]

It's such a fascinating question, especially given the article in The New York Times last week about how the upper quintile is spending so much more money on their children than those below, with the idea of making them economically successful. Let's go back to the question you asked about school, where you learn force equals mass times acceleration. In the economy of a hundred years ago, success depended a lot more on you yourself being really, really good.

[00:48:53]

You're a good lawyer; you're a really good furniture maker. Go back two hundred years: you were successful if you ran your farm well. It was all about your individual ability and hard work. There's this fabulous book called The Rise of the Meritocracy, an old book from about fifty years ago, which talks about success as intelligence plus effort. It's actually where the word meritocracy came from. If you imagine the world as a collection of individual silos, and the amount of grain in your silo depends on how intelligent you are and how hard you work, then it is all about your ability:

[00:49:28]

work hard, get an A in class, develop these skills. That's a very instrumental view of the world. But in a complex world, to go back to your amazing interview, the Atul Gawande one, your ability to succeed is going to depend on you filling a niche that's valuable, right? Which, as in Barabási's book, could be connecting things; it could be pulling resources and ideas from different places.

[00:49:56]

But it's going to be filling a niche, and that niche could take all sorts of different forms. So when I talk to undergraduates about this, and when I talk to my two sons about this, what you want to think about is finding something that combines three things. First, you have to really love it. You've got to discover your passion, and you've got to love the practice. A great basketball player isn't just someone of great ability; it's someone who loves practicing basketball. A great musician is someone who's got some ability there, but who loves practicing.

[00:50:30]

So you've really got to enjoy the practice of the thing you do. The second thing is you've got to have some innate ability. My younger son was actually a reasonably good dancer as a younger kid, and there are not many adult male dancers. The guy who runs the dance studio, after I dropped my son off, chased me down and said, is that your son? And I said yes, and he goes, well, we need adult male dancers.

[00:50:50]

And I said, yeah, that comes from the other side of the family. He says, no, no, it can't. Then he watched me dance for like 30 seconds, and he's like, you're right, it comes from the other side of the family.

[00:50:59]

And even if I love dancing, my upper bound at dancing is going to be pretty low. So you've got to have some ability there. And then the third thing is you have to be able, in some sense, to connect those things to something useful and meaningful, something you think is going to make the world a better place. The thing you're going after has to have some meaning, purpose or value, and you have to be able to convince yourself of that and convince others of that.

[00:51:29]

Because otherwise, and this is one of the things I find fascinating about the academy, people in small departments will study something, it gets really interesting to them, and they become the world's expert in it. And that's great, because we're advancing knowledge. But outside of their small circle, no one may find it interesting. And I think it's incumbent upon them to think about whether they're using their talents in a way that makes that work interesting, or at least intriguing, to other people, because otherwise I don't think you're adding that much value.

[00:51:58]

That's a great conversation to have with maybe a 14 to 20 year old. Can we go younger? Like, how do we teach an eight year old about not only compounding but power law distributions? We might not use those names, and we may not use the mathematics behind them, but how do we start instilling those models? The way that I think about this is: if the world is changing, there's a core set of models that are probably unchanging, mathematical ones that cut across human history and biology and perhaps all existence. Reciprocation is a great example of one.

[00:52:41]

It works on human and social systems, and it's also a physics concept. So how do we teach our kids, or should we teach our kids, which is maybe a different question? Should those models be learned in school as models, so that you start developing this latticework, this mental book in your head where you're flipping through pages and going, oh, this model might apply here, and if it doesn't, go to the next model? How do we instill that in our children, even if they don't understand the mathematics behind the models, so that we start understanding the world as more complex than any single model can capture?

[00:53:15]

And part of your goal is just what you said, right, to fill this niche. But one of the ways you're going to fill that niche is that your aggregation of these models, and how you apply them, is going to be more or less valuable in a group setting, in a particular company. And your understanding of how other people are applying models is also going to be a key element of strategy in the future. If we can anticipate that our competitors are following models they learned in business school, well, now we know how they're likely to respond to what we're doing, and we're not likely to be surprised.

[00:53:48]

And we can use that information to make our business or our company more competitive.

[00:53:52]

Yeah, it's a great question. I think two things come to mind. One is, I think we could do a little bit more meta teaching. One of the things that people really like about an online course I teach called Model Thinking, which is a MOOC, is one of the tropes in there, something I borrowed from my colleague Mark Newman when he talks about distributions, which is logic, structure, function.

[00:54:19]

So if you see some sort of structural pattern out there in the world, there has to be some logic as to how it came to be. And then you also want to ask yourself, is there some functionality to that structure, does it matter? You talked about normal distributions versus power law distributions. We'll teach kids the bell curve, but what we won't do is say: here's the bell curve, and this is one structure. Think about all the other structures you could draw, and now ask which structures we actually see in nature.

[00:54:49]

And we don't see that many. We see bell curves. We see stretched-out curves, which are lognormal. We see power laws. But we rarely see things that have, like, five peaks. So why is that? We need a logic that explains the structures we see. What logic underpins normal distributions? Adding things up. What logic underpins lognormal distributions? Multiplying. And what logic gives you power laws?

[00:55:14]

There's a bunch: preferential attachment, stuff like criticality. There are logics that will give us those power laws. Then you want to ask, does it matter, do we care? And that's an easy case to make, because you can say, well, heights are distributed normally, so height is nice and predictable and seems fair. But if heights were distributed by a power law, there'd be ten thousand people as tall as giraffes, there'd be someone as tall as the Burj Khalifa, and there'd be a hundred and seventy million people in the United States seven inches tall.

[00:55:48]

And they're like, whoa, that would be pretty bad. It'd be really hard to design buildings, right, with giants and tiny people. So I think this logic-structure-function thing is so important. The other thing we need to do is give students experiences of using the same broad idea across a variety of disciplines. One of the classes I'm hoping to teach again, because the students absolutely loved it but it just didn't work out this year, is called Collective Intelligence, where we did a whole bunch of in-class exercises to explore where collective intelligence comes from.

[00:56:27]

So here's one example that was just so much fun. What is collective intelligence? Collective intelligence is where the whole is smarter than any one individual. You can think of that in a predictive context. This could be the wisdom-of-crowds sort of thing, where people guess the weight of a steer and the crowd's average guess is going to be better than the average person's guess. And that's just a mathematical fact.
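That mathematical fact can be checked in a few lines. The identity behind it, sometimes called the diversity prediction theorem, says the squared error of the crowd's average equals the average individual squared error minus the variance (the diversity) of the guesses, for any set of guesses whatsoever. A minimal Python sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 1198.0  # "true" weight of the steer, a made-up number
# 500 noisy, individually biased guesses
guesses = truth + rng.normal(loc=50.0, scale=200.0, size=500)

crowd_error = (guesses.mean() - truth) ** 2            # squared error of the crowd's average
avg_individual_error = ((guesses - truth) ** 2).mean() # average individual squared error
diversity = ((guesses - guesses.mean()) ** 2).mean()   # variance of the guesses

# The identity is exact: crowd error = average individual error - diversity,
# so the crowd's average can never do worse than the average individual.
print(f"crowd error:               {crowd_error:.2f}")
print(f"avg error minus diversity: {avg_individual_error - diversity:.2f}")
```

The two printed numbers match exactly, which is why the crowd beating the average individual isn't an empirical accident but an algebraic certainty.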

[00:56:51]

But here what you're doing is looking at collective intelligence in terms of solving a problem. So here's the setup, and it's really fun. I had a graduate student make up a bunch of problems defined over one-hundred-by-one-hundred grids. Think of a checkerboard that's a hundred by a hundred, where each one of those cells has a value. One of the problems is really simple, what we call a Mount Fuji problem: just one big peak, a huge peak up in one corner, and everything falls off from that highest point.

[00:57:21]

Another problem had five or six little peaks over the landscape, with one being higher, and another was really, really rugged. So he created a bunch of problems, and I didn't know what they were; that was part of the key, no one knew what the values were. Then I created three teams. One team was the physicists, and what the physicists did is they got to sit around and first say, OK, here are the points to check, and then they would get the values of those points.

[00:57:45]

It's kind of like the game Battleship: they would say, like, D7, and we'd say, here's the value of the point you picked, and then they'd get another round. They got five rounds, so they got to check ten points in all, and the goal was to find the highest point. Another group was the Hayekians, a decentralized market, where each person went and picked a point and then came back and said, here's the point I picked and here's the value, but with no coordination.

[00:58:13]

The idea was that you could see value by comparing those points, because you could see where other people picked. You might want to go near where they were, but you also wanted to build information for yourself and the group by trying other points. So there were all sorts of cooperation and competition there. The third group was the bees. The bees would point to the square; they couldn't give a number, they couldn't say A26.

[00:58:36]

They just pointed somewhere on this big square, we would approximate where we thought that was and show them the value, and then they had to go back and waggle dance. Now, the thing is, it turns out undergrads won't waggle. They won't dance; they're just too insecure. So we had them dance with their hands. What they did do is point in the direction the point was in, and the longer they waggled, the better the value.

[00:58:59]

And then we compared the waggle-dancing bees to the Hayekians and to the physicists. On the easy problem and on the problem with five peaks, the bees ironically did just a tiny bit better than the physicists. We were talking about this afterwards, and someone says, well, that's because the bees can take a derivative. Everybody's like, what? Because to solve these, you just have to take derivatives, and the bees could do that.

[00:59:28]

They could find the highest point so far, and then they could effectively take derivatives, because they could see who was waggling hardest. It was only on the really hard problem that the physicists did the best. And what you learn from that is that markets and problem-solving teams all deal with high-dimensional problems. If the problem is not super hard, and bees finding food isn't super hard, then they're just as good as the physicists, because derivatives and markets are enough.

[00:59:59]

But when it gets super hard, the market's not going to work, because you need all sorts of coordination. So going back to the thing you talked about before, about when you see a market, when you see a democracy, when you see a hierarchy, when you just let it rip: that probably depends on the difficulty of the problem. What's cool about that, and this is where you can do things with young kids, is they see, well, here's this idea of collective intelligence that spans disciplines.
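You can sketch a toy version of that experiment in code. This is not the class exercise itself; the landscape generator, search budgets, and search rules below are stand-ins I've made up to show how ruggedness changes which strategy wins:

```python
import numpy as np

rng = np.random.default_rng(1)

def landscape(n_peaks, size=30):
    """Sum of Gaussian bumps on a size x size grid; more peaks = more rugged."""
    xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")
    z = np.zeros((size, size))
    for _ in range(n_peaks):
        cx, cy = rng.integers(0, size, 2)
        height, width = rng.uniform(0.5, 1.0), rng.uniform(2, 6)
        z += height * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * width**2))
    return z

def random_probes(z, budget):
    """Independent, uncoordinated guesses, loosely like the market group."""
    pts = rng.integers(0, z.shape[0], size=(budget, 2))
    return max(z[x, y] for x, y in pts)

def hill_climb(z, budget):
    """Follow the best neighbor uphill, loosely like taking derivatives."""
    x, y = rng.integers(0, z.shape[0], 2)
    for _ in range(budget):
        nbrs = [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if 0 <= x + dx < z.shape[0] and 0 <= y + dy < z.shape[1]]
        x, y = max(nbrs, key=lambda p: z[p])  # staying put is allowed at a local peak
    return z[x, y]

for n_peaks, label in [(1, "Mount Fuji"), (40, "rugged")]:
    z = landscape(n_peaks)
    rp = np.mean([random_probes(z, 60) for _ in range(300)])
    hc = np.mean([hill_climb(z, 60) for _ in range(300)])
    print(f"{label:>10}: probes={rp:.2f}  hill climb={hc:.2f}  true max={z.max():.2f}")
```

On the single-peak landscape the derivative-style climber reliably reaches the top, while on the rugged landscape it gets stuck on local peaks and its advantage shrinks or reverses, which is the same pattern as the class results.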

[01:00:30]

In that same class, another example was just so fun. There's this amazing game called Rush Hour. If you've ever seen it, there are little cars and trucks and you slide them around, and you've got to get this red car out. Oh, yeah. Yeah. The game gives you configurations, and these configurations are rated easy, medium, hard, and very hard. So here's the experiment I do in class.

[01:00:55]

And again, the numbers are too small to say this is any sort of scientific result, but it's always worked and it's always been really fun. People play Rush Hour: an easy, a medium, and a hard configuration. We time how long it takes them to do each one, and on average the hard ones take a lot longer. Then I have them write down models for how to play Rush Hour. One model might be solve backwards: think about how that red car is going to get out.

[01:01:22]

Another model is get the big trucks out of the way. Another is move forward, then move backward: move a piece forward as far as you can, then move it backwards. Then I have another set of people read the mental models from the first set and play, not the same games, but different games of the same difficulty, and we compute how long it takes them. And what happens is they're just a lot better.

[01:01:48]

And what you see is that this is something where it's not tacit knowledge; in Rush Hour it's actually learnable, communicable knowledge. What I've been struggling with is trying to come up with a game where you can't do that, where the knowledge is purely tacit and I can't communicate it. My friend John Norris jokes that, like, this weekend he's going to read a couple of books on tennis and then go become a professional tennis player. You know, you can't. Yeah.

[01:02:14]

Right, right, right. Yes. I'm trying to find a really cool example to juxtapose with Rush Hour. Maybe your listeners could email one in: some game where people can learn to play it, but there's nothing they can communicate about how. It'd be nice if it didn't involve physical skills, just mental skills. That's what makes it hard.

[01:02:37]

Funny enough, it was Adam Robinson who actually told me that Rush Hour was one of the best games he knew of to teach thinking skills to young children. Oh, really? Yeah. And we spent the summer playing it this year on vacation; it's a great game to take to restaurants and things like that. My kids were at the time eight and nine, and they would sit through a whole two hours and just play this.

[01:03:07]

It was a totally awesome game, and it was fascinating. As a parent, I just promised them 30 minutes of iPad if they got through all 40 problems, and they got through them in like three weeks. They were like, oh my God, this is amazing.

[01:03:21]

It's amazing how hard they'll work for that.

[01:03:22]

Thirty minutes.

[01:03:24]

So I got the incentives right there. But it is funny; I think it's because it's a physical game. When I'm doing this in class, I'll say you have ten minutes, and sometimes I'm practically extracting the game from my students. It's like, look, you can take it home and bring it back the next day, because it's so much fun. Let's talk about a few of the models that you have in your book before we finish up here.

[01:03:47]

I want to.

[01:03:48]

Actually, how about this: I'm going to mention three of them, and you walk me through how you present them and how you use them.

[01:03:56]

Right. Let's start with power law distributions.

[01:04:00]

OK, so let's start with what power law distributions are not. A normal distribution describes something like human height, where the average person is five-ten, some people are five-eight, some people are six feet, and it falls off really fast. In a power law distribution, most events are very, very small and there's the occasional huge event. Take earthquakes: there are thousands and thousands of tiny earthquakes and an occasional huge one.

[01:04:26]

If you look at the distribution of city sizes in most countries, there are tons and tons of small towns and an occasional New York, London, or Tokyo. If you look at book sales or music sales, most books sell three or four hundred copies, and there's the occasional book that sells millions. And there's a question of what causes these power laws. Unlike normal distributions, which come from just adding things up or averaging things, power laws have a bunch of causes.
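That contrast between logics is easy to demonstrate. Here is a minimal Python sketch, with made-up shock sizes, showing that adding small random shocks gives a bell curve while multiplying the very same shocks gives a stretched, lognormal-style tail:

```python
import numpy as np

rng = np.random.default_rng(42)
# 30 small random shocks per "person", each uniform on [0.9, 1.1]
shocks = rng.uniform(0.9, 1.1, size=(100_000, 30))

added = shocks.sum(axis=1)        # adding shocks -> roughly a bell curve
multiplied = shocks.prod(axis=1)  # multiplying shocks -> stretched right tail

for name, x in [("added", added), ("multiplied", multiplied)]:
    print(f"{name:>10}: median={np.median(x):.2f}  max/median={x.max() / np.median(x):.2f}")
```

The additive outcomes barely spread around their median, while the multiplicative ones produce maxima several times the median: same ingredients, very different structure.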

[01:04:51]

So what I do in the book is go back to this logic-structure-function idea. It's a structure we see a lot, this long-tailed distribution. The question is what causes it, and I talk about three models in the book. One is something called a preferential attachment model. Imagine there's a set of cities, or there's a set of books, and the probability that I move to a city,

[01:05:16]

or buy a book, is proportional to the number of other people living in that city or buying that book. We can see right away there are positive feedbacks: the more people move to New York, the more people move to New York. The more people buy The Tipping Point, ironically, the more people buy The Tipping Point; that's how The Tipping Point sells seven million copies. And if nobody buys somebody's boring book, then nobody buys the book.
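To see those positive feedbacks in action, here's a minimal preferential attachment sketch, an illustrative toy rather than code from the book; the one seed sale per book and the parameter values are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
n_buyers, n_books = 20_000, 1_000
sales = np.ones(n_books)  # one seed sale each, so every book has nonzero odds

for _ in range(n_buyers):
    # Each buyer picks a book with probability proportional to its sales so far
    book = rng.choice(n_books, p=sales / sales.sum())
    sales[book] += 1

sales.sort()
print("top 5 books:", sales[-5:][::-1].astype(int))
print("median book:", int(np.median(sales)))
```

A handful of books end up with sales hundreds of times the median, purely from the feedback loop; no book in the simulation is intrinsically better than any other.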

[01:05:43]

Another way power laws arise is through random walks. Imagine that a firm starts with somebody founding it, so it's one person. Now suppose it's equally likely to fail or to hire a second person, and from there equally likely to go back down to one person or up to a third, and so on over the life of the firm. The firm exists as long as it has a positive number of workers. If it's a completely random walk, like a coin flip, you can imagine that most firms are going to die really quickly.

[01:06:06]

You add an employee, then you're at two employees, then you go up one, down one, down one, and when you hit zero you die. That says the lifespans of most firms will be short, but if you happen to get really big, you're going to last a long time. Lifespans should follow a power law, and they do. It's also true that the lifespans of species in the phylogenetic and ecological record, which you can think of as a perfectly random walk, satisfy a power law.
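And here's a minimal sketch of that random-walk story; it's illustrative only, since real firms don't hire and fire by coin flip:

```python
import numpy as np

rng = np.random.default_rng(3)

def firm_lifespan(max_steps=10_000):
    """Head count starts at 1 and moves +1/-1 at random; the firm dies at 0."""
    size, t = 1, 0
    while size > 0 and t < max_steps:
        size += 1 if rng.random() < 0.5 else -1
        t += 1
    return t

lifespans = np.array([firm_lifespan() for _ in range(10_000)])
for horizon in (1, 10, 100, 1_000):
    print(f"P(lifespan > {horizon:>4}) = {(lifespans > horizon).mean():.3f}")
```

The survival fraction falls by only about a factor of three for every factor of ten in the horizon, the heavy n^(-1/2) tail of a fair random walk, rather than collapsing the way a bell curve would: most firms die young, but a few last a very long time.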

[01:06:29]

And then a third way to get these power laws is from something called self-organized criticality. If I drop grains of sand onto a desk, I get a big sandpile. And if I look at how many grains of sand fall off onto the floor each time I add a grain, most of the time, as the pile forms, it'll be very few, but occasionally I'll get these giant avalanches. What's happening there is the system is aggregating to this critical state. Think of traffic in Los Angeles or traffic in Toronto or New York.

[01:06:56]

What happens is the traffic organizes itself into a state where cars are spaced pretty close, and all of a sudden there's one accident and, boom, there's a three-hour delay. Most of the time things are kind of fine, but one accident can lead to gridlock. So now we have logics that explain the structure. Why does it matter? Well, it clearly matters. In the case of things like book sales and music sales, it means there are going to be some people who are wildly successful and most people who are not that successful.

[01:07:26]

And we may decide that's not fair. We may decide that if I'm Malcolm Gladwell, I shouldn't necessarily think, wow, I'm amazing, just because my books happened to benefit from those positive feedbacks. So it actually could change how we think about how we tax people. If you thought, no, this person wins because they're just so much better, right,

[01:07:48]

that's a very different story than if you say, no, the natural process of people buying books leads to big winners. Then you start realizing the big winners are as much luck as they are skill. That's really interesting. Let's go to the next model I want to talk about, something that when I was reading it in your book took me back to first-year physics: concave and convex functions.

[01:08:10]

Yeah.

[01:08:13]

Oh, man, did I get these wrong.

[01:08:17]

Like on the first assignment, I got them mixed up. It was hilarious; all these memories came back. Yeah. This was a challenging chapter, because there are certain things you almost have to cover; otherwise it's a disservice. The basic idea of linearity is that something has the same slope always. The next thing, which is actually fundamental to so many models throughout the book, is an assumption of either concavity, which is diminishing returns, or convexity, which is increasing returns.

[01:08:52]

We just talked about preferential attachment. That's a form of convexity: the odds that somebody buys your book increase as more and more people buy your book. The odds that the first person buys The Tipping Point are low, but the odds for the million-and-first buyer are much higher, because so many people have already bought it. So convexity just means that the odds of something happening, or the payoff from something, increase the more people do it. But so many things in the world are the opposite.

[01:09:17]

They're concave. Concavity means that the added value of the next thing diminishes. For example, chocolate cake: the next bite of chocolate cake, the next scoop of ice cream, there are just diminishing returns. The same with adding workers to a firm: as you keep adding workers, the value of those additional workers goes down. And that's true with teams as well. Suppose I've got an important decision to make. The second person is going to add a lot to the first, the third person a little less than the second, and so on.

[01:09:50]

But at some point, you're just not going to add much value. So in team performance on a specific task, there tends to be a certain level of concavity. I think the challenge for me in writing that chapter was, how do you make concavity and convexity even remotely exciting? Because it's just mainstream math, and the easiest way to teach it is in terms of derivatives: a linear function has a constant derivative, a concave function a decreasing one.

[01:10:18]

So you try to make the case that these ideas are in some sense fundamental, and that not recognizing concavity in particular can lead to really flawed assumptions. In the 1970s, Japan had this really fast growth, and there were all these articles saying Japan was going to overtake the United States in eight years. But if you construct a model, you realize that as you industrialize you can grow pretty fast, but there are going to be diminishing returns to that industrialization.

[01:10:45]

The same is true of China. If you did a linear projection of China five years ago, you'd say, oh my gosh, by 2040 China's economy is going to be just enormous. But the reality is growth is going to fall off, because what the model shows is that in order to maintain anything even close to linear growth, you have to innovate like crazy, massive levels of innovation. So the idea behind the concavity and convexity chapter was to try to get people to recognize that there are just diminishing returns.

[01:11:13]

There are diminishing returns to so many things that linear thinking can be dangerous, so your projections can be really dangerous.
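A toy growth model makes the danger concrete. Below, output depends concavely on capital (a square root, purely illustrative, with made-up savings and depreciation rates), and we compare a naive linear extrapolation of the early growth rate against the model's actual path:

```python
import numpy as np

savings, depreciation = 0.25, 0.05
capital = 2.0
output_path = []
for year in range(81):
    output = np.sqrt(capital)  # concave production: diminishing returns to capital
    output_path.append(output)
    capital += savings * output - depreciation * capital

# Naive linear projection based on the first five years of growth
early_growth = (output_path[4] - output_path[0]) / 4
for year in (10, 40, 80):
    linear = output_path[0] + early_growth * year
    print(f"year {year:>2}: model={output_path[year]:.2f}  linear projection={linear:.2f}")
```

Early on the two track each other, but by year 80 the linear projection is far above the model, which has leveled off near its steady state: exactly the Japan-will-overtake-the-US error.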

[01:11:21]

And the last model I want to talk about, I guess it's actually more than one model, but local interaction models.

[01:11:26]

Yes. And these are fun, these are super fun. Convexity and concavity aren't fun, or they're fun for a small set of people, but local interaction models are simple computer models where you start off maybe on a checkerboard, and eventually you can put them on a network. What you imagine is that my behavior depends on the people around me.

[01:11:52]

A simple example I often give is, how do you greet people? Do you shake hands? Do you bow? Do you fist bump? It doesn't matter what you do, but you want to do the same thing other people do. If you go to bow and I go to shake hands, I'm going to poke your eye out; it's just not going to work. So these are in some sense what we call a pure coordination game.

[01:12:19]

What I'm trying to do is coordinate with the people I'm interacting with, and this happens on so many dimensions. In an earlier book I wrote called The Difference, I talk about where you store your ketchup. Do you store ketchup in the fridge, or do you store ketchup in the cupboard?

[01:12:34]

It doesn't matter what you do. Or rather, it does matter, just not which one: the fridge people think the cupboard people are crazy, and vice versa. Once a doctor said to me, Scott, you may think this is funny, but you have to store ketchup in the fridge because it has vinegar in it. And I said, where do you store your vinegar? He said, in the fridge. And the whole room is like, what are you, a crazy person?

[01:12:57]

You don't refrigerate vinegar. The same goes for soy sauce: there are soy-in-the-fridge people and soy-in-the-cupboard people. It doesn't matter what you do, but whatever you do takes on a lot of importance; it defines who you are. One of the fun things I do in class, which I also talk about in the book, is point out that you're actually playing a whole series of local interaction games, and that collection of local interaction solutions you can think of as comprising a part of culture.
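Here's a minimal sketch of that kind of model, my own illustrative setup rather than anything from the book: agents on a ring repeatedly match whichever greeting their neighbors agree on, and local pockets of convention emerge:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 60
behavior = rng.integers(0, 2, size=n)  # 0 = shake hands, 1 = bow, agents on a ring

for _ in range(3000):
    i = int(rng.integers(n))
    left, right = behavior[(i - 1) % n], behavior[(i + 1) % n]
    if left == right:
        behavior[i] = left  # best response in a pure coordination game: match neighbors
    # if the neighbors disagree, either action coordinates with exactly one, so keep yours

print("".join("H" if b == 0 else "B" for b in behavior))
```

The ring settles into stretches of handshakers and stretches of bowers: everyone is locally coordinated, but which convention you landed on is an accident of where you sit.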

[01:13:24]

So my wife, Jenna Bednar, whom I mentioned before, is a political scientist. She has some papers on this where you can think of culture as a set of coordinated behaviors across a variety of settings. So I'll do this in class. I'll say, OK, do people use their phones at the dinner table? Do people take their shoes off in your house? Is the TV on? Do you hug your family? A whole set of things.

[01:13:46]

Then I'll have the students vote using a Google form on which ones they do, and we get, here's the modal response across all these things, and here are the people who are "correct," which of course are the people who do what I do. But the best part about teaching this was one time, this is like ten years ago.

[01:14:10]

This kid comes up after class and he goes, oh my God, oh my God, this explains my girlfriend's family. And I'm like, what?

[01:14:20]

And he goes, everything I do, they do the opposite. What's great about these local interaction models is that prior to that, he had thought they were just intrinsically weird people.

[01:14:31]

Right. Right. He just thought these were crazy people.

[01:14:35]

They each have their own napkin with a napkin holder. They take their shoes off. They always have the radio on in the house. They hug each other. There was a whole set of things they did, and he had thought those things were part of their genetic makeup, some essential part of their character, when in fact it was just a series of coordination problems that their family had solved.

[01:15:00]

The other example I have in this space that was great: somebody told me this story about New Year's Eve. One year, after she'd been married into this family for 20 years, she said, look, I love the family, they're great, but I hate the boiled cabbage and beet soup on New Year's Eve. After 20 years, I think I can say that. Turned out everybody hated it.

[01:15:23]

Nobody had mentioned it. It turns out the person who actually liked it had been dead for like 15 years. And then they decided that, going forward, they would make one ceremonial beet soup or something, but not the whole meal.

[01:15:39]

So I think you don't realize how much of who we are and what we do comes from this local interaction. Now, let's make this serious for a moment, away from the ketchup and the bowing. When I go work for a firm, or I'm working in an organization, as a stock analyst, a psychologist, whatever I'm doing, the mental models we use are also local. It's like, oh, you're using that mental model.

[01:16:04]

It's easier for me to use that model as well, and that then works against diversity. So it becomes a super important thing, and it's also very funny: your mental model may be better than mine, but it's still worth it for me to hang on to my mental model, because it's providing diversity, so collectively it's worthwhile. But there's going to be pressure against that. So, again, back to the point you raised earlier about evolution.

[01:16:27]

And this is where many-model thinking gets stabilized locally. I go work in some organization, I'm working in some community of practice, and I've got a collection of mental models I'm using. It just becomes easy for me to start coordinating on other people's mental models, using other people's terminology. It's more efficient: I learn how to appeal to them, how to persuade them, how to interact with them, how they see the world.

[01:16:51]

And then they're predictable. This kind of goes back to, have you read Ender's Game? One of the key moments in Ender's Game, a totally fictional book by Orson Scott Card that we just read with my kids, involves Ender, this kid who ends up saving the world. He says, I can defeat my enemy, but only when I really understand them, how they think and how they view the world.

[01:17:16]

And I always thought that was really interesting, because one model I'm trying to teach my kids, a meta model if you want to call it that, is perspective taking. What does this problem look like through the lens of this person? What does it look like through the lens of that person? You mentally walk around the table, and then go up a hierarchy: what does it look like to shareholders?

[01:17:38]

What does it look like to the government, or to all the people who interact with the system? Through that, you can get a more nuanced view of reality. And if you see the problem through everybody else's lens, you know how to talk to them in their language, or in a way that's more likely to appeal to them.

[01:17:57]

That's such a great point, because one of the things I struggle with in the social space, and I think it's a good place to struggle, is how you move from very formal models, like fitting some sort of hierarchical linear model, to abstract perspective taking, to some notion of a disciplinary approach to a problem. So let me give a very specific example that's cool to think about, which is the drug approval process.

[01:18:25]

If you look at a company like Genentech, somebody constructs a molecule, and then they've got to decide, OK, is this molecule something we can use to improve people's health? One perspective to take on that is purely the logic of body chemistry. How does it work? Pure science. But there's also a sociological perspective: how will people take this? How will it get used?

[01:18:51]

Is it going to get abused, could it be abused? Then there's also an almost purely organizational-science, business-school perspective: if it's complicated to explain, how do we educate the doctors in how to use it? Then there are also people who understand the political process: what's the likelihood it'll get approved, even if it works on all these other dimensions? Can we get this through the government approval process if it's somehow different from the big boxes they use?

[01:19:19]

So you've got to bring all these different disciplines to bear, just like you're saying in this book. If I'm the CEO and I've got to make the call, do we take this drug to market, I actually have to hire people who can take all those different perspectives; otherwise I probably won't be CEO for long. But then, let's make things a tiny bit less abstract for a moment and think about traditional arguments for a liberal arts education.

[01:19:48]

Right. The reason you want to read literature from a whole bunch of different vantage points, the reason you don't want to just read the great-man view of Canadian history or US economic history or something like that, is because there are all these other people who experienced that same thing and saw it from a very different perspective. And what's funny here is the point I'm kind of making. When you think about many models, you could think I'm saying

[01:20:12]

that people should be spending more time learning technical stuff. But the core argument I'm making is very similar to the argument that people at the other extreme are making: the reason the liberal arts education is so important is the ability to perspective-take, to learn to see the world through different eyes. I think where the difference is, is that I'm a pragmatist in a way.

[01:20:43]

Right. I just see so many opportunities, so I feel like I'm coming from a much more pragmatic perspective, in terms of going out there and making a difference in the world, as opposed to purely appreciating all these different ways of seeing things. And the reason that distinction matters is that in literature it could be that every perspective is worth considering and engaging with, because there's no endgame, ironically, given the name of that book.

[01:21:13]

But if I'm making an investment decision, if I'm worried about drug approval, if I'm trying to write a policy to reduce inequality, if I'm trying to think about how we teach people, there is an endgame. There are things we can measure; there are performance characteristics. So it could very well be the case that you say, I think we should think about it from this perspective, I think we could use this model, and then we can beta test that perspective, that model, and conclude, no, we shouldn't.

[01:21:38]

So there's a difference, I think, in the approach I'm promoting. Yes, you throw out a whole bunch of models, but if the spaghetti doesn't stick to the wall, the spaghetti doesn't stick to the wall and you let it go. It may be something you keep practicing, because there will be other cases where it does work. But the point is there can be cases where it doesn't, and you don't want to force it.

[01:22:00]

No, you don't want to. So there's a limit to inclusion, in the sense that you only want to be inclusive of things that are actually going to help you do whatever it is you're trying to do.

[01:22:07]

I think that's a great place to end this conversation. I feel like we could go on for another few hours, but I want to thank you so much for your time, Scott. This has been fascinating. Thanks. It's really fun to have these open-ended conversations, and I really appreciate the format, as opposed to simple question-and-answer, because it gave me time to elaborate on the book and what I've been thinking. Thank you.

[01:22:31]

Awesome. We'll have to do part two at some point. Thanks.

[01:22:39]

Hey, guys, this is Shane again, just a few more things before we wrap up. You can find show notes at farnamstreetblog.com/podcast. That's f-a-r-n-a-m s-t-r-e-e-t blog dot com slash podcast. You can also find information there on how to get a transcript.

[01:22:59]

And if you'd like to receive a weekly email from me filled with all sorts of brain food, go to farnamstreetblog.com/newsletter. It's all the good stuff I've found on the web that week, what I've read and shared with close friends, books I'm reading, and so much more. Thank you for listening.