Transcript

[00:00:09]

Welcome to the Knowledge Project. I'm Shane Parrish, editor in chief and curator of the Farnam Street blog, a site with over 70,000 readers dedicated to mastering the best of what other people have already figured out. The Knowledge Project allows me to interview amazing people from around the world to deconstruct why they're good at what they do. It's more conversation than prescription.

[00:00:32]

On this episode, Venkatesh Rao is a writer, independent researcher and consultant. He's the founder of the blog Ribbonfarm and the technology analysis site Breaking Smart, and the author of a book on decision making called Tempo. We talk about a host of fascinating subjects, including the three types of decision makers, mental models, the implications of the free agent economy and how to process information. I hope you enjoy the conversation as much as I did. Before I get started, here's a quick word from our sponsor.

[00:01:11]

This podcast is supported by Slack, a messaging app bringing all your team's communications into one place. Slack integrates with other tools and services you already use, like Google Drive, Dropbox and more.

[00:01:22]

Visit slack.com/farnam to create your team and get one hundred dollars in credits you can use if you decide to switch to a paid plan.

[00:01:31]

So let's get started. I want to talk about your book, Tempo, which is on decision making, to start with, and the narrative we frame around decision making.

[00:01:41]

Can you walk us through that a little bit?

[00:01:44]

So Tempo was kind of an interesting project compared to a lot of my other writing projects, because I think it was the one thing I've done that was almost entirely for myself. The work it's based on was things I did in grad school and my postdoc. And it was kind of unsatisfying to do that work and just have it published in the academic literature without exploring the parts of it that really interested me, which is, how does it apply to thinking? Like a lot of the things I learned along the way to applying the ideas in Tempo to things like command and control in military situations.

[00:02:29]

That was the actual context where the work was done. It was unsatisfying because what I personally enjoyed the most from it was just the chance to sit and reflect and think about how we actually make decisions, how we frame questions and so forth. So Tempo was an effort to capture that part, which was not the sort of thing you'd publish in academic journals. So I needed to put it out in some other form. So that's how I ended up writing it as a book.

[00:02:56]

And I think it shows, because it's very idiosyncratic. I didn't really bother to explore how other people thought of the same topics a great deal. I just sort of wrote down my own conclusions from that work. I thought it was fascinating.

[00:03:15]

Can we dive into that a little bit? I mean, if you were to synthesize today the knowledge that you had learned, and that was the culmination of it, and how you apply it today, how would you do that?

[00:03:27]

I would say that writing that down actually helped me move beyond it, because right after I wrote the book, some of the more interesting criticisms that I encountered actually helped me see what the book didn't cover well. And exploring that ended up being a very fruitful thing for me. So just to point out a couple of those things. One was the idea that there are a couple of major categories of decision making style, so to speak, and the book is really strongly focused on one of them.

[00:04:05]

And I was sort of unaware of the structure of the other approaches to decision making that were equally big. So the big one, as I've concluded in my mind, is this approach to decision making that you could say is based on, not reasoning per se, but a very conceptual approach where you think in terms of, say, mental models: what frames am I looking through, what metaphors am I using? What is the significance of my decisive actions versus my random actions?

[00:04:37]

What narrative is sort of framing the unfolding of events? So that's an approach that's very natural to me. It's why I think I ended up becoming an engineer. It's also an approach that's very natural to you. Farnam Street, I think, explores that in a great deal of detail from a variety of different sources. So that's, I don't know what to call it, but let's call it conceptual reasoning as a framework for decision making. And I would say maybe a third of humanity operates that way.

[00:05:05]

It's their operating system for life, but the other two thirds do not operate that way. And the two categories that I've realized I'm very, very unlike are, one, what I would call ethical reasoners. Ethical reasoners very sincerely and honestly start with a very deep and intuitive sense of right and wrong, in the sense of good and evil, not in the sense of true or false. These people who start with the framework of good and evil, not only do I not resonate with them, I often really struggle to understand why they're thinking the way they're thinking.

[00:05:46]

And invariably, when I disagree with people very strongly about something, it's usually the fact that they're starting from good and evil premises. And it's not, as we tend to think, that these are just not very sophisticated religious thinkers. That's a subcategory of people who use ethical reasoning as a framework, but it's much broader and it can get much more sophisticated. So I think that's a big blind spot in my own thinking that I've only slowly become aware of and explored a lot more.

[00:06:17]

And the other, which I would say is the second third of the category of people I don't truly get, is people whose entire decision making process and framing is based not on something that goes on in their own heads, but on the sort of collective consciousness of the group they belong to. So these are what a friend of mine, Greg Rader, phrased as affiliational thinkers: people for whom every decision basically boils down to, which group do I want to be like, which group do I want to belong to?

[00:06:53]

And they do that by saying, all right, let's take an issue like abortion, or should Trump be president? I'm not going to process that through examination of the issue itself, but: which group can I belong to whose views on the topic are comfortable for me socially? Does that make sense? It's sort of a social fit consideration. So those are my three big buckets of types of human decision making. And you and I, I think, represent the first kind, which is explored quite a lot in Tempo and your blog.

[00:07:26]

The second is these people who start with good and evil, whom I understand a little better now, four years down the line. And the affiliational thinkers, or tribal thinkers, whatever you want to call them, these are the people I understand the least, because in a way, understanding these people individually is the wrong thing to even attempt. You have to understand how their groups or tribes think, and think of these individuals only in terms of the tribes they choose to join.

[00:07:56]

So that's the only decision that ever matters in their life: which tribe should I join? Every other decision or thought process they go through really happens somewhere in the collective consciousness.

[00:08:08]

Do you think those tribe decisions are based on the particular decision that you're making, or are they based on the tribe writ large? So am I gravitating towards a tribe on particular issues, or is it that I want to be like that tribe on all issues?

[00:08:23]

I think it's the latter, because if you're talking about being like the tribe on particular issues, you're being too individualistic. You're making decisions based on the merits of a particular case. Like you might say that on capital punishment, I'm with the liberals and I'm against it. I'm referring to a US context, of course. So you might feel, on capital punishment, I've thought through the issues and I'm against it, therefore I'm with the Democrats. But on gun control, I've thought through it and I'm with the Republicans on that.

[00:08:56]

That's too much thinking, and it shows that you're not a tribal thinker. For a tribal thinker, it would be: who do I want to be with as sort of the operating system of my life? Who do I want to have barbecues with? Who do I want to hang out with? Who do I want as my friends in my bowling league? That sort of issue. It's not explicit. They don't sit down and say, all right, here are the 50 activities and habits that define my life, therefore I'm going to pick the optimal tribe to join.

[00:09:24]

No, it's not like that. It's more a process of emotional resonance. And after that, you basically are partisan in a predictable way on all issues. And to people who are more individualistic and discriminating thinkers, this seems kind of stupid. It's like, how could you possibly take this huge set of like 50 different issues, with very different contexts and considerations, and basically agree with one tribe on all 50 of those issues? But if you look at just how tribal reasoning works, it is possible.

[00:10:09]

Do you think, I mean, between the framing of those, it leads us to the obvious conclusion that the first one is better, not putting it in a good versus evil context. Do you think people approach the good versus evil in terms of, I'm the hero and I'm trying to right this evil? Or do you think it's more nuanced than that?

[00:10:29]

Uh, well, first, I would resist the temptation to conclude that the first approach to thinking and decision making is the best. It's the one most suited to certain personalities, certainly. And in certain situations, it makes for a much higher probability of survival and success and thriving. Right.

[00:10:48]

Like in, say, the American context, because of whatever the social operating system for the country at large is, whether for better or worse, Americans tend to believe in individualism, the myth of individualism, even though Americans are not super individualistic. But if you believe in the myth of individualism, the first approach to decision making, where you kind of maintain the fiction of processing everything on your own and staying away from tribal pandering and so forth, that tends to work very well.

[00:11:21]

Whereas if you go over to, like, strongly traditional Asian cultures, the reverse might be true, where everything is framed with respect to the context of the social environment. And that might be a much better survival strategy if you want to actually succeed in that environment. So I'd say whether or not one is better than the other is a question of context and what you mean by better. But to your other question, of good and evil types, I don't know.

[00:11:52]

I've been thinking about it for quite a while. At a philosophical and a practical and a reasoning kind of level, an epistemological level of, are they actually exploring the truth about the way the world works? I think they're kind of full of shit on all those fronts. But there's something hardwired, deep in human nature, that seems to work very well with good versus evil reasoning frames. And here's my hypothesis on why that is the case. Why is it that this is buried so deep in our firmware? If you think about most species of animals, their survival concerns all have to do with their material environment, which is: can I get water, can I get food?

[00:12:36]

Can I hunt my prey? Right. But as humans, a great deal of our survival depends on other humans. It's: how do I get along with the group, does the leader of the group of monkeys like me or not? What will happen to me in a tropical environment if I'm kicked out of the monkey troop, versus a temperate environment, versus an Arctic environment? So 90 percent of our consequential survival behaviors as a human social species depends on things having to do with other people.

[00:13:09]

And good and evil, if you think about it, is a very, very good way of simplifying that whole area of decision making, where if you simply decide that a certain group is good and other groups are evil, everything else gets massively simplified. So that's how you get, I think, the abstract good and evil approach to thinking. It's a very refined form of tribal affiliational thinking. So if you want to stack them in sort of evolutionary primacy order, I think tribal affiliation is the most ancient of our decision making frameworks.

[00:13:41]

The good and evil framework is slightly more recent in evolutionary history, because you need a certain capacity for abstract thought before you can frame good and evil as categories. And then the kind that you and I try to promote in our writing and thinking is the most recent of all. It might be, I don't know, no more than five hundred years old.

[00:14:00]

I was just thinking that they're almost inverse from the way that you had mentioned them, from an evolutionary perspective. And the two and three, so the good versus evil and the tribal, tend to blend in more so than the other.

[00:14:13]

It was mental models that first drove me to your book. I mean, one of my friends read it, and they pointed out that you were talking about mental models in your book, and at the time, and to a large extent today, so few people are talking about that. What's your definition of a mental model?

[00:14:31]

Well, I have a sort of technically inspired definition in the book, as you may recall. So I use something called the belief-desire-intention model of Michael Bratman, who is a philosopher at Stanford.

[00:14:43]

And it's been the basis of a lot of artificial intelligence research. So that's one sort of effective way to get at defining what a mental model is: it's a set of beliefs, desires and intentions. But I think that sort of definition is useful for certain narrow technical needs. Other people have similar sorts of technical definitions. Politics people have similar definitions. Like, you mentioned George Lakoff a couple of times in your writing, I think, and Lakoff has one based on conceptual metaphor. All these narrower technical definitions of mental models are useful for certain questions that are honestly a little too detailed and deep for a general mass audience.

[00:15:27]

They're not interesting. So for a mass audience, I would say the best definition of a mental model is a world in the sense of science fiction or fantasy. Right. So you've got a universe like the Harry Potter universe or the Lord of the Rings universe, and that's the world. And then there's the story that's told in the world. And look at the way the most popular science fiction and fantasy is written. You're told the story, but through learning the story, you also learn about the world, and stories differ in their ability to do that elegantly.

[00:16:02]

So Lord of the Rings sort of has a poetic elegance to it, in that you don't feel like you're learning about the world, but by the time you're done with the trilogy, you actually know a lot about it. Whereas Harry Potter is a little bit more heavy handed, where a lot of it is very clearly world building. And you get the clear sense of reading a geography book or being asked to memorize a list of countries. It's sort of explicitly learning a world in which stories fit.

[00:16:32]

But if you look at the movie versions, you realize that, in a way, J.K. Rowling is a product of her time, where she's not really the author of a book so much as of a media property that she was, at least at some level, aware would be turned into a movie and an online world, a game and so forth, right? So it could be that she's just a product of our times. And they're two very different works. But that's basically my idea of what a mental model is.

[00:16:59]

It's your sort of implicit understanding of what the world is. And it's very easy to see in the case of, like, fictional universes with a few rules that are different from our own. But the same thing is true for much more realistic worlds as well. So take the Law and Order franchise. I don't know if it's as popular around the world as it is here, but you've got this franchise of TV shows: Criminal Intent, Special Victims Unit and so forth.

[00:17:32]

And this sort of gives you a sense of an entire universe of police work and crime, and a sense of the world as a very dangerous place where you've got these brave defenders protecting you. And that's a mental model. Right. So while you're watching a Law and Order episode, that mental model is active in your head and it allows you to make sense of the stories you're being told very efficiently. So mental models allow you to very efficiently make sense of stories.

[00:18:00]

And if you are not familiar with the mental model, then the author must build the mental model. And that's what happens with science fiction and fantasy. But the interesting thing is when you read, say, extremely foreign fiction, fiction that's very alien to you in terms of mental models. The author may assume you understand the mental models, but you may not. So, for example, Japanese comic books: a couple of times I've tried to read them, and they just feel so bizarre to me.

[00:18:28]

Their sort of conventions for indicating emotions and actions and so forth are just so unintuitive to me that the world, which should be in the background and implicit, and which I should just be able to reference like an operating system, becomes a little too visible for me to read the fiction seamlessly. So it's like I'm trying to run a Windows program on a Mac computer without realizing it.

[00:18:52]

So I think that sort of is the best way to understand mental models for people who don't need to deal with them in any technical way.

[00:19:01]

So when they're presented to you in these ways, how do you go about validating that they are, in fact, the way that the world works?

[00:19:08]

Or is it trying to hit on something almost at a subconscious level in a way that elicits a recognition?

[00:19:15]

I don't think mental models are really so much about how the world works as much as they're about internal consistency. So think of the universe you live in as an extraordinarily confusing place that's throwing huge amounts of information at you in an extremely high bandwidth way.

[00:19:37]

I think I read somewhere, I think it was in Daniel Dennett's Consciousness Explained, where he actually looked at the raw information coming in through your eyes alone, for example. So your pair of eyes, the retinas, the amount of raw bit-rate information they can take in across the frequency band in which the eyes are sensitive, that's a certain amount of raw information. And it turns out that if we actually had to process that amount of raw information, it would make our heads explode, basically.

[00:20:04]

So there isn't enough processing power in the brain to handle that input raw. So our brain is basically layers and layers of processing that throw most of it out and map it to a sort of toy universe inside our heads. And it's that toy universe that we actually play with. And the only thing we ask of the toy universe inside our head is that it be much, much simpler than the world itself, and that it be internally consistent, which means if you close your eyes and shut off input and sort of wind up your mental universe and run some simulations in your head, things should not fall apart.

[00:20:41]

It should be coherently put together, so that you can, say, close your eyes and analyse a decision, like: which university should I attend and what major should I pick? You're not processing that in the real world of information and bits about universities and majors and careers. You're processing that in a little toy universe in your head that's like a billion times simpler. So that's sort of the function of mental models: simplicity and coherence. And I think that's the only way they can work, because in the real world there's just way too much information.

[00:21:14]

And the best we can hope for is that our mental models don't become these perfect, idealized, leakproof buckets inside which we live, like the Taliban or religious fundamentalists, into which no reality data can leak at all. If we have slightly looser mental models and universes, there's hope that reality can occasionally seep through the cracks of your perception and disturb your mind, causing disruption. And then you learn. I want to come back to the information and information processing, but before we get to that, what role do you think mental models play in decision making?

[00:21:54]

Or, I mean, to what extent do they play a role in your categories, or just in terms of how we process, you and I, maybe in that type one?

[00:22:04]

My views have evolved a little bit on this since I wrote Tempo. And I would say the easiest way to understand what mental models do in our thinking is that they act like the blinders they put on horses. You've seen those things, the little side blinds that prevent the horse from looking to the sides and getting distracted? So your mental model's job is basically to blind you. It's to blind you to ninety nine point nine nine nine nine nine percent of all the pertinent reality data that could possibly be salient to a decision, so that you're paying attention to an extremely narrow stream of information.

[00:22:39]

That's the purpose of mental models: to blind you. That's awesome.

[00:22:43]

I like that way of thinking of it. And then the problem is, if the world has shifted or changed, then you're blind to that change.

[00:22:51]

Exactly. And you have to hope that that change happens to leak through one of the cracks you've left open.

[00:22:57]

So does that go to information processing, when you think about it, and how we filter and how we process? How do you leave those cracks open in your life?

[00:23:07]

I'll give you a short answer and a long one. The short answer is basically mindfulness. Just paying attention to the world itself, which is: shut down the inner dialogue and look outside your window. Right now, there's a magnolia tree that's about to flower outside the window where I'm sitting. So the real world is actually there. This may sound like stating the obvious, but it actually needs to be stated, and people need to repeat it to themselves frequently.

[00:23:34]

There is, in fact, a real world out there that you can stop and pause and actually take a look at. It's not all abstract categories inside your head. Any time you just pay attention to what your eyes are seeing or what you're hearing, that's how you sort of keep the cracks open. So that's the short answer, and I know it wasn't super short. The longer answer, which is perhaps more helpful, has to do with something that took me a long time to realize, which is that a lot of people think creative and imaginative thinking has to do with connecting ideas from different domains.

[00:24:12]

That's the function of, for example, metaphor. That's the function of certain types of creative pattern recognition in academic research, where people say, oh, I'm going to take this idea I learned in mathematics and combine it with this other idea I learned from art history, and I'm going to come up with this new way of doing mathematical art. Right. So a lot of people fall in love with that idea of forming connections as the foundation of thinking, the combinatorial play, right?

[00:24:42]

Yeah. Yeah. And I think that actually is a very dangerous process. That's how mental models sort of snowball in complexity and connectedness, increasing in their ability to blind you when you close your eyes. So let's take this hypothetical person who spends 15 years in college and grad school becoming, like, the world's most erudite academic. They've read all the books, they've watched all the movies, they've viewed all the paintings and read all the criticism about everything.

[00:25:16]

So their head is completely full of information from mediated sources, so to speak, not direct reality data from looking at the world itself. But lots of it is processed; it's like processed food for the brain. So this hypothetical person closes their eyes, totally sealed off, their head full of, like, fifteen years of such data, and then they live their life without any more data. Now, I think two things could happen to such a person. One is that the information in their head that's already there can erode or depreciate like a bank account.

[00:25:52]

It can start to lose value. But the other thing that can happen is it can get more and more interconnected internally. That's what happens when you close your eyes to reality data. Information that's already inside your head has a tendency to get wired up in more and more complex ways. And people love that. It's an addictive process. It's like, oh, this idea from the Bible is actually very similar to quantum mechanics, and therefore quantum mechanics was predicted by the Bible.

[00:26:19]

That sort of process snowballs, and your head becomes full of this richly interconnected web of ideas. But the basis for the interconnection is blindness.

[00:26:29]

I mean, it's not supported by reality data. If there are, like, three objects and you try to connect them up, there are three different ways to do it. If there are four objects in your head, there are six different ways to connect them up. It's almost a brute force kind of thing.
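[Editor's note: the arithmetic behind these numbers is just pairwise combinations, C(n, 2) = n(n-1)/2, so the count of possible connections grows roughly with the square of the number of ideas. A quick illustrative sketch, not part of the conversation:]

```python
from math import comb

# Possible pairwise connections among n ideas: C(n, 2) = n*(n-1)/2
for n in [3, 4, 10, 100]:
    print(f"{n} ideas -> {comb(n, 2)} possible connections")
# 3 ideas -> 3, 4 ideas -> 6, 10 ideas -> 45, 100 ideas -> 4950
```

The point of the illustration: the connections outrun the ideas quadratically, which is why a head full of internally wired-up ideas can feel rich while carrying no new information.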

[00:26:44]

But the cautionary tale here is that if you don't have constant reality data coming into your mental model, you'll make all possible connections, and they won't actually contain any information. They'll just be connections, just factoids wired up like a map. And a person with just a head full of information being wired up creates a set of mental models in their head that becomes increasingly leakproof over time. It becomes leakproof by becoming more interconnected internally.

[00:27:18]

You have an explanation for any possible thing you can think of. Whatever your thoughts and decisions, there's some set of connections and ideas that frames the decisions for you.

[00:27:28]

But the actual value of the information is depreciating slowly, like a bank account that's divorced from the reality of the outside world. So it's almost like the equivalent of a financial bubble going on inside your head. That's what a mental model that's divorced from reality data is: a financial bubble inside your head, where valuations are going weird and internal dynamics are overwhelming the connection with the real economy.

[00:27:55]

So would you say, to some extent, that with the knowledge in your head there's almost a Red Queen effect going on, where you have to do something to maintain and constantly update just to stay in the same relative position? That's a very good analogy.

[00:28:09]

Yes. So I think it's probably a sign of a very healthy mental process if you are able to create and sustain a Red Queen's arms race between your mindful engagement with external reality data and your eyes-shut process of making your mental model more coherent and useful in terms of simulations and seeing ahead. You actually mentioned something like this in one of your recent posts, the two-by-two of engagement versus thinking, and how to keep that process in balance, because if you let one overwhelm the other, it gets really unhealthy.

[00:28:53]

And the kind of decision making that you and I seem to enjoy has a bias. It tends to preferentially create momentum and inertia in the internal part, the thinking part, and not so much in the engagement part. And there are other kinds of people with the opposite bias, where their engagement is overwhelmingly strong and their internal processes aren't keeping up. So their arms race is imbalanced in a different way.

[00:29:22]

Hmm. What's your process for reading? Do you have a process? Can you walk me through it? You consume so many things.

[00:29:32]

I don't really have a process. If I'm working on a particular well-defined project, like right now I'm working on Breaking Smart's second season work, then there's a set of books that I obviously have to get through and read and process. So that's somewhat like academic work, where you have to do a literature survey and understand what some people call the idea maze of a domain. This is a phrase in the startup world that was coined by Balaji Srinivasan.

[00:30:05]

It's the idea that you need to understand the map of the area you're exploring. So that kind of reading is relatively well defined, where you have a goal you want to get to and you have a rough idea of the path, but you have to go about assembling a map so that you can actually navigate your way there. So that's one kind of reading. And I think traditional education teaches that kind of reading very well.

[00:30:29]

The other kind is probably 80 percent of my reading, which is honestly pretty much completely random. I follow trails on Twitter, on Facebook, people send me stuff, I buy things on Amazon. And I don't read as much as I used to; my stamina for that is going down. But this process is, yeah, pretty much random. But that isn't to say that the effect of the process is random. The effect of the process is that this is how you discover new stuff, connecting back to our earlier conversation about blindness and so forth.

[00:31:08]

Exploratory reading is what creates cracks in your sort of sealed-up mental models and allows you to see new things.

[00:31:14]

There's a serendipitous aspect to it. So when you're reading, are you taking notes or are you highlighting? I mean, what does that, the nuts and bolts, kind of look like for you?

[00:31:25]

So in the first kind of reading, where I'm reading towards a very deliberate end, I might take notes or, more recently, take pictures of a particular paragraph with my phone, because I might recall that bit later and so forth. So that's for, again, the more academic, citation-focused way of reading. But the rest of the time, the random part, which I think is more crucial to my way of thinking, I don't attempt to control in any way. There's no meta process at all.

[00:31:57]

I just read. And there's a reason for this, because if there's an actual idea that needs to pop into consciousness in a serendipitous way, as you point out, you can't force it. And if you try to sort of encourage it in structured ways, where you say, oh, I found this one idea in this one book,

[00:32:17]

and it looks interesting, so let me clip it and put it in my Evernote file, and if something else comes along I'll connect it, and so forth, that sort of kills the golden goose of serendipity. Whereas, on the other hand, if you just read and sort of trust the universe you're exploring to surface the connections that are interesting, that happens a lot more naturally. And the really high value insights and connections that you can spot only happen if you don't try to have too much of a meta process. But you also have to filter quickly and rapidly in that world, do you not?

[00:32:57]

Not so much.

[00:32:58]

I mean, so long as I'm being entertained, I don't ask for productivity or, like, a certain rate of insights. So to me, it's actually the very natural heuristic that I used to practice as a kid, which is: just continue reading if something is interesting and keeps engaging my interest. It's important not to overthink this stuff, where you get so obsessed with the productivity of your reading that you're not able to enjoy it, because enjoyment is actually not a nice-to-have peripheral feature of the process of reading.

[00:33:38]

Enjoyment is actually an important part of how you do the filtering that you're talking about, which is: if you're enjoying it, continue reading; if you're not enjoying it, set it aside. That's your brain's natural heuristic, and it does a very good job, actually. So there's not as much of a reason as you might think to add more filtering criteria. Now, sometimes there is, because you can get into this addictive trap where your enjoyment filters are basically mind-candy filters, and you filter out anything that's threatening or upsetting to you.

[00:34:11]

Now, if you notice such a bias, then you have to rewire your habits so that you develop a tolerance or appreciation for the kind of content that used to upset you. Say, to take this back to being a kid: maybe you like adventure stories a lot, but horror stories or romance stories upset you or embarrass you or something. And you have to sort of become aware of the emotional reaction, start to manage it, and learn how to enjoy that kind of content.

[00:34:40]

So I would reframe the problem of filtering information as learning how to enjoy information.

[00:34:48]

Then to what extent is your reading physical versus digital and say books versus articles?

[00:34:55]

It's increasingly digital and increasingly articles over books, because books are now a very big investment. I have to really enjoy a book a lot, or have it be very much on the critical path of a project I'm involved in, for me to finish it these days.

[00:35:15]

So let's talk about Breaking Smart, season one. Do you want to maybe just give us a brief introduction to that?

[00:35:21]

Yeah, that is an interesting writing project, because it's not like the kind of writing that put me on the blogosphere map. Ribbonfarm is very much sort of my exploratory laboratory of thinking, where I basically do my own thing. And Breaking Smart was much more of a: here's a universe of ideas that certain people in Silicon Valley understand extremely well, and to other people outside it, it feels like this alien idea space — a new way of living in the world that they don't understand at all.

[00:35:56]

So there's sort of the intellectual equivalent of a digital divide. And it was a very deliberate, focused process of: all right, can I explain fairly clearly to smart people what it means that software is eating the world? And working with Andreessen Horowitz for a year gave me an opportunity to spend a lot of time interacting with native thinkers in that world — people whose everyday work basically involves software eating the world: what to do about it, how to take advantage of it, how to do startups in that world, how to invest in that world. It was very interesting.

[00:36:31]

It was kind of like embedded anthropology. Personally, I don't consider myself part of that world — I don't know what world I'm part of, but I'm not part of that one directly. So it was a very interesting spectator experience for me, and I tried to capture that, and that's where those essays came from. And it's been interesting, because they're much more accessible than most of my regular writing, and a very different kind of audience has responded in a very different way than I'm used to.

[00:37:01]

You spent a year doing that in partnership with a16z. What do you think that has changed, or what are you taking with you after this year — other than, I mean, the notoriety that it's drawn to your thinking and the workshops that you're doing? What is it that's changed your approach from what you've learned?

[00:37:23]

That's a difficult question. It's really hard to see change in yourself until it's like 10 years down the line, and then you think back and say, hey, that was a turning point, and I radically changed my personality. Sometimes it's like that — you can say that your personality changed suddenly and radically.

[00:37:39]

Other times it's like the parable of the frog being boiled and not realizing it — it's Galilean relativity 101. I think it's more of the second case: being immersed in the thought space for a year, then writing about it carefully for about four months, then spending another eight months doing workshops and explaining those ideas to people. It's a very gradual process. There's no sudden moment where you say, oh, I used to be this other kind of guy and now I'm this kind of startup-culture evangelist guy.

[00:38:12]

It's not that overnight kind of transition. It's more like a lot of ideas that might have been fringe for you suddenly becoming more and more normalized. So, for example, I would say one obvious effect this work and project has had on me is that it's made me a little bit more libertarian than I used to be. Previously, I would say politically I was pretty nonpartisan. I would have described myself in 2013 as, say, a business conservative, where my economic and business thinking was mainly conservative and my social thinking was liberal, and libertarianism was sort of a fringe of nuts that I used to laugh at — especially the Rand fringe of it.

[00:38:58]

But through this year and a half, I kind of learned to separate the interesting aspects of the growing world of libertarianism from what I think of as the loony fringe of libertarianism, which I associate, honestly, with Ayn Rand. So that's one explicit, slow shift in my own thinking that I've been able to detect. Other stuff — yeah, I guess it'll become more visible as the years go by. I mean, I'm only now beginning to understand transitions that I went through when I was twenty-five or fifteen.

[00:39:33]

So I'm used to it taking a really long time for me to make sense of myself.

[00:39:39]

So with season two, what are you trying to do with Breaking Smart? I'm going to be looking at the future of organizations, which is a topic that's been really interesting to me for almost 10 years now, since the beginning of my blogging on Ribbonfarm. In fact, what put me on the map as a blogger was The Gervais Principle series, which I started in 2009 and finished, I think, in 2013 — a six-part series that's now an ebook.

[00:40:09]

And that was all about organizational psychology and how organizations really work.

[00:40:13]

So that's one element of the thread that I want to develop in more careful and complete ways. The other motivation, of course, is that there are just things happening in the environment now that are causing a big change in the operating system of how organizations are conceived, grown, run, and operationalized.

[00:40:41]

You've got everything from the extreme fringe of smart contracts on cryptocurrency blockchains, where there is no organization per se, but everybody has a sort of digitally mediated peer-to-peer relationship with everybody else that they're economically engaged with — so it's like a smart network of contracts that's doing economic work — to the other extreme, where you might have a really ancient organization like the Catholic Church, which might adopt digital realities in a very measured and slow way, and is probably not going to go away just because it's not becoming a blockchain-based organization or something.

[00:41:21]

Most of the world is somewhere in between those two extremes. And it would be, I think, fascinating to really sit back and think about: all right, what's happening to organizations because of the impact of digital technologies on the one hand, and because of just our growing understanding of what organizations are and how they work on the other? We've now had a couple of centuries of experience running corporate entities and various sorts of modern organizations, and we have like 30 to 50 years of management science that have given us a lot of insight into how organizations work.

[00:41:55]

Can we put that together and sort of paint a portrait of the world of organizations that's emerging now? So that's kind of my theme.

[00:42:03]

So what effect do you think, broadly speaking, of course, will technology have on the way that we run organizations, the way that we manage people, the way that we interact with colleagues and employees? So I haven't yet —

[00:42:19]

I'm just getting started on framing my hypotheses and ideas here. So let me give you the starting frame, and then I'm hoping to go from there. My starting frame is this idea from Alfred Chandler in the nineteen sixties called "structure follows strategy." You may have heard that phrase.

[00:42:38]

They taught us that in my MBA. Oh, OK.

[00:42:41]

So he's written a couple of books — Strategy and Structure, The Visible Hand, and so forth. And at the beginning of, I think, Strategy and Structure, he lists a set of hypotheses about the new kind of organizations that were emerging in his time. Remember, when Chandler wrote his works, the robber-baron corporations of the 1880s and 1890s had become established big companies and had created a whole way of life around them — populations that moved from the agrarian hinterland to the evolving new cities.

[00:43:14]

A second generation of companies had come up, and Chandler made a whole bunch of observations — I think there was a list of about 10 or 12 hypotheses in chapter one — about the nature of this new kind of organization that had emerged.

[00:43:29]

And among them, for example, was the idea that middle managers — managers of other managers who were not CEOs or senior leaders — were the defining feature of the new kind of organization that had emerged by the 50s and 60s, and that the culture of modernity, the structure of cities, the structure of education, everything, could be sort of inferred from the single fact of the rise of the middle-manager, professional, large organization. And that's, of course, being reversed today.

[00:44:02]

That's the layer of the working world, at least, that's increasingly being automated. We no longer go through like four layers of approvals to get a travel expense form reimbursed. We go to a piece of software, enter some details, maybe one person takes a look at it and clicks okay, and then you get your reimbursement in your next paycheck. Right? So that entire population, which was the sort of anchor element of an entire way of life that persisted for 40, 50 years — that's slowly shrinking and disappearing, and with it the middle class and so forth.

[00:44:35]

So my hypothesis is that now all of Chandler's sort of hypotheses about the 1950s era are being slowly flipped, one by one. We've seen that for the last 20, 30 years. But perhaps the biggest flip is that the defining archetype of the new world of organizations is no longer the middle manager, but in fact the free agent — people like you and me, the people who don't actually live in organizations at all, but live in the ecosystems of organizations, or as intermediaries between organizations.

[00:45:07]

In the nineteen fifties, I think almost 90 percent of the American workforce at that point lived a paycheck lifestyle. And this was the end of a two-hundred-year historical process that I've written about: in 1780, less than 20 percent of the American workforce were paycheck employees, and by nineteen eighty it was close to 80 to 90 percent. So that was a very long, two-century trajectory of increasing paycheck employment.

[00:45:36]

And if you flip that around, you see that what it means is that the number of free agents — the number of people living in the interstices of the economy — slowly shrank in the developed world. Now that process is being inverted. And today, depending on whose estimates you believe, it's somewhere between 30 to 40 percent. Of course, some very naive ways of counting lead to the conclusion that it's no more than five or 10 percent, which I think is bullshit.

[00:46:00]

So the number I tend to believe is that somewhere between 30 to 40 percent, appropriately defined, are free agents. These people may not work inside organizations, but they are sort of this new emerging, growing class, just like middle managers were in the nineteen fifties. What they do, the patterns of life they choose, where they choose to work and live, how they choose to educate themselves, how long they stay on projects, their work styles — remote working, working from home, balancing multiple gigs at once — all these are patterns of life that they're improvising and establishing right now.

[00:46:41]

Whether it's lifestyle designers sitting in Bali and doing the ad marketing for big Fortune 500 companies, or somebody like me whose main real work is writing blogs and getting consulting gigs — we are not the sort of inner core of companies, but we kind of define the economy now. And I think that's a huge framing idea that's emerged in the last 15, 20 years. And it's interesting for one reason, which is: if you look at the structure of modern tech companies like Facebook and Google, their market cap is huge relative to their headcount.

[00:47:17]

So one useful metric for thinking about this is to divide the market capitalization by the number of employees, and in the fastest-growing unicorns and young companies it's very, very high. You might have a billion-dollar company with just two hundred employees, and so forth.
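The metric he describes is simple enough to sketch. Here's a quick illustration — the company names and figures below are hypothetical, made up purely to show the ratio he's talking about:

```python
# Market-cap-per-employee: the metric Venkatesh describes for spotting
# highly leveraged companies. All figures here are hypothetical.
companies = {
    "YoungUnicorn": {"market_cap": 1_000_000_000, "employees": 200},
    "OldIndustrial": {"market_cap": 50_000_000_000, "employees": 200_000},
}

for name, c in companies.items():
    per_employee = c["market_cap"] / c["employees"]
    print(f"{name}: ${per_employee:,.0f} per employee")
    # The hypothetical unicorn works out to $5,000,000 per employee,
    # versus $250,000 for the big incumbent -- a 20x difference.
```

The point of the ratio is exactly the contrast he draws: a billion-dollar company with two hundred people sits at an order of magnitude more value per head than a traditional firm.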

[00:47:33]

So it's clear that these people — well, they're very talented, very successful people who are going to get very powerful and rich, because they're doing the few remaining valuable long-term jobs in the economy. But for the rest of us, who don't have insane ninja-level coding skills and can't be one of the privileged few inside one of these companies as a core employee, the question is: what does it mean to survive in the ecosystem created by such companies?

[00:48:02]

What does it mean to be a citizen of, say, the Amazon ecosystem or the Google ecosystem, or an app developer for Apple, or a driver for Lyft or Uber? These people are kind of defining the structure of the world at the moment. And we don't normally think of this part of society as even the, I don't know, raw material of organizations — these are, by definition, the people who are not in organizations. But I think actually they're the ones who are going to define the organizational landscape of the future.

[00:48:32]

And how do you think we're going to manage that? I mean, one of the key things that I keep hearing over and over again, which I don't have an opinion on — or a well-formed one, anyway — is matrix management. How do you think the structure of management plays in, and what do you think about matrix management? Matrix management is a very old idea, actually.

[00:48:50]

It's at least as old as, I would say, the early eighties — that's when it became popular, with the first wave of deregulation and the creation of a lot of the outsourced and subcontracting kind of economy in the manufacturing sector, the Reagan-Thatcher era. Matrix management came about when people realized that you needed a line-management axis, which was the traditional hierarchy, along with a project-management axis, because so many needs were transient. So that is a very old idea, and that's, I would say, actually the incumbent traditional management.

[00:49:26]

Now, that's not the new stuff. Matrix management is the default in the old economy right now; that's how projects get managed. What is new is managing projects through multiple circles of contingent labor. So think of a typical project.

[00:49:44]

No good successful project, even in a software-eaten world, can be really, really big. Let's take a software project as the prototype, because that's the new kind of core work. At most you might have, say, an overall extended team of about one hundred and fifty building a product. And you might have a structure where the core team of employees — with stock options and careers and the ability to buy houses, all these super-talented people who've gotten the golden ticket — might be a core group of 15 to 20 inside the company. Then you might have another ring of longer-term contract workers, say 15 to 30, who are doing less critical tasks.

[00:50:29]

Then you've got another layer of small boutique firms handling things like social media marketing or doing a little bit of focus-group research, things like that. Then you've got another big layer — say, a developer community that's beta testing, whom you're trying to woo to use your technology. Beyond that, you might have a much broader ring of, say, a hundred people who are really not even part of your producer team; they're part of your early-adopter consumer team.

[00:50:58]

But because they're power users who understand your technology well and may do a little bit of hacking, they're the ones who are going to discover the use cases that will actually work and establish the product. So think of the outermost ring as sort of the prosumers — the people who are partly producing, in addition to consuming, in return for things like discounts — with the very core being the best-compensated people, who have a chance of becoming stock-option millionaires or something.

[00:51:25]

That's the structure of a team that makes something happen in today's economy. It's not matrix management or any of these old ideas. It's just, I don't know, a tribe of, say, one hundred and fifty people with various levels of belonging in a fuzzy set that makes something big. So as you were saying that — and let's make the assumption that we should pay people based on the value they bring to the table — how do we compensate people in a system like that? A system where you might be part of a team that creates a three-hundred-billion-dollar product, but your role in that team, and your value and your contribution versus somebody else's value and contribution —

[00:52:02]

How do you think about that?

[00:52:03]

I don't think there's a single universal answer. It's a very contentious debate right now, and people are exploring different answers. So it's useful to think not at the general-principle level but at the example level. Two examples that I think are driving the conversation forward the most are Kickstarter and the rideshare economy. With Kickstarter, if you think about it, the early group of backers that gets a project off the ground — they're actually not just paying with their money to support a project.

[00:52:37]

They're paying with their intelligence. They're doing work. They're reviewing lots and lots of projects that they might be scanning on Kickstarter, and making reasoned decisions about: hey, this is an interesting new innovation that deserves support. Now, they might be using any of the three decision-making processes we talked about before — they might be conceptual thinkers, they might be good-versus-evil thinkers, or they might be "I need to be part of the tribe that makes this happen" kinds of thinkers. Whatever their approach, they're contributing intelligence, in the form of information, in addition to money.

[00:53:10]

And this group typically gets compensated with a bunch of, like, gift-economy-type artifacts. You might get a T-shirt, you might get a shout-out on Twitter, you might get an early advance instance of whatever it is that's being produced — if it's a book or a little manufactured widget, you might be one of the first to get one. So that's the way compensation works in that particular example. And of course it leaves a lot of people very unhappy, because they want more compensation for what they see as more valuable input.

[00:53:43]

And that's one of the reasons we have this legislative process right now that's about opening up crowdfunding to equity ownership. So very soon we might see, in very limited and regulated ways, the ability of crowdfunding backers to own equity in the things they back. So that's one example of how people might get compensated. Another is Uber. There's a dull conversation about Uber and an interesting conversation. The dull conversation is simply traditional 1920s labor thinking: all these people are simply not being paid enough, and effectively they might even be being paid less than minimum wage if you account for their hours properly, and they have a precarious income, they need a safety net.

[00:54:24]

I think that's an uninteresting conversation that will go nowhere — it's applying 1920s lenses to 2015. But the interesting conversation is that these people are actually participating in innovation. Obviously, what everybody sees coming down the road is automation: driverless cars. And how are these driverless cars being trained? Well, with all the data that's coming from thousands of rides being taken by people, and drivers navigating different routes. It's having two effects. When an Uber driver picks up a passenger at point A and drops him off at point B, that passenger gets the ride and the driver gets paid some money.

[00:55:05]

But the data that's generated goes to machine-learning algorithms that improve everything from our understanding of safety to navigation to following traffic rules. That stuff is really them contributing to the R&D of the next generation of product — in which they have no producer role at all.

[00:55:22]

They're almost unknowingly sowing the seeds of their own destruction. So one interesting argument I've heard is that these people are the equivalent of laboratory researchers, and they should be paid for the research function as well. And this ties into the larger argument that anybody who's involved in producing the large amounts of data going into these big platform-type industries really should be compensated for the value of the intelligence they're pouring into the platform. And this is why people, of course, are talking a lot about data monopolies and the new algorithmic monopolies, because that's what these companies are doing.

[00:56:02]

They're generating a huge amount of data through their operations that is feeding the next generation of machine-learning-based technology and automation, and they're in the position, of course, to reap the benefits of that. And to some people it seems fair that the people involved in generating the data should be compensated for it. You're seeing limited versions of that, where you can now buy a little device that you plug into your car that your insurance company will use to give you a lower premium.

[00:56:32]

So you'll get a discount on your premiums in exchange for transmitting your information.

[00:56:37]

So the beginnings of such an information-for-money economy are starting to emerge, but it's going to take hundreds of examples, probably a dozen or more court cases, and lots of regulation before we sort of figure this thing out.

[00:56:51]

Isn't Google kind of an example of that? I go on it for searching, I'm not paying for it — they're providing me a service, and I'm giving them information that they can then use to sell.

[00:57:01]

Yeah, it's definitely a case of that. The thing that trips people up about this conversation is that there are too many variables in the equation. You are getting information that, in terms of substitute products, would cost you thousands of dollars if you had to do it through a traditional network of paper libraries. Right? Like, I can just type in a search term and get the answer I want. If I didn't have Google, I'd have to go to my local library, look for the reference there, or go to the interlibrary loan system to get a book.

[00:57:32]

If they don't have it — instead of 15 minutes of work processing search results, I would do 20 hours of work. So the value to me is 20 hours of my own time. But it's clearly ridiculous to value things that way, because the economy is not a closed system where you can value things at the cost of a substitute, sort of in a vacuum. Because for a lot of these things, when you actually let the market decide what the cost is, the cost is so close to zero that we round it down to zero.

[00:58:06]

That makes it very hard to actually do meaningful computations here. And that's one of the places people think cryptocurrencies might be helpful: it becomes possible to meter even the tiniest of cash flows, where each individual transaction might be worth a fraction of a penny, but if you put enough automated circuitry in all your transactions, it might build up to something more meaningful. So that's a vision some people have.

[00:58:32]

What do you think the future holds that no one's talking about? If I knew that, I'd be out there making money off it, wouldn't I? Any guesses?

[00:58:43]

No, I don't play that game.

[00:58:45]

This is something that's almost become a philosophy of mine, which I say at the start of Breaking Smart: I'm not going to attempt to predict the what and when of the future. I'm only going to try and predict the how of the future — what ways of working are going to be more effective in the future as opposed to in the present.

[00:59:06]

The moment you get sucked into this game of trying to predict the what of the future — and you're not doing it in a systematic way, like being part of a hedge fund making very reasoned, calculated bets on your predictions — it sucks you into a sort of utopianism of the new. I talked about this in Breaking Smart as well: you get attached to a particular vision, where you say, oh, the thing that's going to happen, and must happen, is flying cars with bio-nanotech circuitry.

[00:59:36]

And then that doesn't happen, and then you go through a psychological process of mourning for that lost utopia. A lot of people are going through that right now, where they're upset that we didn't get our nineteen-fifties flying car. So that's the reason I don't play the game of trying to predict the future.

[00:59:51]

OK, so a different version of a similar kind of question: what do you think people are focusing on today that's a waste of time?

[01:00:00]

That's a dangerous question, because the moment you answer, 10 years later it turns out to have been a very productive thing to have been doing. What are they wasting time on? Honestly, I don't know. That's a tough one.

[01:00:15]

What book would you say has had the greatest influence on your life? That's another game I don't play.

[01:00:21]

I figured, yeah. If you had to pick a few, like, what would you say?

[01:00:26]

Well, I have a page on my blog — it's ribbonfarm.com/now-reading, so "now," dash, "reading" — where I maintain kind of an active record of things in my pipeline, so to speak. And I do have a list of books on top, in the sense that I reference them a lot. They're like foundational mental models, for lack of a better phrase. One of them is Metaphors We Live By; we've got Gareth Morgan's Images of Organization, James Carse's Finite and Infinite Games.

[01:00:57]

So there's a bunch of books that I reference a lot, and they're sort of bread-and-butter frameworks that I use a lot, but I wouldn't say they're the books that have influenced me the most. Because that's actually a question that's kind of problematic — influence is a very hard quantity to measure. You might say that when you were going through a very angsty teen crisis at the age of 17 and you read The Little Prince or Catcher in the Rye, that totally, I don't know, saved your sanity back when you were 17 and kicked your life onto a 90-degree different course.

[01:01:36]

That's very influential; it might have steered your life that way. But another book might be the sort of little trickle — a drop at a time seeping into your brain, because you reread it every three years, like The Lord of the Rings. A lot of people reread The Lord of the Rings every two years; I'm not one of them, but there's a lot of people who do that. So there's that kind of book too.

[01:02:03]

For me, an example of the first kind would be Catch-22 — a book that I read as a teenager and have never read since. That was kind of a sharp, one-time influence. I don't know what influence it's had on my life, but it has had an influence. Whereas Douglas Adams' Hitchhiker's Guide to the Galaxy is more of the trickle kind of influence: I definitely reread it every three years or so, and each time I unpack a new layer of philosophical cleverness and humor in the book, and it sort of reshapes my thinking all over again.

[01:02:29]

So there's that kind of influence, and then there's other stuff. Like, I mean, school is underrated — the things we learn in school. I spent two years in high school becoming very good at solving trigonometry and calculus problems, and that really shaped and forged my brain. And these are not books that you would typically put on a list of the sort you are suggesting. Let me try and see if I can even remember what the books were.

[01:02:56]

There's a series of books by a British mathematician called S.L. Loney, and these are books written in the thirties. All they are is huge books of trigonometric identities, where you can solve like several hundred problems. It's mental weight training. And I've looked through several such books.

[01:03:13]

Another is a book by a Soviet scientist called I.E. Irodov. It's an obscure little book of math and physics problems that was very popular in Soviet Russia, and was very popular in India when I was studying, for example, to get into university.

[01:03:30]

And it is a book that has been massively influential in my thinking, because I spent two years of my life — probably some of my smartest years — hours and hours a day, simply sitting and solving calculus and physics problems from it, beating my head against the wall with that book. And that's obviously been a huge influence, but I barely ever think of it. I don't have a copy now, I don't go back and reference it, and it comes up in conversation maybe once every ten years.

[01:03:57]

I might be reminiscing with an old friend and we'll say, oh, I remember when we beat our heads against that difficult book. So influence is a very hard thing to quantify. And I think what we end up doing when we talk about influential books is almost a social signaling game, where you're trying to advertise the identity you most want to inhabit right now to others. So right now I might be thinking of myself as, I don't know, a career blogger slash management consultant.

[01:04:27]

And I want to come across as wise, as somebody who has everything together, and then I might list three books that reflect the perception I want to project. And that would be kind of a bad-faith exercise in hypocrisy, which is why I don't like this question of what books have influenced you — because influence is, like we discussed, a very complicated phenomenon.

[01:04:49]

Are there any other books that you've read that you want to mention?

[01:04:54]

Douglas Adams is probably the most consistent kind of reading. Oddly enough, TV has had a very weird effect on my reading. Like Agatha Christie — I love her mystery novels, and I used to reread at least a handful of them every few years. But once we got streaming Netflix and all the television versions of the Poirot mysteries became available, instead of rereading them, I just watch them.

[01:05:20]

Thanks so much, Venkatesh. This has been great fun. I really appreciate you taking the time. The conversation was amazing and a lot of fun.

[01:05:27]

Thank you. Thank you.

[01:05:32]

Hey, guys, this is Shane again. Just a few more things before we wrap up. You can find show notes at farnamstreetblog.com/podcast — that's f-a-r-n-a-m-s-t-r-e-e-t-b-l-o-g dot com slash podcast. You can also find information there on how to get a transcript. And if you'd like to receive a weekly email from me filled with all sorts of brain food, go to farnamstreetblog.com/newsletter. This is all the good stuff I've found on the web that week, that I've read and shared with close friends — books and reading and so much more.

[01:06:06]

Thank you for listening.