[00:00:00]

Before we begin today's episode of Rationally Speaking, I have a short announcement. If you've been following my former co-host, Professor Massimo Pigliucci, you may have noticed that he's been speaking and writing and thinking a lot about the philosophy of Stoicism and how to lead a Stoic life. Well, if that piques your interest, next month Massimo is hosting the annual conference on Stoicism in New York City. That's on October 15th and it's called Stoicon. So if you're interested in acquiring an unconquerable mind and a will of steel, or at least a little bit of tranquility, I suggest you check it out.

[00:00:33]

The site is howtobeastoic.wordpress.com/stoicon. We'll link to that on the podcast website. And at that site, you can read the full lineup of the day's events and register. That's Stoicon, on October 15th in New York.

[00:01:03]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef. And with me is today's guest, Sam Arbesman. Sam is a complexity scientist and currently the scientist in residence at Lux Capital. He's also the author of the book The Half-Life of Facts, which we discussed on this show a few years ago.

[00:01:46]

And more recently, the book Overcomplicated: Technology at the Limits of Comprehension. Sam, welcome back to Rationally Speaking.

[00:01:53]

Thanks so much. Great to be back on the show. So what is a complexity scientist? A complexity scientist is essentially a scientist who is focused on studying complex systems.

[00:02:07]

And complex systems are really any sort of system that has a huge number of diverse parts that all interact in often complicated ways. That sounds super abstract, but it turns out it cuts across many different domains. There are biological systems, things like the parts within a cell, or even many organisms interacting within an ecosystem. These are all complex systems. Computers in a large network,

[00:02:35]

that's a complex system. People interacting in an entire society, or within a single city, this is also a complex system. And it turns out that there are a variety of mathematical and computational tools that you can use to understand complex systems, both the details of each specific type of system, as well as, often, stripping away all of the domain-specific aspects of them and saying, OK, forget the specifics of these things.

[00:03:03]

They're all just kind of networks. They all have pieces. They're all interacting this way. Maybe there's some kind of math...

[00:03:09]

Abstract away the specifics?

[00:03:10]

Yeah, yeah, yeah. Maybe there's some kind of mathematical framework for understanding all these different things. And so complexity science essentially takes these tools and applies them to a whole variety of complex systems. Great.

[00:03:24]

I appreciate that you gave examples of complex systems, because I think complexity science is one of those things where it's sort of hard to define it without using the word complexity in the definition, so good examples in the world are helpful.

[00:03:37]

Right. And frankly, I think a lot of the most interesting aspects of our world are complex systems. And so it does kind of touch upon pretty much everything that we see around us. Right.

[00:03:46]

Especially as someone who loves the idea that there is kind of an underlying order or pattern or, you know, connections between seemingly disparate fields or disparate things, complexity science is very appealing to me. The idea that we can learn how to think about, you know, our economy from studying bird populations in the forest, that sort of thing. It's a very appealing way of looking at the world, I think. Oh, yeah.

[00:04:11]

No, it's very exciting. That being said, you kind of have to caveat it with the fact that oftentimes these kinds of models might give you maybe a first-order approximation of what's going on. You can write a single equation that explains, say, how the metabolic processes within organisms scale with the sizes of different creatures, and these kinds of things are very powerful and there's a lot of explanatory power in them.
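
[Aside: the scaling relation Sam seems to be alluding to is presumably Kleiber's law, which he doesn't name here, so that attribution is an assumption; as a rough first-order sketch:]

```latex
% Kleiber's law (an empirical, first-order scaling relation, not an exact law):
% basal metabolic rate B versus body mass M
B \approx B_0 \, M^{3/4}
% B_0 is a normalization constant; the ~3/4 exponent is a rough fit and is itself debated.
```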

[00:04:36]

But at the same time, it's of course going to strip away a lot of the details of why living things are the way they are and why different creatures have all the particulars they do. And I think complexity science, when done right, is always this balance between recognizing that there are details which, of course, a simple model might not capture, but at the same time recognizing that there are these deep similarities and kind of analogous modes of behavior and ways of being between all these different areas.

[00:05:04]

And I think that's a really, really exciting thing and kind of shows that there are certain models and modes of analysis that actually cut across lots of different domains.

[00:05:14]

Right, right. And that is a tension that you touch on in the book and which hopefully we'll get back to in our conversation. So moving on to the thesis of your book, Overcomplicated, what kinds of systems are you concerned with in the book, like, concerned that they are or are becoming too complicated? And what does it mean for a system to be too complicated? Sure.

[00:05:37]

And so I'm speaking about essentially all the technologies that we've built as a society. And I'm using technology fairly broadly to kind of mean anything that has been built by people for a specific purpose.

[00:05:47]

And so it can include the traditional sorts of technologies that we think about, so large machinery or pieces of software or the computer on your desktop. But it can also be the entirety of the Internet. It can be our urban infrastructure. It can even be our legal systems; our legal codes are technologies, they're built for a certain purpose. And in the book I shy away, for the most part, from the sociotechnical systems, things more like bureaucracies.

[00:06:15]

But I definitely discuss legal codes quite a bit, because I think there are some very interesting similarities between how those technologies grow and evolve and how the things we more traditionally think of as technologies grow and change.

[00:06:28]

And so essentially, the argument I'm making in the book is that technology, very broadly, is becoming more and more complicated, which I think intuitively people recognize. But increasingly, it's not just too complex for a layperson to understand. It's one thing to say I don't understand how my iPhone works, but there's somewhere, like, an Apple Genius who understands what's going on. But increasingly, many of the technologies that we're surrounded by and that we use on a daily basis, they're actually so complex that no

[00:06:55]

one, whether you're an expert or otherwise, fully understands these things. And it's because these systems have essentially become very, very complex systems. They have an enormous number of parts that are all interacting in highly nonlinear ways and are subject to kind of emergent phenomena. And it's not just biological things that exhibit this behavior, where it's really hard to understand what's going on within the human body. Increasingly it's the systems that we ourselves have built, and we think of ourselves as fairly rational individuals who should be making logical constructions.

[00:07:26]

And increasingly these things that we build are not fully understandable. And so the book looks at the forces that have led us to this point of incomprehensibility. For example, on the one hand you want to continue adding sophistication to a technology over time, adding features to, let's say, a piece of software. That's great, and each individual piece is good, but over time they accumulate and you create a great deal of complexity.

[00:07:50]

And so that force, as well as other kinds of forces, leads us ever closer to increasing complexity. And then on the other side, you have the fact that our brains really have not evolved to handle the kinds of systems that we're increasingly building. And so these systems are becoming more and more incomprehensible, where we don't fully understand these things. And of course, understanding a system is not a binary condition.

[00:08:13]

And you can kind of have different levels of understanding. But increasingly, we are having a reduced amount of understanding of these systems.

[00:08:19]

And so the book looks at what are the forces that have led us to this point, why our brains kind of break down in the face of all this incomprehensibility, and what should we do? Should we kind of freak out and say we're screwed, or are there more productive responses? I'm a fairly optimistic person by disposition, and I think there are ways of actually meeting these technologies even halfway.

[00:08:39]

Great, great. And just to emphasize, when you talk about our technology increasingly becoming non-understandable, there are sort of three levels of strength of that claim, where the lowest level is, you know, that we, the individuals who use the technology, don't understand it. And then the next level up would be that no one, no person, understands this technology thoroughly, you know, 100 percent. And then the level up from that, the third level, is that it is not possible for a human to understand the system.

[00:09:10]

I guess there could even be a fourth level where you could say it is inherently not understandable, that even some, you know, being with more working memory and computational power and other forms of intelligence would not be able to understand it, because there's inherently not an order to be found there. So there are four levels that I just sketched out. Which of those claims are you meaning to make?

[00:09:32]

So I would say, depending on the specific technology, we're somewhere between two and four. Depending on how you define understanding, there are certain situations where, like, the ability to trace out all the different possible pathways within a piece of software, within a computer program, all the different potential if-then statements and all the ramifications, those kinds of things, once we get software of a certain size, even if we had far greater memory and far greater processing power than kind of baseline humans, I think those things are actually impossible within the lifespan of the universe.

[00:10:09]

So depending on the type of understanding, you can actually go all the way up to level four. For the most part, I'm talking about levels two and three. But increasingly, it's not just that practically speaking no one fully understands these systems. I think we are verging more and more toward level three, simply because of the massive size and intricacy of these systems, as well as, often, the amount of expertise in multiple different domains that is required to understand these things fully.

[00:10:40]

It's just too much for a single person to know. It might even take more than several lifetimes to actually gain all that mastery. And so I think we are getting closer and closer to that level three, where these things are, for all intents and purposes, impossible to fully understand. Right. Let's talk about why that's potentially a bad thing.

[00:11:01]

So I would say why it could be a bad thing is that if we don't fully understand the systems that we are building to order our world, and that we are increasingly living within, we're going to have bugs and glitches and failures. If we think we understand these things well and we don't, there's going to be a constant gap between how we think the system works and how it actually behaves, and there are going to be these failures.

[00:11:27]

And so on the negative side, I think that can be very worrying for many people, the idea that there is this constant mismatch between what we can understand and the complexity and the power and the actual behavior of these systems. And I think for many people, when that happens, when there's this concern and this mismatch, you immediately jump to essentially fear in the face of these systems, like:

[00:11:56]

superintelligent computers are going to kill us all, self-driving cars are going to crash uncontrollably, all these bad things are going to happen. Sometimes people actually go in the opposite direction, to kind of the other extreme, where there's a system they can't fully understand and they almost have this veneration, almost kind of a religious sense towards it, like, oh my God, this thing is so wonderful, so complex.

[00:12:15]

I can never fully understand it. It must be perfect. Kind of like the mind of Google, like the algorithm knows all. And I think that's also a very dangerous, extreme perspective. Both of them are not so great, because they end up cutting off questioning and actual inquiry into how these systems work. I think the better way of thinking about it is: yeah, there's always going to be this mismatch between how we think a system works and how it actually does work.

[00:12:41]

But there's this constant iterative approach to better understanding the system, as well as more effectively trying to reduce the gap between those things.

[00:12:53]

And often that's what happens when you root out bugs and glitches and you make the system closer and closer to how you think it should operate. And you'll probably never get there. I think the healthy attitude, rather than being like a company saying, oh, our software is perfect, oh wait, there's a little bug, we kind of sweep it under the rug, we fix it,

[00:13:17]

and now the system is perfect again, is recognizing that these things are the best we can do, there are always going to be errors, but we can constantly try to improve our understanding of the system and actually improve the performance of the technology.

[00:13:29]

Right. So it sounds like there are at least a couple of different paths to this increasing complexity, where one of them is just that we keep adding on to the systems that already existed, and we keep sort of patching the bugs that crop up instead of saying, well, you know, it's buggy, let's start afresh and create a system that won't have these bugs in it. And then a different path is that it seems like complex systems just do a better job on a lot of important problems than simple, understandable systems. Like the field of artificial intelligence, which you mentioned, has been moving over the last few decades away from these sort of top-down approaches, where we program rules into the artificial intelligence to follow, for it to determine whether something is a cat or is not a cat, for example, to instead a system that's more bottom-up, where the AI learns rules from mining data that we give it.

[00:14:30]

And they're not rules that we could have even known about ourselves, or even that the algorithm could articulate in a way that we would understand. And that just does a better job. It's just a more effective form of artificial intelligence than the top-down, understandable systems of rules. Does that dichotomy seem to capture most of the causes to you, or is there another path I'm not pointing out?

[00:14:54]

So I think that's another important factor. And I think this kind of speaks to the idea that the world is messy and complex, and therefore, often, in order to capture all that messiness and complexity, you need a system that is effectively of an equal level of messiness and complexity, whether it's explicitly including all the rules and exceptions and edge cases, or it's a system that learns these kinds of things in some sort of probabilistic, somewhat counterintuitive manner, the kind of thing where it might be hard to understand all the logic in the underlying machine learning system, but it still captures a lot of that messiness.

[00:15:29]

You can see the situation in machine learning where the learning algorithm might be fairly understandable, but then the end result, let's say you have some sort of neural network with millions of parameters and a complex relationship between the input data and the actual output, you might be able to say, theoretically, I can step through the mathematical logic in each individual piece of the resulting system.

[00:15:58]

But effectively, there's no way to really understand what's going on. And I think that often is a result of the fact that the input, the world, is actually very, very messy and complex. And you can see this even on a small level. If you want to build a calendar application for your iPhone, it's one thing to say, okay, there are 365 days in the year. But suddenly you realize you have to deal with leap years, and then you also have to deal with time zones, and then daylight saving time, and suddenly you realize, oh wait, this is actually a fairly complex thing.
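
[Aside: a minimal sketch, not from the book, of how even the leap-year rule alone complicates the "obvious" 365-day assumption:]

```python
from datetime import date

def is_leap_year(year: int) -> bool:
    # The naive "divisible by 4" rule is wrong for century years:
    # 1900 was not a leap year, but 2000 was.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def days_in_year(year: int) -> int:
    # The "obvious" constant 365 silently fails every leap year.
    return 366 if is_leap_year(year) else 365

assert days_in_year(1900) == 365
assert days_in_year(2000) == 366
assert (date(2001, 1, 1) - date(2000, 1, 1)).days == days_in_year(2000)
# And this still ignores time zones, daylight saving transitions, and
# historical calendar reforms, each one another layer of real-world messiness.
```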

[00:16:30]

And I think it's because the world is complex. You see this also with self-driving cars. It's one thing to say I want to build a self-driving car that drives in low-traffic conditions on highways on sunny days. It's another thing to say, well, I have to deal with the messiness of the real world, like one-way streets and pedestrians and maybe a dog jumping out into the middle of the road, or a whole bunch of irrational drivers that you're surrounded by.

[00:16:53]

And suddenly everything becomes much, much more complex, whether you're manually hard-coding it in or allowing a system to learn all these things organically. The end result is a massively complex system. So the need for a system to capture all the edge cases in the real world, that is a very, very important driver for how these systems become very complex and increasingly incomprehensible. Related to the whole accretion thing, kind of adding things over and over, is this idea of interconnection: when you add new bits and pieces, they're not existing in a vacuum; everything interacts with everything that came before it.

[00:17:34]

And oftentimes the pieces that have come before are actually foundational, and they might have existed for years or decades prior to the thing you're trying to patch or fix as you add to the system. You have situations with legacy code and legacy systems, where, like, I think the IRS uses computer systems that were developed, I think, during the Kennedy administration. You have many-decades-old machinery that's still involved in a lot of very important technologies that we use on a daily basis.

[00:18:06]

And these things are so embedded within our technology that we can never really root them out, and to start over would often be very prohibitive in time and resources. But in addition to that, as a related factor, sometimes the people who built these original systems that we still rely upon have long since retired; they might even be dead. And so we can't really talk to the people who were involved in building these things anymore, and when you look at how those foundational systems interact with the more complex, newer pieces, it becomes effectively impossible to understand what's going on.

[00:18:39]

And in fact, I'm reminded of some of these cautionary tales where people looked at a pre-existing system that had developed somewhat organically, with pieces getting added on over time to suit, you know, new needs that cropped up, maybe that's a city, maybe that's a social system. And these people looked at these complicated, messy systems and said, oh, well, I can do much better than that. Let's raze this neighborhood to the ground and build a nice, orderly city plan.

[00:19:13]

Or, let's design a new social system that actually makes sense, or a new language. And in fact, that messy organic thing was serving a lot of important functions that were kind of invisible from the top down, and the nice, neat, orderly new system actually failed in a bunch of hard-to-predict ways. Like, you know, it didn't have spaces for communities to organically meet each other and gather and create that sense of community, for example; it was all too sterile, that sort of thing.

[00:19:48]

And so there are all these sorts of cautionary tales of people trying to get rid of messy systems that are hard to understand and put in these legible systems instead. And I wonder if it might, in fact, be sensible to not try to get rid of the complexity and messiness by putting in place a new system that we think will work better.

[00:20:12]

Yeah, no, I totally agree with that.

[00:20:14]

And I think oftentimes when you build a new system, if you do it effectively and you try to sweep away everything that was complex and messy from before, you often end up with something of similar complexity anyway. If you do something that actually works well, it often ends up just organically becoming as complex. It would not be surprising if that kind of thing, done right, ended up being either as complex as before, or, if it were not as complex, then, as you were saying, it ends up failing in ways that we can't imagine.

[00:20:44]

And I think there's this kind of humble tinkering approach with any system, where rather than trying to sweep it away or fully understand it, you recognize that when you're confronted with a technology that might be new to you, or that you're trying to understand, rather than trying to replace it with something very, very simple, you say: there are a lot of unanticipated consequences, there are a lot of nonlinearities in these systems, a lot of feedback, in ways that would be hard to understand.

[00:21:15]

Therefore, I'm going to just play at the edges and try to see how it behaves. You can see this oftentimes in a company: a new CEO comes in and there's, I imagine, a lot of pressure to make their mark on the organization. But of course, an organization is also a very complex and nonlinear entity, and so trying to change it in large ways can often backfire in unexpected manners.

[00:21:40]

And so it's often probably better, maybe not necessarily from a PR perspective, but from the perspective of actually trying to modify a system, to modify it in these small ways at the edges, rather than trying to sweep everything away and change it in really big ways, where you're then prone to only learning about the complexity of the original system through all these unanticipated consequences.

[00:22:01]

Right. You made this really interesting distinction in the book, which this thread is starting to remind me of, between physics thinking and biological thinking. And this sort of tinkering-at-the-edges approach sounds like, would you call that an example of biological thinking?

[00:22:16]

Yeah, I think that would be a good way to describe it. So, this distinction between physics thinking and biological thinking. And again, before I delve in, I want to be clear that there are many physicists who employ biological thinking and many biologists who apply physics thinking, but it's a good shorthand way of thinking about it. Essentially, the physics approach, you can see it embodied maybe in an Isaac Newton.

[00:22:37]

You write a simple set of equations and it explains a whole host of phenomena. So you write some equations to explain gravity, and it can explain everything from the orbits of the planets, to the nature of the tides, to how a baseball arcs when you throw it. So it has this incredible explanatory power. It might not explain every detail, but maybe it could explain the vast majority of what's going on within a system. So that's kind of the physics side.
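
[Aside: the single equation being gestured at, Newton's law of universal gravitation:]

```latex
% Newton's law of universal gravitation:
F = G \, \frac{m_1 m_2}{r^2}
% One compact relation accounts, approximately, for planetary orbits, the tides,
% and the arc of a thrown baseball alike: broad explanatory power bought by
% abstracting away detail.
```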

[00:23:04]

The physics approach would be abstracting away details to yield some very powerful insights. On the other hand, you have this biological thinking, which is the recognition that oftentimes, in certain types of systems, the details are not only fun and enjoyable to focus on, but they're also extremely important. They might even make up the majority of the kinds of behavior that the system can exhibit. And so, therefore, if you sweep away the details and you abstract away and try to create this kind of abstract notion of a system, you're actually missing the majority of what is going on.

[00:23:40]

And so the biological approach would be to recognize that the details are actually very, very important and therefore need to be focused on. So when we think about technologies, I think both approaches are actually very powerful. But oftentimes, in their haste to understand a technology, and because technologies are engineered things, people think of them as perhaps being more on the physics-thinking side of the spectrum, when in fact, because they need to mirror the extreme messiness of the real world, or there are a lot of exceptions, or they've grown and evolved over time,

[00:24:26]

often in a very organic, almost biological fashion, they actually end up having a great deal of affinity with biological systems, systems that are amenable to biological thinking and biological approaches. And I wouldn't say we need to privilege biological thinking over physics thinking when it comes to technologies, but we should at least recognize how important it can be to say, OK, this is a system, it has a huge number of parts that have maybe been added and grafted on over time, and it's evolved, oftentimes, in fairly similar ways to living things.

[00:24:56]

And so therefore, we need to actually use this more biological mode of thought, where we look at the details and the exceptions and the bugs in a system to really get a better sense of how that system is working. I think that kind of biological mode is very, very important when it comes to technology.

[00:25:14]

Another way, I think, to capture the difference between biological and physics thinking is that physics thinking relies more heavily on sort of theoretical causal reasoning, like: I predict this because the system should work this way based on my model. Whereas biological thinking is maybe more reliant on empirical evidence, like: I will be confident that the system works this way when I have tested the system and confirmed that it works this way, whether or not that conforms to my expectations of how the system should work.

[00:25:47]

And actually this same distinction, I think even with the biological and physics labels, was made by another recent guest on the podcast, Vinayak Prasad, I think that's his name. He co-wrote the book Ending Medical Reversal, in which he was talking about all of these findings, all these medical results that were consensus among doctors and put into practice for years. And then finally a solid, gold-standard, long-term trial was done and found that, oh, actually, stents don't have the positive effects on mortality that we thought they did.

[00:26:23]

Oops. Or, oh, promoting handwashing in hospitals, or sorry, wearing gloves in hospitals, doesn't have the positive effects we thought it did. Oops. And he thinks that one reason this keeps happening is that medical students are taught to think sort of like physicists, where the human body is this machine and you can reason about what would happen if you do this thing or that thing, instead of being taught to think like biologists, where you only really trust the result if you have seen that the evidence supports it, and not just the theory.

[00:26:58]

Yeah, I think this goes back to what I was saying about using this kind of iterative approach towards understanding a system, where we think we understand the system well, and then there's a whole bunch of bugs or failures that make us realize that there's this gap in understanding, and that causes us to update our model of how we think the system works. And I think that is much more this biological mode of constantly collecting bits of information to gain a better picture of what the system is doing.

[00:27:26]

And actually, in biology, one of the ways you can do that kind of thing is, rather than just waiting for mutations in the system to teach you about how, say, bacteria do their thing, you can actually inject errors. So you can use mutagenic chemicals or radiation on bacteria in order to really try to understand the complex feedback within genetic networks. And you can do the same kind of thing in technologies.

[00:27:54]

You can actually inject errors into our technologies to really have this empirically based understanding of a system. So Netflix actually has this software that they've released called Chaos Monkey, where what they do is... Yeah.

[00:28:08]

So the way it works is it essentially will randomly take out various parts of the system that you're working on and render them inoperative. And the idea is that the system should be as robust as possible, so when this kind of thing happens, the system should respond robustly and be able to handle the failures. And of course, if it doesn't, you can improve the system, so that when something does go wrong, there is as small a mismatch as possible between how you think it should respond and how it actually does respond.
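
[Aside: Netflix's actual Chaos Monkey terminates service instances in production; the toy sketch below, with made-up replica names, only illustrates the general fault-injection idea Sam describes:]

```python
import random

# Hypothetical replicated service: a request succeeds if any replica is healthy.
replicas = {"web-1": True, "web-2": True, "web-3": True}

def handle_request() -> bool:
    """A request succeeds as long as at least one replica is still up."""
    return any(replicas.values())

def chaos_step(rng: random.Random) -> str:
    """Randomly knock out one healthy replica, imitating a chaos-testing tool."""
    victim = rng.choice([name for name, up in sorted(replicas.items()) if up])
    replicas[victim] = False
    return victim

rng = random.Random(0)
for _ in range(3):
    victim = chaos_step(rng)
    # The question the exercise asks: does the system still behave the way we
    # expect after an unplanned failure? If not, we've found a gap in our model.
    print(f"killed {victim}; request still served: {handle_request()}")
```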

[00:28:39]

And so I think that very empirically grounded approach to our technology is recognizing, OK, it's going to be very hard to understand how these systems work, but perhaps one of the ways of gaining insight into how something works is by seeing how it operates when something goes wrong. And I think that is a very powerful but also humbling approach, to recognize that sometimes the only way we can understand a system is actually when it malfunctions.

[00:29:06]

And it teaches us about the gap in our understanding. And I think, yeah, it's a very different sort of approach than saying I'm going to have this very mechanistic, logical approach where I have a simple equation that should explain everything,

[00:29:22]

and when it doesn't, be perplexed. There's an expression, I forget who originally said this, but it was: yes, it works in practice,

[00:29:32]

but does it work in theory? Exactly. Yes. Yeah, I only trust it if it matches my models. Funnily enough, someone told me about Netflix's Chaos Monkey approach a while back and I misunderstood them at first, and thought that there was someone at the company whose role, whose title, was Chaos Monkey. Yeah, great job title. That's even better than scientist in residence: Chaos Monkey for Netflix. Yeah.

[00:29:58]

So we've been touching a few times now on this idea that complexity makes systems less robust in some ways. And it's definitely intuitive to me on some level. But I could also see an argument for complex systems being more robust. For example, one of the things you talk about in the book is this feature of many complex systems now that you call interoperability, or maybe that's not your personal word, but a word that's used, in which systems are designed to work together, like two different systems from different platforms.

[00:30:37]

So, for example, Uber uses Google Maps to tell its drivers where to go and how to get there. And you make the case that this creates additional complexity, which I can sort of see. But I can also imagine this counterfactual world where every system is self-contained, and Uber in this world didn't rely on Google Maps but instead developed their own mapping algorithm or app. And so there's less interoperability there, but then there are also just lots of different systems that haven't already been tried and tested and developed,

[00:31:13]

that people have developed familiarity with. And it's not clear to me that that world of self-contained systems has less total complexity and less total downside risk.

[00:31:21]

Do you think it does?

[00:31:23]

So, I think very complex systems, by and large, are robust, just because if you look at, like, a network, if it has this messy structure, it often means you can take out any sort of subcomponent and oftentimes the system can reroute around it. But I think also with interoperable systems, when a system ends up being used by many, many other technologies, it often ends up being more robust because more people use it; they've kind of rooted out all the errors.

[00:31:49]

And so I think in that sense, these systems can be more robust. But I think the converse of that is that oftentimes, when things are enormously interconnected as well as tightly coupled, you can have a failure that cascades in ways that might be hard to anticipate and actually causes a huge failure. Whereas if things are fairly small and maybe not so complex and fairly simple, then a single error might just take down a small subset of what's going on, versus when things are all tightly connected.

[00:32:26]

Then suddenly these kinds of large cascades can happen. And you can see this kind of thing where something like a computer malfunction can actually ground an entire airline for a small amount of time, because of the possibility for these kinds of cascading failures. Or a single small power outage can actually cascade through an entire electrical grid and take down a huge amount of the power grid and affect millions and millions of people.

[00:32:54]

I think another way to think about this is, there's this idea within kind of math, it's actually a mathematical framework for thinking about some of these complex systems, there's this concept they refer to as "robust yet fragile." And the idea behind this is that a lot of these very complex systems are highly robust. They've been tested thoroughly. They've had a lot of edge cases and exceptions built in and baked into the system.

[00:33:21]

And so they're robust to an enormously large set of things, but oftentimes those are only the things that have been anticipated by the engineers. They're actually quite fragile to the unanticipated situations. So these systems end up being robust, yet fragile. And oftentimes, because of the complex structure and the tight coupling of the pieces, they can actually have these failure cascades, or other aspects of fragility, that are maybe unanticipated.
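
[Aside: a toy load-redistribution model, not from the book or from the "robust yet fragile" literature specifically, just to give the flavor: the same network shrugs off a small shock but collapses entirely under a slightly larger one:]

```python
def cascade(capacity, load, initially_failed):
    """Toy cascade: each round, the load of all failed nodes is spread evenly
    over the survivors; any survivor pushed past its capacity fails too."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        survivors = [n for n in capacity if n not in failed]
        if not survivors:
            break
        extra = sum(load[n] for n in failed) / len(survivors)
        for n in survivors:
            if load[n] + extra > capacity[n]:
                failed.add(n)
                changed = True
    return failed

capacity = {n: 1.5 for n in "ABCDE"}   # each node tolerates 50% overload
load = {n: 1.0 for n in "ABCDE"}

print(len(cascade(capacity, load, {"A"})))       # 1: a single failure is absorbed
print(len(cascade(capacity, load, {"A", "B"})))  # 5: two failures take down everything
```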

[00:33:49]

And so I think with these systems, there's a great deal of power in complexity overall. And we're going to continue building very complex, sophisticated technological systems, because on the whole they're very powerful, they're sophisticated, they can do many, many different things. Interoperability is great; it can allow information to pass from one system to another. The downside, though, is that not only are these systems more difficult to understand, but as a result of the gap in understanding and this increasing incomprehensibility, there's going to be the potential for failures in ways that are unanticipated.

[00:34:23]

Right. It reminds me of some of the financial models that were, like, five sigma, incredibly confident that their algorithm would not fail massively. But they did, in fact, fail massively in the 2008 collapse, because real life just hadn't been captured in the assumptions made by the model. And so the guarantees... Right, like there's a model, and then there are all the details that the model maybe has swept under the carpet, but it turns out sometimes those exceptions can actually swamp the general rules and yield things that really can cause big problems.

[00:35:01]

So we've touched on artificial intelligence a few times, and you briefly described some of the fears that people have about advanced artificial intelligences, and you sort of distance yourself a bit from those fears by saying, like, look, we're not talking about Skynet here, where, you know, the risk is of machines becoming self-aware and declaring war on humanity. We're just talking about unintended consequences because of the complexity of these systems. But I guess it's not clear to me just how different those two scenarios are in practice. Like, you know, the sort of thing that people who worry about risks from advanced superintelligence are afraid of, they're worrying about things like: OK, we programmed the AI with a particular goal, like, I don't know, maximize the profits of my company.

[00:35:54]

But we don't give it specific rules that it's supposed to follow; it will learn those rules through various machine learning things, deep learning, et cetera. And maybe it turns out that one of the strategies it develops is a clever way to assassinate the CEOs of the competing companies, yeah, for profit maximization, for example. And so I'm not sure there's a clear line between, oh, AIs will turn evil and that's the risk, which is, you know, turning evil for no particular reason other than, you know, we're robots and we like to be evil,

[00:36:27]

and that is clearly kind of a silly risk. But in practice, it seems to me that just unintended consequences of complex AIs could produce results that look like AIs being evil, and that we should actually be worried about.

[00:36:39]

Oh, I think that's actually a very fair statement. For me, I am less concerned, or maybe not less concerned, I'm just not focusing on these kind of distant risks, like what a complex AI, or maybe a superintelligent AI, would do. At the same time, we already have complex AIs. They're not superintelligent, but they are operating in ways that can have unintended consequences.

[00:37:06]

So for me, the era of incomprehensibility is not something that's on the horizon or maybe happening within the next 10 or 20 years; it is here and now. We're seeing this kind of thing already, and it can be even as simple as the situation with, like, Microsoft's chatbot Tay. The designers intended it to interact like, I think, an 18- or 19-year-old girl, when in fact it ended up behaving like a white supremacist.

[00:37:33]

And the reason was, I think in retrospect, people realized the kind of input data it was going to receive was different than what they expected, and so was the way it was going to respond. There was this mismatch between how they thought it was going to respond to the input data and how it actually did. Right. But at the same time, I mean, that's almost a trivial example in comparison to all these other systems.

[00:37:53]

But I think, for me, the concern is just being more aware, recognizing that there is always going to be this mismatch between how we think a system is going to operate and how it actually does. And so, therefore, we need a suite of tools and approaches for when we're never going to be able to actually fully understand the systems, but we still need to meet them, we still need to handle them in some way.

[00:38:18]

And so for me, I'm more focused on those kinds of approaches for the here and now rather than the future scenarios. That being said, I'm sure these kinds of approaches are still very useful for the future scenarios as well.

[00:38:32]

Do you have any other proposals or advice for the kinds of risks from complex systems that we are in fact facing today? Like, I mean, self-driving cars might be a good test case. Is the answer just to test them way more than we think we need to, and in as many different, varied scenarios as we can think of? Or are there other sorts of principles that we can use to increase robustness, even if we can't predict exactly what might go wrong?

[00:39:00]

So I think there are engineering practices, essentially engineering hygiene, that you can use to make sure you're making systems more modular so that they're more understandable. There are a lot of ways of reducing the trend towards incomprehensibility.

[00:39:18]

So you can kind of stave it off for as long as possible. For me, though, the situation I'm interested in is, OK, assuming the systems are incomprehensible already, which in many cases they are, how do you then approach them? And I don't mean this to sound like a cop-out, but I almost think that to a certain degree we need to take the approach of technological humility, humility in the face of technology.

[00:39:46]

It's one thing to say that even if we set our minds to understanding something, we won't understand every aspect of it. In the scientific world, we're recognizing that there are limits to the kinds of things we can understand, effectively, in physics. And I think in technology we need to recognize from the outset that there are going to be limits on what we can understand, even theoretical limits to what we can fully understand. And so for me, it's less a cop-out and more just, this is the right realization, or the right orientation,

[00:40:13]

that we need. If we build our systems from the outset with this recognition, we're not going to have the same approach of thinking we can understand these systems and then suddenly being confronted by a failure and really being shaken by the fact that we don't understand them. If we recognize this from the outset, we're going to continue to try to iteratively understand these systems, but with a more humble approach: OK, there's always going to be this mismatch; we'll keep on addressing glitches and failures and trying to test these things as best we can.

[00:40:42]

But everything is always going to be a work in progress. And I think it will also change how a lot of us, even if you're not an expert, approach the technologies we deal with, because essentially right now, a lot of people outsource technological understanding to the experts.

[00:41:01]

But if we recognize more explicitly that even the experts cannot fully understand these things, then I think it will ideally create a certain amount of responsibility for each of us to at least try to better understand these technologies. We're never going to understand them fully, but it can be as simple as maybe paying more attention to what a progress bar is doing when you're installing something, even if that has only a tenuous connection to what is actually going on underneath.

[00:41:27]

But I have one point on that. Yeah.

[00:41:29]

Like finding ways of at least seeing under the hood of our technologies, even the ability to call up the command line behind the slick user interface on your Mac or whatever it is, and see a little bit of what's going on under the hood. I think those kinds of approaches are going to be increasingly important if we recognize explicitly that we are in a world of our own making that we don't fully understand.

[00:41:52]

And whether or not we can always have these glimpses under the hood, that's a different question. But I think we need this kind of playful approach to our technology, whether you're an expert or not, where we at least try to understand things even a tiny bit in an era when we cannot fully understand them.

[00:42:10]

Hmm. I mean, I definitely like the idea of having a better grounding in what's going on under the hood of the technologies I use. But it's hard for me to see a causal mechanism between that, between individual users having that understanding, and there being less serious downside risk from those technologies. Is there? Yeah, what's the connection? I think that's actually a very fair point. Yeah. So I think there still might always be some sort of downside risk.

[00:42:39]

I think it's just more about how each of us can approach it. So I'm not sure that this kind of, like, philosophical...

[00:42:45]

Oh, it's about setting our expectations. Yeah. It might just be about changing our expectations. And so it's like the kind of thing where right now it's very sexy to learn how to code. Even though most people who are learning how to code are not actually going to be making applications or large software packages, it at least gives them a certain mode of computational thinking. And I think this is kind of related to that, where having computational thinking, or a recognition of how these systems work or how they don't, or even just the types of failure modes for these kinds of systems, I think will give people the proper orientation in how they react to these systems.

[00:43:21]

And maybe, I mean, maybe we should actually ask for more, and ideally we want to have systems that we fully understand, or ones that are completely safe. I think we'll never quite get there. And something related to this, which I actually do not discuss in the book, is just a better understanding of risk, risk and how we understand trade-offs and all these kinds of things when we build systems. And the impossibility of getting to zero risk.

[00:43:47]

Exactly. And I think, like, everyone wants to say no amount of death is acceptable, you know, as a trade-off for industrial progress or for whatever, we can't sacrifice a single life. But, in fact, we must, and we already are sacrificing.

[00:44:03]

Right. Yeah. So I think maybe the kind of approach that I've been talking about is, I guess, related somewhat to this, like these trade-offs in the risk realm. I think that's also important to think about, whether or not you're actually building these systems. Cool.

[00:44:20]

Well, we're about out of time. So I'll just remind our listeners: Overcomplicated, very highly recommended. We're going to link to it on the podcast website. And for now, Sam, we'll move on to the Rationally Speaking pick.

[00:44:51]

Welcome back. Every episode of Rationally Speaking, we invite our guest to introduce the Rationally Speaking pick of the episode; that's a book or article or website or something that influenced his or her thinking in some interesting way. So, Sam, what's your pick for today's episode?

[00:45:05]

So my pick is the book Immortality by the philosopher Stephen Cave.

[00:45:11]

It's a book that looks at the four different ways that civilizations have tried to become effectively immortal, everything from remaining biologically immortal to immortality of the soul, things like that.

[00:45:27]

And he looks at the ancient antecedents, the ancient versions of these approaches, as well as the more modern versions, and then takes each of them and dismantles them and shows how all these different approaches are doomed to failure. But then, and this is kind of my favorite part of the book, although actually the entire book is amazing, he says, OK, what do we do now? How can we come back from this?

[00:45:49]

Like, if we're doomed to never have immortality, how can we have some sort of optimism in the face of this? And he then goes into the ancient wisdom literature, things like Ecclesiastes and Stoicism, and shows that there is a huge body of literature on the power of mortality and transience, and on trying to understand how to create meaning and value in our transient lives.

[00:46:20]

And it's an amazing book. It jumps between ancient history and biology and philosophy. It is one of the most thought-provoking books I've read in the past few years, and I highly recommend it.

[00:46:32]

Hmm. I'm so torn about that subject, because I definitely see the appeal of finding a way to come to terms with mortality. At the same time, I worry that, like, wow, what if we could actually, you know, significantly extend our lifespans, and we're just turning away from that possibility because we don't want to give ourselves false hope?

[00:46:56]

Seems like a really tricky line to walk to me.

[00:47:00]

So I would say, and he actually discusses this, there is a really big, qualitatively big difference between drastically increasing our lifespans and making us immortal. And yeah, for most people, the difference between, like, living a million years and living forever, I mean, obviously there's a mathematical difference, but for most people it would not feel different.

[00:47:28]

But he says this distinction is actually important, and, like, forever is never going to be possible. But you're right. I agree that drastically changing our lifespans would actually change how we live, and I think it would change how we find meaning and how we think about it.

[00:47:44]

And I think with Woody Allen, he says something like, rather than having his immortality be in his work, he'd much rather have his immortality outright.

[00:47:54]

Exactly. I actually was also thinking of that quote. I just really appreciate when people sort of thumb their nose at these nice, pretty, comforting platitudes and they're like, no, actually, the common-sense thing is just, you know, dying is just bad and I don't want to die. And, you know, whether or not that's the most psychologically strategic or healthy approach to take, I just appreciate sometimes people pointing out that the platitudes are platitudes, essentially.

[00:48:29]

Oh, yeah. And actually in this book, one of the cool things, it's been a number of years since I read it, but he doesn't give short shrift to those approaches. The subtitle of the book is, I think, something like how our quest for immortality has driven civilization forward. So he's not saying these kinds of drives are bad and we should abandon them. Thinking about these kinds of things has actually driven civilization forward.

[00:48:50]

And so I think he recognizes there's a great deal of power in thinking about human longevity, or trying to understand the nature of biology and all these kinds of things. So, yeah, it's a very complex dance. And I'm not willing to say, oh, there is only meaning to human life because it is transient. I think that's a silly way to think about it. But I think you can still say, given that life is fleeting, how can we still make it meaningful?

[00:49:21]

I think that's kind of the better approach to that kind of thing.

[00:49:23]

Cool. Excellent, Immortality. We'll also link to that; we will link to Immortality on the podcast website, as well as to your book. Sam, thank you so much for coming back on the show. It's been a pleasure having you. Thank you so much. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org.

[00:50:06]

This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.