[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me today we have two guests, Tom Griffiths and Brian Christian. Tom is a returning guest to Rationally Speaking. He joined us a few months ago to talk about whether the brain is secretly rational, and when he's not appearing on Rationally Speaking, you can find him being a professor of psychology and cognitive science at UC Berkeley, where he also directs the Computational Cognitive Science Lab.

[00:01:05]

Brian is an author who writes about computer science, cognitive science and other related topics for publications like The Atlantic, Wired and The New Yorker. He's also the author of the best-selling book The Most Human Human, which is pretty cool and which you should check out as well. But most recently, Tom and Brian have jointly published a book called Algorithms to Live By. It's about some of the most crucial, integral algorithms used in computer science, and how those algorithms actually apply, or can be applied, to improve our human decision-making in everyday life.

[00:01:42]

So decisions like how should I plan my day, or should I quit my job, that kind of thing. So that's what we're going to talk about today. Brian, welcome to the show. And Tom, welcome back.

[00:01:53]

Thanks so much. Thank you.

[00:01:55]

So this topic is something I've been thinking about for years. And in fact, I just gave an interview a few months ago to Vice magazine in which I talked about a lot of similar material, actually, although I hadn't read your book at that point. But I was talking about how algorithms like Bayes' rule, or understanding the trade-off between exploring and exploiting, can help guide our decision making and improve it. And I thought I was being so nuanced and intelligent.

[00:02:25]

And then the article came out and the headline was something like, "Julia Galef wants humans to be more like robots."

[00:02:31]

So I found this to be a difficult topic to communicate about publicly. I'm wondering if you guys have had any reaction like that to your book.

[00:02:42]

Yeah, I mean, this is one of the things that we tried to head off in the introduction of the book, where we have a section where we kind of step out of frame. Once we've teed up the idea that there's something to be learned at a daily human level from thinking about the problems we face in computer science terms, we then immediately sort of back off and say, well, in case you are skeptical at this point and think that we're trying to advocate that we all turn into these kind of robotic, Vulcan-like beings...

[00:03:18]

No, in fact, that is not what we're getting at here.

[00:03:21]

And I think one of the most interesting themes of the book is that, in fact, you can make a pretty powerful case grounded in computer science that advocates for things that look a lot more like human intuition and not overthinking things and being a little bit messy on occasion and trusting our instincts.

[00:03:41]

And so I think there's a powerful opportunity to reaffirm some of the things that we either take for granted, or that sometimes get a bad rap, about the human side of reasoning. But the two are not as divergent as I think people think.

[00:03:58]

Hmm. But the idea is also, I presume, that familiarizing oneself with these algorithms, or holding them up against your intuitive decision making and noticing differences, that process doesn't just take you right back to the default intuitive strategies you were using. Right? There's some difference. Yeah.

[00:04:18]

So I think one way of characterizing that is that a lot of these algorithms are things that work really well in very precise situations.

[00:04:26]

So if you're in a situation which exactly satisfies the assumptions that go into the algorithm, then that is exactly the right thing to do, and that's what you should do. And so there are cases where people are put in those situations and then the thing they do isn't the right thing. An example, one of the places that we really start in the book, is the 37 percent rule. Right. So this is a strategy that you can use in any situation where you basically face a sequence of options.

[00:04:57]

So, for example, if you're trying to find an apartment, you might be going to open houses, and, if you're in the Bay Area, you have to make a decision on the spot about whether you're going to give a check to the landlord and make an offer on that place. Basically, if you leave the open house without having done that, you're going to lose it. So you face a sequence of options. You have the chance to make an offer.

[00:05:18]

If you don't make an offer, you lose it. And really what you have to be doing is building up a sense of how good your options are here, at the same time as you're dealing with the cost of losing some of those options while you gather that information. And so the right thing to do in that situation is you take 37 percent of the pool of options, or 37 percent of the time that you're going to spend looking.

[00:05:42]

So if you're, say, looking for an apartment for 30 days, that's 11 days. You spend that time just gathering information. You leave your checkbook at home; you're just getting calibrated. And then after that, you make an offer on the first place

[00:05:56]

you see that's better than any place you've seen so far. And so if you're in that precise situation, that's exactly the right thing to do.

[00:06:03]

Before you talk about how to adapt the algorithm to messier real-life situations, can you give some intuition for why 37 percent? That number must seem weirdly specific to people who haven't read the explanation.

[00:06:17]

Yeah, it's even more weirdly specific, or random, than that: it's actually one over e, where e is, you know, Euler's number.

[00:06:26]

The thing that shows up when you're doing things like computing compound interest. And, you know, that just basically falls out of the math.

[00:06:36]

But a way to get an intuition for it is that what you're doing is trying to find the right trade-off between having information that can inform your action, and having enough room to take meaningful action. So a way to get insight into this is to think about a case where, say, you're only going to go to three open houses, and you're trying to find the best place on the basis of that. So, you know, one thing you could do is just say, look, I'll just make an offer on the first place.

[00:07:09]

I just choose one at random, essentially, and make an offer on that place. And the probability that you get the very best place is one out of three. But it turns out you can do better than one out of three. If you go and look at the first place and don't make an offer, then make an offer on the second place if it's better than the first place, and otherwise make an offer on the third place, that is actually the strategy that maximizes your chance of getting the best place, and it increases that probability to 50 percent.

[00:07:37]

And so if you keep on doing the math, where you say, OK, now let's work out what happens with four places and five places and six places and seven places and eight places and so on, working out where that threshold between looking and leaping lies, then as the numbers go up to infinity, that threshold converges to one over e, or about 37 percent.
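To make that concrete, here is a minimal simulation sketch of the look-then-leap strategy Tom describes. This isn't from the episode; the code and numbers are purely illustrative. Sweeping the cutoff shows the best one landing near 37 percent of the pool, with a success rate near 37 percent as well:

```python
# A minimal simulation of the look-then-leap strategy: skip the first
# k options, then take the first one better than everything seen so
# far (or the last option if nothing beats the calibration phase).
# Sweeping k shows the best cutoff landing near n/e, about 37 percent.
import random

def success_rate(n, k, trials=2000):
    """Estimated probability of ending up with the single best option."""
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))                   # rank 0 is the best option
        random.shuffle(ranks)
        best_seen = min(ranks[:k]) if k else n   # calibration phase
        chosen = ranks[-1]                       # stuck with the last by default
        for r in ranks[k:]:
            if r < best_seen:                    # first option beating all seen
                chosen = r
                break
        wins += chosen == 0
    return wins / trials

n = 100
best_k = max(range(n), key=lambda k: success_rate(n, k))
print(best_k, success_rate(n, best_k, trials=20000))
# best_k comes out near 37 (the estimate is noisy), success rate near 0.37
```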

[00:07:58]

So you were going to explain, you know, how does this hold up in real world situations? What are the assumptions that might not be true in the real world? Yeah.

[00:08:07]

So what happens if you put human beings in this situation and you ask them to make a decision is we characteristically tend to switch too early. So we sort of don't spend enough time getting calibrated: where we see a good place, we're like, OK, that's it, I'm going for it. And, you know, that's something which has been a little puzzling for people who are trying to figure out what people are doing here.

[00:08:30]

But one way of understanding it is that when you look at the assumptions that go into this model, it assumes that there's not really any cost to spending time looking, right? Whereas actually, when human beings face this kind of problem, there is a cost. You want to figure out where you're going to live sooner rather than later. Or, if you're in a psychology experiment, you want to get out of that experiment sooner rather than later.

[00:08:53]

Right. And so when you factor in a very small cost, then in fact, making that switch around 31 percent, which is what people do, corresponds to a cost of about one percent of the value of getting the very best place ultimately. And people are willing to make that tradeoff. And so that's something where what people are doing deviates from the rational model. If you're actually in exactly the situation that that model corresponds to, then you should do something different.
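As a rough illustration of that point, you can add a small per-look cost to the same simulation and watch the optimal cutoff slide earlier. The cost value below is made up for the sketch, not the figure from the literature Tom is citing:

```python
# The same simulation with a small cost charged per option examined.
# Even a tiny cost pulls the optimal cutoff below 37 percent; the
# exact numbers here are illustrative only.
import random

def expected_payoff(n, k, cost, trials=5000):
    """Payoff is 1 for landing the best option, minus cost per look."""
    total = 0.0
    for _ in range(trials):
        ranks = list(range(n))
        random.shuffle(ranks)
        best_seen = min(ranks[:k]) if k else n
        looked, chosen = n, ranks[-1]            # default: looked at everything
        for i, r in enumerate(ranks[k:], start=k + 1):
            if r < best_seen:
                looked, chosen = i, r
                break
        total += (chosen == 0) - cost * looked
    return total / trials

n = 100
for cost in (0.0, 0.002):
    best_k = max(range(n), key=lambda k: expected_payoff(n, k, cost))
    print(cost, best_k)   # the cutoff drops as looking gets more expensive
```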

[00:09:19]

And knowing that you should do something different if you're in exactly that situation is what we equip you with by telling you about these different kinds of algorithms. And then in the book, we talk about all of the variants on this. So, you know, if it's not 100 percent certain that you'll lose a place after you've passed it over, but there's some other probability, or if there's some chance your offer gets rejected, there's a whole constellation of different variations on this for which we can actually give optimal strategies.

[00:09:46]

Yeah. What I've found in my experience, at least, is that even just being aware of what the parameters are, even if you don't know the values of those parameters (parameters like the amount of time I'm willing to spend, or the number of options that I think there are, or what I expect the average or potentially highest quality option might be, or roughly what the switching cost is), just being aware of them, I think, makes my intuition somewhat better at solving the problem optimally.

[00:10:20]

Yeah.

[00:10:21]

I mean, I think in the case of optimal stopping, the biggest genres of optimal stopping problem are what are called no-information games and full-information games. And so in a no-information game, which is the example Tom gave with the apartments, where you get the classic 37 percent rule, that's based on the assumption that you encounter your options in a random order. But more to the point, when you encounter a particular option, you cannot say how much better than the other options it is.

[00:10:56]

You can only say in relative terms what the relative rank of this option is. So you could say, you know, this is the second best thing that I've seen so far, but you can't say whether it's closer to the first or closer to the third.

[00:11:11]

And so this is called a no-information game. In contrast with that, the other main branch of these problems is called full-information games, in which you have some sense of the distribution from which these options are being drawn.

[00:11:27]

And so if you were hiring a typist, for example, and you knew for certain that they were a 90th percentile typist on some typing score, well, then you find yourself in a full-information game. And it turns out that the stopping rules for full-information games are quite different. In fact, you don't need an initial looking period before being ready to leap.

[00:11:52]

You do need to know how many candidates are in the pool in order to make an educated guess about whether there's a better candidate still remaining, but the optimal strategy in this case just has a fundamentally different structure.
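A standard textbook variant makes that difference in structure visible. Suppose, as an illustrative assumption rather than the exact setup from the book, that each candidate's score is an independent draw from a known uniform distribution and you want to maximize the expected score of the one you accept. Then the rule is a pure threshold that relaxes as candidates run out, with no looking phase at all:

```python
# A sketch of a full-information stopping rule under a simplifying
# assumption: scores are i.i.d. uniform(0, 1), and the goal is to
# maximize the expected score of the accepted candidate. With m
# candidates still to see, the value of continuing satisfies
#     V[m] = E[max(X, V[m-1])] = (1 + V[m-1]**2) / 2,
# and you accept the current candidate whenever their score beats
# V[m-1]. No initial looking period, just a falling bar.

def continuation_values(n):
    v = [0.0]                        # nothing left to see is worth 0
    for m in range(1, n + 1):
        v.append((1 + v[m - 1] ** 2) / 2)
    return v

v = continuation_values(10)
for remaining in range(10, 0, -1):
    print(remaining, round(v[remaining - 1], 3))
# With 10 to go, accept only a score above roughly 0.85; the bar falls
# to 0 for the final candidate, whom you must accept.
```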

[00:12:06]

And so, you know, this goes back to your point about developing an intuition for what the solution landscape looks like, and asking yourself as you enter into a problem: do I feel like I have full information, or do I feel like I have no information? And the choice of whether you need to set this initial looking window is going to depend on that.

[00:12:27]

Great. Let's take another example of an algorithm from your book. Do either of you have a particular favorite out of your chapters?

[00:12:37]

Well, another good example, I think, that we talk about, one that has implications both kind of domestically and scientifically, is the algorithms for solving the problem of caching.

[00:12:50]

And that's spelled C-A-C-H-I-N-G, right? That's right. Yeah.

[00:12:55]

So, yeah, in computer science these algorithms are used for managing the memory of computers. You can think about memory as a constrained resource: there's only a certain amount of very fast memory in your computer, and then a larger amount of slower memory, just because fast memory is expensive. And so what the computer has to do is figure out what it's going to keep in that fast memory, in order to increase the probability that the things you're actually looking for are going to be there.

[00:13:22]

Right. So that your computer can be as fast as possible, it wants to make it so that most of the time, when it's looking for a piece of information, that information is stored in the most accessible form of memory. But that's a problem that human beings face as well. We call it organizing our closet. So you can ask the same kind of question: if you've got things that you could be putting in different places, in your house, in your closet, in your basement, or maybe in a storage container or something like that, you have to make a decision about what are the things that you want to keep in the most accessible storage locations.

[00:13:57]

And so that's a context where taking a look at the kinds of algorithms that are used in computer science and why it is that they work is something which is relevant.
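The transcript doesn't name a specific eviction policy at this point, but one classic answer, and a natural closet heuristic, is Least Recently Used: when the fast storage is full, evict whatever you touched longest ago. A minimal sketch, with the closet framing as an illustrative example:

```python
# A minimal Least Recently Used (LRU) cache, one classic answer to
# "what do I keep in the fast, scarce memory?" LRU is an illustrative
# choice here: evict whatever was touched longest ago, on the theory
# that recently used things are the likeliest to be needed again.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()       # insertion order tracks recency

    def get(self, key):
        if key not in self.items:
            return None                  # a miss: fetch from slow storage
        self.items.move_to_end(key)      # a hit refreshes recency
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # evict the least recently used

closet = LRUCache(capacity=2)        # the accessible closet holds two things
closet.put("coat", "closet")
closet.put("tent", "closet")
closet.get("coat")                   # wearing the coat keeps it handy
closet.put("skis", "closet")         # the tent, untouched longest, gets evicted
print(list(closet.items))            # ['coat', 'skis']
```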

[00:14:05]

Excellent. Instead of talking about algorithms and then discussing what real-life situations they might apply to, I'm kind of tempted to give you a couple of real-life situations that I've dealt with, or that clients of mine have dealt with, and see if any of the algorithms that you've written about feel like they might be useful.

[00:14:24]

Does that sound good? Yeah, sure. I'm game for that.

[00:14:29]

Oh, great. So one common situation: a lot of my clients are young. They're just out of college, or they're in their early or late 20s. And one of the biggest and most recurring things they're struggling with is setting up their career: basically figuring out what they want to specialize in and, more immediately, what job they should be searching for now, which is a somewhat separate question from what they want their career to look like in 20 years, etc.

[00:15:02]

And so I've been trying over time to figure out: are there good rough algorithms they could use for figuring out what kind of job to take? Or are there ways to decide what features of a job to prioritize over other features, like a job that pays well now, versus a job that will build up their skills, versus a job that'll give them a lot of information about the field, etc.? So I'm wondering if any of your algorithms feel like they have useful things to say to someone who is uncertain about what short-term job decisions to make to sort of maximize their longer-term career.

[00:15:40]

I would say, I mean, to me, this feels very much like an explore-exploit situation.

[00:15:46]

And so, in computer science, the explore-exploit trade-off is this idea of how you balance between spending your time and energy getting information, and using your time and energy leveraging the information that you have to get some good outcome or some payout. And I think that's relevant in a career context, in a sort of life-trajectory context, because, you know, you have to spend a certain amount of time

[00:16:16]

trying stuff out and learning what you enjoy, what you're good at, and so forth. And, you know, the key concept that emerges when you look at these explore-exploit problems is that everything really depends on how much time you have, and kind of where you perceive yourself to be along some relevant interval of time.

[00:16:40]

So if you're at the beginning of a process, you should be much more highly exploratory. If you're at the end of a process, you should spend much more of your time and energy exploiting the knowledge that you've gained so far. And so, in concrete human terms, it's like: if you've just moved to a city, you should spend the first month or more just relentlessly trying new things.

[00:17:06]

The first restaurant you go to when you move to Berkeley is literally guaranteed to be the greatest restaurant you've ever been to in Berkeley. The second place you try has a 50/50 chance of being the greatest place you've ever been to in Berkeley.

[00:17:18]

So the chance of making a great new discovery is greatest at the beginning.

[00:17:24]

And moreover, the value of making a discovery is highest when you have the most time left to enjoy it. And so finding an amazing new restaurant on your last night in town is kind of tragic, because you think to yourself, oh, I wish I had known about this years ago. And so for both of those reasons, we should be on this trajectory from exploration to exploitation. And so, given that the group of people you're talking about are coming out of college, looking for their first jobs and so forth, I think it makes sense to approach it from the perspective that they have a long time ahead of them.

[00:18:02]

And so it's worth spending time trying things out, even if they have a low probability of being good, because if they are good, they've got their whole lives and their whole careers to kind of reap the fruits of that.

[00:18:15]

Yeah. On a very brief tangent: there's something kind of counterintuitive about the idea of trying a bunch of things because some of them will work, and then you can exploit those for a long time afterwards. I mean, it sounds intuitive when I say it, I guess, but somehow it doesn't match intuition when we're actually faced with situations. I think that if a friend of mine was like, hey, Julia, do you want to try, I don't know, trapeze or something, my first intuition might be, well, that's probably something I'm not going to like.

[00:18:49]

Therefore, I won't do it. And the implicit algorithm that my brain seems to be using in these cases is: if it seems likely to work, then I'll do it; if it seems likely not to work, then I won't do it. And I think my brain is failing to take into account the fact that only a small subset of the things

[00:19:08]

I try have to work out in order for the entire set of things I tried to have been worth it in bulk. Yeah.

[00:19:11]

I mean, I think the key idea in explore-exploit problems, or in the multi-armed bandit problem, which is kind of the canonical one, is that if something fails on you, it only fails once, because you just don't do it again. But if it's successful, then you can just keep going back to it again and again.

[00:19:30]

And so if you think about it from the perspective of taking up a new hobby or a new sport or something like that, the metric of what is most likely to be the most fun evening is probably not the right metric, because even if there's only a one in 20 chance that you enjoy doing the trapeze, if you find that you do enjoy doing the trapeze, you can do it, you know, 50 times over the next year.

[00:19:56]

And so that actually puts you well ahead in that situation.

[00:20:02]

Yeah.

[00:20:02]

The algorithm we talk about that I think engages with this is what's called Upper Confidence Bound, which is a class of algorithms for solving explore-exploit problems. And what that says is that the evaluation you should be making is not your expected value, not, you know, how likely you think this thing is to be good, but rather the upper bound on that expected value. So it's kind of like thinking about a confidence interval around the expected value, and then you take the upper bound of that confidence interval.

[00:20:29]

So it's kind of like asking: how good could this be, under my sort of best guess of the best possible scenario? Right. And then comparing that across the different activities that you could do. So yeah, I think it's a kind of optimistic attitude, one that favors things that you've got very little information about.
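A standard concrete member of this family is the UCB1 rule: score each option by its average payoff so far plus an optimism bonus that shrinks the more you try it. The restaurants and payoff probabilities below are invented for illustration; only the scoring formula is the standard one:

```python
# A sketch of the Upper Confidence Bound idea using the standard UCB1
# formula: average payoff so far plus sqrt(2 * ln t / n_tries), which
# is large for rarely tried options. The payoffs are made up.
import math
import random

def ucb1(payoff_fns, rounds):
    counts = [0] * len(payoff_fns)    # times each option was tried
    totals = [0.0] * len(payoff_fns)  # summed payoff per option
    for t in range(1, rounds + 1):
        if t <= len(payoff_fns):
            arm = t - 1               # try everything once to start
        else:
            arm = max(
                range(len(payoff_fns)),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2 * math.log(t) / counts[a]),
            )
        counts[arm] += 1
        totals[arm] += payoff_fns[arm]()
    return counts

# Three restaurants with different chances of a great night out.
restaurants = [lambda: random.random() < 0.3,
               lambda: random.random() < 0.5,
               lambda: random.random() < 0.7]
print(ucb1(restaurants, 500))  # visits concentrate on the best one over time
```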

[00:20:51]

What do you think about the strategy, I don't know if this really counts as an algorithm, but just the strategy of sort of projects that put you upwind? It's a concept, I forget where I read it, that's about doing things that sort of preserve your option value in the future, like taking jobs that will introduce you to many other jobs, or taking jobs that, having had that first job, will be respected by the widest possible variety of other potential future employers across lots of different fields.

[00:21:26]

That kind of thing. Yeah.

[00:21:27]

I mean, I think that sounds like a sensible strategy. It's not something we talk about in terms of the algorithms we consider. But I think what's good about it is that it makes it clear that you're not necessarily making a choice for life or something like that, but rather making a choice which is a first choice. And I think that engages with a different kind of consideration here, which I think is really relevant to making these sorts of decisions.

[00:21:52]

And it's something we talk about in the context of overfitting in the book. So when we want to make a significant decision, I think there's a tendency to try and collect all the information that we can, and try and really optimize that decision with respect to all of the criteria that we can identify. And one kind of counterintuitive thing, something that machine learning researchers and statisticians have discovered, is that that can actually be counterproductive.

[00:22:20]

So the analogy is: if you're trying to make predictions, say you're trying to predict the stock market into the future or something like that, then the more complex you make your model, the less accurate it turns out being. There's some level of complexity that you need in order to capture the signal that's in the data. But once you exceed that level of complexity, you end up making your model worse.

[00:22:45]

And it's called overfitting. Basically, you make your model so complex that it's not only able to fit the signal in the data, it's also able to fit the noise. And so it makes worse predictions, because it's giving weight to lots of factors which, in fact, are useful only for predicting the variation that's in the data due to the noise. And so when you get new data: you've got the stuff you could measure from the past,

[00:23:07]

but the stuff that really matters is what's going to happen in the future. And the gap between those things means that a model which is optimized for the things that you can measure can end up doing a worse job of predicting the things that matter. Right.
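A standard way to see this, on synthetic data that is purely illustrative, is to fit polynomials of increasing degree to noisy samples of a simple curve and check the error on held-out points:

```python
# A classic overfitting demonstration: training error keeps falling as
# the model gets more complex, but error on held-out data bottoms out
# and then climbs as the model starts fitting the noise. Synthetic data,
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)  # signal + noise
train, test = np.arange(0, 40, 2), np.arange(1, 40, 2)  # alternating points

for degree in (1, 3, 9, 13):
    coeffs = np.polyfit(x[train], y[train], degree)
    train_err = np.mean((np.polyval(coeffs, x[train]) - y[train]) ** 2)
    test_err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(degree, round(train_err, 3), round(test_err, 3))
# Past the degree needed for the signal, test error rises: overfitting.
```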

[00:23:21]

Would an example of that be: my relationship failed, and my update from that experience is that I should avoid getting into relationships with people who have brown hair of this exact length and whom I met on a Thursday?

[00:23:35]

That would certainly be a kind of overfitting. But I think we're prone to doing that more generally as well. So one way that this really manifests, and I feel like it's manifested in my life, is failing to recognize that your utility function now is not going to be the same as your utility function in five years or in 10 years. Right.

[00:23:55]

So if you really optimize a decision for where you are right now, then that might be a decision which isn't actually the best thing for you down the line, because there's an intrinsic variability between the two. If you're finding the thing that's the perfect match for the current moment, then you're potentially losing some fit with how good that might be for you in the future. If all of your current idiosyncrasies are things that are contributing to that current decision, and you're putting so much effort into optimizing it,

[00:24:29]

then as those idiosyncrasies fade, and maybe you develop other ones, it'll end up being a worse fit than if you'd taken something that was perhaps a little more generic and less perfectly optimized to your current situation.

[00:24:42]

Tom and I have talked about this from the perspective of buying a home, where you're in some sense trying to optimize for the happiness of your five-years-from-now future self, who is somewhat unknowable.

[00:24:55]

I encountered the same thing recently when I bought a tuxedo. It's funny to buy something where you feel like you'll wear it one to two times a year for the next seven years. How do you optimize for something that is going to look good when you take it to a wedding in five years?

[00:25:14]

And, you know, just to use this sort of banal example of men's fashion: men's pants are much tighter than they were 10 years ago. When I look at the jeans that I wore in the mid-2000s, there's like twice as much fabric or something like that.

[00:25:33]

If I'm buying jeans, where the use case of jeans is that you wear them almost every day for like 18 months and then they develop holes and you throw them out or something, then I should buy tight jeans, because that is the style of the mid-2010s.

[00:25:50]

But if I want to buy a tuxedo, then I should deliberately get something that is looser in the leg, because I'm just assuming that men's fashion is on this random walk. And so I actually don't want to nail the current trend right on the button, because I know that it's going to deviate from that later.

[00:26:08]

Right, right, right. I have a selfish request, which is: I'm hoping that you guys can help me think about my current life problem, which is that I have a bunch of projects that I feel like I should be doing, like home improvement stuff, or getting my finances in order, you know, starting a budget, going to the gym regularly, all these things that I've been telling myself I should do for years.

[00:26:38]

But obviously, I don't have the attention and energy to tackle all of them at once. And so in practice, I feel like I'm almost randomly picking a project to start at one time versus another; it's maybe whatever thing feels most tractable or enjoyable to me at a given time. But I feel like there must be a better strategy for deciding what to prioritize when. Can you give me any advice?

[00:27:01]

Well, we may be able to give you a disconcertingly wide array of pieces of advice.

[00:27:08]

We tackle these questions of time management in our chapter on scheduling theory. And one of the upshots in scheduling is that there is an optimal strategy for every conceivable metric of how you want to measure what good time management means to you. And so, you know, if you want to make sure that no task goes too far beyond its deadline, then there's a strategy called earliest due date that you should follow.

[00:27:41]

And if you want to minimize the length of your to do list at any given time, then there's another strategy that's called shortest processing time.

[00:27:51]

So that would be like: tackle first the projects that are just the shortest, that I can finish the fastest, and then a week from now I'll have finished three projects and I'll feel so good. Yeah, exactly, exactly.

[00:28:01]

And, you know, the assumption that that particular algorithm makes is that all tasks are of equal importance, and that the amount of time a specific task lingers on your list is not as important as just reducing the number of things on the list.

[00:28:18]

And so the optimal strategy in this case is just do the easiest things first, which sounds great.
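The two rules Brian names are each a one-line sort; which one is "optimal" depends entirely on the metric. The task list below is hypothetical, just to show the two orderings diverge:

```python
# The two scheduling rules Brian names, as sorts over a hypothetical
# task list. Earliest due date minimizes the maximum lateness of any
# task; shortest processing time minimizes the sum of completion
# times, keeping the to-do list as short as possible on average.
tasks = [
    {"name": "taxes", "hours": 8, "due": 10},
    {"name": "email", "hours": 1, "due": 30},
    {"name": "gym",   "hours": 2, "due": 7},
]

edd = sorted(tasks, key=lambda t: t["due"])    # bound the worst lateness
spt = sorted(tasks, key=lambda t: t["hours"])  # shrink the list fastest

print([t["name"] for t in edd])   # ['gym', 'taxes', 'email']
print([t["name"] for t in spt])   # ['email', 'gym', 'taxes']
```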

[00:28:24]

But those may not be the correct assumptions to make in your case.

[00:28:29]

And so, you know, I think one of the critical things that comes out in our scheduling chapter is that before you can get the answer as to what you should be doing, you must first articulate exactly what problem you want to be solving. That's not a step in most time management,

[00:28:49]

you know, self-help books. Yeah, I think they often have an assumption about what the right optimization function is, and they just give advice premised on that assumption, without having made that assumption explicit.

[00:29:02]

Yeah, I think that's pretty much right. I think there are nice insights that come out of this in general, though. Like, I think there are versions of these metrics that maybe approximate human metrics. Like Brian was saying, shortest processing time assumes that every job is weighted equally, but obviously that's not the case: some things are more important than others. And so then you get into what's called weighted shortest processing time.

[00:29:27]

And basically the way that works is you take the ratio of how important something is to how long it's going to take you, and then you prioritize jobs by that ratio. And I think that gives you a reasonably good heuristic, which is something like: you should only take twice as long to do something if it's twice as important. Right.

[00:29:47]

And out of that comes a pretty general time management algorithm, which is weighted shortest expected processing time with preemption. Right. This whole literature gets very, very hairy, and there are all of these modifications you can make.

[00:30:04]

But if you assume that you're in a situation where you can put down a job, you don't have to just sit on it all the time (that's the preemption part), and you have weights, and you care about minimizing the weighted processing time overall, then the optimal strategy, in a situation where you don't know exactly how long things are going to take and you don't know exactly what jobs are going to show up, is to work on the job that has the highest ratio of importance to time remaining to complete it.

[00:30:34]

And then if a new job comes along, you immediately evaluate: what's the ratio of its importance to how long it's going to take to complete, compared to my current job? The time that I've already invested in the current job is taken into account there, because it's the remaining time to complete it that counts. And then you make a decision about what you're going to do on that basis. So, you know, I think that's a pretty good kind of heuristic.
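A sketch of that heuristic, with hypothetical job names and numbers: at any moment, work on whichever job has the highest importance-to-time-remaining ratio, and just re-run the rule whenever something new arrives:

```python
# Tom's ratio heuristic as a one-liner: always work on the job with
# the highest importance divided by time remaining, re-evaluating
# whenever a new job arrives (the preemption part). All jobs and
# numbers here are hypothetical.
def pick_job(jobs):
    return max(jobs, key=lambda j: j["importance"] / j["time_left"])

jobs = [
    {"name": "budget", "importance": 6, "time_left": 10},
    {"name": "gym",    "importance": 2, "time_left": 1},
]
print(pick_job(jobs)["name"])    # 'gym': 2/1 beats 6/10

# A new, more urgent job shows up; preemption just means re-running the rule.
jobs.append({"name": "tax filing", "importance": 9, "time_left": 3})
print(pick_job(jobs)["name"])    # 'tax filing': 9/3 beats both
```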

[00:30:54]

It doesn't take into account the fact that human lives have sort of cycles, right? Like, it might be good to exercise a few times a week or something like that. And it doesn't take into account the fact that the importance of things might be time-varying, or that your ability to execute jobs might be time-varying as well. You can go into those extra complexities. But I think using that heuristic calculation, and adjusting it for where you are at that particular moment in your day, is actually not a bad plan.

[00:31:25]

There's something that I highlighted, and I think it was in this chapter, that really struck me. It was something you said about procrastination, which is a problem I've struggled with mightily over the years. You said basically that it might be the right strategy, but for the wrong problem. Can you elaborate on that? Yeah, I mean, I think this sort of cuts back to this idea that even a strategy like the straight version of shortest processing time is a perfectly viable strategy.

[00:31:53]

In fact, it's the optimal strategy for a specific flavor of the problem. And so it gives us a way of characterizing something like procrastination, which is to say: it is not a faulty strategy for the problem; it is the optimal strategy for the wrong problem. You know, there have been a bunch of really interesting studies in the psychology literature about kind of intuitive human task completion.

[00:32:19]

And one of my favorites involves a long hallway where there are two buckets of water, two large, heavy buckets of water. One is in the middle of the hallway, and one is at the end where the participant is standing. And the experimenter at the other side of the hallway says, could you bring me one of those pails of water, please? And what happens, in more cases than not, is the person immediately picks up the pail right next to where they're standing and lugs it all the way down the hallway, walking right past the other one that they could have picked up and carried only half the distance.

[00:33:01]

And, you know, the authors of this study coined the term precrastination to refer to this process by which people do twice as much work as they actually needed to. And I think it highlights the fact that, my read on this is, subconsciously they're applying the shortest processing time metric and saying: OK, there are two items on the to-do list now that I've been required to do. One is pick up a bucket, and the other is bring the bucket to the guy.

[00:33:37]

And so it's like, oh, I can accomplish half of my to do list right now by picking up the bucket. That's right.

[00:33:42]

Right. And so it's not that that's the wrong approach to the problem. It's the wrong characterization of the problem.

[00:33:51]

And how would you characterize the problem better? What do you think would be the right definition of what you're trying to do there?

[00:33:57]

Yeah, I mean, I think something like, that's right, minimizing the total amount of time expended in completing the tasks. Which is not something we usually get to do. Yeah, in our normal single-machine scheduling problems, we don't have that luxury, right? Because we don't get to choose whether to do tasks or not. We just have all of these tasks, and we want to complete them, and we have to figure out how to prioritize them.

[00:34:16]

Well, this itself, actually, is I think a really deep point, which is that the total amount of time required to complete all the tasks is called the makespan. And in a single-machine scheduling problem, where you're doing all the work yourself, you can't delegate it or outsource it or anything, and you must do all of the tasks, then there's this funny kind of result that says, you know, the makespan of doing all the work yourself is the same regardless of the order that you do it in.

[00:34:54]

Now, the water bucket problem violates that assumption because it just happens to be harder to walk when you're carrying a pail of water.

[00:35:01]

But for most situations, this in itself is a nice heuristic: if you have to do all the work yourself, and you must do all of it, then the total time it will take you to do all of the work is the same regardless of the order that you do it in. And so this is a case where actually spending any time prioritizing at all could be counterproductive. You might as well just work in random order.

[00:35:31]

Right. There's a study that keeps coming to my mind, not just in this conversation, but in general when I try to give people sort of formalized advice for decision making. I can't remember the authors who did the study, but the gist was: they had one group of people make some medium to large sized decisions just using their gut; it might have been purchasing a car or purchasing a house. And then the other group of people were instructed to use a formal, I don't know if it counts as an algorithm...

[00:36:03]

I guess in a sense it does. It was sort of a pro/con list. Maybe they were instructed to give weights to the different factors, like the speed of the car, the safety, the price, etc., and then make their decision using the result of that process, of that algorithm. When the experimenters followed up with the two groups of people, the group that had gone with their gut was actually happier with their choice, as compared to the group that used the algorithm.

[00:36:29]

I mean, I think the takeaway, the experimenters' story about this, was that there were some really important factors that the formal-process group was just neglecting, because those factors didn't seem to fit into their formal process, like how much do I enjoy this particular car, which might not have anything directly to do with its mileage or its price. It just might be the look and feel of the car. And I think the results of that and other experiments like it should give us pause.

[00:36:53]

And by us, I mean people who are giving sort of formal advice for how to make decisions. I don't think it's a fatal blow to the idea that we can give formal advice that can improve on default human decision-making processes. I just think the takeaway is more nuanced: you want to make sure you're paying attention to subjective, emotional factors that also matter to your overall satisfaction and preferences. But I'm wondering how you think about that risk. Like, is there a downside risk to using these algorithms, and how do you try to account for it?

[00:37:22]

Yeah, I mean, I don't think that's an argument against algorithms. I think it's an argument against bad algorithms. Right?

[00:37:28]

Yeah, actually, it kind of reminds me of people who point out bad science and then conclude from that: therefore, we can't trust science, or we shouldn't try doing science at all.

[00:37:38]

Right. So for me, that goes back to this point about overfitting. When I talked about overfitting, I said, you know, one of the critical things is you don't want to include too many variables in your model, because then you're going to be focusing too much on things that actually don't really matter in terms of the thing that you care about, which in this case is predicting your future satisfaction. And so I think that decision procedure is exactly one that encourages people to overfit, to overthink the problem, to include more factors than they should, and to put more weight on those factors just because they're on that list. Whereas the fact that they might have had difficulty coming up with those things, that it might have taken them a while to do so, and so on, is something that indicates maybe those factors aren't that important in terms of making that decision.

[00:38:17]

So when we talk about overfitting, we really talk about it in the context of being a way of justifying certain kinds of heuristic strategies. It says that there are going to be cases where the simplest thing is better, where having fewer reasons is better, where acting sooner, rather than thinking more, is better. And those are going to be the situations where that gap between what we can measure and what matters is larger.

[00:38:43]

So if what you can measure is exactly the thing that really matters, then you should put all the effort into it; you should come up with as many factors as you can. But at the other extreme, if, say, you were submitting a grant proposal to a committee, and what that committee was going to do is take all the grant proposals they got, throw them up in the air, and then fund the one that landed on top...

[00:39:08]

You should invest no effort in what you put into it, because the decision process is completely random, and what you can measure about your sense of the quality of the grant proposal is, in fact, completely dissociated from what matters in terms of getting funded. So most things are somewhere in between those, somewhere between having all the information that's relevant and it being a completely stochastic process. In most things there's some element of stochasticity and some element of predictability.

[00:39:38]

But as the predictability goes down, the amount of effort you should put into making that decision, and the amount of effort you should put into coming up with reasons and these other kinds of things, should be decreasing. And I think there are a lot of decisions where, yeah, we have poor proxies for the thing that really matters, like our future satisfaction. And so by overthinking it, we're making worse decisions.

[00:40:06]

Right.

[00:40:08]

There's one other important motivational point that I want to close on, which is that I often encounter people beating themselves up because they used some algorithm and it didn't turn out well for them. Maybe the algorithm is: I should experiment with being more open with people. And, you know, the reason the person chooses that policy, or that algorithm, is that they think that overall, in the long run, it's going to be better for them.

[00:40:37]

They're going to get better at being open; they're going to calibrate their expectations about, you know, how much openness is OK. And they try it. And the second or third time they try it, the person really reacts badly to their openness. And so they're like: I should never have done this, this was a huge mistake. And they feel terrible. And I always want to tell people: just look at the information. Do you think this is a good policy or not?

[00:41:02]

Did you get unlucky, or was the policy actually a bad policy in expectation, and not just after the fact, when it turned out poorly? And so I think, yeah, your book makes a really important point about paying attention to the quality of the algorithm and not the outcome. Yeah, I absolutely agree.

[00:41:23]

I mean, I think one of the other things is paying attention to how hard the problem is that you're trying to solve. I think one of the things that computer science does really well is give us a way of articulating how hard problems are. You know, there are huge classes of problems that are just considered intractable, and so we should just not expect to be able to reliably get the correct solution in an efficient and repeatable way.

[00:41:52]

And even some of the algorithms that we've discussed today: you know, the 37 percent rule in the optimal stopping problem. Well, if you read the fine print, the 37 percent rule only works 37 percent of the time. And so it just turns out that optimal stopping in the no-information case is a hard problem. And even when you are following the optimal algorithm, you still fail

[00:42:20]

63 percent of the time. So here's a case where not only does having the optimal strategy not guarantee that you'll succeed every time, it doesn't even guarantee that you'll succeed most of the time. It just happens that that's the best you can do. Similarly, in the explore-exploit case, the expected value of exploring is necessarily lower than the expected value of exploiting. And so when you try a restaurant and it's no good, that doesn't mean that you've done the wrong thing.

[00:42:53]

That is just part of the problem, because if it is good, then you get to multiply that over all the times that you return.

[00:43:00]

And so I think this is actually a really important theme of the book, which is that, as you say, we have a tendency, if we don't get the result that we want immediately, to call into question the process that we used to arrive at that result.

[00:43:19]

And I think, you know, having a grounding in the computer science, just understanding what the landscape of these problems looks like and what the solutions look like, gives us a way of resting easy, even in those cases where we don't get what we want: to know that we followed a strategy that was a sensible strategy.

[00:43:42]

Right.

[00:43:43]

You know, a motivational hack that some of my friends use is to remind themselves of the multiverse, and to keep in mind that, actually, the strategy did produce the best results if you look across all the copies of me in all the different worlds. Right? Like, maybe I'm in the world in which it didn't work out well, but overall, all the versions of me are better off because I used the strategy. It's definitely not just, you know, "it could have happened that way." As long as you can handle your quantum jealousy.

[00:44:13]

Right. Right. Motivational tricks for nerds and their hazards.

[00:44:20]

OK, cool. Well, I'm going to wrap up this section of the podcast, and we'll link to the excellent Algorithms to Live By on our podcast website. I encourage everyone to read it. But for now, let's move on to the Rationally Speaking picks.

[00:44:48]

Welcome back. Every episode on Rationally Speaking, we invite our guest, or guests in this case, to give a Rationally Speaking pick of the episode: a book or website or movie or something that influenced their thinking in some way. So, Brian, let's start with you. What's your pick for the episode?

[00:45:05]

My pick is a small and very strange book called Finite and Infinite Games by James Carse. It came out in the 1980s and has just been reissued. And it's a book that I was able to read in probably three hours or something like that, but I think about it literally every day. And the basic premise of the book is that you can draw this distinction: all human activity falls into one of two categories, either a finite game or an infinite game.

[00:45:44]

And a finite game is something that you participate in to bring it to a close in some desired way. So a boxing match is a great example, where each boxer throws every punch with the intent of bringing the boxing match to a close. In contrast, an infinite game is something that we participate in in order to prolong it or extend it. And so conversation, or musical improv, or comedic improv are all examples of things that we participate in in order to prevent them from coming to an end.

[00:46:26]

And so Carse, having created this distinction, goes on to apply it to all sorts of things, from law to sex to medicine to ethics. And it's just a really, really riveting exploration of human motivation. It's a dichotomy that has stayed with me as I go through life.

[00:46:50]

Interesting. OK, I'm not going to ask whether sex is a finite or infinite game for you. We'll just leave that as an exercise for our listeners. Tom, what's your pick for the episode?

[00:47:01]

So my pick is also a book. It's a book called Do the Right Thing, by Stuart Russell and Eric Wefald. It's a pretty technical book, but it really makes a very important point. And it's a point which is one of the implicit premises, I think, in our book.

[00:47:21]

So the argument in this book is that the way we think about rationality should take into account the fact that we as agents have limited computational resources. It takes that premise, works from there, and kind of comes up with a new way of looking at rationality, which they call bounded optimality, where basically the idea is that being a rational agent isn't a matter of making the right decisions; it's a matter of using the right programs.

[00:47:51]

And that's very much consistent with the kind of idea of our book, which is that rationality is really a matter of following good algorithms.

[00:47:59]

Yeah, man, rationality really needs a full-time PR agent or something. [Inaudible.] Cool. Well, guys, thank you so much for coming on the show. I really enjoyed your book and I will recommend it to others.

[00:48:15]

Thank you so much. Thanks for having us. Yeah, great. Thank you. Well, this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York.

[00:48:52]

Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.