[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:52]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and my guest today is David Manheim.

[00:01:02]

David is a decision theorist with a Ph.D. in public policy from the Pardee RAND Graduate School. And one of the topics that David has studied and written a lot about over the years, in blog posts and academic articles alike, is a principle called Goodhart's Law.

[00:01:22]

It's one of that small set of deceptively simple principles that, once you understand it, kind of explains so much of what's wrong with the world. So, Goodhart's Law — you might have heard it stated as: when a measure becomes a target, it ceases to be a good measure. So we're going to talk today about what that means, how Goodhart's Law shows up, and kind of the dynamics of how it works. So, David, welcome to Rationally Speaking.

[00:01:50]

Thanks. I'm excited to be here. I'm curious how you got interested in Goodhart's Law in the first place — and specifically whether it was more like seeing how consequential this law is to education and health care and policy and business and things like that in the real world, versus the kind of mathematician's "wow, what an intellectually interesting set of dynamics for me to puzzle over."

[00:02:17]

So it's interesting — it was kind of a weird path. I had talked about it a little bit at a LessWrong meetup in Los Angeles with a couple of people who are now at MIRI. Then I was writing a blog post about how corporations figure out how to organize themselves. And a bunch of people commented that corporations should be able to do this really easily: they'll just set targets for what it is that different groups or business units should do, and tell the business units to do that.

[00:02:51]

And the business units can kind of go off on their own and just, you know, have the marketing group optimize to get as many people as possible to click on the website, and have the salespeople optimize to sell as many products as they can. And it turns out that this really, really doesn't work. If you actually get people to work on really narrowly defined targets, as Goodhart's Law says, things start going wrong.

[00:03:21]

So that was the context in which you first started getting interested in Goodhart's Law — organizational theory? Yeah.

[00:03:28]

And that was much more closely related to my work in grad school on bureaucracy and how it is that organizations work than it was to what I later ended up thinking a lot more about, which was how that matters for AI.

[00:03:45]

Yeah. And I want to talk about both the organizational theory context and the AI context. But let's first just kind of get more of a handle on what Goodhart's Law is and how it works.

[00:03:57]

So maybe the prototypical example — like, when people write blog posts about Goodhart's Law, the illustrative example they start with is a story, probably apocryphal, from the former Soviet Union.

[00:04:11]

Yes. Where the government had this measure of factory performance that they would use to, you know, incentivize factory owners. And it was based on the number of nails produced. So, yeah — for factories that were supposed to produce nails, the managers were judged based on how many nails they produced. So as a result, of course, the factories produced millions of nails that were incredibly small and not actually useful for anything. So then the government was like, OK, never mind.

[00:04:41]

The thing we're going to judge you on is the weight of the nails produced. And so then, of course, the factories were like, great — and then they made, you know, just a small number of extremely large, heavy nails that were also useless for anything. And, you know, the point being that any kind of simple way that you define how people are being judged or graded or rewarded — evaluated in any way — any simple metric like that is kind of easily gamed, like the nail metric.
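To make the nail-factory dynamic concrete, here is a minimal sketch in Python. Everything in it — the steel budget, the candidate sizes, the "usefulness" function — is an invented assumption for illustration, not anything from the episode; it just shows how optimizing either proxy drives the true goal toward zero.

```python
# A toy model of the nail-factory dynamic: an optimizer pushed to
# maximize a proxy metric drifts away from the true goal.
# All names and numbers here are illustrative assumptions.

def usefulness(size_cm: float) -> float:
    """True goal: nails near 5 cm are useful; tiny or huge ones aren't."""
    return max(0.0, 1.0 - abs(size_cm - 5.0) / 5.0)

def best_size(metric, sizes):
    """Pick the nail size that maximizes a given metric."""
    return max(sizes, key=metric)

STEEL_BUDGET = 1000.0                     # cm of steel per factory (assumed)
sizes = [s / 10 for s in range(1, 201)]   # candidate sizes: 0.1 .. 20.0 cm

# Proxy 1: judged on number of nails -> make them as small as possible.
count_proxy = lambda s: STEEL_BUDGET / s
# Proxy 2: judged on total weight -> make a few giant nails, since the
# least effort per unit weight comes from the largest size.
weight_proxy = lambda s: s

for name, proxy in [("count", count_proxy), ("weight", weight_proxy)]:
    s = best_size(proxy, sizes)
    print(f"optimizing {name}: size={s} cm, usefulness={usefulness(s):.2f}")
# Both proxies drive usefulness to (near) zero, even though each one
# correlated with the goal before it became the optimization target.
```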

[00:05:11]

Would you consider that a central example of Goodhart's Law, or is there a different one that you think is a better illustration?

[00:05:19]

So there are a couple of different ways that Goodhart's Law manifests. The central dynamic is people, I guess, munchkining the rules — trying to figure out how to use the rules that are there, you know, as well as they can, to do what they want to do, and kind of ignore the point of — the spirit of — the law.

[00:05:50]

Right. So there are a couple of places where that happens. I actually think the clearest example of people trying to beat the rules and ignore what it is that's happening is a scandal that has happened a couple of different times, for exactly the same reason every time, which is teachers going in and changing students' answers on standardized tests. They're just directly changing the result so that they look better. And it's not, you know, a sophisticated game that they're playing where they're trying to figure out how to do better — they're just changing things so that they do better.

[00:06:35]

And you have to put a lot of things in place to make sure that you can trust the numbers that come out of a system where you're paying people or motivating them — or, in the Soviet case, threatening to throw them in the gulag if they don't manage to do what you want them to. It's really hard to get them not to play games then, right?

[00:06:55]

I mean, the case of teachers actively going in and altering students' answers is almost a less interesting example to me than other stuff that happens to try to boost standardized test scores. Like — I think it might have been in one of your blog posts — you wrote about teachers kind of half-consciously teaching to the test at the expense of teaching the underlying principles involved. Like telling students: OK, you know, on a multiple choice question, just plug each of the possible answers into the equation to see which one makes it come out.

[00:07:31]

Right. And that's how you know which is the right answer. Which is a way to get the right answer, but it doesn't help you understand the, you know, the algebra involved.

[00:07:39]

And it doesn't generalize to anything other than a multiple choice test where you're given four answers you can plug in really quick, right? If it had been a fill-in-the-blank test, you'd be stuck — you can't just plug in numbers until you find one that works. That might be a different strategy; you know, figuring out how to find closer approximations is a reasonable strategy. But just plugging in the numbers you're given isn't. So, yeah, I've talked about that a couple of times.

[00:08:08]

Teaching to the test is one of these things that I think illustrates a slightly different part of the dynamics, which is what Eliezer Yudkowsky talks about in a blog post he wrote a bunch of years ago, Lost Purposes — where he says: you have an organization that starts out and says, look, we need to do education. And education means people need to be able to do this thing — this thing being, in this case, mathematics. They need to be able to do algebra.

[00:08:39]

So how is it that you teach somebody to do algebra? You have to cover these lessons. So the teachers are told: here is the set of lessons that you need to cover. So teachers go through and dutifully cover the lessons that they are told to cover. Does that necessarily align with making sure that all of the students actually understand the subject? No, definitely not. You know, most students that struggle with math in high school, I would say, probably need somebody to spend a bunch of time with them working on how fractions work, rather than plugging through more algebra.

[00:09:22]

Yeah. And so what happens is the goal — which is making sure people know how to do things with math — has been lost to the narrow set of things that they've been told to do. And the narrow set of things that they've been told to do is prepare people for standardized tests. Teachers hate this, universally. If you talk to teachers, they say: we hate that we're spending, you know, half a dozen classes just doing SAT prep in our math class.

[00:09:56]

Like, we could be teaching them something. They're not doing this because they want to get away with something. Usually what they're doing is following the rules that they've been given, to achieve targets that have been set, because somebody lost track of the fact that what we're actually trying to do is graduate students who know how to do this.

[00:10:17]

So that raises a really interesting question of what exactly is going wrong there. It seems like one category of Goodhart's Law is when people genuinely just have different incentives. Like, in a telemarketing company, for example — maybe the management cares about the profits of the company, upper management cares whether the company is doing well over the long run, or at least the near term, but the workers themselves do not care.

[00:10:48]

And so the management might come up with a rule like: your bonus is based on the number of calls you complete in a night, or something. And so the workers now just do calls really quickly, but they don't care about the quality of the calls. So the sales actually go down. And the workers may know that that's what's happening, but they don't really care, because all they care about is getting their bonus. They don't really care about the company as a whole.

[00:11:13]

So that would be a case of just misaligned incentives or incentives that are at odds.

[00:11:18]

There's an important kind of subcategory of what happens there, which is: you usually have different business units inside of a company that are legitimately trying to work on different parts of the problem, but end up in a situation where the people who are doing lead generation hand off leads that aren't actually going to make much money, and the salespeople are trying to figure out how to, you know, maximize the total dollar value of sales instead of the profits. And senior management is sitting there looking at it going: well, how do we get all of these different groups to work together?

[00:11:56]

And the answer is: getting people to work together is hard. Kind of — running large organizations is hard. And a simple solution is to give them clearly defined goals. Right. Sometimes it's even the best solution, even though it falls prey to this failure mode. Right.

[00:12:16]

But would you agree that that's still a separate category from the category where the different people in the system just genuinely care about different things and don't have the same end goal? The thing you're talking about here is: we all have the same end goal, but it's really hard to coordinate together to achieve that goal.

[00:12:40]

So inside of organizations — and this is something that's more general than Goodhart's Law, but I think critical for understanding it a little bit better — most of the time, people's incentives have a lot to do with context and only a little bit to do with kind of management dictates. Usually people are doing the things that they do because this is what they're being told to do, this is what their manager wants them to do, this is what all of the people around them are doing.

[00:13:16]

So most of the time, it's easy to conceptualize things as a straightforward principal-agent problem, where the principal wants X and the agent wants Y, and you need to figure out some way to get them aligned with one another. Where the principal is the person who sets the goals or makes the rules, right?

[00:13:36]

And the agent is the one carrying out those orders.

[00:13:39]

Yeah. So in economics, this is a big topic of discussion, and there's tons of work on this that assumes that there are these nicely defined objectives that you're trying to maximize. And that gets to, I think, another part of the discussion, which is: the reason why this is hard is because we don't have a really good idea about how to define what our goals are. So in a company, you can say we want to maximize profit — and even that isn't really true.

[00:14:14]

We want to maximize profit subject to not having PR fiascos and not having our executives thrown in jail for violating laws.

[00:14:24]

And there are a lot of things that you're trying to do — and not in a way that will cause us to burn out in six months, or... Yeah, right. So these are all things that actually matter. How do you operationalize all of those? And even ignoring those constraints, how do you operationalize maximizing profit in a way that tells individuals in the company what they're actually supposed to do? You tell the guy sweeping the floor: by the way, sweep the floor in a way that maximizes profits.

[00:14:55]

That doesn't mean anything. Right. You do have to operationalize everything. And if you don't have a clear mental model — you know, an actual model of how everything relates to everything else, which is hard to build — it's hard to figure out what it is you actually want. So it's really hard to figure out how to set goals that accomplish it.

[00:15:16]

Can I give you a few examples of phenomena, and you can tell me if they count as Goodhart's Law or not? This is one of my favorite ways to try to understand the boundaries of the definition of something: just throw examples at someone who understands it and have them tell me yes or no — and why not, if not. So, one example that actually came up on Twitter last year was a news story that I shared about an aquarium that tried to train the dolphins in their aquarium to clean up the litter that people would toss into the pool.

[00:15:51]

And so what they would do is reward the dolphins by giving them a fish if the dolphins brought them a piece of litter, or like a dead seagull. And then one dolphin started tearing pieces of litter into smaller pieces, because they were being rewarded based on the number of pieces of litter, right? So then they would trade each torn piece of litter in and get a fish for each one. And then another dolphin started stockpiling fish — they would get fish as a reward, but they wouldn't eat it right away.

[00:16:20]

And they would stockpile the fish to then lure seagulls into the pool and kill them, and then trade the dead seagulls in for more fish. Which is just so ingenious. I'm a little scared of dolphins — like, if they had opposable thumbs, I would be genuinely terrified. But anyway, there was a debate in that thread about whether that is an example of Goodhart's Law, and I think you said it wasn't. So why not?

[00:16:48]

So there are a couple of pieces of this that are going on, and it kind of depends on exactly where you want to draw your lines — and some of that matters and some of it doesn't. But the key thing that happens when we're talking about people falling prey to Goodhart's Law, organizations falling prey to Goodhart's Law, is that somebody mistakes the metric for the goal. So in organizations, what that means is that at some point, the purpose was lost.

[00:17:28]

I don't know if the purpose was lost. I think it was just somebody — you know, somebody... a dolphin. Does a dolphin count as a somebody?

[00:17:36]

I think any creature that can do something that clever counts as a somebody. I think that's fair. So I would say it has a lot of aspects of the kind of principal-agent dynamic, where they're not doing the thing that you wanted them to do, because you're paying them in fish to hand you a number of things — so they're, you know, making small nails. But it's not due to the fact that there was some confusion at some point about what the goal was. There was never a point where somebody said: oh, well, the only way we have to measure this is this.

[00:18:21]

It's just that the trainers found this to be the easiest way to implement this.

[00:18:28]

But it wasn't lost — it was ignored. Right. And, as I said about companies, sometimes that's fine, you know? Yeah, I don't know if it's such a horrible thing for the clever dolphin to be ripping up the garbage and handing them multiple pieces to get more fish.

[00:18:48]

That might make it less efficient, but it basically still works. At the point where it's killing seagulls — that's, you know, definitely a more problematic failure mode.

[00:19:00]

Right. That's a good distinction. OK, what about another example to throw at you? What about the user engagement metrics that social media companies like Facebook use, which end up furthering the spread of sensationalist or even false news, because that's the kind of stuff that causes higher user engagement?

[00:19:22]

Is that a case of Goodhart's Law, or is that just a case of, you know, Facebook's goals not being aligned with society's goals?

[00:19:29]

So there are a couple of things going on there. I saw a comment recently by Robin Hanson saying he doesn't understand why companies don't use A/B tests or run experiments more often — because, you know, this is effective.

[00:19:46]

Why don't we do this? And my immediate thought was: part of the reason why is because we don't have great metrics to run them with. Facebook and other tech companies do — they are constantly running very sophisticated experiments internally to figure out what drives engagement most, where engagement is measured by a metric that they've chosen, which isn't even necessarily what they want. Facebook may be incentivized to maximize the number of users that engage with the platform every day because it's a metric that they report to investors.

[00:20:31]

So it looks good. But part of what's happening is that they're confused about what it is that their users want. So, you know, the studies seem to show that using social media makes people less happy. I don't think that's something Facebook wants. I don't think there's a conflict between what Facebook wants and what its users want there. I think there's a conflict between the easy-to-measure metrics that Facebook can look at — what it's actually optimizing for — and the real goals of both Facebook and the people using it: to provide a useful service that people are interested in, so that they'll look at it a bunch and click on ads and make Facebook money, or use Facebook a lot so that Facebook can harvest their data and sell it.

[00:21:24]

But whatever the business model is, I don't think it's actually being served by the fact that the incentives are misaligned. Fake news is a really important example right now, because it was absolutely inadvertent on the part of Facebook that their algorithms motivated people to share the kind of news that created filter bubbles, that led to people spreading fake news, that let foreign governments promote things. It was never intentional on Facebook's part to create that dynamic — the dynamic that existed was exploited by others. So it's in some ways more difficult, because this gets into a very complex multi-agent scenario, where the metric that Facebook is using is being gamed by governments and corporations that can figure out how to use it to manipulate the users of Facebook — whose goals and incentives are a third set of things that we care about.

[00:22:49]

Right.

[00:22:50]

That's a very complex one. It's a very complex case — I think there are at least two or three different places where Goodhart phenomena are happening there. So it's a really good example, but it's a hard one to pull apart.

[00:23:08]

OK, all right. Let me give you one more. What about cases where governments pass regulations — like, companies have to, you know, provide health care for employees who work at least 40 hours. So they set a discrete threshold. And then as a result, companies just have all their employees work 39 and a half hours, or they do some kind of complex... no, I'm going to stick to that example. They respond to discrete thresholds by getting as close to the threshold as possible and then not going over it.

[00:23:43]

Does that count? Yes.

[00:23:43]

So I actually was involved in a conversation that dubbed this Shorrock's Law of Limits, based on something that a guy named Steven Shorrock said on Twitter, which is: if you put a limit on a measure, and the measure relates to efficiency, the limit gets used as a target. So what happened here was really specifically that they said: look, here's the limit — if you hit 40 hours, then you have to pay all this extra money. So instead of saying, oh, well, we'll have some people who we employ for 20 hours and some for 40, people started saying: great, let's get everybody at thirty-eight hours, or thirty-nine point eight hours, so that we don't have to pay this — because it saves us a ton of money to cut the 15 minutes off of our 40-hour-a-week worker.

[00:24:37]

So of course we're going to do this. And it's definitely closely related to Goodhart — there's definitely a metric that's being looked at. But I don't know. Kind of the central dynamic for Goodhart's Law is one where the metric stops being useful because of the way that it's being played with. I don't think the regulators were trying to measure something specific with "full-time workers get health care, non-full-time workers don't get health care." I think they were using that as a convenient line.

[00:25:19]

So, you know, I'm not sure how helpful it is to start talking about which thing specifically qualifies as what we're calling a Goodhart effect or not. But there are definitely some dynamics in there that relate to the metric that's being used.

[00:25:40]

So I think the reason this is helpful — or the reason I hope it's helpful for people, the reason it's helpful for me — is that looking at examples like this, and at whether you would call them Goodhart's Law, helps highlight that there are multiple important phenomena going on. Where, for example, one is like lost purposes and confusion over what a metric should be, and a different phenomenon is, you know, adversarial game theory, where people have different incentives and will, you know, respond by serving their own goals instead of the spirit of the law.

[00:26:16]

Yes, I like that. All right, so let me now try to summarize the different mechanisms that we've talked about. One mechanism is this kind of adversarial dynamic — which isn't really central to Goodhart's Law — like the nail factory in the Soviet Union. Two is the kind of vague, underspecified goals that make it difficult to figure out what metrics to set in an organization, for example. Three is just coordination difficulties, where maybe it's clear what goal you're trying to pursue as the head of the company...

[00:26:57]

But it's really hard to come up with metrics that, when you implement them, will cause all the different departments to be optimizing for the right thing in a way that, you know, coordinates effectively. And then, OK, a fourth thing that I don't think we've talked about, that I want to ask you about, is genuine psychological confusion — not over what my goal should be, but just coming to think that you should optimize for something that isn't what you wanted in the first place.

[00:27:23]

So, for example, I've had some conversations with people about scientific progress, and whether scientific progress is slowing down, and how we can tell. And something that keeps happening in these conversations is that other people point to a metric of scientific progress that seems completely wrong to me. They point to the number of papers published. So they'll say: scientific progress is actually speeding up — look at the number of papers that are being published, you know, over the years, or even per researcher. Like, researchers are publishing more papers per year, or over their careers, than they used to.

[00:28:02]

And to me, that's so completely backwards. Like, number of papers published is an input, not an output. What we actually care about is the number of discoveries, or the number of important discoveries. And if the number of papers published is going up but the number of important discoveries is not, then that's worrying.

[00:28:24]

And so, you know, I don't know what they would say to this accusation, but I feel like they've just gotten confused about what we care about with respect to science.

[00:28:32]

So, what I call the kind of confusion that happens there is that people reify goals. Reify is a term from psychology — what happens is they take something that they think they see, that they think looks one way, and they turn that into the thing itself. So you start with: oh, well, we're trying to do science. Well, what is science? Science is comprised of people publishing papers. So papers are science. More papers are therefore more science.

[00:29:12]

And I don't even think that that's wrong — more papers are more science. It's just that our goal isn't more science; our goal is advancing science. We want progress, we don't just want things to happen. So partially, this is an example where I don't think people are clear enough about what their goals are as a scientist. Even if you were to say — I think correctly — that science is about formalizing insights into the nature of reality so that you have better predictive models, there's still a difference between better predictive models of the way in which sodium and oxygen chemically interact, versus better models of how bubbles form in water, versus better insight into how, when kids blow into a straw, it makes different things happen in the cup.

[00:30:30]

And you can publish a paper on any of the three of those. And I'm betting that the third one would get more media attention than the first two. Right — and that's a metric; I don't think it's the most useful one. But what you end up with then is this situation where people optimize for the easiest insights to find, the ones that are the best for their career, the ones that are going to help their citations the most. And all of those things matter locally, because of the dynamics of the larger system — but they aren't science.

[00:31:11]

But back to your point: yes, people get fundamentally confused about what their goal is. I had an example of this that I mentioned in one of my essays on Ribbonfarm, which was: I noticed that I use Twitter a lot. It's a great thing to do when I'm trying to run an analysis, and it's going to take 12 or 15 minutes to run, and that's not enough time to do anything else.

[00:31:37]

So I tweet, and I reply to people, and my behavior is naturally drawn to doing the things that are incentivized by Twitter. So what I realized at one point was that if I put screenshots of the things that I linked to in my tweets, they get a lot more engagement. And Twitter tells you this — it shows you how much engagement you have. And I'm like, oh, that's great — I definitely want more engagement. And so I started doing that more.

[00:32:12]

Right. And at a certain point, I was looking at some of the metrics a little bit more, and I realized: yeah, it does drive more engagement. People click on the image, and they read the excerpt that I put, and then don't click on the link. So what I've done is cannibalize people actually reading the thing that I think is important for them to read, into them reading the four lines that I thought were most interesting or most attention-grabbing.

[00:32:43]

Well, that's not a tradeoff I wanted at all. But I hadn't thought this through enough, and I just kind of grabbed the thing that I thought was useful, without a ton of reflection. And that's exactly it — in my mind, look, I reified engagement. I want engagement — you know, if I'm on Twitter and talking to people, I'd like them to interact with me. But the type of interaction that I want isn't that. The same thing is true about clever, snarky comments on Twitter.

[00:33:15]

They get lots of retweets and lots of likes, and probably drive away the type of person who I'd actually like to interact with on Twitter. Yeah — because it's not substantive and not interesting. So if you're not really careful about what you're doing, then you absolutely end up with something other than your actual goals as what you spend your time doing. Those don't even sound like reification mistakes to me — those sound like you just weren't paying close enough attention, or thinking carefully enough about what you wanted to optimize for.

[00:33:53]

But I definitely have noticed reification mistakes in myself. Where, for example, if I'm on a diet, what I actually care about is, you know, losing the weight — but it starts to feel like what I care about is making the number on the scale go down.

[00:34:08]

And so, you know, those are obviously very closely linked, but they're not exactly linked. And so what I will sometimes find myself tempted to do is time when I weigh myself — like, not drink a lot of water first, because I don't want the number on the scale to be higher just because I've drunk a gallon of water.

[00:34:26]

Right — as far as the scale is concerned, the water made you gain weight.

[00:34:28]

So, you know, you then jump on the scale, and maybe it's down like a quarter pound, and you're like: yes, that's weight loss! Right? So I think that — I can see myself doing it. Yeah.

[00:34:40]

So actually, I think that in my case, with Twitter, I had reified it in exactly that way. It wasn't only me kind of not thinking about it. And I think part of it is that it's hard for somebody else to see the difference. So somebody sees you jumping on the scale, or, you know, not drinking very much water, and they think: oh, she hasn't thought very much about what her goal of weight loss actually means.

[00:35:05]

She just thinks the goal is that the number on the scale goes down. And that's not what happened. What happened is — right — you do know exactly what your goal is. Your brain just slipped a little bit, because it's hard to pay attention to everything that your brain is doing on a low level.

[00:35:20]

It's hard being a human. Yeah. Oh, well — I feel like that's exactly the right time to say...

[00:35:27]

Yeah — it's going to be even harder to be an AI. But with all of these issues — and I just want to throw this in, because I think it's key — with all of the issues that we have with Goodhart's Law, one of the key things that we can do to get around them is rely on judgment. You know, ask teachers to use their judgment about what it is they should teach a little bit more, and follow guidelines a little bit less. Ask people to just think a little bit about what it is they're trying to do.

[00:35:58]

You know, when you're giving them assignments at work, just tell them: oh, by the way, you should push back if you think this is wrong. Those are all things that, when you're automating systems, you can't do. You can't tell Facebook's A/B test: by the way, think really quick about whether this is actually what we want. And so we end up in a much worse situation when we don't have all of the fuzzy stuff in people's heads to fall back on.

[00:36:27]

Yeah — the point being that even advanced artificial intelligences... they might develop something we would call judgment, but it isn't going to be a close match for the judgments that we would make as humans, because there's a lot of kind of implicit stuff in our utility — our quote-unquote utility function as humans — that we can't easily transfer to an AI.

[00:36:51]

Yeah, there are a couple of different dynamics that apply here. And this is one of those places where I've had a bunch of discussions on LessWrong and other places with people who are focused much more specifically on AI. And I don't think there's a lot of clarity on exactly where to draw the line between AI not being aligned, AI gaming targets, and AI falling prey to Goodhart's Law. And it's interesting — the lines between these are certainly not clear to me.

[00:37:25]

I don't think they're clear to a lot of the people who are more actively working on this. If they seem clear to any of the listeners, it'd be great for them to, you know, write a blog post and tell us about it. Yeah, yeah.

[00:37:38]

No, that's a good point. They're not clear to me either, I'll say that.

[00:37:44]

Would you say that this difficulty — of empowering people to just use their judgment — is part of why startups often struggle when they scale up to become kind of larger, more established companies? Because coordination is so much harder, and you can't just, you know, tell people to use their judgment.

[00:38:04]

There are a couple of things that go on there. That's part of it. I mean, it's a complicated topic that I've thought a lot about. Some of the things that go wrong are simply that when things get bigger, it's not that it's harder to tell people to use their judgment — it's that it reflects worse on the people in charge when things don't go well, and the people not in charge have more difficulty doing the things.

[00:38:35]

So if you're in a startup — if there are three people in the room, you know, then the CEO tells somebody: oh, by the way, this is what we should do. And the junior person in the room, who's the second guy in the company, says: maybe I should take this riskier thing and do it. There's a great study that somebody did a long time ago in management, where they asked a bunch of senior managers — not the CEO, senior managers — if you had a choice between these two projects: one of them has a 50 percent chance of failing and a 50 percent chance of quadrupling the money that you invest, or a project that has a ninety-nine point nine percent chance of returning, you know, 12 percent on your money, and a point one percent chance of returning only one percent on the money.

[00:39:27]

Which one do you do? Yeah — and all the senior managers go: that second one sounds great, that's awesome. The senior managers do, because what happens if you invest 50 percent of your annual budget in the first project and it quadruples? You get a nice bonus, you get recognition, everybody's happy with you. And what if it fails? You probably get fired.

[00:39:50]

Right, right. So they asked the CEO: which one do you want people to do? And the CEO says: what do you mean? Of course I want them to do the first one. This is a crazy question — why would any of my subordinates not do the first thing? And then they said: and if you found out that one of your subordinates did that first thing and it failed, what would you do?

[00:40:12]

And the CEO was like, oh, they'd get fired.

[00:40:14]

Oh, man. Well, at least you can identify the fault in the system. Like, this is not a case where you look at a complex system and you're like: where are things going wrong? I can't find the broken parts.

[00:40:27]

The problem here — just to narrow it down a little bit — the problem here is that when you're in a large company, people can't be informal about things. The responsibility has to have been fully delegated, and the person who's making these decisions has to fully take responsibility for the outcome. And so you end up in this situation where you haven't actually aligned incentives with what it is you want people to do. And most of the time, the reason why is because actually spending the time to really align incentives would be much more work than it's worth.

[00:41:04]

But you definitely get misalignments because of that. So as companies get bigger, some of what happens is: some of the junior people, who are really used to being able to say, I'm going to take this really risky move because I know the CEO will have my back — they do it, and the CEO is like: I would have their back, but now we have investors, and they're screaming that somebody needs to be fired, so I don't know what I'm supposed to do here. And the guy can end up being fired.

[00:41:33]

Yeah. So you end up with just different dynamics, because things have changed. And some of those have a little bit to do with the metrics and how you align people, and some of them have to do with kind of other factors about how organizations work.

[00:41:51]

OK, so — when I was reading one of your posts about how metrics kind of stand in for people using their judgment and intuition, especially in large, complex systems where you're responsible for one piece of it, I had this idea for how you could get around that problem in a large company. Which I'm sure is wrong, because the chances that I would have come up with a plan for organizational theory that people aren't already doing are low.

[00:42:18]

But why don't you tell me why this plan wouldn't work. So, to be clear, the problem that I'm trying to solve is: the CEO, let's stipulate, has in their mind a complete understanding of what they would like everyone in the company to do. Like, if they could just... They could do all the jobs themselves, right? Exactly.

[00:42:41]

They have in their mind what we as a company are trying to optimize for. But it's hard to specify it in a clear, simple way, such that they can just give everyone a task and have everyone go off and do the thing, and now the company will just be optimizing. That's just too hard.

[00:42:57]

So they try to do something kind of like this: they give people metrics, and those managers give their employees metrics, and so on and so forth. But it's just so crude that, you know, you end up with lost purposes.

[00:43:10]

Exactly. So that's the problem I'm trying to solve.

[00:43:13]

Could you just have — so, let's say there's the CEO, and then below them the upper managers, then middle managers and lower managers, and then more employees. Could you not have any metrics at all, but instead just have the CEO, quote-unquote, check the work of the upper managers below them? By which I mean: the CEO looks at maybe a sample of, like, 10 percent of the decisions that each upper manager makes. And because it's only 10 percent...

[00:43:42]

...the CEO can spend the time to understand that decision. And say the decision is how the upper manager is evaluating the middle managers below them. The CEO looks at 10 percent of those decisions and says: you know, here's how I would have made that decision, given the perfect model in my head of what we're optimizing for. And I'm going to reward or punish you based on how close your decision was to what I would have done.

[00:44:10]

And then in turn, the upper managers do the same with the middle managers below them: they scrutinize, you know, 10 percent of the decisions that the middle manager makes about how to evaluate the lower managers, and so on and so forth. So it's basically like reinforcement learning — it's propagating the CEO's mental model of what we should be optimizing for.
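A rough sketch of the scheme Julia is proposing, with invented stand-ins: decisions are encoded as numbers, the CEO's "mental model" is a scoring function, and agreement is simple numeric closeness. All names, distributions, and parameters here are hypothetical, just to show the audit-a-sample mechanism.

```python
import random

def ceo_model(decision: float) -> float:
    """Stand-in for the CEO's judgment of the ideal decision value."""
    return 1.0  # assume the CEO's ideal call is always 1.0 here

def review(manager_decisions: list, sample_rate: float = 0.1) -> float:
    """Audit a random sample of decisions; return an average reward based
    on how close each sampled decision was to the CEO's call."""
    k = max(1, int(len(manager_decisions) * sample_rate))
    sample = random.sample(manager_decisions, k)
    rewards = [1.0 - abs(d - ceo_model(d)) for d in sample]
    return sum(rewards) / len(rewards)

random.seed(0)
aligned = [random.gauss(1.0, 0.1) for _ in range(100)]   # close to the CEO
drifting = [random.gauss(0.4, 0.1) for _ in range(100)]  # far from the CEO
print("aligned manager reward: ", round(review(aligned), 2))
print("drifting manager reward:", round(review(drifting), 2))
# Each layer would audit the layer below it the same way, which is why
# the scheme resembles reward-based training more than fixed metrics.
```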

[00:44:32]

What works for AI might work for an aligned company.

[00:44:37]

And yes — I think some of the intuition there is reasonable. I'll point out a couple of reasons that, in practice, this is a really, really problematic thing to do. The first one is: when we do this with reinforcement learners, there's some idea about what the goal is at the beginning. So suppose the CEO gives a bunch of speeches and says: look, this is what we want to do, and everybody, pay attention to the speech, and, you know, then figure out what it is that I want, and I'll look at some of your work and check and see if it's actually what I want you to do.

[00:45:17]

And I'll, you know, make judgments based on that, and some of you may be fired and some of you may get big bonuses, because you're going to be more or less aligned with what I think needs to happen. People have a hard time operationalizing that. It's the kind of thing where, if you were working in a company like that as a mid-level manager, you would constantly be terrified that you're doing something wrong — but you don't know what, because you haven't been given a clear goal.

[00:45:50]

You haven't been given a clear plan.

[00:45:52]

But can't you be reassured by the fact that you're not going to get fired? The rewards and punishments are, you know, continuous.

[00:46:01]

Right, but then people optimize less on those things, so you just have less pressure on them to do the right thing. So yes, there are some tradeoffs like that. The other part of this is: if you look at what companies did before the era of scientific management — which is going back to, like, a century ago, maybe a little bit more, before they had the idea of having metrics and before they actually measured things much — this is kind of what happened.

[00:46:33]

And in most companies, most of what happened ended up being judged on things that weren't actually about how good a job you did.

[00:46:43]

But the CEO, instead of saying, oh, this is exactly what I would have done, ends up saying — because he doesn't have any concrete yardstick — this guy seems likable, or he flatters me, or I don't have any, like, really clear reason to say that he did something wrong.

[00:47:03]

And this guy — when we were golfing the other day, he kept on slicing the ball into the lake, and it was really annoying.

[00:47:12]

So, yeah, I'm not consciously thinking about that, but I kind of don't like this guy.

[00:47:16]

So, like, anything that he did is kind of bad anyway. So if you don't have clear metrics, then you do end up with people's biases playing a huge role. Yeah — and it's not necessarily true that the biases that people have would overwhelm their actual judgment. But — and this is the last point about why I think this is certainly a really bad idea in practice — I would guess (I'm not a lawyer) that the lawsuits about somebody not getting a bonus, or getting fired, or anything like that, in a system like this, would be impossible for the lawyers to defend.

[00:48:01]

And everybody would end up furious, and you'd end up losing tons of money, because you spent, you know, 50 percent of the profit of the company defending against the four lawsuits from the people who actually probably should have been fired — because you don't have a defensible system for explaining why. So there are elements there that I think would be useful. But I also think that, yeah, this is not something I would tell a CEO to do.

[00:48:30]

OK, fine.

[00:48:31]

So maybe I shouldn't run society just yet, but if I think about it a little longer, maybe I'll be able to centrally plan my organization.

[00:48:41]

I don't think that it's essentially a bad idea. I think that some amount of doing exactly that is what good managers do. Good managers stop by — I've actually heard this specifically about Elon Musk, that he stops by people's desks. I had a friend who was at SpaceX, and he'll stop by when you're not paying attention, and he'll, like, lean over your shoulder and be like: so, why do we shape it like that?

[00:49:09]

And you look over your shoulder and you're like: oh... oh, hello, Elon. Yeah, yeah. And you'll, like, walk through your thinking. And about 90 percent of the time he'll be like, oh, that's good. And about 10 percent of the time he'll be like: wait, no, we should be able to do this, this, and this. And he'll want you to defend yours — it's not like he says, no, fix it.

[00:49:30]

You did it wrong. But he'll want you to explain why it is that you didn't do that. And occasionally — he's a really bright guy, so, like, he's not asking dumb questions — that actually helps. And I think that's a very extreme example. But good managers do stop by and say: hey, so I was looking at the work you handed in, and this seems a little bit off, or this seems really great.

[00:49:53]

You did a good job, like keep on going. That is what they're supposed to be doing to some extent. But that's on top of the metrics.

[00:50:01]

Not a replacement, right. Right. Yeah — so, we'll wrap up in a minute, but are there any effective ways to get around Goodhart's Law that we haven't talked about?

[00:50:14]

Yeah — so I actually have a paper about this recently. I'm trying to figure out where to submit it, because it's not an ordinary paper. But basically, there are a couple of really specific strategies. One is: make a bunch of metrics instead of just one. And, you know, hopefully they fail in different ways, so that if you look at all of them, you don't end up messing up as badly when you incentivize people — to build lots of little nails, you know?

[00:50:48]

And is that just a trade? You're making it more robust to the kind of problems that Goodhart's Law causes, but in exchange, you're building in more of a role for your own, you know, intuitions and biases, because you have to decide how to weigh those different metrics against each other.

[00:51:04]

Sure — but you could even specify how you weigh them beforehand; like, that's not a problem. It turns out it's really hard to game complex metrics, compared to gaming simple ones. Yeah. Which is not to say that people will not spend some effort doing it, but it's more complex. That's a benefit and a problem, because, right, you don't want your goals to be so complex that people can't figure out how to accomplish them.

[00:51:32]

But you do want them to be complex enough that like the easiest way to do them is to actually do the thing you're supposed to do, right? Yes.
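As a rough illustration of the metric-bundle idea, here is a sketch using the telemarketing example from earlier in the conversation. The metrics, weights, and numbers are all made up; the point is just that a fixed, pre-specified weighting can punish someone who maxes out a single component.

```python
# A sketch of the "bundle of metrics" idea, with invented metrics and
# weights. A worker who games one component gets penalized by the
# others, so the composite is harder to exploit than any single metric.

def composite_score(calls: int, sale_rate: float, complaints: int) -> float:
    """Weighted bundle; the weights are fixed up front, as David suggests."""
    return 0.2 * calls + 50.0 * sale_rate - 2.0 * complaints

# Worker who games the single 'calls' metric: many fast, low-quality calls.
gamer = composite_score(calls=200, sale_rate=0.01, complaints=30)
# Worker doing the actual job: fewer calls, real sales, few complaints.
honest = composite_score(calls=80, sale_rate=0.20, complaints=2)
print(f"gamer: {gamer:.1f}, honest: {honest:.1f}")
# gamer: 40.0 + 0.5 - 60.0 = -19.5 ; honest: 16.0 + 10.0 - 4.0 = 22.0
```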

[00:51:38]

That's a nice way to define the constraints. Right, yeah. The next piece that I think is really important for how to deal with this is: don't put too much optimization pressure on things. So finance does this horribly, where they will, you know, almost explicitly say: by the way, your bonus is going to be about 10 percent of the profits you pull in in a given year. Well, it's really clear what you're optimizing for.

[00:52:10]

And it's short-term profits over the course of a year — so, you know, go forth and take risks you shouldn't.

[00:52:19]

So this is — you know, if you push really hard to optimize on a goal... If you give people fifty-dollar gift certificates — you know, the Visa gift card things — when they do the thing that you want them to do, that may be too little optimization pressure, but you're probably not going to fall prey to Goodhart's Law to any really significant extent. If you put, you know, five times their annual salary riding on the metric, then, yeah, you're going to end up messing things up.

[00:52:50]

Right. The next thing is relying on people's judgment — is it a bad idea? Like, there are lots of places where just saying... there's a book, The Tyranny of Metrics, that basically spends two hundred pages saying we should rely on people's judgment more. And that's not fair, because in places where people use metrics, it improves things — like, the world has gotten a lot better now that we have people actually measuring the results of what they do.

[00:53:21]

There are some downsides if you push too hard in places where you're not 100 percent sure what it is you're doing. So there are good reasons to be careful. You know, don't abandon metrics — but sometimes abandon metrics, as in: this is not a good place to use metrics. There are places where that's going to be true. So I think those are the big ones that I would say people should be paying attention to.

[00:53:46]

And, you know, a lot of this is — if I had hard and fast rules for where you should and shouldn't use metrics, I'd be thrilled. But it's not quite that simple.

[00:53:57]

I would be shocked and impressed if we lived in a world where there were such hard and fast rules.

[00:54:04]

And probably, once those hard and fast rules became well known, they would be gamed, and so they would no longer be applicable.

[00:54:09]

Yeah, so great.

[00:54:12]

Well, David, before we wrap up, I mentioned that I was curious if there was a particular book or other resource that you could point to that was particularly influential on your thinking or your life.

[00:54:26]

So I'm going to skip the easy examples that I have — you know, not talk about Peter Singer or the Sequences or anything — and go with, as I think I mentioned earlier, Bureaucracy, by James Q. Wilson.

[00:54:41]

The subtitle says more about it: What Government Agencies Do and Why They Do It. And it's really very readable — I mean, it's a little bit academic, but it's really very readable. And it actually goes through, like: hey, this is why some government agencies do a fantastic job. The Social Security Administration is great — they have a very clearly defined job. They send out checks; the checks get there on time; everybody knows what they're supposed to do, and it gets done, and it's fantastic. And there are some government agencies where it's really hard, and there's horrible bureaucracy, and nobody knows how to fix it.

[00:55:15]

And there are good reasons why.

[00:55:18]

Do you recall any examples in that category?

[00:55:20]

Oh — I mean, the first thing I would say is, if you look at, for instance, the U.S. military: the primary reason that it's not efficient — and people complain about the fact that there are all sorts of things that are inefficient — is because it's about as efficient as you would expect for an organization that's ten times the size of the largest company in the world. It's huge, right? Yeah, there's no way to manage that. And then the next piece, especially about the military, is: what is their output? A peacetime military...

[00:55:51]

He talks about this — a peacetime military is in a really bad situation. Like, what are they supposed to be doing? Getting ready to do a good job at something in the future, in an undefined future scenario? Well, how do you measure that? How do you figure out what they're supposed to be doing? So it's a really hard situation to be in. Yeah. And there are ways to do it slightly better and slightly worse, but there are good reasons to say, like: yes...

[00:56:15]

...that's why it's hard to figure out what it is that this bureaucracy should be doing. So the book has a lot in there about kind of better understanding what happens, specifically in government. And I think it's really useful, because people like to dump on government for being inefficient — and I think they're right in a lot of places, but there are good reasons why it works the way it does. But it's also really valuable more broadly, and people use it a lot in business schools to talk about how businesses end up in some of the same places.

[00:56:47]

So I just highly recommend it.

[00:56:50]

Excellent. And would you say that, in your trajectory in particular, was the book mostly influential in getting you interested in analyzing organizations and systems through these lenses? Or was it like — you used to view government as just incompetent, and the book caused you to recognize some of the hard problems government is trying to solve that you didn't see? Or... I was in grad school learning a lot about this.

[00:57:16]

So it definitely wasn't as simple as that. But there were a lot of places where I updated really significantly about where the problems were, and what types of things you need to think about to understand them better. Yeah, there are some tools in there that really do help you — like, oh, this is why this is hard, or these are the types of things that people have tried that don't work, or that do work. Nice.

[00:57:44]

I really appreciate books where I come away with kind of a tool for analyzing things, or, like, general questions to ask myself in trying to understand other, completely unrelated things — or things seemingly unrelated to the topic of the book. That's a treat.

[00:58:02]

So if you want to understand bureaucracies, it's really highly recommended. It won't help you, you know, get the phone company to transfer your number faster or whatever, but it will help you understand why it is that it's so hard.

[00:58:16]

I wonder if there are any disgruntled Amazon reviewers who are like: I was hoping this would help me figure out how to deal with, you know, Comcast's bureaucracy.

[00:58:25]

"Why won't Amazon refund me? Right now!" All right.

[00:58:30]

Well, David, thank you so much for coming on the show. It's been an enlightening hour. I appreciate it. Thank you.

[00:58:35]

I encourage our listeners to follow David on Twitter for more scintillating insights like those you've just heard. His Twitter handle is davidmanheim — that's D-A-V-I-D M-A-N-H-E-I-M. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.