[00:00:00]

This episode of Rationally Speaking is brought to you by Stripe. Stripe builds economic infrastructure for the Internet. Their tools help online businesses with everything from incorporation and getting started, to handling marketplace payments, to preventing fraud. Stripe's culture puts a special emphasis on rigorous thinking and intellectual curiosity. So if you enjoy podcasts like this one and you're interested in what Stripe does, I'd recommend you check them out. They're always hiring. Learn more at stripe.com.

[00:00:44]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here with today's guest, Michael Webb.

[00:00:54]

Michael is doing his PhD in economics at Stanford University, and he is the co-author of an exciting recent paper called Are Ideas Getting Harder to Find?

[00:01:04]

And that is the question we're going to be talking about today.

[00:01:07]

Michael, welcome to the show. Thanks for having me. So your paper is actually kind of a funny exception to a general rule, I think it's called the headline-in-the-form-of-a-question rule. And the rule goes: if some article, you know, in the popular media has a headline in the form of a question, the answer to the question is almost invariably no. Right.

[00:01:26]

Like, you know, "Are our children getting dumber?" or "Could coffee be the cure for cancer?" And the answer is always, no, actually, no. Yeah. But the answer to your article's question, you know, spoilers, is actually, it kind of looks like yes. Yes, yeah, the answer to your question is a very strong yes. So, yeah, this is work that I should say is co-authored with Chad Jones and Nick Bloom, who are at Stanford, and John Van Reenen, who is at MIT. And we've worked on this paper for a few years.

[00:01:56]

And I can tell you kind of why we've been working on it for so long. But the summary is that exponential growth in basically anything in the economy, whether you look at the aggregate level or at any particular case study where we can measure these things well, is getting harder and harder to achieve.

[00:02:17]

So you have to throw more and more scientists, more and more R&D budget, at any given real-world, industry-relevant scientific problem to get a particular amount of growth over time. So just to summarize it with an equation, if I'm allowed to put an equation on the podcast, it's very simple.

[00:02:39]

You can think of economic growth, of, say, two percent a year or whatever, as the product of two terms. The first is research productivity: how much do you get per scientist? And the other is how many scientists you have: the number of researchers. And what we show in this paper is that wherever you look, wherever you can measure these things, research productivity is falling dramatically and the number of scientists is rising a lot.
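[Editor's note: a minimal rendering of the decomposition Michael describes; the notation here is illustrative and was not read out on the episode.]

```latex
% Economic growth as the product of research productivity and researcher count:
\[
\underbrace{g}_{\substack{\text{economic growth}\\ (\text{e.g. } 2\%\text{/yr})}}
\;=\;
\underbrace{\frac{g}{S}}_{\substack{\text{research productivity}\\ (\text{growth per researcher})}}
\;\times\;
\underbrace{S}_{\substack{\text{number of}\\ \text{researchers}}}
\]
% Holding g constant while S grows exponentially forces g/S to fall
% exponentially, which is the paper's headline finding.
```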

[00:03:06]

So where we do see constant growth, it's simply because we have a rapidly increasing number of scientists who are offsetting the exponential decrease in those scientists' productivity.

[00:03:17]

So to people who aren't paying attention to, or aren't aware of, the rapidly rising investment, the number of researchers or research hours...

[00:03:27]

It looks like progress is going great, and they're just failing to account for the, you know, dramatically rising cost that we're paying per unit of progress. Right, exactly. What would an example be? What's an example of a metric that you've looked at?

[00:03:40]

Yeah, so I think a kind of classic one that we start with is Moore's Law. So Moore's Law is this famous law that the number of transistors on a chip doubles every 18 months. This is a very well known fact, and it's amazing how straight the line is if you plot it on a log graph over time. And what people perhaps don't know quite so well is that it just takes so many more researchers today to get that same level of growth than it did originally.

[00:04:11]

So compared to the 1970s, it takes about 20 times the number of researchers to get that same rate of progress. Yeah, it's interesting.

[00:04:20]

So I observed some reactions to your paper when it was going around my corner of the Twitterverse a while back. And I've also observed people reacting to similar ideas expressed without a rigorous paper behind them, you know: is progress slowing down, is research productivity slowing down? And there are these two very different reactions to it, where one reaction is like, no, of course it's not slowing down, look at all the progress we've made.

[00:04:47]

And the other reaction is like, well, duh, of course. There are diminishing marginal returns to anything; in any endeavor you're going to take the low-hanging fruit first, and then it's going to take more investment to make more progress. So basically, half of the, I don't know about exactly half, but one group is saying, well, that's so obvious, it doesn't even need showing.

[00:05:04]

And the other half, or the other group, is saying, no, it's wrong. And that is a pattern I see a lot with any argument that people make: one reaction is like, yes, duh, and the other reaction is like, no, of course not.

[00:05:17]

But maybe the reason there are those two different reactions is just that people who think it's obviously wrong aren't paying attention to the increased costs that we're paying.

[00:05:28]

They're only looking at the numerator and not the denominator of the fraction.

[00:05:31]

Right. So, possibly so. The bit of economics in which this paper sits is growth theory, actually, and it's trying to address a specific hypothesis that has been embedded in the literature on growth theory for a long time, which is precisely the assumption that scientific productivity is constant. And so even within economics, among people who study this full time for a living, there is substantial disagreement, and this paper is an attempt to say: look, guys, this really is happening in this way.

[00:06:07]

Sorry.

[00:06:07]

It's standard practice to assume that productivity is constant. Why would you assume that?

[00:06:13]

So it turns out that models where you make that assumption have a bunch of appealing properties.

[00:06:22]

And, you know, I've not been working on this since the 1990s, which some people who argue about these things certainly have been. But my understanding is that, at the aggregate level, basically no one disbelieves what we're saying. Everyone can see these basic facts: GDP growth has been pretty constant, if not declining, for many years now.

[00:06:52]

And at the same time, research investment has been increasing exponentially. Where people disagree is more about the level at which you look. So people say: look, OK, maybe this is true at the aggregate level, but if you look within, for example, any particular product line, then maybe researcher productivity there is constant. And so this paper is saying, OK, let's take that idea seriously.

[00:07:14]

So it looks at a bunch of product lines where we can measure these things carefully, and it doesn't turn out to be true, is what we find. But that has been very much an open, live question in economics in recent years.

[00:07:26]

OK, well, let's give one or two other examples of metrics, just to give people a sense of the range of different places where you've looked at this phenomenon of falling researcher productivity.

[00:07:36]

So another one that I like quite a lot is crop yields. You may not know that over the last 50 years in the US, so people often know about the Green Revolution, but even within advanced countries like the US, the yields you get, say the bushels you get per acre of corn or wheat or whatever it is, have been increasing a lot, by about two percent a year, in fact. Due to what? Due to scientific research on things like hybrid seeds, and also pesticides and that kind of thing.

[00:08:16]

Right. However, at the same time, we've had a hugely increasing amount of R&D going into precisely that thing. So that's crop yields. Another one we look at is medicine. Within medicine, we look at new molecular entities; these are the things that get registered with the FDA, and every year some small number of them will turn into blockbuster drugs. And you look at the flow of those against the research effort in pharmaceuticals.

[00:08:47]

And we also look at disease mortality. So we say let's try and quantify the amount of sort of increased life expectancy we are achieving for a given amount of research effort on a particular disease. So if we run a thousand clinical trials against heart disease or breast cancer, what are we getting for those clinical trials in terms of actual improvements to life expectancy?

[00:09:10]

Yeah, exactly. And the thing we're getting is increasing a little bit, but not a lot, and we're putting many times more researchers into the problem. Right, so we're getting less out of it per researcher than we used to. Yeah.

[00:09:24]

What's your preferred explanation for why we see these diminishing returns? Is it just the structure of the universe: that as you try to discover things, you find the easiest things first, and then it gets harder and harder to find new things, and that's just the structure of knowledge?

[00:09:44]

Or are there other candidate explanations? So I don't think I have a preferred explanation. This paper was very much just documenting the fact; it's not trying to make a case for any particular reason why it might be true. I think that's a super interesting research agenda. It's not as if we are proposing this question for the first time; people have worked on aspects of it.

[00:10:08]

But I think there's clearly this wide-open vista of interesting research yet to come, which will both explain something about the structure of knowledge, perhaps, and also tell us what we might do about this, which we might come onto later, I guess. So in terms of the kinds of explanations you can think of: one is this idea of the distance to the pit face. It's a mining metaphor: the distance you have to walk to get to where you're shovelling, the newest bit where you're getting at the gold or whatever, is a lot further than it used to be.

[00:10:41]

So the amount of knowledge that you have to have as a scientist to be able to get to the frontier and make those contributions is just so much larger today. And you can see this in the amount of time it takes to do a PhD, how old an inventor is when they first take out a patent, the size of research teams. Ben Jones, who's a fantastic economist, a professor at Kellogg, has some papers documenting exactly these things.

[00:11:06]

And that means that individuals either end up spending more time studying, which is what you see in the data, or they focus on narrower and narrower fields. So for a given amount of time, you can only learn about a much, much narrower field, which might mean that you just have less good insights, if it turns out that to really move the field, or the wider field, you have to be combining knowledge from quite different bits of science.

[00:11:31]

So that's one big explanation: the distance to the pit face.

[00:11:36]

Another is that we are experiencing these general-purpose-technology-driven waves of technological change. So you might think that for the last 40 or 50 years we've been benefiting from IT, from the computer; before that it was electricity. And within a given wave, when you first discover it, it's like an oil field: you first strike oil and it spurts out and it's super easy.

[00:12:07]

But then over time you take out most of the oil and it gets harder and harder; it's like trying to find the last sardine in the tin, it's really, really hard. But then we just have to find the next big oil field, which maybe is AI, maybe it's some of the new things coming out of synthetic biology, or whatever.

[00:12:24]

So that's another one: diminishing returns within a given technological paradigm. And some other quick ones.

[00:12:33]

One is kind of innovation exhaustion. This might be described as a fishing-out story: there are only so many ideas out there.

[00:12:42]

And we've probably taken the low-hanging fruit, and there's not so much left. That's consistent with what we find, but it's certainly not implied by it. And a couple of final ones. One is that it's possible that we're now spending a lot more of our R&D on averting bad outcomes.

[00:12:59]

So think about something like the BP Deepwater Horizon disaster, the oil spill, "British Petroleum," as Obama kept calling it at the time.

[00:13:14]

That has subsequently been associated with loads more spending on really, really expensive valves for the oil and gas industry, and things to prevent this sort of thing from happening again.

[00:13:27]

And that doesn't show up in GDP, right? It's averting a disaster, and these are low-probability events.

[00:13:37]

We have fewer low-probability events. So if you could run history a bunch of different times and take the average of those runs, then you would see that this was worthwhile spending and it was increasing the things we care about. But in the given run of history that we experience, we don't see the bad things that didn't happen; we just see all this money that went towards preventing them. And so that isn't going to show up in quite the same way.

[00:13:58]

And the final thing would be a body of papers looking at the difference between the kind of innovation that startups do, the big grand idea, and the kind of innovation that more established firms do, which is much more follow-on innovation to that one idea they had originally. And insofar as we have a lot more big firms today and fewer startups, which we do, you might expect to see less really foundational, exciting innovation going on and more incremental stuff.

[00:14:30]

That's valuable, right?

[00:14:31]

OK, that's great. I appreciate the taxonomy. I love me a good taxonomy.

[00:14:37]

OK, to the last point that you made. Unless I'm misunderstanding your paper, it seems like you guys were mostly looking at, let's use the mining metaphor: you're mining some mineral in a given mine, and you start having to go deeper and deeper into the earth to get the same amount of mineral. And so that's diminishing marginal returns. But you could go elsewhere and start mining in a different location, or you could start mining a different mineral.

[00:15:10]

And then that could compensate for the diminishing marginal returns per mine. And if I'm understanding your paper correctly, it looks like you're tracking diminishing marginal returns per mine: diminishing marginal returns in the productivity of researchers working on stuff relevant to transistors on a chip, or relevant to crop yields, or something. And so couldn't it be that there are just other mines being started that you aren't capturing in your analysis?

[00:15:35]

Like, I don't know, I'm sure you know more about computer science than I do. But if you looked at, you know...

[00:15:43]

...our ability to process natural language or something. I know there's been a big increase in researchers going into computer science recently, but there have also been huge improvements in our ability to process natural language in the last 10, 20 years. And so I would expect that productivity-per-researcher curve to look better than crop yields or something. I mean, I'm just guessing.

[00:16:07]

But how do you feel about the argument that you're not capturing new lines of research, that you're only measuring refinements to existing ideas?

[00:16:16]

So I'm certainly sympathetic to that argument.

[00:16:18]

On the one hand, the pattern that we find is writ large in GDP overall. If you look at the overall economy, we've had constant, if not declining, growth, and yet exponential increases in researchers. And in theory, GDP should be capturing all this amazing new stuff that's happening, so you'd need a story as to why that's also failing. And it may well be; I'm not an expert on how things end up in GDP.

[00:16:47]

Maybe GDP is subject to the same critique as what we're doing, in some sense.

[00:16:51]

OK, so just to make sure I'm understanding, there are two different ways to approach this question. One is looking at productivity in a specific, narrowly defined field or problem or technology. And that has the upside that it's very concrete and objective: we can track crop yields, and there's not a lot of wiggle room in interpreting that. But it has the downside that there are a bunch of other potential technologies you're not capturing, because you're just focusing on specific selected ones.

[00:17:21]

And then the other way you could approach this question is looking at GDP, productivity in the whole economy. And that has the upside that you're not missing a bunch of the things people are doing, but maybe the downside that it's less clear that overall productivity is capturing what we care about. Right? So it seems to me that might be the case, because measuring GDP is tricky, or it's tricky to know whether the quality of the things that go into GDP is being captured.

[00:17:56]

Exactly.

[00:17:57]

I mean, I should let you answer, you're the economist, but why is it that GDP might not reflect increases in productivity in the sense that we really care about?

[00:18:05]

Right. So just to go back to what you were saying: there's absolutely this huge trade-off, everywhere in empirical economics, really, between coverage and quality of measurement.

[00:18:17]

We can measure a small number of things really very well, as we do in this paper, or you can measure the thing you actually care about, the really big thing, quite badly. And so we do both; we talk about GDP in this paper too. So, I should know a lot more about how GDP is constructed, I don't claim to be an expert, but my understanding is it varies a lot by the kind of product.

[00:18:38]

If it's something like steel, that's really easy: you can count the amount of steel, or do something that tries to approximate that. But with something like software, it gets much, much harder.

[00:18:53]

We don't actually care about the number of transistors on a chip; we care about what the thing can do and what that thing is worth to us. Right. And so for the people who calculate this for the US government, and the OECD and others who publish these guides as to how all countries are supposed to do it, it's an absolute giant mess, is my understanding: a bunch of specific hacks for any given particular product, and ways of handling these things, in particular new products.

[00:19:21]

So I'm remembering now from things I've read, again, I'm not an expert, but what I remember is that when new products come into GDP, it's often with a big delay. Like, I think I'm remembering that Ford's cars did not show up in GDP for a long time after he first brought them to market.

[00:19:38]

And if I remember correctly, that would mean that this critique you're mentioning is still right on with respect to GDP.

[00:19:48]

Another thing I would say on that critique is that I'm pretty sympathetic to the idea that the very fact that you can measure something means it's already got to the point where there are enough people in the field that they've had the big ideas already, and you're already starting down that curve of diminishing returns. Maybe at the very beginning you have just three people in the field, and they have all the really important big ideas.

[00:20:15]

Look at computing, for example: we didn't start measuring Moore's Law, from what I remember, until the 60s or so. But most of the big ideas for computing happened in a bunch of papers in the 1930s. Or look at deep learning today; you mentioned these amazing things in natural language processing and other areas.

[00:20:32]

Most of the really foundational stuff was done long ago. The first patent for voice recognition was in the 1960s, and backpropagation was around from around that kind of time too, which ended up being the basic structure of the algorithm that was really important, exactly the means of learning for these algorithms.

[00:20:53]

And that now actually works, now that we have enough computing power. But the idea was sound when it was first had; it just wasn't useful for that many things back then. And that was when it was this tiny, tiny field.

[00:21:05]

And so if we now went and started measuring deep learning as one of our case studies, well, the number of people in the field now is absolutely enormous.

[00:21:15]

And there's a huge amount of duplication going on, and a bunch of people writing not very good papers, and it's almost impossible for anyone to figure out what everyone else is doing because there's so much of it; it's so hard to separate the wheat from the chaff. And people are really targeting these metrics: they care about things like percentage accuracy on some benchmark. And if you drew those graphs again, and I've not looked into the numbers in detail, I would put a big bet that they would totally reflect the same story as in our paper. Take machine translation: you have a very slow, steady process of improvement before deep learning comes along.

[00:21:51]

Then you turn that switch and suddenly it jumps up a huge amount, and then there are a few more big jumps.

[00:21:58]

But now there are thousands of people all trying to squeeze out a 0.5 percent, one percent change in these metrics. And so if you did the calculation that we do for everything else in this paper, you would find...

[00:22:09]

...I bet, probably an even bigger decline in research productivity than we have in our case studies.

[00:22:15]

OK, so that was a poorly chosen example on my part, which shows what's so hard about this whole exercise, right?

[00:22:21]

Well, I mean, if it's true that ideas take a long time to show up in productivity, that would just imply that things looked bleak, you know, 20 years ago or something. It's sort of like looking at the stars in the sky: the light left them a long time ago, so we're seeing them as they looked many millions of years ago, or actually I don't know what the scale is there.

[00:22:46]

But I feel like that doesn't actually undermine your analysis. It just means that we should be wondering about what was happening 20 or 30 years ago or something. Why did things slow down?

[00:22:58]

But I guess I'm also wondering: from the way people talk about the paper, and the interviews I've read with one or more of your co-authors, it seems like they're assuming or implying...

[00:23:12]

...that if we see productivity slowing down as the number of researchers goes up, it can only be that ideas are getting harder to find. But it seems to me that there could be a bunch of other things, other inputs to productivity besides the number of researchers. What about, you know, the amount of regulation, or competitiveness in an industry, or something? What if, I don't know, there's been a steady increase in the rate of people developing widget-producing technology...

[00:23:38]

...but the widget industry is just kind of unhealthy, and so it's harder for new companies to get started? Or the industry has been super regulated, and so its productivity looks bad even though the researchers are not at fault?

[00:23:53]

So, all those things you just said could be true, and I think a lot of them do have strong elements of truth. There's a bunch of evidence that new business formation has been declining for decades now.

[00:24:07]

And there are a bunch of really worrying trends in that direction.

[00:24:09]

But I think the trends we document are just so big, such a big increase in researchers. And what's an example?

[00:24:19]

Take Moore's Law, for example. What's the rough increase in, like, the number of researchers working on the problem? Right.

[00:24:29]

So the way we put it is that every 10 years, we are doubling the number of researchers working on Moore's Law.

[00:24:39]

And what's Moore's Law, again? Remind me. The transistors on a chip.

[00:24:44]

Yeah, every 18 months you double the number of transistors on a chip.

[00:24:49]

Every 18 months you double the number of transistors, and every 10 years you double the number of researchers. That sounds good, actually. Am I doing the math right?

[00:24:58]

So what we are treating as the idea output is the percentage change over time, which is a constant 35 percent per year exponential growth rate.

[00:25:10]

And that's not changing. But the number of researchers being used to create that constant 35 percent a year is going up. Got it, right.
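[Editor's note: a back-of-the-envelope sketch of the arithmetic being described, with illustrative numbers taken from the conversation rather than exact figures from the paper:]

```python
# If output growth is held constant while the researcher count doubles every
# decade, research productivity (growth per researcher) must halve every decade.
# Illustrative numbers from the conversation: ~35%/yr growth in transistor
# density, researchers doubling roughly every 10 years.

growth = 0.35        # constant annual growth rate of transistor density
researchers = 1.0    # researcher count, normalized to 1 in the base year

for year in range(0, 50, 10):
    productivity = growth / researchers   # growth achieved per researcher
    print(f"year {year:2d}: researchers x{researchers:4.0f}, "
          f"productivity {productivity:.4f}")
    researchers *= 2  # doubling each decade

# After 40 years the researcher count is 16x and productivity is 1/16th,
# roughly the order of magnitude of the ~20x figure quoted earlier.
```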

[00:25:17]

Right. Well, that actually relates to another thing that I wanted to talk to you about; as you know, we chatted about this a few weeks ago or something.

[00:25:24]

One of the responses to your paper could be, well, why look at the percentage change?

[00:25:31]

Why not look at the absolute change? Like, isn't it impressive that researchers are able to produce more and more transistors in just an absolute sense? Why grade them a different way?

[00:25:40]

Why grade them by the percent improvement in number of transistors on a chip?

[00:25:45]

So we just think that's the natural unit, if you like, for that thing. We do use different units for different case studies; for medicine, for example, we're looking at absolute changes in life expectancy. But for Moore's Law, we think it totally makes sense, and it would be very silly not to use proportional change, if you just think about the nature of the ideas. So say you have an idea for a clever new shape for transistors, so that you can fit more of them on any given chip. If you hand me a chip with a million transistors, then my new idea will mean that you've now got 1.2 million transistors.

[00:26:26]

If you have a chip with a billion transistors, that same idea will give you 1.2 billion. So the idea is giving you a proportional improvement, not an absolute improvement. Does that make sense?
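[Editor's note: a toy sketch of the proportional-ideas point; the 20 percent figure and the function name are illustrative, not from the paper:]

```python
# One "idea" is modeled as a fixed percentage gain, so its absolute payoff
# scales with the size of the existing chip.
def apply_idea(transistors: int, improvement: float = 0.20) -> int:
    """Apply one idea: the same proportional gain at any scale."""
    return round(transistors * (1 + improvement))

print(apply_idea(1_000_000))       # 1200000: +200,000 on a million-transistor chip
print(apply_idea(1_000_000_000))   # 1200000000: +200 million on a billion-transistor chip
```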

[00:26:37]

Yeah. Honestly, I confess, I don't know how to think about whether we should care about absolute versus proportional.

[00:26:47]

I'm so used to progress and growth being measured in proportional terms, and in other cases it seems like there are reasons for that. I don't know, I find it hard to think about, but that's reasonably compelling to me. The kind of research that allows us to increase the number of transistors on a chip feels more applied to me than the kind of research that physicists are doing, say, that might help us figure out what the structure of the universe is.

[00:27:18]

Have you looked into, or do you have any intuitions about, whether progress is also slowing down in more pure research, or just in applied?

[00:27:27]

So I have very little intuition about the rate of progress in pure, or sort of propositional, knowledge.

[00:27:37]

We've been looking at these very concrete things, very applied research, where there are actual economic applications and things are therefore easy to measure. I would have no idea how to go about measuring the amount of knowledge we have from theoretical physics, for example. If you talk to those people, some of them think there's been a big slowdown in that field, and what they believe is some evidence that I should take account of in terms of my own beliefs.

[00:28:05]

But it's just not at all clear to me what the metric would be, how you'd even begin to think about it.

[00:28:13]

I can think of some ways... well, you can think of many different ways of quantifying increases in knowledge, sort of propositional knowledge, in the natural sciences.

[00:28:21]

When you say propositional knowledge, that's in contrast to what? In contrast to prescriptive knowledge.

[00:28:26]

Hmm, meaning, for example? Like pure science versus applied? Right.

[00:28:30]

So this is, I'm taking this directly from Joel Mokyr, who's an economic historian at Northwestern, and he's wonderful. I'm not sure whether he invented these terms.

[00:28:43]

Propositional knowledge is the "what": how does the world work?

[00:28:49]

Like, what does nature do? What's the boiling point of this thing? Where is Neptune? Yeah. And prescriptive knowledge is the "how": how would you build a chip that goes into a computer, how do you make clothes, or whatever. So prescriptive is kind of what you can patent, right? And the propositional is what counts as a discovery, in some sense. You can't patent discoveries, but you can patent new ways of building something. That's a good way to draw the line.

[00:29:18]

And indeed, the actual definition, which we don't mention in the paper but which is implicit in everything, comes from the economist Paul Romer, who founded ideas-based growth models in a wonderful 1990 paper in the Journal of Political Economy, where he defined ideas as the instructions that we follow for combining raw materials. There's a great quote, which is: a hundred years ago, all we could do to get visual stimulation from iron oxide was to use it as a pigment.

[00:29:50]

Now we put it on plastic tape and use it to make videocassette recordings.

[00:29:54]

You can tell when that was written. Yeah. But, you know, it's a nice way of putting it: how do we combine what we already have to get better stuff, more stuff that we care about? I like it. And that's what we're studying in this paper; those are the ideas that we're referring to, and they are getting harder to find. When it comes to the things that physicists study, often...

[00:30:13]

...that's incredibly important for our being able to make new ideas as we mean them: new recipes, instruction manuals, stuff we care about.

[00:30:23]

And possibly one of the reasons that we've seen declining productivity, I didn't mention it before in what you kindly called a taxonomy...

[00:30:31]

It was just a laundry list of all types of explanations.

[00:30:36]

Right. One is just less spending on precisely that kind of propositional knowledge, the pure science, the foundations of science. And if the really big breakthroughs in applied work, in ideas as we mean them, come from discoveries in more fundamental science, then if we're spending less on the former, we might perhaps be getting less per scientist of prescriptive knowledge. Yeah.

[00:31:02]

I mean, maybe we should only care about propositional knowledge insofar as it eventually translates into something, some measurable improvement in productivity, or human welfare, or something like that.

[00:31:15]

And, you know, the reason we care about it is that we assume that eventually some fraction of the propositional knowledge we accumulate will be able to be turned into prescriptive knowledge.

[00:31:29]

And so we still might as well just measure the outcomes we care about, and assume that eventually the propositional knowledge, if it's worth anything, will show up in them somehow. It just means we'd have to measure these trends over longer time horizons if we don't want to look at propositional knowledge separately, I would have thought.

[00:31:47]

So that's a very Baconian view: that the value of this is simply insofar as it creates useful knowledge we can use for the economy. And I'm certainly sympathetic to that. But I wouldn't want to go on record and say physics has no value except insofar as it leads to better iPads or whatever.

[00:32:06]

Well, just in terms of what we should ask the government to pay for, what we want taxpayers' money to do: should the taxpayer fund us to study just stuff that we find interesting? Even then, maybe you might think that going to the moon has value just as a feat of scientific discovery.

[00:32:26]

Even if nothing came of it, it would still be a great thing to have done, and we might think that one of the roles of an advanced economy is precisely to improve human understanding and knowledge, even if it has nothing to do with how good our iPads are. I still think there's a huge, almost moral value, and certainly aesthetic value, as I think many scientists would say, to that kind of more pure foundational knowledge. But I'm also sympathetic to the idea that we mostly care about it, at least as economists,

[00:33:03]

we care about it insofar as it translates precisely into prescriptive knowledge.

[00:33:09]

I might have missed this in your list of potential explanations, but it seems intuitive to me that...

[00:33:17]

...there's just all this institutional inertia in universities and academic fields and society in general, and maybe the reason we're seeing declining researcher productivity is that there's no mechanism to kill off unproductive subfields. They're just going to keep submitting grant applications, and they're going to keep making arguments for why they're worth funding, and they're going to keep finding things to study that may not be that important, but it's a thing to study that justifies their existence.

[00:33:49]

And so, you know, if we had a way to kill off unproductive fields, then we would see better productivity. Mm hmm.

[00:33:56]

I mean, it's kind of a question about the allocation of resources in society, right? So I'm sure you could kill off a field if you thought you'd found an unproductive one. The question is, my question would be, how do you know that that field is in fact unproductive and is not on the cusp of something

[00:34:14]

that's really important? And then the other thing you'd care about, whether it's fields or departments, right,

[00:34:20]

is what the people in those departments would go and do instead. Because if you've spent 10 years getting a PhD in this super-narrow thing, and you are contributing to that thing, but it is so specific that if you weren't doing it, then you'd have to be, like, an unskilled manual laborer or something...

[00:34:37]

Right, that's putting it too strongly.

[00:34:39]

But if it's somewhat less good than what you're doing as a professor in some department, then it's not clear that we should do that. Right.

[00:34:48]

So maybe we don't care about researcher productivity literally to the exclusion of all else.

[00:34:55]

Right. We care about the actual outcomes. Like, if we're talking about interventions here, if the NSF somehow declared, "we are stopping all funding of such-and-such a field," then what would happen?

[00:35:09]

Yeah, that's a different question from just how valuable these fields are right now. Right.

[00:35:14]

Right.

[00:35:16]

What other questions are you most interested in in this general area? What do you think are the important next questions that we should be trying to answer?

[00:35:26]

Yes, I think the really important questions are what is driving both the overall trend that we see, and also the differences between fields, and then within a field, between labs in the field, and so on.

[00:35:43]

And I think we're now in this amazing time in human history where we really have incredible access to exactly the kind of data you would need to start answering these kinds of questions. And also, I think, a much increased willingness among everyone to submit themselves to RCTs, for example, and for funders to fund these kinds of things.

[00:36:04]

Now that the increase in people in various fields paying attention to rigor and replicability and so on has taken root.

[00:36:13]

Exactly. And so I think you see an increased willingness of scientists to impose that on themselves. Economists, for example, have recently been running experiments on how to incentivize peer reviewers to have a faster turnaround. So we randomize you to either, say, "we'll pay you money if you do it in this time," or we just thank you for doing such a good service for the profession, that kind of thing. I mean, you run the RCT and see what happens.

[00:36:39]

I know less about other fields, but I know economists are very serious about taking what they do seriously and applying it to themselves. The American Economic Association has committees on market design for the economics job market, right? They really take their own theories very seriously, which I think is great.

[00:36:57]

Like the academic equivalent of eating your own dog food. Yeah, exactly.

[00:37:00]

Yeah, it really is. And so I think that, combined with just the ease of measuring now...

[00:37:05]

So much more is being done now through fairly consistent interfaces on computers, and it's easy to measure things. And we've got, as you mentioned, these wonderful natural language processing advances, for example. I just feel like many, many of the things we care about are much, much easier to measure now. Maybe I'm being naive, or maybe they just look easy to measure.

[00:37:27]

Because actually, until you get up close to something... this paper has been the experience of getting up close and realizing there's so much more going on than you realized, and it takes you an extra year to measure, say, who was doing what already in 1970 or whatever, which we had to do for this paper. But in general, I'm optimistic about our ability to measure, and then to test theories about what's important, and then also, again, to troubleshoot.

[00:37:54]

So I know people working in Hollywood right now, people at HBO, doing RCTs. A business school at one university is doing RCTs on the medical school, to see how to encourage them to collaborate more, for example.

[00:38:07]

Yeah, maybe that's the way to do it: have each school or department do RCTs on a different school or department, so they're not tempted to rig it. I think that's promising for the area. And of the laundry list of things I mentioned, some of them are amenable to an RCT and some of them really aren't, so I think there's scope for a very wide collection of disciplines here. And quite possibly, I strongly suspect, there are many people who are not economists who are working on this.

[00:38:33]

Some of them I do know about, and I'm sure there are many I don't know about, who are doing really good work in this general area.

[00:38:39]

Which of the explanations in your laundry list, I guess, since we're calling it that, not a taxonomy anymore, which of those explanations would you be most excited to start with? Either because it seems the most likely to be true to you, or because it's the easiest to measure, the one under the lamplight.

[00:38:55]

I would distinguish between theories that are easy to test and interventions that are easy to test. So if you're testing the theory that the low-hanging fruit gets plucked first, well, some of those theories are almost impossible to test.

[00:39:10]

They have, like, no testable implications. It's just, by definition, the thing that you found first is the low-hanging fruit because you found it first.

[00:39:18]

Therefore, it must be the low-hanging fruit.

[00:39:20]

Right, right. Is there any other measure of low-hangingness other than the time it took us to find it? Exactly. And I've not spent months thinking about this; maybe if I thought more, or others thought more, we would find that it isn't totally tautological. But it strikes me at first glance that this is a hard hypothesis to test. But in terms of interventions, it's super easy to go and say: can we find ways of getting scientists to collaborate more, through some clever way of getting them to talk to each other? That has been done already.

[00:39:49]

Or we could go and actually measure the practices of the most productive labs, and how they're doing things differently from the less productive labs, and see whether we can spread the best practices from the good labs to the less good ones, that kind of thing. No one, as far as we know, is doing that; there's certainly no big effort to do it in any field that I know of.

[00:40:07]

And it seems like that would be an incredibly useful, productive thing to experiment with.

[00:40:13]

Well, before I let you go, Michael, I want to ask you to nominate the pick of the episode, which is a book or article, or it could be a blog or other website, that you don't agree with, or at least substantially disagree with, but that you still think is valuable and worth engaging with.

[00:40:34]

And maybe that's because you think it's just well argued, even though you disagree with the conclusions, or maybe you think, "this is a really interesting hypothesis, even though I think it's probably unlikely to be true." Does anything like that come to mind? Yeah.

[00:40:47]

So I think I would go for Foucault.

[00:40:53]

Solid choice. What's the piece?

[00:40:56]

There's an interview called Truth and Power that he did, I think, in the late 1970s, which is not one of his famous works, but it's a very nice summary of a lot of his thinking. And I read it as an undergrad in my political philosophy class; at the end there was an optional unit where you got to choose what you studied, and I chose Foucault. So, he argues that we live in regimes of truth.

[00:41:28]

At any particular point in time, any point in history, the criteria for what counts as true change: the criteria for how you distinguish between true things and not-true things, and who has the right to say those kinds of things, for example. So, you want an example from today?

[00:41:46]

RCTs are held up as the gold standard of evidence, right? There are these little hierarchies that medical institutions put out, and number one, the gold standard, is the RCT.

[00:41:58]

And that's a pretty new thing, actually. A hundred years ago, no one had even heard of that phrase, because no one was doing them. And before that, you know, you have Foucault looking at mental health, for example, and what the power structures were around that.

[00:42:16]

And you just see that the people who had the power to decide and promulgate what was true, and the means by which they were allowed to assert those claims and be taken seriously by society and by the government, have changed dramatically over time.

[00:42:36]

And so I don't know what he would say, but there's certainly a really interesting debate in economics,

[00:42:41]

and I'm sure in other fields too, about the validity of RCTs: what they can tell us, whether they're actually useful, the problems we've seen with them recently, whether effects sort of disappear over time, this kind of thing. I think it's quite possible that we say RCTs are amazing now, or some people do, and people will have very different opinions on RCTs in 50 years' time, 100 years' time. And in general, who scientists are and what they do might be very different.

[00:43:07]

Now, Foucault would go a lot further, I think. And I'm no expert at all on Foucault, but that paper was the most mind-blowing thing I'd read in my life up to the point I read it. And he makes a bunch of very strong claims, often in ways where it's not exactly clear what the assumptions are and what the steps of the argument are.

[00:43:33]

Yeah, I suspect I wouldn't like it; he sounds kind of slippery. I say that because Foucault is actually on my list of things where enough smart people I respect have said he's valuable that I'm grudgingly willing to keep trying to get the value. And also, this is sort of a brief tangent, technically.

[00:43:55]

But to indulge myself: I've been trying to think about how to find sources that might update my models of the world and change my mind. And one obvious, easy way to do that is to find people who have really good epistemic standards, who are really rigorous thinkers, who have nuance where I think they should have nuance, who change their minds and seem very truth-seeking, but who have different object-level views from me.

[00:44:24]

And so I want to know: why are you confident that, say, we shouldn't allow immigrants in or something, when you seem to have such good standards of argumentation and thinking? That's a really intriguing, high-value person for me to talk to. But the problem with that strategy is: what if my standards for what counts as good epistemics are wrong? How am I going to find people who show me that my criteria are wrong?

[00:44:48]

And one of the working answers I've come up with so far is to find people, thinkers, writers, et cetera, who... yeah, well, what is the rule that generates them?

[00:45:01]

Right. And I think the rule might be: find thinkers, et cetera, whose epistemic standards seem not great to me, but who are respected by people who I respect. Using that kind of transitive property might allow me to expand, or revise, the set of criteria that I'm using to pick people. Though maybe I hadn't thought about whether the respect carries through more than one link of...

[00:45:22]

Yeah, but the fidelity might break down after 20 links. It is interesting. I think what's in Foucault, and others who I'd put in the class of sort of consciousness-raising writers, who tend not to be on the syllabuses of many modern university courses in analytic philosophy, for example, is that they're like different glasses you put on. They're so foundational to the way you see the world that it's not as if you can look at them with your current glasses on and say: right, OK, show me the assumptions, which of these beliefs do I share?

[00:46:00]

And then, how do I build on those to get to where this person is? It's like, no, you've got to take off your glasses and forget everything you thought you knew, and put on these other glasses, and just take that leap of faith. Right.

[00:46:12]

And as I said, you've already taken a leap with your own worldview, about what standards of evidence count and what the reasons are for adopting them. So think about modern science: it has given us all these amazing things, it really seems to work in terms of generating useful stuff, at least generating technology, solving problems, and so on. But maybe that's not the only thing that matters. And I think it's just really important to have a bunch of different lenses on the world for understanding any particular thing.

[00:46:44]

I mean, I'll pop in a second recommendation here. The coolest book I read last year, which is kind of on the theme of different lenses, is a book called Images of Organization, by Gareth Morgan, an industrial sociologist.

[00:47:00]

It was written in, I want to say, the 80s or early 90s.

[00:47:06]

And it's a list of 10 or so different metaphors for understanding what an organization is.

[00:47:15]

One of them is the obvious one, the organization as machine; then the organization as brain, a sort of information-processing unit. But there's also the organization as an organism in an ecology, the organization as psychic prison; there's a bunch of these others.

[00:47:31]

And I think that book makes a really strong case that no single one of them paints the whole picture, and the skill is in figuring out when to look through each particular lens. I think that is true for everything in the world. The scientific lens is the best one we have for most things, but it's not the only one, and this is one of those areas where you might actually learn a lot by trying to understand other perspectives, even if that's uncomfortable and doesn't make sense from a given worldview. I will resist the temptation to start an entirely new episode on definitions of truth and lenses...

[00:48:06]

...for looking at the world. But Michael, thank you so much for coming on the show. We'll post links on the podcast website to your picks, and also to your paper and your website with your other research on it.

[00:48:18]

Thanks for having me. This concludes another episode of Rationally Speaking. Join us next time for more exploration on the borderlands between reason and nonsense.