[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell, a nonprofit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does, how many lives it saves or how much it reduces poverty per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities, to maximize the amount of good that your donations can do.

[00:00:25]

It's free and available to everyone online. Check them out at givewell.org. I also want to let you all know about this year's Northeast Conference on Science and Skepticism, being held in New York City June 29th through July 2nd. I'll be there taping a live podcast, and there will be lots of other great guests, including the Skeptics' Guide to the Universe, my former co-host Massimo Pigliucci, the Amazing James Randi, and keynote speaker Mike Massimino, former NASA astronaut.

[00:00:51]

Get your tickets at necss.org.

[00:01:09]

Welcome to the podcast, where we explore the borderland between reason and nonsense. I'm your host, Julia Galef, and with me is today's guest, my very good friend, Spencer Greenberg. Spencer is a mathematician with a PhD in applied math from NYU, where his work mainly focused on machine learning. And now he is running a startup incubator called Spark Wave that focuses on projects with significant positive social impact.

[00:01:36]

What that means, primarily, is ways of doing social science research that are faster, more rigorous, and more directly socially useful than is the norm. The main project at Spark Wave, through which he runs this research and publishes modules using the research, is ClearerThinking. You can check out his work at clearerthinking.org, and that's what we're going to be talking about today: ways to improve how social science research is done and make it more useful to society, using the internet and machine learning.

[00:02:11]

Spencer, welcome to the show.

[00:02:12]

Hi, Julia. Thanks for having me.

[00:02:14]

It's a pleasure. Why don't you start with some examples of the kinds of projects you're working on at Spark Wave, and specifically at ClearerThinking.

[00:02:24]

Sure, very happy to do that. So what we do at Spark Wave is we build technology to try to do better, faster, more rigorous social science research that can help people. And we also actually carry out that research and then build tools from it. So to give a few examples, one of our projects at ClearerThinking is we're beginning to study habit formation. We're looking at how we can get people to actually stick to new behaviours, which is obviously incredibly important.

[00:02:51]

We also look at decision making. So we actually built a tool, which we're going to release soon on our website, clearerthinking.org, that will walk you through a major life decision, like should I marry my partner, should I quit my job, and tries to help you do a better job of making that decision, so that you can get the things that you value. We also do some more pure research, like we did a bunch of research on Trump versus Clinton supporters and what variables predict their support.

[00:03:19]

We're doing research on overconfidence. We also productize our research, to try to directly build products out of it that could be beneficial. So one of our big projects at Spark Wave is a tool to help people with depression. We've done a ton of research on how to automate depression treatment, and on whether you can actually provide software that can benefit a wide range of people.

[00:03:43]

Do you have any results that you can share yet from some of these studies?

[00:03:47]

Well, we had some really exciting initial results on our depression app, which is called UpLift. So far this is just a pilot study, but in our pilot study we were able to reduce depressive symptoms by about 50 percent on average over thirty-two days. So that was super exciting. And we actually did a follow-up six weeks later with the same people, and we found that their depression symptoms were still basically at the same level they were at the end of using our tool.

[00:04:14]

So that was really exciting. Roughly how many people were in the depression study? So the numbers I just gave you were for the first 80 people that completed our entire program. Got it.

[00:04:24]

So for those 80 people, the average reduction was... how are you measuring depression? Is that like a self-report on a scale?

[00:04:31]

We use a very common way of measuring depression called the PHQ-9. It looks at nine different symptoms; you get a score on each of the symptoms, and those are combined into an overall score. And so what we're trying to reduce is depressive symptoms. A typical population would have maybe a three to four on the PHQ-9. We recruited a bunch of people with mild to moderate depression, so they started at around a nine, and then they went down on average.

[00:04:56]

They started at about nine and a half and went down by something close to 50 percent on average. So we got them actually pretty close to what you'd find in the general population.
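A minimal sketch of the arithmetic described here, assuming the standard PHQ-9 convention of nine items each scored 0 to 3 and combined into a total; the numbers below are purely illustrative, not the study's data.

```python
# Sketch of PHQ-9 scoring and an average percent reduction, assuming the
# standard convention: nine items, each scored 0-3, summed into a total.
# The example data below is made up for illustration only.

def phq9_total(item_scores):
    """Total PHQ-9 score: the sum of nine items, each scored 0-3 (range 0-27)."""
    assert len(item_scores) == 9 and all(0 <= s <= 3 for s in item_scores)
    return sum(item_scores)

def mean_percent_reduction(before_totals, after_totals):
    """Average per-person percent reduction in total score."""
    reductions = [(b - a) / b * 100 for b, a in zip(before_totals, after_totals) if b > 0]
    return sum(reductions) / len(reductions)

# Hypothetical participants with mild-to-moderate baselines (around 9.5),
# roughly halved after the program.
before = [10, 9, 11, 8, 10]
after = [5, 4, 6, 4, 5]
print(round(mean_percent_reduction(before, after), 1))  # about 50 in this toy example
```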

[00:05:04]

That's a large effect size for social science. As I'm sure you know, most effects are not; even if they're statistically significant, they're not that practically significant.

[00:05:14]

Well, I have to say, I almost fell out of my chair when I saw the results. I mean, obviously, we believed it was going to work, or we wouldn't have invested so much time and energy into it. But, yeah, it was really exciting.

[00:05:25]

What kind of interventions were you doing to reduce depressive symptoms?

[00:05:29]

So what we're doing is automated cognitive behavioural therapy. That means it's not therapy like you'd do with a therapist, but it's taking the principles and ideas of cognitive therapy, which is actually the most evidence-based form of therapy for depression and anxiety. And what we did is extensively study the evidence on it, looking at what's known about which parts of it work, why it works, what's limited about it, and really trying to build a program that can walk you through that evidence-based intervention, but with pure software.

[00:06:05]

So it's constantly adapting to whatever you say. For example, it gives you homework assignments, and if you didn't do your homework, it tries to help you figure out why you didn't do it, to help you develop a strategy so you'll do it next time. Did you forget, and therefore need extra reminders? Or maybe you didn't understand the homework, so you need a better explanation, etc.
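A rough sketch of the kind of branching logic being described, purely as an illustration: the reason categories and responses are hypothetical, not UpLift's actual implementation.

```python
# Hypothetical sketch of adaptive homework follow-up, as described above.
# The reason categories and responses are invented for illustration only.

def homework_followup(did_homework, reason=None):
    if did_homework:
        return "Great. Let's review what you noticed while doing it."
    # Branch on the user's stated reason for skipping the homework.
    if reason == "forgot":
        return "Would a daily reminder help? Let's set one up."
    if reason == "unclear":
        return "Let's walk through the assignment again with an example."
    return "What got in the way? Understanding that helps us plan for next time."

print(homework_followup(False, "forgot"))
```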

[00:06:22]

And so the homework you're talking about here is part of the therapy, like assignments people are supposed to do that will help with depression.

[00:06:28]

Yeah, it's part of the CBT. So cognitive therapy is very, very different from most kinds of therapy, because, first of all, it's very goal directed; you have a specific purpose you're doing it for. Second of all, it's very evidence focused. And third, it's about turning you into the therapist, in a sense. In other words, it's about empowering you with skills that you can then apply to improve your life.

[00:06:54]

I mean, think about it: if you go to the therapist's office, you're with them, let's say, an hour or even less a week, versus how much time you spend with yourself dealing with your problems. If you're taught to use these skills, then you can apply them every day rather than just once a week, and it's so much more impactful.

[00:07:11]

So I don't know if this is a fair or coherent question, but do you have a theory of what counts as positive social impact? Like, is there any kind of framework you're using when you decide, this is a project, this is software, this is an intervention that, if I could get it to work and be scalable, would have really large positive social impact? Or is it just an 'I know it when I see it' type question?

[00:07:37]

Well, I think there's direct suffering reduction, which is something I care a lot about. I know how horrible it can be to have depression, and I'm extremely motivated to eradicate it. So that's the really clear-cut case. Some of our work is less clear cut. Another thing that personally I care about a great deal is helping people make really good decisions, and so that's why we built this tool for decision making and have done research on decision making.

[00:08:01]

Now, that's not as clear cut; it's not like you're literally making someone suffer less immediately. But in the long term, making better decisions will have a ripple effect in two ways. One is on your own life: you can make better decisions, you're happier, and you live more the life you want. But second, on society, where the world we live in has been growing increasingly complex and difficult to deal with.

[00:08:24]

And I think, as a society, we really need to make better decisions. We need to be better decision makers when we make these really hard decisions that affect thousands or millions or, in some cases, even billions of people.

[00:08:36]

Have you measured the impact of your decision-making tool yet, or is that still preliminary?

[00:08:41]

So we've actually been teaming up with academics to do some really interesting studies where we'll see whether people who use our decision-making tool have better decision-making performance, in the sense of: are they happier with the decision they made, do they feel it was better compared to a prior decision of theirs that was similar? Because, of course, it's tough to have someone come in making two decisions, where we randomize them to use our tool on one and not the other.

[00:09:08]

That's tough to do as an experiment, but we can compare it to a previous decision that was similar. So that's what we're gearing up to do. But we also do research internally in order to figure out the details of our programs. For example, one result I was really excited about: there's this problem where, when people are trying to make a decision, they often don't consider enough different options. They kind of narrowly frame their problem as, OK, I can either quit my job or stay in my job.

[00:09:37]

What should I do? Right. We knew that this was a problem, so what we did is run a little randomized controlled trial where we randomized people into two conditions. In the first condition, we said, hey, there's this big problem where people don't consider enough options. We're going to ask you to wait right now; we're actually not even going to let you continue. We're going to put up a timer and ask you to spend the next thirty seconds generating some more options.

[00:10:01]

With the other group, we were even meaner. We said, hey, there's this problem called narrow framing. We literally won't let you continue unless you generate at least one other option for things you could do. And don't worry about it being amazing; you just have to generate at least one. So can you guess what happened in this experiment?

[00:10:17]

So you're comparing the people who were told to wait.

[00:10:22]

Was there a control group? No, not in this case. Well, we previously ran a study where we didn't have either of these conditions, so that was sort of a control group. But in this case, we either said you can't proceed for thirty seconds, and we ask you to spend that time thinking of other options, or we literally force you to, and we won't let you continue unless you generate one. So can you guess?

[00:10:41]

I guess I would expect the latter group, the group that had to come up with an option, to be more likely to generate an option that they liked, relative to the initial options that they had.

[00:10:58]

Yeah. Why do you guess that?

[00:11:02]

It's too easy to just write gibberish if you know that's all you have to do, or to just not write anything if you know you get to continue in 30 seconds anyway, or maybe even to quit, I don't know, if you're forced to wait. Yeah.

[00:11:12]

So it turned out that virtually none of the people in the group that had to sit and wait actually generated more options. But when we forced them to, every single person did generate an option, and twenty-five percent of them ended up choosing that option at the end of the program instead of one of the originals. Like, one in four people made a different choice because we literally forced them to generate another option, which I think is just mind-boggling.
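A minimal sketch of the kind of micro-randomization being described: randomly assigning participants to a "wait 30 seconds" condition or a "must add an option" condition, then tallying how many end up choosing a newly generated option. The names and probabilities below are hypothetical, not the study's data.

```python
import random

# Minimal sketch of the micro-randomization described above. Each participant
# is randomly assigned to one of two conditions; afterwards we tally what
# fraction in each condition chose an option generated during the nudge.
# All names and probabilities here are hypothetical.

random.seed(0)

def run_participant():
    condition = random.choice(["wait_30s", "must_add_option"])
    if condition == "must_add_option":
        # Everyone forced to add an option generated one; roughly 1 in 4 later chose it.
        chose_generated = random.random() < 0.25
    else:
        # Virtually nobody in the wait group generated (or chose) a new option.
        chose_generated = False
    return condition, chose_generated

results = [run_participant() for _ in range(1000)]
for cond in ("wait_30s", "must_add_option"):
    chosen = [c for condition, c in results if condition == cond]
    print(cond, round(sum(chosen) / len(chosen), 2))
```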

[00:11:37]

Yeah. Do you have any shareable examples of alternate choices that people generated when you forced them to do so?

[00:11:45]

Funnily enough, I was reading through the data recently, and there was one person in there whose choice, if I remember correctly, was something like quit my job or stay at my job. And what they ended up realizing is that they could basically renegotiate their current position, instead of doing one or the other of those kind of extreme choices, which is totally reasonable.

[00:12:20]

And, you know, you'd think they would have done that initially. But if they'd been in that wait group, they probably would never have thought of it. Because we forced them to think of another option, it came to mind. So that's the way online research, I think, can be so powerful: you can do these kinds of micro-randomizations. It doesn't have to be some massive study that takes weeks or months.

[00:12:42]

And then you spend another two months writing it up as a paper. It could just be: let's study this one aspect of what we're doing and see if we can get a better answer than we had before.

[00:12:50]

OK, so you're running all of these studies online and turning them into actual tools. How far can this process actually go? It seems to me that a lot of social science research involves people coming to a physical location, and experimenters observing the way they behave. You know, the stereotypical social science experiment type thing: you have a bunch of people in a room, and someone is a, what's it called?

[00:13:26]

A confederate, basically a plant by the experimenters. And they're the one who claims, oh, those two lines look the same length to me. And that blatant lie, clearly contradicted by what you can see with your own eyes, causes other people in the room to say, yeah, I guess the lines are the same length, that kind of thing. Or, you know, researchers observing, oh, when people are given a hot cup of coffee to hold, does that change their moral decision making, that kind of thing.

[00:13:51]

So the stuff you've been talking about seems like examples of things where people are being given suggestions for things they can do, or being asked questions to reflect on. But how far does that go? How much of social science research could actually fit your framework?

[00:14:09]

Well, that's a great question, and there are different answers to it; I think there's interesting nuance there. First of all, you could ask, of all the current experiments being done, how many of them could you do online? And the answer is quite a few, but there are definitely ones, like the examples you gave, where you just couldn't run the experiment online.

[00:14:27]

But there's a deeper thing going on here, which is that the purpose of those studies is not to prove that holding a cup of hot water causes such-and-such effect; it's to test some other high-level concept. So there's a deeper question, which is: could you design an online experiment to test the thing you're trying to test, even if the current way of testing it isn't online? And I think there's a much broader class of things you could test with a cleverly designed online experiment than are currently being tested that way.

[00:14:55]

But the other thing I would say is, it's sort of the issue of: if you lose your keys in the dark, but there's a lamp in one corner of the room, look under the lamp first, not because your keys are more likely to be there, but because if they are, it's going to be a heck of a lot easier to find them. That's how I view online research as well. With online research, we can do it incredibly quickly and cheaply, and we can do this constant iteration where we design one study, run it, modify it, run another, and another, and another, kind of converging towards the truth because we can do it so fast, rather than having to put so many resources into one study that we can only do one every three months.

[00:15:30]

Right. So basically, it's a very powerful flashlight to peer into human psychology and how to help humans be happier and make better decisions, that kind of thing. So I think that's part of the reason I'm really excited about it. And the last thing to say is that I'm really excited about online research because it can directly turn into apps and online tools, which can be deployed effectively for free, being software.

[00:15:57]

So if you find something, you don't have to go teach it to a person face to face, or somehow try to convince people to apply it in their own lives; you can actually give them a tool that helps them do it.

[00:16:09]

Right. So it blurs the distinction between research and application. If we design a study intervention that causes an effect, we can literally deploy that as a tool. Right.

[00:16:19]

Are you mostly using Mechanical Turk to test your tools, or to test the interventions?

[00:16:26]

Well, we do a lot of Mechanical Turk; it's a wonderful source of participants for studies. We also, at ClearerThinking, have a mailing list with about twenty-three thousand people, and that's a really good resource as well, because we can test on our own people, who are interested in our products and want to try them out and give us early feedback, that kind of thing. So that's a really great resource.

[00:16:49]

And then we actually build our own technology to aid in research. So we have a tool called TaskRecruiter, if you want to check it out, at taskrecruiter.com. It's basically a research tool that lets you recruit people to do studies, and we use it extensively in our own work.

[00:17:09]

To what extent are you worried about the population of people on your mailing list, or the population of people on Mechanical Turk, not being representative of the population at large?

[00:17:19]

Or at least of the U.S. Because, I mean, the kind of people who are Mechanical Turkers are people for whom it makes financial sense to earn a small wage answering questions online. And they're mostly Americans, I think; actually, I'm not sure, but probably they're mostly in the US and India. Yeah, I know.

[00:17:41]

I should be clear that when you read social science studies in a typical journal, the population who participated in those studies is also not representative. They're probably less representative than, you know...

[00:17:57]

Eighteen-to-twenty-two-year-old kids, college students, are even less representative. Yeah, I totally agree.

[00:18:06]

But your mailing list is probably selecting for, I mean, it's probably over-representing, I don't know, higher socioeconomic brackets in the US, but it's also selecting for people who are interested in decision making. And that's probably going to have certain cognitive correlates as well. So anyway, I'm just wondering what you think about the whole representative population issue.

[00:18:29]

No, I think it's such a great question, because this is something that plagues all of social science research. Let's say you go and show that doing this particular thing, to these particular people, in this particular setting, in this particular way, at this particular strength, caused some effect. And then someone else tries it in a slightly different population, or with a slightly different way of actually doing the intervention, or whatever, and they don't find that effect.

[00:18:52]

So what do you really conclude from that? Do you conclude that the original paper was a false positive? Do you conclude that actually it only works on some subset of people, or that it has to be done at a certain strength? It's a huge problem. But we actually have a way of cutting through that problem that I find very appealing, which is that we like to test our tools on the people we actually want to deliver the tools to. And so if you can get an audience that's really similar to the people you're trying to reach, then you can sidestep the generalizability problem, right?

[00:19:23]

So that's what we do. Testing on our ClearerThinking audience is fantastic for us, because those are the number one prime consumers of our tools. Of course, we're always trying to grow and reach a broader audience, and there are lots of users of our tools who aren't on our mailing list. But it's pretty darn representative of the people we're reaching. So we are building tools to help those people. And Mechanical Turk, similarly.

[00:19:43]

Mechanical Turk skews a little bit younger than the general population, and more tech savvy than the general population; there are these known skews it has. But what's cool is that a bunch of those skews are actually similar to the audience we're trying to reach. So it's certainly way more representative than college students would be for us. It's not perfectly representative, so what I like to do is a combination of testing on Mechanical Turk and also testing on our beta tester email list, which is like our particularly excited, active users.

[00:20:15]

And so that's how we actually target the group we really want to reach.

[00:20:18]

I was going to say, it sounds like you are engaged in a different project from academics who are looking at similar questions about depression or decision making.

[00:20:26]

Oh, we are. We are definitely engaged in a different project.

[00:20:30]

Well, the kind of difference that I was going to bring up is that academics are, at least in theory, trying to derive these causal models of human psychology: this is how humans behave under these situations, and this is our explanation for what's going on, for why humans behave this way. And that might have implications, but the main goal there is explanation, whereas it sounds like your main goal is useful interventions, ways to reliably intervene on human psychology to cause better results.

[00:21:09]

Like, is that accurate? I've been going back and forth as I've been speaking this paragraph as to whether that's a good characterization or not. What do you think?

[00:21:16]

Yeah, I think that's right on. Basically, if you start with the goal of helping people make better decisions, helping people be happier, suffer less, and then you work backwards, you get something that doesn't look the same as what academics are doing. That's not the process that they're going through. Obviously, some of them are motivated by helping people, but the incentive structure in academia pushes in a certain direction, which is towards generating new insights and publishing papers in top journals.

[00:21:44]

And if you're not doing that, then you kind of get kicked out of academia so you don't have much choice.

[00:21:50]

I mean, here's why I was hesitating: I framed that as if they're doing the one thing and you're doing the other thing. But doesn't your thing also do the explanatory thing? In what way are you not producing explanatory models of human psychology? Well, it's not the prime thing that we focus on. It sometimes is useful, and we'll try to do it when we need it. But, for example, let's suppose we try something and we find it works.

[00:22:16]

We don't need to take the next step and actually figure out why it works. If knowing that it works is sufficient, and we can plug it into our tool and show that it has the effect we want, then it's gravy if we can also show why. But it's not our fundamental goal. Our goal is helping people, trying to apply these things in an effective way. And so sometimes we do try to understand phenomena, because we think it aids what we're trying to do.

[00:22:42]

And we will look at the academic research on topics and look at the theories they have. A lot of times, what we find when we do that is that we get a lot of good ideas from academia, but usually at some point we kind of hit a wall. We're like, OK, well, we've looked at these different theories, looked at these different ideas, but they still don't tell us how to build this thing. They give us some hints and suggestions.

[00:23:00]

But there might be, for example, multiple theories that haven't been coalesced, so you don't have a comprehensive theory; that happens a lot. Or there's not enough detail, enough specificity, for us to actually build something. It's still helpful, but we have to go a step further. And actually, I have this idea I like to think about of full stack social science. Have you heard the phrase 'full stack web developer'?

[00:23:24]

No, actually, sorry, I haven't; I don't know what it means. 'Full stack web developer' comes up a lot in the Bay Area; I hear the term getting tossed around by my coder friends, and I'm like, oh, I'm not going to bother interrupting the conversation to make them explain it to the non-coder.

[00:23:37]

Sorry to throw out random jargon; the metaphor doesn't work as well if you don't know what it is. But basically, a full stack web developer is someone who works on both the front end, like the interface of an app and the way it looks, and the back end, so the databases and the kind of underpinnings of it, and who goes between both of those, rather than being just front end or back end.

[00:23:59]

And so I like to think about this idea I call full stack social science, which is: OK, suppose you come up with great ideas, but you don't test them. Well, they're not going to be that useful in social science. Or suppose you come up with good ideas and test them, but you don't actually go and do anything with them. You just say, OK, those ideas are out there. Is someone actually going to pick them up and apply them in a useful way? Maybe, if you're lucky, someday. Or probably not.

[00:24:26]

Right. And so you kind of have to do the full stack of activities, in some cases, to really have the impact, in my opinion. In other words, you have to fill in gaps where nobody has just the right idea to solve the problem; you've got to do background research to understand the phenomenon better, and test parts that need to be tested. But then that's not enough. You have to actually go build something out of it, and then you have to go market that thing and get it into the hands of people so it can actually help them.

[00:24:53]

And so that's what I view as our mission: kind of the full stack of activities, so that we can change behavior. Hmm.

[00:25:01]

Well, I guess what I'm wondering now is, why wouldn't it just be better for academia to start with the question of, what are the major problems that need to be solved in the world, then do research to figure out how to impact those problems, and then from there figure out why that's working? Or do you think there's additional unique value being added by starting with, let's build models of human psychology, and then later see how to apply them?

[00:25:30]

It seems more indirect to me.

[00:25:31]

I think that would be a great thing for the world; I just think it's a different thing from what academia was ever doing or was ever designed to do. I mean, it's sort of a machine that tries to create understanding of the world. But as far as I know, it has never been directed specifically at how you solve problems in the world, which is a different thing. And, you know, sometimes understanding does eventually lead to some breakthrough.

[00:25:56]

People classically point out how number theory was originally believed to be completely useless, and some mathematicians actually liked that about number theory, but then, oh no, they found this application, cryptography, and now it's not a pure science anymore.

[00:26:10]

Right, now they're the cool ones at the math parties. Yeah. So it's just kind of a different set of goals.

[00:26:18]

Are you separating out your exploratory analysis from your hypothesis testing? I mean, a common critique of social science, as I'm sure you know, but for our listeners, is that if you just test a whole bunch of hypotheses at once, some of them are going to come out significant even though that's just noise. And so a common exhortation is to first do a bunch of exploratory work, where you identify which hypotheses are worth testing, and then do statistically rigorous testing, where you can say that if it comes out significant, then that is meaningful.

[00:26:58]

See, your questions make me so happy as a mathematician. Really? Yeah. So basically, that is multiple hypothesis testing, which is a massive problem in science in general, including social science: if you test too many hypotheses and you don't have enough data, you'll eventually find something, but it doesn't mean it's real. I take this extremely seriously. Our approach to solving this problem is we just keep running study after study, and we make sure that we confirm things beyond any reasonable shadow of a doubt.

[00:27:28]

And when you're hellbent on figuring out the truth of something, as opposed to saying, I've got a deadline, I've got to get this thing out (which is a luxury we have, that we can focus on the truth of the matter), it just changes everything. Because then you're like, wait a minute, we think this is true, but we just came up with some possible convoluted way that we may have misinterpreted this. Let's go run yet another study.

[00:27:49]

Right? There was one project where we ran twelve studies, because we kept finding ways we might not have quite fully understood it. And we haven't published that work; it was just twelve studies to understand that phenomenon. So it's a totally different mindset. This is part of why we built the technology behind running studies: because we have to make them so fast and easy for us to run that it's not a big deal.

[00:28:13]

We can run twelve studies. That's fine.
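A small simulation of the problem being described, under arbitrary assumptions (a hundred true-null hypotheses, a rough 5 percent significance threshold): many hypotheses look significant by chance, and requiring an independent confirmation study screens most of them out.

```python
import random
import statistics

# Simulation of the multiple hypothesis testing problem described above,
# followed by the "run another study to confirm" approach. The number of
# hypotheses, sample sizes, and threshold are arbitrary assumptions.

def fake_study(n=50):
    """One study of a null effect: compare two groups of pure noise."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = statistics.pstdev(a + b) * (2 / n) ** 0.5
    return abs(diff / se) > 1.96  # "significant" at roughly the 5% level

random.seed(1)
hypotheses = 100
first_pass = [h for h in range(hypotheses) if fake_study()]
confirmed = [h for h in first_pass if fake_study()]  # independent replication
print(len(first_pass), "looked significant;", len(confirmed), "survived a replication")
```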

[00:28:15]

Just to clarify: it's not just that you ran twelve studies, each one testing a slightly different version, and you went with the one that came up significant. The end state is that, in the final step, something seems to work in the eleventh study, and then you're like, let's confirm it in a twelfth study, and if that one also comes out significant, then you're like, yes, this is real. Is that right?

[00:28:36]

Well, yeah.

[00:28:37]

In that case, we were trying to pinpoint this phenomenon, and we kept finding slippery possibilities. We were like, well, what if we misinterpreted this, or what if this is just happening because of this other phenomenon? Basically, we kept coming up with ways our original conclusion might have been wrong, and designing studies that would test whether we were actually mistaken. So that was the process of trying to hammer that out.

[00:28:59]

It took so long.

[00:29:00]

I don't think it's widely appreciated that a way to solve the p-hacking problem is just to make it really fast and cheap to run studies.

[00:29:09]

Well, you know what? It's funny, because in a way it depends completely on intentionality.

[00:29:15]

Like, that's why I said, if you want to solve the problem.

[00:29:18]

Yeah, exactly. That's why I say that. Because we are lucky enough, and also mature enough, to be hellbent on figuring out what's true, for us running lots of studies is a wonderful thing, because it helps us poke holes in our own work and get a deeper understanding of reality. But suppose your goal was to get a paper out by a certain date. Then running more studies is a way to just find something that looks like it works. So it completely depends on intent; it can be used for good or evil.

[00:29:47]

Right. Right. And there is a problem where some people will run a bunch of studies and only publish one of them, when the others somewhat contradicted the one they published. Not necessarily in a fraudulent way; usually it's not fraudulent. Usually there's a rationalization by which maybe it's OK. But maybe it's not OK. Right. Yeah.

[00:30:10]

When you're talking about noticing little ways the study might be wrong, or might be measuring something other than what you wanted: I remember you telling me something a while back about asking people how they were interpreting your questions, and continuously being struck by the fact that people were interpreting your questions very differently than you thought. Is that what you're talking about?

[00:30:30]

Well, yes, that's one example, and one that's very dear to my heart. OK, so I'll give you an example. It wasn't an exact replication of the academic literature, but we were trying to do something similar when we were looking at the sunk cost fallacy. We were trying to design questions that would elicit the sunk cost fallacy; like, for example... should I explain what the sunk cost fallacy is?

[00:30:53]

Oh, yeah, absolutely, I forgot that's not necessarily a given. I bet a lot of my audience knows this, but I bet it's not quite universal.

[00:31:01]

Yes. So the sunk cost fallacy basically occurs when you invest a lot of time or resources or money, or just emotional energy, in one particular path. For example, you're in law school, and you've already spent the first year of money on law school, and you think of yourself as a lawyer, and then you realize that that path is actually no longer sufficiently valuable going forward to make it worth it.

[00:31:23]

But because the job market is so bad, you don't actually expect to get a good job or you don't expect to enjoy it or something. Exactly.

[00:31:28]

But then the problem is that if you think about leaving law school, you have to start registering all the investment in law school as a loss, whereas if you just keep on with the same path, you can pretend you haven't lost the value you invested. Of course, it doesn't make any sense from a rational perspective, because in this case we stipulate that you're better off leaving law school. It's just about how you process the past loss, and it's gone either way, regardless of what you do.

[00:31:54]

So the thing is, at ClearerThinking we actually have a program that trains people to avoid the sunk cost fallacy, and we did some research on it. One thing we were doing was trying to develop a question that would elicit the sunk cost fallacy, where we'd ask you: OK, suppose you're at dinner and you order some food, and then you realize that you're actually not at all hungry, you're totally full, and you can't reasonably bring the food with you.

[00:32:23]

You know, would you just finish eating it anyway? And we thought, well, maybe this will elicit it, because maybe they'll view it as, oh, if I don't eat the food, then I've wasted the food and the money I spent on it. And maybe that's the sunk cost fallacy. It was kind of based on some academic research we were looking at. So it turned out it caught tons of people.

[00:32:41]

Oh look, all these people are being irrational, sunk cost fallacy. But we were lucky enough to also ask those people to explain their answer: why did you say that you would keep eating this food? Are you really going to call that lucky?

[00:32:53]

I think that was good as well. I have to say, now we always do that.

[00:32:57]

Now, OK, we've learned that you need to always study your own questions, to try to understand not just someone's answers, but why they're answering that way. So we love asking people why they gave the answer they did, to better understand it. And in this case, all these people that we thought were being irrational: can you guess what they said about why they would force themselves to eat this food when they were already full?

[00:33:22]

Maybe they said they were trying to teach themselves a lesson or something, like this would make them less likely to just sort of blithely order large amounts of food the next time.

[00:33:33]

So that's an explanation, I bet.

[00:33:36]

And I bet some people do think that way. But by and large, the most common reason people gave is that they assumed they'd be eating with someone else, and it would be awkward if they didn't eat their food.

[00:33:47]

It had never even occurred to us.

[00:33:49]

That they would assume that? Right, now that totally makes sense. Yeah. So then we changed the question; we stipulated that they were not eating with someone else, that they were alone. And then a handful of people said, well... can you guess?

[00:34:01]

Yeah. Did they not want to make the waiter feel bad?

[00:34:04]

Very close. Usually they said the chef. But anyway. So, you know, this really hammered home for us that a multiple choice question can be a very dangerous thing, because it gives you the sense that you've measured something; you can get a score. But if you don't understand the why behind it, like why people are putting a six instead of a three, then you may actually be really misleading yourself.

[00:34:26]

And this actually segues into another example of this, where in this case we were directly trying to replicate some of the academic literature. Someone had developed a scale claiming that way more people than you'd expect have these kind of delusional beliefs of sorts. And we were really excited about this, because we thought, hey, maybe we could help people, if people really do have these delusions, and it's also just an interesting, important thing to know about.

[00:34:53]

So we ran a study, and it turned out that we were able to replicate a bunch of their findings, in the sense that people did report these delusional beliefs. For example, people reported that they feel like bugs are crawling over their skin, and other strange things like that.

[00:35:11]

But we also asked them why they answered that way. Well, it turns out a bunch of people have lice. Oh my God, that's horrifying. And also kind of funny. A bunch of people had lice?

[00:35:21]

Well, I mean, they weren't saying that 50 percent of people had this, but they were saying, you know, five times the rate in the general population; more than you'd expect. You wouldn't expect, like, 10 percent, or sorry, five percent of people to say that they have these delusions, like bugs crawling on their skin. But, you know, five percent of people may actually have a bug infestation or bedbugs.

[00:35:41]

Exactly.

[00:35:42]

But what we realized is that for almost every one of these things, there was actually a fairly reasonable explanation, although at first glance it seemed like these people must be delusional.

[00:35:51]

So that was a huge lesson for us. I know I'm asking you to sort of speculate wildly here, but do you have any guess as to what percentage of significant results reported in the social science literature would be significantly undermined if the researchers just asked subjects, why did you answer that way?

[00:36:11]

You know, I really have no idea. But I'll give you an interesting anecdote about that. You know the endowment effect, where basically, if you give someone a mug and then you say, how much would you sell this for, versus if you show someone the mug and say, how much would you buy this for, they give really different numbers? That's called the endowment effect.

[00:36:31]

It's viewed as a cognitive bias of some sort, that you would have to pay them a lot more to give up the mug than they would be willing to pay to get it.

[00:36:38]

Yeah, like the fact that you were just given it suddenly makes your valuation of it increase dramatically, for some reason. Right. And this has been pretty well studied. And just recently, some researchers (this wasn't the only piece of evidence they had, but one of the things they did) asked people why. And they found a bunch of people were saying, well, I don't want to get ripped off. It's like, I have this mug now;

[00:37:01]

I don't want to feel like I was cheated by being paid too little for it. Which is not the same thing as the original theory behind the endowment effect, which is more that somehow we intrinsically value the things we have more than the things we don't have, or something like that.

[00:37:16]

Huh. I mean, I don't know if I want to call it irrationality, but it's still not a pure attempt to match the value of my possessions.

[00:37:25]

But think about this: let's say they're vaguely aware of a market price. If you thought, oh, the market price for this mug is X, I don't want to sell it for less than that. It's just sort of a different explanation. Whether it's rational or not is an interesting point, but it's sort of secondary to the point I'm trying to make, which is just that they asked the people, and the people gave a different explanation.

[00:37:51]

And there's actually other evidence now that that might be the more accurate explanation. So, you know, who knows? I don't know what the deal is with the endowment effect, but it's pretty interesting.

[00:38:01]

But I guess one way to estimate this is just how often it turns out, when you run studies and ask people why they answered the way they did, that the way they interpreted the question, or the context in which they were thinking about it, or whatever, is different than you assumed when you were running the study. If that's a common thing that happens for you, I'd assume it would probably be common for other researchers too, if they tried to check.

[00:38:22]

Yeah, I'd estimate that for probably about a third of the questions we ask, we learn something. It doesn't necessarily mean we end up changing the question, but we learn something. Something that would change your interpretation of the results?

[00:38:34]

Yeah, well, maybe a little bit. But sometimes we just literally got rid of the question, because we were like, yeah, this is not at all measuring what we thought it was measuring. And actually, this leads into another topic, which is quantitative versus qualitative research, which we haven't really talked about. In my opinion, these are very often separated: some people are really into quantitative research,

[00:38:57]

and other people are into qualitative research, where they like doing a lot of interviews and things like that. And I think they're both so valuable and useful, and they work so well in synthesis; they really enhance each other. And I think it's a shame that they're not run in parallel more often. And I think this is a good example of that, because asking people why is actually qualitative data, right? You're getting free-form text responses.

[00:39:19]

You have to literally read it and interpret it with your brain, whereas multiple choice questions are quantitative; you can run an algorithm over them. And so we really like this back and forth: we design a question, have people answer it, but also ask them why, then use that to redesign the question, and then get a measurement, and so on. Cool.

[00:39:40]

Yeah. OK, so we've talked about a number of ways that running studies online can help make research faster, and thereby better, or just directly make research better. Another big category that we haven't really talked about is making research more open, in the sense of open science. We had Brian Nosek from the Center for Open Science on recently; we talked about open science at length. You, I believe, are working on a way to systematically make research more open for researchers using your online study tools.

[00:40:19]

Can you talk a little about that?

[00:40:21]

Sure. So when it comes to open science, you're basically making it clear to the world what you did and what your results are, and also making it easy for other people to replicate those results, and making it easy for people to get new insights from those results. One of the challenges is that a paper just doesn't contain enough information, in a certain sense. Part of it is that sometimes journals just limit you.

[00:40:44]

They say, this is how many pages you have, and you're like, well, but I did all this research, and you have to cram it in. In other cases, it's just because you don't want to distract from the main focus. And you could have, like, online appendices.

[00:40:58]

I feel like the excuse that there isn't enough space in the journal isn't really... if researchers actually cared about being transparent, there are relatively easy ways to do that.

[00:41:08]

Yeah, I totally agree. But if you think about the incentives, unfortunately, they don't get a benefit, right? They can go put it up on their personal website or something. But if the journal doesn't ask them for it, or make them do it, and there's not even a place the journal offers to put it... yes, they could put it on their blog, but will anyone see it? And is it worth the extra time?

[00:41:26]

And, you know, they don't get the benefit.

[00:41:28]

Yeah, Brian's working on that, but there's a long way to go.

[00:41:32]

Yeah, Brian is definitely working on that. So one thing that we think about: one of the tools we're building is called Hypothesis, to help people do statistics. And our observation is that, to a shocking degree, people often make errors when doing statistical analysis, even people you'd think wouldn't make errors. This is because statistics is really complicated and not really well suited to the human brain, and also because a lot of people who work with statistics and data weren't originally trained as statisticians.

[00:42:01]

Ironically, a lot of the time, statisticians don't deal with data at all; they just theorize, and the people dealing with the data are not the statisticians. So anyway, one thing I really like about the idea of building tools to help people with statistics is that then the statistics you do could basically be a permanent link. So I've completed a statistical analysis for my paper; I could literally have it be a link.

[00:42:25]

I could put that link in my paper, and someone could click on it and go see every statistic I ran, not just the few that I put in the paper, but the full gamut of them, and potentially even change the assumptions and see how that would update the results in real time.

[00:42:38]

To see how robust they are? Or not just how robust, but how context dependent they are, what contexts they apply in.

[00:42:45]

Exactly. Or maybe they're just interested in a slightly different analysis that could help their own work. So that's why... you know, this is a small piece of the puzzle, but I think it's one potentially helpful thing that could help people have a better grasp of what's happening. And then, related to that, we also build a platform called GuidedTrack that helps people build online experiments. The idea is to give people a really super powerful tool to build completely automated experiments and interventions.

[00:43:13]

We use it ourselves heavily. But one of the neat things about it is that once you have an automated intervention that you've built, you can then put a link to it in your paper. We actually have an option where you can set it to public, and someone can go to your experiment and view exactly the experience that the participants got, which is really important when you're trying to replicate work. They can also look at the code of it, if you made it public, and there's a button to copy it.

[00:43:40]

So now you can just say, oh, I want to copy that experiment, go make your own tweaks, and then run your own version of it that has some different or new variables. Yeah.

[00:43:47]

So let's say you just wanted to try to replicate it directly and not make any tweaks. Could you literally just click a button and replicate it?

[00:43:55]

Well, this is part of our long-term plan, but we're not there yet. I mean, basically, if you take our tools, TaskRecruiter for recruiting people online, GuidedTrack for building the online experiments, and our statistics system, what we're converging towards is ultimately one-click replication: you click a button and you can completely replicate an experiment, recruiting new participants that match the demographics

[00:44:23]

they're supposed to be recruited from, automatically getting them into the experiment, the experiment automatically running on all of them, the data collected automatically and statistically analyzed based on the original statistical analysis you chose, reports generated, and yeah, there you go. It's like Amazon one-click, but for science.
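A schematic sketch of the pipeline being described, recruit, run, collect, analyze, report; every function name below is a placeholder, not the actual TaskRecruiter, GuidedTrack, or statistics-tool API.

```python
# Schematic sketch of the "one-click replication" pipeline described above.
# Every function here is a placeholder; none of these are the actual
# TaskRecruiter / GuidedTrack / statistics-tool APIs.

def recruit_participants(demographics, n):
    """Stand-in for automated recruitment matching the original demographics."""
    return [{"id": i, "demographics": demographics} for i in range(n)]

def run_experiment(experiment_id, participants):
    """Stand-in for serving the automated experiment and collecting raw data."""
    return [{"participant": p["id"], "experiment": experiment_id, "outcome": None}
            for p in participants]

def analyze(data, original_analysis_spec):
    """Stand-in for re-running the original pre-specified statistical analysis."""
    return {"analysis": original_analysis_spec, "n": len(data)}

def replicate(original_study):
    participants = recruit_participants(original_study["demographics"], original_study["n"])
    data = run_experiment(original_study["experiment_id"], participants)
    return analyze(data, original_study["analysis_spec"])

print(replicate({"experiment_id": "exp-123", "demographics": {"country": "US"},
                 "n": 80, "analysis_spec": "original pre-specified analysis"}))
```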

[00:44:39]

So that's very cool. But making open science easier to do is sort of only one side of the problem. There's also the problem of giving people the incentive to make their work easy for other people to try and replicate, which, as you were pointing out earlier, is not really built into the incentive structure of academia. This is also something Brian and I talked about on his episode. You know, people get rewarded for publishing significant results, and publishing a lot of them.

[00:45:08]

That's how they get approval and citations and tenure and so on and so forth. So is there any way that your tools address the incentive side? Well, I'm a big fan of nudges, basically. You know, it's very hard to force anyone to do anything, but you can give them a nudge in the right direction, and sometimes a nudge can do a huge amount. You know, there's interesting research on organ donation and how flipping the default, from being an organ donor by default to not being an organ donor by default, can have a huge impact, it seems.

[00:45:39]

So take the idea that right now it's all this extra effort for someone to publish their data, or publish their extra statistics, or publish the code and make it easy for someone else to take that experiment and copy it. The benefit to them is so little. But we can bring the barrier of effort really, really low, so that it's very easy, and in fact, if you're already using our tools, we can sort of automatically nudge you towards it, maybe even make you feel slightly guilty for not releasing it into the world.

[00:46:17]

You know, like, click here to confirm that you are a bad actor who doesn't care. I don't know.

[00:46:25]

But seriously, if you have someone's data already in your tool and you're like, hey, by clicking this button you could release it to the world, so many people could benefit from the data you collected, and we could even put in a time delay, we could delay it for a year and then automatically release it, if you want to make sure you have time to write your paper, you know, that's a very different scenario than if you're asking someone to go do a whole bunch of extra work, or nobody's even asking them.

[00:46:49]

Right, right.

[00:46:51]

Nobody's even asking them to do it at all, so they'd have to think of it on their own and then do a bunch of extra work, instead of it just being done for them automatically. Right.

[00:46:58]

Well, OK. I guess I was conflating a couple of things there when I talked about the incentive problem. There's the 'you get no reward for all this extra effort you have to put in' problem. But there's also the problem, I guess this doesn't quite fit under open science, but under good scientific methodology in general, that following good methodology and avoiding p-hacking and stuff like that makes it harder to get significant results.

[00:47:21]

And as mentioned, you get rewarded for publishing significant results. So isn't giving people tools to make sure their statistics are all very rigorous kind of counter to their interests?

[00:47:34]

Yes, that's a great question. That's why it's so critical that if you provide tools that make their results more robust, those tools also have to make their research faster, to compensate, because more robust research means fewer publications: you don't publish as much fake stuff, or you catch it before it goes out more often. And so it has to be faster in order to compensate for that.

[00:48:00]

But I think it can also make the research higher quality at the same time. So, you know, you can trade these things off against each other, but I think that's really important.

[00:48:07]

Right. So hopefully, if all goes according to plan, you end up publishing significant results at a similar or even greater rate than you were originally, but they're actually real, or much more likely to be actually real, or at the very least, much easier for other people to test whether they're actually real.

[00:48:26]

Yeah, yeah. And another thing I'll say about that is, if you're producing research that's shoddy, then you're building the foundation of a house out of sand, and then you're going to try to build the upper layers on it over the next ten years. Whereas if you do higher quality research, it actually costs more up front, but you're going to build a much better house, right?

[00:48:47]

Ten years from now, you'll have built layer upon layer upon layer and made a lot more progress, and your later research benefits much more. And I think that's also a problem for the field at large: when there are a lot of false positives, researchers can't trust each other's work that well, and it's hard to build on what other people do, because maybe it's just not a good foundation to build on. And I think progress in general goes much slower.

[00:49:13]

And it's no longer standing on the shoulders of giants.

[00:49:16]

It's like you've got to check really carefully which giant you're standing on, to see if the giant has a gimpy leg, to mess up the metaphor. Anyway, cool. Well, we're actually rather significantly over time at this point; I just got caught up in this thread. But before we close, Spencer, do you want to introduce the Rationally Speaking pick of the episode? It's a book or website or article or something that has impacted your thinking in some significant way.

[00:49:47]

Yeah. What would your pick of the episode be? My pick

[00:49:49]

would be the book Feeling Good by David Burns. Yes.

[00:49:53]

I remember when you were giving copies of Feeling Good out to anyone who would take one. Oh yeah, I have copies at my house right now. I think it's a wonderful book that introduces people to the principles of cognitive therapy, which, as I mentioned, is the most evidence-based therapy for treating depression and anxiety. That's not to say it's the only effective one, but it has almost certainly the largest quantity of evidence showing it's effective. But what I think is really wonderful about the book is that a lot of it is about the way you think.

[00:50:22]

For me, one of the big changes it's made is that I'm much more aware of how, when I'm upset, when I'm experiencing a strong emotion, it changes the way I think about things. I would have thought about something a certain way an hour ago; now I'm thinking about it in a different, and potentially much more harmful, way. And I think CBT is a wonderful set of tools for peering into your own mind, finding more helpful ways of thinking, and actually trying those ways of thinking.

[00:50:52]

And so I love that about it. That book, Feeling Good, is specifically focused on depression, so I especially recommend it if you're depressed or if someone you know is depressed. But David Burns also has a lovely book on anxiety called When Panic Attacks. It's not just for people with panic attacks; it's basically CBT applied to anxiety. So if you have anxiety, I would highly recommend When Panic Attacks. So you didn't have to pick just one.

[00:51:15]

You've totally hijacked the pick, but that's allowed; you're still well within range. Cool. Excellent. Spencer, thanks so much for being on the show.

[00:51:25]

Thank you so much. This was great. We'll link to clearerthinking.org as well as to your picks. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. If you enjoy listening to the Rationally Speaking podcast, consider donating a few dollars to help support us, using the donate button on our website, rationallyspeakingpodcast.org. We're all volunteers here, but we do have a few monthly expenses, such as getting the podcast transcribed, and anything you can give to help

[00:52:04]

with that would be greatly appreciated. Thank you.