[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:53]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and my guest today is Professor Andy Przybylski. He is an experimental psychologist and the director of research at the Oxford Internet Institute, which is dedicated to the social science of the Internet. I would describe him as part of what I would call the backlash to the backlash to tech. I'm sure you're all familiar with the backlash.

[00:01:23]

There have been countless books and articles in the last few years about how smartphones and social media and things like that are making us stressed out and insecure and disconnected from each other and so on. And increasingly prominent people in tech have been signing on to the backlash as well, like founders or early employees of Facebook or Twitter have been expressing concern and regret about the effects of their creations. So then there's the much smaller backlash to the backlash, people arguing that the case against tech is actually much weaker than it seems.

[00:01:57]

So that is where Andy comes in.

[00:01:58]

Andy, welcome to Rationally Speaking. Thanks for having me on.

[00:02:02]

So, Andy, what is your in-a-nutshell summary of why you think social media and smartphones are getting a bad rap, specifically in terms of being bad for their users?

[00:02:12]

Yes, I think it's a really interesting topic. You know, when we think about any kind of new technology and we look at the history of them, they're met with different types of skepticism and concern. But I think that something really special has happened here with things like social media and screen time. It's kind of like a Goldilocks topic, in terms of attention and how the news works right now. So, you know, most of what scientists do, even if you study well-being or health, you might not actually get a lot of attention for your research.

[00:02:45]

You know, that's porridge that's too cold for public interest. And if you're studying, you know, planets, or how to cure cancer, that's a really, really hot topic, right, that everyone looks at. But it's also a topic that gets a lot of scrutiny scientifically. And when we talk about the ways that tech might be addictive, or whether or not social media has negative impacts on us, it's really this not-too-cold...

[00:03:16]

So it's not too irrelevant for public interest stories. And it's not so hot that, you know, real scientists are weighing in with critical methods. And so... Oh, I see.

[00:03:29]

It's in, like, the uncanny valley of... Yeah, it's in the, ah, uncanny porridge valley of... wow, what an unholy metaphor we've created.

[00:03:40]

Yeah, I apologize. So there is a kind of trend in what you could call technology effects research, wherein we have these large-scale surveys of people's behaviours, of their attitudes, of their health. And these are typically data sets that are collected primarily to study some other thing about the human condition. And maybe in the last 15 or 20 years, people have been adding questions about things like screen time to them.

[00:04:14]

So, how much time people play video games or watch television, or, more recently, how long they're online. And what happens when social psychologists come across that, and we have a question or thought about what role technology might be playing in people's lives? People will analyze this data. It's kind of like found, secondary data. And with very few exceptions, because there is somewhat of a pressure to publish positive results, you kind of have a subgenre of academic publication, which is that there is a small but statistically significant correlation between X, whatever type of technology use it is, and Y, whatever kind of outcome you might care about.

[00:04:57]

And these correlations will invariably be statistically significant, because the sample sizes are so large. And so the kind of thing that we were trying to do, based on the different types of data that we're dealing with here, is to provide a baseline for a critical reader, to put social media or tech use in the perspective of the data set. What are the other things in this big data set? What are the other things that actually account for wellbeing in young people?
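To make that large-sample point concrete, here is a minimal sketch, using entirely simulated data and a made-up effect size, of how a correlation far too small to matter practically still comes out statistically significant once the sample gets big enough:

```python
# A minimal sketch (simulated, made-up numbers): with big survey samples,
# even a trivially small correlation is "statistically significant".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def tiny_correlation_pvalue(n, r_true=0.03):
    """Simulate two variables with a tiny true correlation and test it."""
    x = rng.standard_normal(n)
    noise = rng.standard_normal(n)
    y = r_true * x + np.sqrt(1 - r_true**2) * noise
    r, p = stats.pearsonr(x, y)
    return r, p

for n in (100, 10_000, 1_000_000):
    r, p = tiny_correlation_pvalue(n)
    print(f"n={n:>9,}  r={r:+.3f}  p={p:.2e}")
# At n=100 the effect is indistinguishable from noise; at n=1,000,000
# the same r ~ 0.03 has a vanishingly small p-value, despite explaining
# less than 0.1% of the variance in the outcome.
```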

[00:05:32]

Right. So: being bullied, drug use, home circumstance. We picked some things that really should have a big impact, for good or bad, on kids' lives. And then we looked at stuff that you really shouldn't think could impact wellbeing: things like whether or not they wear glasses, whether they're left-handed or right-handed, whether or not they eat fruit or potatoes. Right. So where did it land, compared to being bullied versus, you know, eating potatoes?

[00:06:02]

Right.

[00:06:03]

So, listening to music, let's say, another type of technology use: the effect of listening to music on well-being was about 13 times larger than screen time, in the negative direction. And that doesn't mean that Mozart makes teenagers depressed. That means that if you're a kid and you're listening to loads of music, there might be something else going on in your life. It's nowhere near bullying or drug use. There are figures in the paper where we kind of draw... beautiful things that we've made, animated GIFs, that compare all these things.

[00:06:40]

But no, the negative effect is somewhere between whether or not you wear glasses and how much potatoes you eat. Huh.
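The benchmarking move Andy describes can be sketched in a few lines. This is a toy version with invented coefficients and simulated survey items, chosen only to mimic the qualitative pattern he reports (bullying dominates, screen time sits among the trivia):

```python
# A toy sketch (made-up effect sizes, simulated data) of the benchmarking
# move: interpret the screen-time correlation by ranking it against other
# items from the same survey.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 50_000
df = pd.DataFrame({
    "bullied":       rng.standard_normal(n),
    "screen_time":   rng.standard_normal(n),
    "eats_potatoes": rng.standard_normal(n),
    "wears_glasses": rng.standard_normal(n),
})
# Invented coefficients, chosen only to mimic the qualitative pattern
# described in the conversation (bullying >> screen time ~ potatoes).
df["wellbeing"] = (
    -0.30 * df["bullied"]
    - 0.02 * df["screen_time"]
    - 0.015 * df["eats_potatoes"]
    - 0.01 * df["wears_glasses"]
    + rng.standard_normal(n)
)
ranking = df.corr()["wellbeing"].drop("wellbeing").abs().sort_values(ascending=False)
print(ranking)  # bullying dominates; screen time sits among the trivia
```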

[00:06:49]

And so, this is all still ignoring the fact that it's purely correlational data and not causal, and just asking: even just looking at the correlational data, is there something here that seems like it needs explaining with a causal story?

[00:07:08]

And your answer is, not really. A different kind of correlational fact that often comes up in discussions of the harmful effects of social media and smartphones is just the trend line: depression and suicide rates among adolescents have been going up dramatically ever since smartphones became widespread, which was in about 2011, 2012.

[00:07:33]

And, you know, obviously correlation doesn't prove causation. But as XKCD, the webcomic, once said, it does waggle its eyebrows suggestively while mouthing "look over there."

[00:07:45]

So do you find the spike in depression ever since smartphones took over at all suggestive of a causal link?

[00:07:54]

Yes, that's a really interesting question. I would say that I'm really happy that we're actually running a three-year project to look at that question, starting in October. And it's something that concerns us a lot. But these kinds of trends and correlations, this kind of time series analysis, which is never actually done properly, it all depends on which data set you look at. You don't see, in more tech-saturated countries, in other industrialized countries, two or three years ahead of the United States, the South Koreans and the Japanese having spiking rates of self-harm or depression. In places where there's more Internet penetration at an earlier time, or at a later time, or at the same time, you don't see the same trends in Germany or the United Kingdom.

[00:09:03]

Right. Or Canada, where you have, you know, just as many iPhone sales per thousand kids.

[00:09:12]

Oh. I mean, that actually seems pretty damning of the argument, this piece of the...

[00:09:17]

Yeah, but it's true. It is tremendously damning of the core thesis.

[00:09:23]

But the tone didn't make it sound as damning as it... No, but it's more of a curious thing for me, which is: how has the narrative found the two data sets where you can kind of draw a picture and tell the story? How has our narrative in the West become that this thing is happening? The authors, the people who are pushing this narrative, managed to find the only two data sets where this is even plausibly true. Because if you zoom out in the same data set, you see that there's a long linear trend.

[00:10:00]

When you say zoom out, you mean look at years before 2011? If you look at, like, 1970 to 2018 or something, right, there's a much bigger decline that tends to... you know, there's not a decline, actually. There's a tremendously steep dip in the 90s in anything that we'd be worried about in kids.

[00:10:29]

Meaning, sorry, when you say a dip in the nineties: depression, suicide rates, et cetera, went down? Yeah.

[00:10:37]

They all started crashing up until the new millennium, right, or the most recent millennium shift. And so kids were, across all metrics, looking way, way better than where they were in the 60s, 70s, 80s and early 90s. Oh, I see.

[00:10:55]

So this could just be a regression-to-the-mean type thing. Exactly. I mean, it's probably taking a bite out of the gains or something. But what happens is, if you chop things off at 2005 and 2014 and you exaggerate the Y axis, you can tell yourself a story that something happened in 2011. Right. But it's much more interesting for me as a scientist to say, well, how the heck... what is this data set we've got to look at?

[00:11:30]

And how can this drive so much hand-wringing over at NPR? Because, you know, I can pick a different North American data set and show almost the opposite pattern. Or I can pick ten data sets all across the EU and show you that every year kids are getting healthier in terms of their drug use, or their drinking, or how late they stay out at night, things like that. Right.
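The window-chopping trick is easy to demonstrate. Here is a toy illustration, with an entirely made-up series shaped like the one described (high through the 80s, a steep dip in the 90s, roughly flat after): the sign of the trend depends on where you cut.

```python
# Toy illustration (entirely made-up series): a long decline with a steep
# 90s dip looks like a post-2011 "spike" if you only plot 2005-2014.
import numpy as np

years = np.arange(1970, 2019)
# Invented shape: high in the 70s/80s, steep improvement through the 90s,
# flat-ish with a slight rise afterwards.
level = np.piecewise(
    years.astype(float),
    [years < 1990, (years >= 1990) & (years < 2000), years >= 2000],
    [lambda y: 30 - 0.05 * (y - 1970),
     lambda y: 29 - 1.5 * (y - 1990),
     lambda y: 14 + 0.15 * (y - 2000)],
)

def slope(y0, y1):
    """Least-squares trend over a chosen window of years."""
    mask = (years >= y0) & (years <= y1)
    return np.polyfit(years[mask], level[mask], 1)[0]

print(f"1970-2018 trend: {slope(1970, 2018):+.2f} per year")  # long decline
print(f"2005-2014 trend: {slope(2005, 2014):+.2f} per year")  # small uptick
# Same data: the story depends on where you chop the axis.
```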

[00:11:56]

The implication being that if drug use had gone up, or if premarital sex, or, you know, sex at a young age, had gone up instead of going down, then these same people would be pointing to those trends and saying, you know, that's clearly connected to smartphones. Right.

[00:12:15]

And you can see this in the media coverage, actually, which is that it doesn't matter which way the tiny trend goes. There are directly conflicting headlines sometimes, where it doesn't matter if the tiny trend means teens are having more sex or less sex. The very fact that there's any difference, it will get attributed to technology, right? Yeah, yeah.

[00:12:44]

And so there's kind of a history of this, in terms of, every 12 months or 18 months you have a new guru who comes by, and they have a technology. So we had Phil Zimbardo, of Stanford Prison Experiment fame, right, about three years ago. He wrote a book called The Demise of Guys. And there the technology was video games and Internet pornography. So that was the X. The Y was traditional gender roles, which he described literally as chasing girls and skirts.

[00:13:21]

And then for him, the correlation between these two things, the mechanism, sorry, was that playing games and having access to pornography, and he also identified a higher level of female teachers in his time series analysis, had led boys to be less like guys. And he did a TED talk. Bros. Yeah, less like... I can't even imagine him saying the word "bro." But the larger thing is that this is a cyclical thing, where what happens is, it's not like anybody collected data to show that the Demise of Guys thesis was wrong.

[00:14:07]

Basically what happened was, you know, Adam Alter or someone wrote a new book. His X is persuasive design, and his Y is technology addiction. And then he tells a story that connects X and Y in terms of whatever the technological flavor of the month is.

[00:14:29]

Maybe what we need is a nonprofit where all they do is, for each new instance of this, where someone's telling a causal story on pretty thin evidence, with trend lines that could have been explained the other way, this nonprofit will publish TED talks making the exact opposite case, using trend lines that sound exactly as persuasive. And they just keep doing this until the public gets that you can do this for any story you want.

[00:14:59]

Yeah. Oh, I forget the name of the law... the bullshit asymmetry principle. The problem is that the amount of energy required to refute bullshit is an order of magnitude higher than the amount of effort required to generate it. So it would have to be a very well-funded nonprofit. I'm sure Google or Facebook would love to fund that.

[00:15:26]

All right.

[00:15:27]

Well, let me throw something a little bit harder at you, which is that, there's not a lot of this research, but there have been a few studies that have tried to identify the causal effect, that have done some kind of randomized controlled trial. So one, maybe the most well known, just came out earlier this year. It was by Hunt Allcott et al. It's called The Welfare Effects of Social Media. And what they did is they basically took a group of people and randomized, I guess, half of them.

[00:15:56]

They paid those people to deactivate their Facebook account for, I think it was four weeks, then followed them, made sure they didn't actually reactivate their Facebook account in a moment of weakness, and compared their various mental health metrics to a control group that stayed on Facebook. And they found that the group that left Facebook was significantly happier.
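For readers who want the mechanics, here is a minimal sketch, on simulated data with an invented effect size, of the comparison such a deactivation experiment boils down to: randomize accounts, then compare mean wellbeing across arms.

```python
# A minimal sketch (simulated data, invented effect size) of the analysis
# behind a deactivation experiment like Allcott et al.: randomize accounts,
# then compare mean post-treatment wellbeing across arms.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 2_700                                   # a figure mentioned later on
deactivated = rng.permutation(n) < n // 2   # random assignment, half/half
true_effect = 0.10                          # ~0.1 SD, purely hypothetical
wellbeing = rng.standard_normal(n) + true_effect * deactivated

t, p = stats.ttest_ind(wellbeing[deactivated], wellbeing[~deactivated])
diff = wellbeing[deactivated].mean() - wellbeing[~deactivated].mean()
print(f"difference = {diff:+.3f} SD, t = {t:.2f}, p = {p:.3f}")
# Randomization licenses a causal reading of the difference, but the
# caveats raised below (adherence, preregistration, how long any effect
# lasts) are separate questions the comparison alone cannot answer.
```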

[00:16:19]

And I'm sure there were other metrics as well, but that was the one I remember. And also, some significant minority of that group chose to stay off Facebook at the end of the four weeks. Now, I haven't looked into the method closely, but that seems like the kind of study that I would find convincing. Do you not?

[00:16:37]

I don't. Well, there's a few reasons. I mean, it all depends. I think the most important thing that you said was, "I haven't looked really carefully into the methodology, but this sounds like the kind of evidence that I'd find convincing."

[00:16:55]

I mean, relative to, like, correlations in a giant data set. Yeah. All right. Well, I would say that I happen to have read this paper after conducting three similar preregistered experimental studies, across three different countries, where we did not find the same pattern of effects.

[00:17:20]

Just to clarify, you did studies where some people were randomly selected to deactivate an account, or to go off social media, or to have days where they used less than their normal amount, or more, of various forms of social media and online interaction. A series of field experiments. And they're not published, and that kind of gets into the space where, actually, you know, it's not peer reviewed, just like this NBER paper that you're talking about, the Allcott et al. paper.

[00:17:58]

And, you know, the devil's in the details. There were just a lot of things that were immediate red flags for me. So the Allcott paper says things like, the study was preregistered. But it wasn't. I mean, they had their study plan ahead of time, but I could not find the preregistration document where they outlined their sampling plan and their analysis plan in a way that I would recognize as a study preregistration.

[00:18:30]

So I was like, OK, that's not there.

[00:18:32]

Meaning... so you're saying they ended up leaving themselves more room to cherry-pick or data-mine than...

[00:18:39]

Right. And obviously there are differences between fields about what preregistration means. But, I mean, they had a plan for what their study was going to be ahead of time, which, I agree, is admirable. But when it comes to hypothesis generation versus testing, right, there are things that we've observed, like non-adherence. So when you ask people not to use social media, there will be a certain percentage of people who, actually, when we debrief them...

[00:19:11]

They tell you that they didn't adhere to the instructions. Sure. And that detail seemed to be strangely missing from this study. And so, given the system that I use to consider something a compelling piece of evidence... I mean, this paper is still absolutely worth reading and absolutely worth thinking about. But the disjoint between how it's framed, what was done, and then how society talked about it...

[00:19:45]

Those gaps lead me to say, no, I don't find that convincing. I feel like this is the experiment that absolutely should have been done a decade ago. And if it had been done a decade ago, we'd know so much more. But it's not something that it makes sense to point at and say, this is the end of the story, or, this is a conclusive piece of evidence. It's really the beginning.

[00:20:14]

This is an example of something that would be the beginning of knowledge. Oh, for sure. And it would need to be refined.

[00:20:19]

What is your own personal hunch about the effects? Like, let's say we had an unlimited budget, we could snap our fingers and just know the answer. What would you expect to see the effects of tech being?

[00:20:31]

I think the core thing has to do with motivation. How so? So I think that...

[00:20:39]

I think that people will be satisfied or dissatisfied, made more vital or drained of life, in their interactions with platforms for the same reasons that they feel these ways about everyday non-digital contexts. Right? So, motivationally speaking, if people are doing things out of a sense of choice and volition, if they feel they're engaging with a platform or a context or a relationship because they want to, you know, instead of feeling like they have to...

[00:21:16]

Yeah. Instead of feeling like they're manipulated. I think that that's the core of it. And the problem is that it's not about money; money is not the thing that makes this hard to study. The problem here is data. This is not the kind of thing that you can just study with a questionnaire, and you can't build your own Facebook in the lab. Really?

[00:21:38]

People have tried. Why can't you just do the Allcott et al. study that we were just talking about, but, you know, cleaned up, or better, up to your standards? Why isn't that the kind of experimental design that should help us tell whether Facebook is bad for people?

[00:21:55]

Yeah, well, the problem is that we're treating Facebook... even Facebook... it's very silly that this is a study about Facebook, because really the thing that we care about is social media, probably. When you read the paper, the paper kind of assumes that all social media is Facebook and Facebook is all social media.

[00:22:15]

But you could do the same thing with Twitter, you know, just... yeah, piece by piece.

[00:22:18]

But yeah. So, the thing is, I mean, that's a very positive view, and I also hope that to be the case. But the issue is that these are whole contexts, right? These are whole systems. And so your question is, well, how do we know if Facebook is good for you?

[00:22:37]

Or that social media is good or bad for you, or kind of makes you more or less happy... then I'd try not to be rude and say, well, let's assume that's a meaningful question. Like, is school good for you?

[00:22:52]

That's a burn, when someone asks a question and you're like, well, let's assume that's a meaningful question. Right.

[00:22:58]

But then, yeah, let's assume. Right, OK. Then it's like, well, how would we begin to break that down? We would have to go into a school, we'd have to break the school day down, we'd have to do classroom observation, we would talk to parents, talk to teachers, talk to principals. We would actually have to deconstruct this context. Because a school, or education, is it a solid, right?

[00:23:27]

Is education good for your well-being? It's not a solid; we have to break it down. OK, we're interested in primary schools. OK, we're interested in classrooms. OK, we're interested in math. OK, we're interested in learning. You kind of have to zoom down. And the problem is that, unlike a school, or a playground, or some other kind of context, these contexts are privatized. These are proprietary spaces.

[00:23:54]

And so as social scientists, we don't actually have access to the levers of power that we would need to break this down from some kind of useless abstract level to the behavioral level, or to the context level.

[00:24:09]

To me, this just feels like letting the perfect be the enemy of the good, or something like that. Certainly the question of, however people choose to spend their time when they have a Facebook account, does that make them feel happier on average, if you measure their subjective well-being compared to other people who don't have access to their Facebook account... that's not the only important question to ask.

[00:24:36]

You could ask other questions, like, well, if we changed Facebook in such and such a way, how would that affect people's subjective well-being? Or you could focus on particular parts of the current Facebook: do those make people happier or less happy, or does it affect their attention span? There's lots of questions you could ask.

[00:24:53]

Yeah, but this is the path of madness. Because that's what I'm wondering...

[00:24:59]

Why can't you just pick a few of the core... like, imagine the Hunt Allcott et al. study had been superbly done, and, let's say, it replicated, we did a bunch of other studies like it, and we just kept finding the same effect. That feels like a valuable thing we would have learned about Facebook: that if you deactivate your Facebook profile, or at least if you're the kind of person who would participate in such a study and be willing to consider deactivating their profile, which is the reference class here, then chances are you will end up happier, or chances are you will not regret it.

[00:25:32]

Like that's a valuable thing to have learned in this hypothetical I've constructed.

[00:25:37]

I really wish that were true, but it's not. Because you can't just say, are people happier if they disabled their Facebook account for a week or whatever. You have to say, are they happier compared to what? As a society, or as a person?

[00:25:57]

No... because, what's the leading cause of death among Americans, besides heart disease? Cars? Car accidents? Right, OK. So as a society, American society absorbs, what, 40,000 deaths a year from car accidents? I don't know... sure, that sounds right. I'll make up a number; let's say it's 40,000.

[00:26:22]

And so we could say, OK, well, having cars results in 40,000 deaths, and if we all chose not to drive, that number would plummet. Or if we set a uniform speed limit of 10 miles per hour, even pedestrians who were hit by a car, most of them wouldn't die. Right? Life, because we don't live for an infinite amount of time, is a series of trade-offs.

[00:26:50]

And so the question is, if you're not spending two hours playing video games or on Facebook, you have to ask yourself, well, what is the other thing that you would be doing with that time? What is the comparison? What does that cognitive surplus look like? So it's not sufficient to just say... I mean, there isn't a reliable effect. We're doing our fifth version of this experiment right now, in Croatia.

[00:27:22]

So I don't know the results of that one. But even if this were true, what would it mean, compared to the other kinds of trade-offs that we make? What is the active ingredient in stopping Facebook engagement that actually ends in this uplift? Is this actually a one-off deal, where you've given up Facebook for the rest of your life, and you had a barely statistically significant positive effect across twenty-seven hundred cases?

[00:27:54]

Well, you know, if there's twenty-seven hundred people in a study of Facebook, and you assign half of them to stop using Facebook... so, a thousand-odd. I'm saying this to someone who doesn't have a Facebook account, by the way, so I'm a giant hypocrite here. But you get thirteen hundred people to stop using Facebook, and they feel happier on some self-report assessment for one week.

[00:28:21]

Right. Then your policy prescription is, close your Facebook account. But you don't actually know if that positive emotional effect lasts for more than a week. Are human beings these, like, balloons of happiness that can just be permanently inflated by one tenth of one standard deviation of happiness? Right. And so you would actually want to know, well, what is the thing about Facebook? And this is why I said understanding the context is so important, and not treating it like it's a solid, like education.

[00:29:00]

Right. Breaking it down: what leads to effective teaching, or what leads to good peer relationships in a school, or what prevents bullying? That's the kind of thing that would give you meaningful interventions. And so you'd understand what the scope of the effect should be, and how to make the thing a better place. You don't learn anything from abstinence.
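Stepping back to the numbers mentioned above, a quick power calculation shows what "barely significant across twenty-seven hundred cases" implies. This sketch uses standard two-sample power machinery; the effect size d = 0.1 is hypothetical, echoing the "one tenth of a standard deviation" figure:

```python
# A quick sketch of what "barely significant at n=2,700" means in power
# terms (standard two-sample power formula; d=0.1 is hypothetical).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.10, nobs1=1_350, ratio=1.0, alpha=0.05)
print(f"power to detect d = 0.1 with 1,350 per arm: {power:.2f}")

# Flip it around: the smallest effect such a design detects 80% of the time.
d_min = analysis.solve_power(nobs1=1_350, ratio=1.0, alpha=0.05, power=0.8)
print(f"minimum detectable effect at 80% power: d = {d_min:.3f}")
# Either way, the detectable effects sit around a tenth of a standard
# deviation, which is exactly the size class being debated here.
```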

[00:29:20]

This just strikes me as, like, a fully general argument against the value of learning that starting X or stopping Y will make you happier. Because you could just always say, well, no, we don't...

[00:29:34]

I mean, we don't know that your happiness lasts, or...

[00:29:38]

I mean, it's true. Then that's the question to study, you know: are improvements to our subjective well-being lasting? And if not, is there a way to make them last?

[00:29:49]

A very important question, I grant you, but it doesn't seem specific to this question.

[00:29:53]

I think a number of world religions have also tried to tackle it, as well as philosophies. But no, it is a super interesting one. Because this is the logic that undergirds things like debates about violent video games, or other things where you say, oh, I see, the participants in one condition put slightly more hot sauce on a taco than in the other condition, after they've played Grand Theft Auto versus Tetris.

[00:30:29]

Wait, was that actually one of the measures of violence, the hot sauce you put on your own taco? On somebody else's food. An imaginary other person's.

[00:30:38]

Oh, I see.

[00:30:39]

And I presume... I'm just curious about this weird metric. I presume that the person was told the hot sauce would be painful for the recipient? Or were they? Because I could imagine a lot of people who like spicy food being like, I want to be helpful, here, have a lot of hot sauce.

[00:30:53]

I am so sorry your listeners have to hear about how embarrassing social psychology is. Yes. So, yeah, this is called the hot sauce paradigm. The general idea is that you're a participant, and you're meant to participate in two experiments, and you're supposed to believe they're not the same experiment. In experiment one, you do something like play a violent video game, or not, and then they say, OK, thank you for participating.

[00:31:19]

Now what you're going to do is a taste perception experiment. Here, try a little bit of hot sauce. And how about... here's a container, put hot sauce in for somebody else to try. And then they use that as an indirect measure of aggression.

[00:31:38]

So it's just a measure of, like... how spicy you like things? I can't... oh, man. I mean, it's a measure of a lot of things, but behavioral aggression is probably not one of them. But you might say that's nearly as dumb as using a handful of self-esteem questions and then calling that depression when you go on ABC. So, like, there are giant logical leaps and bounds that people do take that don't involve something as silly as hot sauce.

[00:32:13]

But this is the kind of thing I'm saying: OK, well, let's say that you find a difference in the amount of hot sauce, or in how angry somebody is. Do violent video game players just become infinitely angry, the more doses of gaming they have? That idea is actually insane. There's nearly a perfect negative correlation between violent video game sales over time and youth aggression and arrests.

[00:32:43]

But then it becomes a really interesting question: OK, if a game is really frustrating and it makes you angry, because you're not good at it, let's say, or someone beats you, how long do you actually stay angry? Or if there's something about social media that makes you unhappy, how long do you stay unhappy for?

[00:33:02]

Can't we measure that, though? Like, people trying to study happiness have apps that people download that will, like, ping them randomly throughout the day and ask...

[00:33:12]

How are you feeling or how happy are you? Can we just do that?

[00:33:15]

I mean, that's what we're doing. So if you know anyone... if you want to fund the research, let me know. That's called an experience sampling study. And what we're trying to do is take these moment-to-moment assessments of wellbeing and happiness, and connect them, not necessarily to asking somebody if they were using social media, but to keeping track, in terms of their actual behaviors on their device, of what they've been up to.
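As a concrete picture of what that involves, here is a toy sketch, with an invented schedule and schema, of the experience-sampling idea: random prompts during waking hours, later joined against on-device usage logs.

```python
# A toy sketch (invented schema) of the experience-sampling idea: random
# pings during waking hours, later joined against on-device usage logs.
import random
from datetime import date, datetime, timedelta

def ping_times(day: date, n_pings: int = 6, start_hour: int = 9, end_hour: int = 21):
    """Draw n random prompt times in the waking window for one day."""
    window = (end_hour - start_hour) * 3600
    offsets = sorted(random.sample(range(window), n_pings))
    base = datetime(day.year, day.month, day.day, start_hour)
    return [base + timedelta(seconds=s) for s in offsets]

# Each ping would collect a momentary wellbeing rating; separately, the
# device logs which apps were used in, say, the hour before the ping.
for t in ping_times(date(2019, 10, 1)):
    print(t.strftime("%H:%M"), "-> ask: 'How are you feeling right now?'")
```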

[00:33:45]

To try to break this question down a bit more: in reading some of your writing and interviews you've done on this question, I noticed a potential disagreement that I might have with you, which is that I've seen you criticize certain moves made by tech companies as lacking evidence behind them. Like Apple and Google creating dashboards for people's smartphones, so people can monitor how much time they've spent, or set time limits for themselves, or hide notifications or other things that might distract them.

[00:34:13]

And you said, I have a quote from you in a Nature article about this, you said: "None of this stuff has any empirical evidence behind it. They're just doing it because they need to do something because everybody's making noise." Yeah.

[00:34:25]

Which sounds right, the part about them doing it because everybody's making noise. But to me, those kinds of interventions that I just described seem like a great idea, for two reasons.

[00:34:37]

First, because until we someday, in the future, get solid research on the actual psychological effects of things like social media and smartphones, in my opinion the next best thing is just to hand the reins to the users. Like, let people decide for themselves how much they want to subject themselves to a thing that may or may not be harming them. Just, all else...

[00:34:59]

...equal, free choice is good.

[00:35:01]

And then the second reason that those moves by Apple and Google seem like a win is that it seems clear to me that the effects of screen time in its various forms are going to be really heterogeneous: some people are harmed, other people are helped.

[00:35:15]

And so, you know, we don't want a one-size-fits-all solution. We want people to be able to gauge the effects of their Internet usage for themselves and choose their own experience. So I guess another way to say all this is that the question of, should we pressure tech companies to give users more control over their own experience, and how addictive it is for them, seems just separate from the question of, what are the average effects on people of using these various platforms?

[00:35:46]

Does that make sense?

[00:35:46]

Yeah, yeah. And so, I absolutely did say that, and I can remember the words coming out of my mouth. The question here isn't whether or not giving users more control is an unalloyed good. I don't disagree with you on that. Insofar as platforms provide users, us humans, with meaningful opportunities for action and for asserting our values, that is an unalloyed positive. All right? And that's not what I'm talking about when I say that there isn't evidence to support that these things are good or not.

[00:36:26]

What I'm saying is, OK, this topic is too important to trust it to toolmakers. So, let's use Google as an example. There you are with something like wellbeing.google.com, right? And the thing that I'm saying is that there's no reason to believe that this new feature is any different from Goop. That's the Gwyneth Paltrow... Yeah, the Gwyneth Paltrow site, where there's all these things, like jade eggs you put in your vagina.

[00:37:05]

I mean, I cannot speak to that, having... one who went to Mount Holyoke and two who went to Wellesley. I'm not going to enter that realm. But no, I would say that there are many interventions that companies are engaging in, whether it's Facebook's suicide prevention tool or these screen-time tools, which all sound great to me. And I'm a Nintendo kid; I love that Nintendo tells you to get up and move around every two hours.

[00:37:40]

And as a parent, I use screen time limits myself, to allow my daughter to use Audible, but nothing else, within certain hours. And I don't take any money from Audible. But that doesn't mean that these should be understood as validated health interventions.

[00:38:01]

Oh, you're just saying you don't want people to think that, because these companies have made these changes, we should expect that now things will improve. Right. Because the case I was making for it is just, in the absence of evidence, it's better to give people free choice than to not give them free choice. And you're worried people will read it as, they've solved the problem.

[00:38:26]

Right, as though it's done and dusted. Right. Because the issue here isn't whether or not it's better than nothing. Of course it probably is. Well, the suicide prevention tool might not be better than nothing, because there are a lot of issues around ethics, and incorrect reporting, and human agency and dignity around things like end of life. But no, this is all about an opportunity cost. So we have a situation where someone has screamed, there's a wolf in the forest, and it's a blue wolf.

[00:39:04]

And what's happened is, Google and Facebook have jumped up and said, here's our little wolf trap. We've created this new tool that will help you control the blue wolf. Right? But nobody asks... it's like, OK, well, we're going to make a tool. Now, here's a new tool to get rid of the wolf-in-the-forest problem. And this is an opportunity cost. This actually costs us.

[00:39:32]

Because by making it into a feature, we are actually diminishing the serious attention we should be giving to this idea that these platforms are bad for us and are bad for our society. Think about it like this: let's imagine that obesity was a problem in our society, right, and what's happened is that the maker of the worst cereal, you know, the cereal that we all think is really bad for us...

[00:40:05]

Right. They say, at the bottom of every new box of this cereal, there's a stopwatch, and you can use this stopwatch to measure how much time you've spent eating the cereal. And then they have a website called stopwatch.cereal.com, where they list all the new tools they've put at the bottom of their horrible cereal as, like, their corporate responsibility to us, promoting our well-being and fighting obesity. At best, it's homeopathy.

[00:40:43]

Sorry... at worst, it's homeopathy and an anesthetic. At best, it's something that some high-functioning people in the society can use to exert more control, if there's a nugget of something effective there, right. And at worst, what it does is distract us from getting the real data and figuring out if these things actually are used and whether or not they work.

[00:41:07]

I'm realizing I didn't fully understand your whole position on this when I first invited you on the show. I'm recapitulating now.

[00:41:16]

So, I described you in the intro to this episode as, like, part of the backlash to the backlash against tech, which implied that you were pretty positive about what the tech companies were doing. Which, I'm guessing now, is not correct.

[00:41:34]

And so I'm glad we're... it's fine. I mean, I feel bad for my tech listeners who, you know, thought I was... I'll be thrown in jail either way, but...

[00:41:45]

OK. But you are actually pretty confident that these companies, or, you know, social media, smartphones, et cetera, aren't having these negative effects that the critics were claiming.

[00:41:56]

Yeah. But the thing that you are, you know, holding their feet to the fire about is, like, their control over the data and the process and everything, and not making that available. And you're worried that giving the users more control over their experience is just a way for them to avoid doing that?

[00:42:17]

Yeah. I mean, the issue here is that what's happened in the last, let's say, 30 years is that there's been a kind of steady privatization, in some ways, of play, and of childhood, and of socializing. And so the space in which we can understand what's happening to us, in terms of wellbeing and motivation and health, that space is increasingly the purview of a relatively limited number of powerful companies. And it's not that there aren't wolves in the forest; it's just that our actual ability to identify them is getting worse and worse over time.

[00:43:08]

And so we have people who are, like, prophets of doom and gloom, who write a book, or have a consultancy, or whatever, and they say, oh, we found the blue wolf, or the red wolf, or something. There's no chance that they've actually found this wolf, because their method completely mismatches whatever fear-mongering they're doing. But they're definitely tapping into a very real anxiety that we have about these companies and these technologies.

[00:43:41]

And, you know, from a historic perspective, we were worried about plays, we were worried about the printing press, we were worried about radio, and jousting, and Dungeons and Dragons, and violent video games. So, kind of on that basis, there probably isn't anything uniquely bad about any one of these technologies. It's just that we wouldn't have a really good way of knowing. And these companies aren't monolithic.

[00:44:12]

They're full of amazing, hardworking people, people of conscience. And most of them, nearly all of them, want to make amazing, positive experiences for their users. But even they aren't necessarily sure of how to do that, because, whether it's behind a wall of NDAs or the impenetrable team systems that some of these companies have, they don't actually know how to ask those questions, even though they have the data.

[00:44:48]

Well, I have to say, I'm kind of impressed now, looking back, that for someone who really does want to hold tech's feet to the fire, you didn't just side with the critics of big tech because they were critics. You instead criticized the critics because their methods were bad, even though they were fighting the same people that you wanted to fight, for different reasons. Did that make sense? Yeah. I mean, I don't want to fight them; they're natural allies.

[00:45:19]

Yeah, no, I know. But, like, I could imagine someone else in your position, maybe unconsciously, just being like, look, the enemy of my enemy is my friend.

[00:45:30]

And again, you're not saying the tech companies are the enemy. But if the goal you want to achieve is getting tech companies to be more forthcoming about their data, or more transparent, or something like that, I'd think pushing the "tech is bad for us all" line would actually maybe be a good way to accomplish that.

[00:45:49]

But the problem here is that, you know, scaring people will work until it doesn't. And it's very easy to find the mistakes in this research, or in this reasoning. You don't have a good basis for actually making these arguments; they're actually very easy to knock down. And so if you get behind bad logic and bad data and bad science, and that's the thing that gets you a seat at the table, it's actually really easy for these companies to knock that down, and then to delegitimize the entire critique.

[00:46:29]

Right. It would be as if people were actually falsifying their data on something like global warming, or on some aspect of pollution. I wouldn't want to sign up with somebody who was doing that, because the moment that Exxon Mobil finds that you've been making it all up, they've got even more ammunition to do nothing. And so, things like this law... I think it's called the SMART Act, or the SMART draft, or whatever the heck it is.

[00:47:03]

Right.

[00:47:03]

Yeah, that's the one. You aren't a fan? I mean, my favorite feature was that it bans gamification. There's, like, a gamification, badge-system thing that it says it's going to ban in one of the subsections. But the best feature of this draft is that it was so short it didn't waste a lot of my time to read. The problem is that, if you were to take it seriously, it actually tells you how poorly lawmakers, or the people who are informing lawmakers, understand these contexts, and how poorly they understand the role of these technologies in our lives.

[00:47:51]

Because if they're going to believe something like that, what else are they going to believe? Yeah. And I don't want to be in a situation where we waste another 20 years with dumb data and dumb regulation, and we don't actually make these companies meaningful partners, part of the web of our society.

[00:48:21]

Before I let you go, I wanted to ask if you had thought of a book or other source that influenced your thinking or influenced your life in some way. It doesn't have to relate to the topic of the episode.

[00:48:34]

Yeah. I mean, I think it's probably my mid-90s obsession, fixation: I got a copy, I think in fourth grade or third grade, of Michael Crichton's Jurassic Park. And, yeah, I spent the whole summer, I must have been, I don't know, seven or eight, and I obsessively read the book over and over again. And it was... this was after having read other sci-fi, you know, like Foundation and other things that my dad had provided for me.

[00:49:14]

This was just so much more real. Crichton presents this real-world science of the future, where everything has become kind of industrialized, commodified, commercial science. And this tension between doing genetics in academia versus making an amusement park with dinosaurs, and all the tension and all the espionage, and kind of the hollowing out. So, have you read Jurassic Park, like, any time recently?

[00:49:55]

Years ago, I must have, but... Well, please, please reread, like, the first third of the book, basically. OK. But the thing that Crichton describes, and he wrote this in the 80s, almost a decade before I read it, it's almost like All Quiet on the Western Front. He describes this kind of hollowing out of academic science, where what happens is that if you want to be on the cutting edge, you have to do commercial science.

[00:50:23]

And, you know, the money comes from the Japanese, the captive industry is genetics and computers, and academia is where you go to retire, or where the second-rate talent goes. Right? And then the excesses of this are what create this industrialized science. This is what creates the monsters, which is the scientists themselves, who can't see outside of their system, that Malcolm critiques. Which is the corporate sector right now.

[00:50:58]

Sorry, that line, which I think is from Jurassic Park, about, your scientists were so busy figuring out whether they could do it that they didn't stop to ask if they should... you said that was probably a reference to, like, messed-up incentives in the scientific research community?

[00:51:12]

Yeah, absolutely. And the crazy thing that's happened, and obviously it's not crazy in any way, is that this parallels exactly what's happened with the Internet and with big tech companies. Which is that if you want to work with the latest data and the fastest computers and these amusement parks, which are these platforms, the platform economy, you don't do that in academia, because you don't have access to the supercomputer.

[00:51:54]

You don't have access to the funding required. Right? Because if you're an academic scientist in 2019... like, I just wasted almost a month of my life writing a grant for a paltry amount of money.

[00:52:10]

And it has a 17 percent chance of getting funded, according to what the university tells me. And so when I first read Jurassic Park, that's how I understood the tension between academic and industrial science.

[00:52:29]

Well, we'll link to Jurassic Park, for anyone out there who hasn't yet read it, or who just wants to read it again but view it through this lens, and to several of your papers that we talked about, and just your general research page. We'll also, I guess, link to some of the pieces of the debate over the tech backlash, and the extent to which it's justified, that you've been contributing to.

[00:52:52]

Andy, thank you so much for coming on the show.

[00:52:54]

Thank you so much. Thank you for having me on. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.