[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin A supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:53]

Welcome to Rationally Speaking, the podcast where we explore the borderland between reason and nonsense.

[00:00:58]

I'm your host, Julia Galef, and I'm here with today's guest, Aviv Ovadya. Aviv is the former chief technologist for the Center for Social Media Responsibility at the University of Michigan. And for the last couple of years now, he's been working on tackling the problem of misinformation and disinformation and other related threats to our democracy and the stability of our civilization. So that's what we're going to be talking about today. Aviv, welcome to Rationally Speaking. Thanks for having me on.

[00:01:27]

So there is this very widely shared article about you in BuzzFeed earlier this year, and it described how you had sort of tried to sound the alarm about what they referred to as fake news to Silicon Valley pretty early on, like before the 2016 election. And your warnings were, as they said, not taken all that seriously. Can you describe like in what way did you sound the alarm and how did people react?

[00:01:51]

I talked publicly and privately to people at the different platforms about how the information ecosystem was being weaponized, and in particular how the incentives resulting from the way the platforms were working at the time were leading to a rapid increase in the amount of misinformation being shared online.

[00:02:10]

And by the incentives, you mean misinformation was likely to be shared because it was, you know, sensational, or people got really worked up about it or something.

[00:02:21]

And so that since that made it more likely to be shared, that created an incentive to produce misinformation.

[00:02:27]

Yes. So that's one huge component of it — just the fact that if something is sensational, you're more likely to share it. Right. But there's also the recommendation engines themselves, which reward that sort of behavior.

[00:02:40]

And in what sense were they not all that compelled by the warning?

[00:02:45]

Well, there's always this like we aren't responsible for our impacts. Or if we do anything, it'll look like we're biased or this is really hard, like all of the sort of standard answers or like this is just not what we do.

[00:03:00]

Right. So it was less about them denying that this pattern existed, and more about, like, this is not our department, or not tractable for us to tackle.

[00:03:11]

Well, there also wasn't awareness, again, of the sort of scale. And so, you know, anyone can say, oh, this isn't something that we need to focus on, this isn't our problem, when something is small. But when something is, like, ginormous — go for it.

[00:03:25]

Then that is or should be a word.

[00:03:27]

Yeah, yeah. That's sort of a different story. Yeah. So this term "fake news" that the BuzzFeed article about you used, which is in common parlance now in the U.S. — it seems to me that it encompasses a bunch of different things. So to talk separately about the different things under the fake news term, it seems to me there are three categories. Category one is, like, false articles — articles that are saying things that are clearly, unambiguously false. Like, I don't know, there was an article going around in, I guess, 2016 that the pope had endorsed Donald Trump.

[00:04:03]

That was definitely false. You know, verifiably false. Yes. And on the order of a million shares. Oh, wow. Or engagements, at least. Yeah. And I would also, in this false news category, count things that are technically labeled as satire but are clearly intended to fool people, especially when shared out of context without, you know, the satire tag clearly visible.

[00:04:24]

That's Category one. And then Category two would be, like, biased or misleading news, which is arguably a majority of news. I don't know — Fox News disproportionately covering, or more harshly covering, crimes by immigrants; or liberal media talking about something really bad that Donald Trump did and making it sound like it's a new thing unique to Trump, when, you know, maybe other presidents have all done it too. And then Category three would be what I would call artificial news.

[00:04:52]

So, like, deepfakes — videos that make it look like Obama is announcing that he's going to, like, confiscate everyone's guns or something. I made that example up. And this, I guess, is not super common yet, but it's on the horizon. So I'm curious if you are concerned about one or two of those categories specifically, or all of them — and feel free to dispute my categorization.

[00:05:16]

Yeah, I think that's a decent way of breaking down the problem area. I mean, there's also, like, specifically this weaponized disinformation used by foreign actors, which you can throw into that mix, in addition to the stuff that's weaponized internally. Right. It's like another axis. Right. And then there are just sort of the rumors that can spread. So, for example, we're seeing people being killed in villages in India because of rumors that, oh, you know, there are all these child kidnappers around.

[00:05:46]

And so some people come in, you know, give food to children, and then they're just mobbed by thousands of people and killed. And so in different environments, there are different ways in which misinformation and disinformation can manifest. Can you give an example of what your concerns are for each category, or pick the category you're most concerned about?

[00:06:13]

I think they all blend together in many ways — they all impact our ability to make an accurate model of the world, both as individuals and as a society, or as governance, as decision makers more broadly.

[00:06:28]

And so whether it's, you know, health misinformation, meaning that you don't vaccinate your kids or that you take some weird supplements that really are not good for you, or whether it's political misinformation that leads you to elect leaders who are just not going to be competent and are going to get you into major wars — there are these very, very broad ways in which misinformation can negatively impact people on a personal and societal level.

[00:06:54]

Does one of the categories false news, biased news or artificial news seem more tractable or at all tractable?

[00:07:02]

I mean, false news is the most tractable in some ways, because you can just say, oh, this is false, here is why, here's some evidence for it. I mean, if you come from the, like, "there is no truth" perspective, then it becomes a little bit harder. But that isn't really a practical way to approach life or civilization building. And so most decision makers don't do that. Neither do most humans in terms of their day-to-day life.

[00:07:25]

So false news is, in a sense, the most tractable.

[00:07:28]

So there's identifying it, and then there's, like, spreading the word that it's false in a way that is compelling to people. That's right.

[00:07:36]

And this all really depends on sort of the communication infrastructure that we have.

[00:07:40]

So if you're working in an environment like Facebook, which is centrally controlled and which has all the data on everything you do all the time, that's, let's say, an easier case.

[00:07:51]

It's much easier to handle, let's say, false information — especially if it's something that is actually so dangerous that it threatens societal stability or could lead to, like, a World War Three situation — versus if you have some sort of super decentralized network of information sharing where everything is encrypted. In that environment, well, actually, it may be very, very difficult to address any sort of false news, even if it is societally or existentially destabilizing.

[00:08:17]

So I definitely, at some point in this conversation, want to touch on the question of how easy it is, in fact — how tractable is it to decide what actually is false news, or to separate, like, normal biased news from unfairly biased news, that kind of thing. But it sounded like you were about to talk about synthetic, or what I was calling artificial, news. I'm curious first to hear how tractable that seems to you. So the synthetic media —

[00:08:44]

So just to be clear about what that means: AI advances allow us to get closer and closer to mimicking reality without actually having real footage of something. So I can, for example, make anyone appear to be saying anything, or make it appear as if any particular person is taking any action — so you can film your friend doing something and then replace their face with, say, Obama's. And we don't have the technology to do all of this super well yet, but we're very, very rapidly developing that capability.

[00:09:14]

And there are particular ways in which we can already do it. So decreasing the barrier to entry to doing that sort of video manipulation — without requiring a production budget of a million dollars to pull it off — enables a broader set of actors to weaponize this, and also to just emergently fool others at scale, in a way that's much more compelling than written media. But it also enables people to say, oh, that thing that actually is real is not real.

[00:09:42]

And in the end, I think that's in many ways the deeper threat — that you lose your ability to hold people accountable in ways that are potentially necessary for certain levels of societal function. Like Trump saying that the — what was the name of that video where he talks about grabbing women? Access Hollywood. Access Hollywood, yeah — like saying that the voice was manipulated, or something like that. That was not true in this case, but it's a plausible claim in this age.

[00:10:10]

Yeah, and thankfully the technology is not currently there to be able to do that sort of synthetic creation, at least, you know, outside of the lab. But being able to use such evidence is crucial, both in the court of public opinion and in the courts more broadly.

[00:10:28]

So I was kind of assuming, hoping, all this time that verification technology would just keep pace with falsification technology, and that there could just be, like, a little browser plugin, or a feature baked into Facebook or Twitter or whatever, that would put a little, you know, red "false" tag or "synthetic" tag in the corner of videos that are detectable as deepfakes. Are you not as optimistic as I am about that?

[00:10:58]

Well, that requires a lot of things falling into place.

[00:11:02]

First of all, this is a cat and mouse game, right?

[00:11:04]

So you're going to have your fakers, who — if they're able to also use your detectors, or if there's some sort of real-time detection — can train on your detector.

[00:11:14]

So based off your detector, they can just be like, oh, let's train our system so that it specifically fools that detector. And so, I mean, there is some disagreement within the research community around whether or not this is tractable in general — whether the attackers always win or the defenders always win. Yeah, my stance is that the attackers will win a significant proportion of the time in sort of the steady state.

[00:11:42]

Yeah, they'll always be able to be a little bit ahead of the game. And so that doesn't lead me to be super optimistic. But it does mean that there's other infrastructure we should be building in order to help defend our information commons.

[00:11:58]

Do you have any ideas about what that infrastructure could look like? So part of how I think about this is as a sort of knowledge pipeline, where you've got your media — there's the creation of that media, there's the distribution of that media.

[00:12:16]

There's the way it affects policy formation, and then the way that it affects action. At each of those stages, you need to make that component more resilient to this sort of attack. So at creation time, you can have strong incentives to create timestamps, and even place stamps, that are auditable.

[00:12:35]

And, you know, you might be able to throw in a little bit of blockchain for that. But it really is not a be-all and end-all solution for this.

[00:12:40]

And I was going to ask about blockchain, because I know I'm supposed to.

[00:12:44]

Yeah, I know that there are little sprinklings of it that you can throw in here and there that address very minor aspects. Like, you know, if you for some reason don't trust, you know, three different independent bodies that you could all be sending timestamps to, you could put it on the blockchain, and that is even more verifiable. So, yeah, for timestamping, and maybe a few other minor areas, there are places where it could fit in. But —

[00:13:07]

But the core here is around how you even do that sort of time and place verification in the first place. Like, how do you deal with the analog hole, right? The analog hole is where you're just taking a video of a screen, in a sense, so you can say something happened at a time and a place.

[00:13:27]

But actually it's a video. You are taking a video of something that is on the screen anyway.

[00:13:31]

So the thing didn't actually happen at that time in place.

[00:13:34]

And you're saying this makes it harder to know that, oh, this thing that you said happened at a time and place actually did, because someone could have just pre-recorded it from some other time and place.

[00:13:45]

And there are still ways you can sort of work around that. But let's just say that this is part of it — that creation step is important.

[00:13:53]

But there's also the distribution step, where if you have, you know, significant disincentives to create and spread this information — to create and spread, let's say, weaponized synthetic media — that can again increase the barrier to, or decrease the likelihood that, someone is going to be trying to promulgate that content. And I think at a high level you can think about this as: you want to increase the cost of making fakes, and you want to decrease the cost of proving that something that's actually real is real.

[00:14:23]

And that's sort of the overall framework. There are other things that might be valuable but are definitely more controversial. So, for example, if it's so easy that you can just take an app on your phone and record your voice as anyone else's voice, and then use that to, for example, call people and ask them for money as their kids — which has already happened, it just doesn't happen as effectively — then there's some incentive to say, well, maybe app stores shouldn't let you install an app that lets you mimic anyone's voice without that person's permission.

[00:14:55]

So only apps that have gone through some sort of quality check of, like, hey, this cannot be easily weaponized in terms of the way it can create synthetic media.

[00:15:05]

And so, again, this is based off the fact that there is this sort of external control over what can show up in app stores. And so it's almost the authoritarianism of the app store that's the thing you're using to prevent this sort of mal-democratization.

[00:15:18]

Right. This democratization of, oh, you can do all this amazing creative work — but maybe the net impact of that is actually negative, maybe it's very, very significantly negative. And so you can think about this not just as democratization, anyone can do anything, but mal-democratization: anyone can do anything in a way that's significantly harmful. And then again, there are thresholds for that, and, you know, there's definitely argument to be had about how bad that is.

[00:15:45]

And I would argue that there's a likelihood that it's pretty bad.

[00:15:49]

So I know I've read that fact checking in general isn't all that effective. Correct me if I'm wrong, but I think that Facebook earlier this year, or last year, tried to label links that people shared as being false or potentially false, and researchers found the labels didn't have that much of an effect on whether people, I guess, shared the story. I don't know if they checked whether people believed the stories. But the takeaway in my mind, from what I read, was that it didn't have that much of an effect.

[00:16:17]

But do you think that we have reason to be more optimistic about fact checking of just, like, is this video synthetic or not? That seems like the kind of thing people would be more receptive to, that they would consider more trustworthy, than fact checking along the lines of:

[00:16:32]

Are the claims in this article roughly accurate, according to me, the fact checker?

[00:16:36]

Yeah, I'm hopeful that there is a little bit of a difference there that makes it a little bit more receptive, at least in some domains.

[00:16:42]

Yeah, because as you say, it is slightly different to say, this was literally not a thing that happened. Right.

[00:16:48]

Whereas other people would be saying, oh, this is literally a thing that happened.

[00:16:52]

And you can sort of say, well, you can prove that that person wasn't at that place — OK, then this is not legit. That said, motivated reasoning still does kick in, and just not trusting the people who are doing the verification does kick in. Yeah, and so it's still not a sort of be-all and end-all. OK, so turning now to the other two categories of false news and biased news: you and I have had some conversations before about this, where I was a lot more pessimistic than you about how tractable it is to label and disincentivize false and biased news.

[00:17:31]

I previously called it an AI-complete-plus problem. For our listeners: if you call something AI-complete, it means that in order to solve that problem, you essentially have to create, like, human-level general intelligence. So people will talk about, say, machine translation being able to do an effective translation of a work of fiction, for example, or any kind of nuanced work — preserving the connotations and the mood and all the background context. That would be an AI-complete problem.

[00:18:00]

And so I saw this issue of, like, identifying and disincentivizing biased and false news as being AI-complete-plus, in that having a human-level intelligence would be necessary, but not sufficient, to solve it.

[00:18:17]

And the reason that I thought that is, you know, humans can't even agree on what's false or biased, except for extreme examples like "the pope did not endorse Donald Trump." Most of us can agree on that, although, as you say, there will always be, like, a small cadre of people who just won't buy anything you say if you're a liberal or conservative or whatever.

[00:18:36]

But except for those extreme examples, it just seemed to me that, like, no one can agree. Like, it's extremely rare for someone who's liberal to say, yes, that article is biased in a liberal direction, or for someone who's conservative to admit, yes, that pro-conservative article is biased. And so the question of tagging things as being extremely biased, or biased enough that they could be considered false — again, except for extreme examples — would just collapse into people agreeing with liberal articles if they're liberal and conservative articles if they're conservative.

[00:19:09]

So that's, like, the summary of my disagreement. I was just thinking about it more recently, since our last conversation. And I think one crux for me — and I'm curious about your take on this — is that it seems to me that those extreme cases, the "literally false, this did not happen" cases, are not all that impactful in the grand scheme of things. Like, yes, people do share articles about, like, the pope and Donald Trump and so on, but they don't have that much impact on our political beliefs, or on, I don't know, how we vote or things like that. Whereas the biased news really does — like, you know, Fox News giving people an overinflated sense of how bad immigrants are or whether their guns are going to be confiscated.

[00:19:52]

And that stuff is the stuff that's much harder to combat. So, yeah — the impactful stuff is hard to combat, and the non-impactful stuff is slightly less hard to combat. That's my picture. Does that seem wrong to you?

[00:20:03]

I think that is, at a high level, pretty accurate. I think there are sort of two things I'd throw out there. One of them is that just because something is hard, that doesn't mean we shouldn't attempt to do it — especially because the cost of not doing it is so high. And the other is that even taking on those extreme cases actually has an impact on the biased ones.

[00:20:27]

So this isn't a reason to not take on the biased ones too. But if you think about the incentives here, there's a competition for attention. And people say, oh, Facebook and, you know, YouTube or whatever are competing for attention — that's sort of true. But if you talk about who's really doing that, it's the publishers. And they're doing that within the framework of these platforms.

[00:20:50]

And you can actually see this explicitly. You have the person who's just sort of taking over HBO —

[00:20:55]

"We want to get as much time, as much engagement as possible" — very explicitly talking about this in an off-the-record session that was made public.

[00:21:02]

"We want to get those dopamine hits" — like, they're explicitly trying to pull people off other platforms. In order to do that, you need to make it more emotionally salient. And the ways in which online misinformation has sort of made that a race to the bottom — given it a new bottom, and a new bottom below that bottom — have, I think, affected the entire information ecosystem. Can we talk about some ways that people have started trying to tackle this problem? Like I mentioned earlier, Facebook trying to flag links as suspicious or potentially false?

[00:21:39]

I think it was earlier this year that Facebook announced they were changing their news feed algorithm to prioritize, like, personal posts more than political or sort of professionally produced content. Do you know of other approaches? And I'm also curious if you know whether Facebook's news feed change had any impact on misinformation. Yes. So just to clarify what's happened with Facebook: there are many, many changes — more than we could go into without hours of conversation. The high level is that explicit labels weren't super effective.

[00:22:14]

OK. But they found that just showing related content that provided additional context, and in some cases debunks, worked better. In addition, they still warn people if you're going to share something and it was previously debunked. And if you see something in your feed that was previously debunked, it'll have additional context about why that might have been debunked. And so that's still happening — it's just not an "oh, this is likely to be false" label front and center.

[00:22:45]

And is that more effective than the original approach? I believe that's what they found, and what third-party researchers found as well. Was it, like, how likely people were to share it, or to click it, or —

[00:22:55]

I don't remember the specifics of that. And some of that is not public — and even if it weren't, I don't want to misstate it. OK, that's fair. So going back to what I was saying might be a crux of disagreement between us, about the tractability of this problem. One part of what I said was that it seemed like the impact of literally false news wasn't that great. I'm sure you've read more about this than I have — what do we know so far about measuring the actual impact of false news?

[00:23:26]

So it really depends on where. If you're talking about a place like India, you can point to a number of people who are no longer alive, right, because real-life mobs spurred by misinformation killed them. And that's just lynchings — that's not even counting the broader polarization, which doesn't have the same level of drastic impact. If we're talking about Ukraine, you can talk about the constant information war that's being waged, which impacts the stability of the region and the likelihood of physical war. An information war between Ukraine and Russia?

[00:23:57]

Ukraine is sort of where a lot of Russian tactics were originally tested out before they were applied to America, and they're continuously being utilized, because Russia wants more and more influence in the region — they're basically trying to take it over in various ways. And what kind of misinformation are people sharing?

[00:24:12]

Everything. I mean, I don't even know how to scope it. If you think about what happened in 2016 with the U.S., that's like a tiny, tiny, tiny fraction of what Russia's interest in Ukraine is, and of the type of information war that's been waged there for a number of years.

[00:24:28]

If we're talking about someplace like Myanmar, misinformation helped tip the scales of public opinion enough to support what's essentially ethnic cleansing. Right. And it's hard to measure that type of conflict, too.

[00:24:40]

But I guess we can tell very plausible stories. But how do we actually measure how much it was the misinformation itself?

[00:24:48]

Right. And that is very, very difficult. Usually this sort of work will happen many years after the fact, with, like, a bunch of, you know, historians — and maybe in this case data scientists — working together to try to piece it together. Again, it's still going to be a story, because causality is not easy to determine here. So I can tell that story, and I can point to lots of scenarios around the world where there is a compelling story, and where the people who are living that story experience it, and where there is evidence to support different correlations.

[00:25:24]

But you cannot really prove causality in the same way that you can say, oh, this particular thing started World War One. Yeah. And then, I mean, coming back to the U.S. — we can definitely see that there's a significant amount of misinformation being shared. But again, causality is hard to determine, and without some sort of rating system to actually decide what is misinformation and what isn't, you can't really tell even the correlation very effectively, because you don't know the amount that's being shared.

[00:25:50]

You don't know who that's influencing if you can't say what it is.

[00:25:54]

But I mean, researchers have done this — they've picked examples of articles that are unambiguously false, and then they've, you know, surveyed people: how many people have seen it, how many people believed it. I think there was a study that looked at likelihood to vote, or some sort of more impactful outcome metric.

[00:26:12]

Yeah. So there are studies that attempt to separate these different components. One of the challenges with them is that they run up against, again, the credibility rating issue of, like, well, we can look at things that are just explicitly false, but those are only a very small part of the problem. Right.

[00:26:30]

And no one's really done a great analysis of that broad space of questionable sources, or things that have more bias. And so it's very, very hard to tell the influence of that — you can't even describe what it is or measure it. And so this is sort of a prerequisite for that sort of research.

[00:26:45]

Are there other approaches? It seems to me — I'll say that there are other approaches to combating the effect of misleading, false, and biased news altogether that don't rely on the evaluative step, on the step of assigning a score or identifying which stories are false. For example, approaches that make people in general less receptive to misleading or biased news, like approaches to reducing political partisanship. Not that any of them are easy — I'm just saying they're all hard, but this one hard problem might not be our only avenue to approaching the problem.

[00:27:21]

Yeah, and you can make some progress on that. You can say, like, in this constrained environment, with this set of known stories that are false and true or whatever, and with this sort of priming, people were less likely to believe that thing. And so those are valuable. Right. But if you actually want to measure the true impact of something like that, you're going to have to see how people interact in the real world. And then again, you're back to that same problem.

[00:27:46]

I wouldn't say the credibility scores or ratings are a solution. They're just core infrastructure, because they allow you to do that sort of measurement and study. If you're going to try, let's say, changes to the design of your platform that might decrease the amount that sensationalism is actually driving sharing, right — how do you know if it's succeeding, if you can't tell at scale what stories are being shared or not? Or do you want to look at the effect of a media literacy campaign?

[00:28:12]

How do you tell? If you want to be able to immunize people, let's say, by showing them how they have been fooled, then you still need to know how they've been fooled — you need to have this list of sort of low-scoring things that they've already looked at.

[00:28:26]

In order to point them to that.

[00:28:28]

So you're saying the evaluative step that I was trying to route around in my proposals is core infrastructure — it's core infrastructure for testing whether or not your route-arounds are effective.

[00:28:40]

Yes. No matter what those route-arounds are. Right. I mean, you can avoid doing it, but you're just not going to have anything as meaningful to talk about. You won't really know the impact of that.

[00:28:48]

I mean, you could look at the impact on, like, measures of political partisanship, or — yeah, in different ways. Like, how negatively do people from different political parties feel about each other, general political literacy, I guess, probably some measures of motivated reasoning about politics.

[00:29:04]

Yeah, no, I think that there definitely are other things that are useful to measure, and I don't want to say that those are irrelevant. But all of those are indirect, and there are many, many different factors that fall into them. If you want to be able to say, OK, this actually impacts the degree to which people share this type of information, then you need to actually know what this type of information is.

[00:29:25]

What do you think about the — I'll call it the "hide the vegetables" approach? So this is sort of what Upworthy was trying to do. They were like, well, you know, all of this negative, polarizing news has an edge in being shared because it's so outraging and sensational.

[00:29:42]

And so what if we, you know, create good content that's positive and about important things, but we just make it sensational enough that people will share it anyway? And so — Upworthy hasn't been doing great recently, in part thanks to Facebook's algorithm changes. But is there, like, a version of that that you think could be workable?

[00:30:06]

I think it is valuable, but — I mean, there are pros and cons. There's this question of: do you want to have those who are doing good work always having to compete with the stuff at the bottom, or do you just want to make that bottom have less play in general, or in a sense disincentivize things from being at that bottom layer?

[00:30:32]

Yeah, I guess this just comes down to how tractable each of us thinks the different hard parts are. Right? So if you think it's, like, really hard to effectively disincentivize misleading news, then Upworthy-style approaches might seem, relatively speaking, more promising.

[00:30:47]

I mean, with the caveat that it's so much cheaper, so much easier right now to create sort of a fake Upworthy than a real Upworthy. Yeah, right. And so if you think about the resource allocation and how this all would play out, the real Upworthy is going to lose dramatically to the fake one unless you can change those incentives. What do you think the stakes are here? Like, on one perspective —

[00:31:14]

The stakes are, well, if we can't solve this problem, the world gets somewhat worse — like, there are more instances, like the ones you talked about in India, of people getting lynched for things they didn't do.

[00:31:27]

Politics in the US gets, you know, more divisive and intractable and.

[00:31:37]

The parties are just extremely polarized and we don't get very much done, and that's bad, but it's not like the end of the world. Is that — you're smiling.

[00:31:49]

Is that your sense of the stakes or.

[00:31:51]

No, I think that it is closer to — I don't know about the end of the world explicitly, but dramatically increasing the probability that we have the end of the world. Oh, in what sense? So I know many of your listeners may care about sort of x-risks or catastrophic risks. And x-risks are existential risks — like, risks to the survival of humanity.

[00:32:15]

Yeah. And I would probably put this more in sort of the catastrophic risk camp, or at least indirect catastrophic or existential risk. And not only that, I'd put it in sort of the urgent category of that, where you have a very, very limited window. A way to think about this is that new communications media often destabilize societies more broadly — so think about the printing press and the ways in which Europe was basically engulfed in war for, like, hundreds of years.

[00:32:45]

You can think about radio, and the connection between that and things like World War Two, and the ways in which radio was able to create this sort of new type of nationalism to some extent. And there are lots of caveats and nuance to this that, you know, I can't express in two minutes here. But new communication media affect how people organize, talk to people, make sense of the world, and they affect the resulting societal stability. And we're also living in a world now where, let's say, stability isn't quite where it was, where individuals can have far more influence on the overall stability of the world, and where you have a whole bunch of really tricky challenges up ahead within the next five to twenty years that could easily derail even a very, very well-functioning civilization.

[00:33:35]

So you're in this environment, and now you're making everyone dumber — you're making them less capable of handling it, both at an individual level and at a societal level. You can think about this as, sort of, you've got civilization driving its car down the road, and now it's starting to take LSD, and it's seeing hallucinations all over the place, and it's still trying to drive. There's going to be some level, some amount of LSD or some amount of hallucination, at which you can still sort of drive without crashing.

[00:34:07]

But there's going to be some level where you can't. And we're just increasing that as we speed the car up, as the road gets windier, as more obstacles show up on that road. And so that's sort of the broad framework for why our ability to make sense of the world is so crucial. That's one very vivid metaphor.

[00:34:28]

I can feel my heart rate going up. Well, one additional level on top of that, really relating to that, is this: international cooperation is going to be really crucial to handle some of the other sort of catastrophic risks that we'd be expecting over the next five to fifteen years.

[00:34:43]

And how does international cooperation work in a world of increasing misinformation, sensationalism, tribalism — where those who provide misinformation are more likely to succeed, where we are more likely to be attacking things that aren't there than attacking things that are and actually handling real challenges? What's the chance that we can actually get a lot of the most powerful countries in the world to agree to handle a real problem if they're all just saying everyone else is bad? And that's one of the things that has resulted from the rise of misinformation more broadly.

[00:35:15]

Is there anything that you would recommend our listeners do if they want to get involved, or find out more about the problem and how you're thinking about tackling it?

[00:35:24]

Yeah, I definitely hope that people take this problem seriously. And if you're at one of these platforms, think about how you might build your system to mitigate some of these challenges. If you're interested in contributing to efforts to address misinformation writ large, both in its current form and in its future synthetic media form, you know, definitely reach out. There are many ways to contribute to addressing these challenges. Yeah, feel free to contact me.

[00:35:56]

And there's a number of other organizations that are sort of forming in this space to take these challenges on.

[00:36:01]

Do you want to give an email address or a website?

[00:36:03]

Yeah, you can reach me at aviv.me, or on Twitter at @metaviv — M-E-T-A-V-I-V. Right.

[00:36:11]

And Aviv, before I let you go — as you know, I like to ask all of my guests to nominate a source, like a book or a blog or another person, who they have substantial disagreements with but nevertheless think it's valuable to read or engage with. What would your pick be? So I'm going to say that Tim Wu and danah boyd are both amazing people, both of whom I also have disagreements with on specific aspects of the way they think about this problem.

[00:36:40]

So Tim Wu believes that breaking up the tech companies is sort of a solution in many ways, while also being sort of the expert on the ways in which tech platforms work. danah boyd runs Data & Society, which is an incredible organization focused on understanding how data interacts with our society, as one might imagine. And her work on sort of the failures of media literacy is crucial to understanding what can and can't work in this space. Where we might disagree is on how necessary it is to actually have some of this core infrastructure around rating or labeling the quality of news sources — because I believe that it's sort of

[00:37:26]

necessary to do many of the other types of work that are crucially important. And, understandably, this is very difficult, and I'm not sure that she's as convinced that it is tractable in a meaningful way as I am.

[00:37:42]

Yes. And unsurprisingly, when I read her writing on the subject, it resonated with me — but that lines up with your description of it. Great. Well, we'll link — is there a particular piece or book or anything by either person that you think we should link to? I think Tim Wu's "Is the First Amendment Obsolete?" is definitely a provocative and interesting read, as is danah boyd's writing on media literacy. Right.

[00:38:13]

OK, great. Well, we'll link to both of those on the podcast website, as well as to your site. And Aviv —

[00:38:20]

Thank you so much for coming on the show. It's been a pleasure having you. Thank you. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.