[00:00:00]

The question as to whether Donald Trump's speech on the Ellipse that day met the standard for incitement under the First Amendment is the hottest debate topic.

[00:00:09]

Where are people coming down on it?

[00:00:11]

All sides. I mean, both sides. Interesting. Yeah. There's not a unity of opinion within the First Amendment community on that one. You have really smart people.

[00:00:18]

Is that true at FIRE as well?

[00:00:20]

Yes.

[00:00:21]

Free speech.

[00:00:23]

Fundamental rights. Freedom of conscience.

[00:00:25]

Academic freedom.

[00:00:27]

Freedom of press. And the right to listen.

[00:00:30]

You're listening to So to Speak, the Free Speech Podcast, brought to you by FIRE, the Foundation for Individual Rights and Expression. Welcome back to So to Speak, the Free Speech Podcast, where every other week we take an uncensored look at the world of free expression through personal stories and candid conversations. I am, as always, your host, Nico Perrino. Now, over the summer, I received a pitch from regular So to Speak guest Jonathan Rauch. He wanted to debate the idea that social media companies have a positive obligation to moderate the speech they host, including based on content and sometimes viewpoint. Jonathan recognized that his view on the issue may be at odds with some of his friends and allies within the free speech space, so why not have it out on So to Speak? It was a timely pitch, after all. For one, the Supreme Court was considering multiple high-profile cases involving social media content moderation: two cases dealing with states' efforts to limit social media companies' ability to moderate content, and another case dealing with allegations that the government pressured, or jawboned, social media companies into censoring constitutionally protected speech. And of course, there was and still is an active debate surrounding Elon Musk's purchase of Twitter, now called X, and his professed support for free speech on the platform.

[00:01:49]

Finally, a new book was published in June by Renee DiResta that takes a look at social media movements and their ability to destabilize institutions, manipulate events, and, in her telling, destroy reality. That book is called Invisible Rulers: The People Who Turn Lies into Reality. Renee may be a familiar name to some in the free speech world. Her work examines rumors and propaganda in the digital age. And most famously, or perhaps infamously, depending on how you look at it, Renee was the technical research manager at the Stanford Internet Observatory, which captured headlines for its work tracking and reporting on election- and COVID-related internet trends. As part of that effort, the observatory submitted tickets to social media companies, flagging content the observatory thought might violate their content moderation policies. For that work, particularly the ticketing work, Renee is seen by some as a booster of censorship online, a description I suspect she rejects. Fortunately, Renee is here with us today to tell us how she sees it, joined, of course, by Jonathan Rauch, who is a writer, author, and senior fellow at the Brookings Institution. He may be best known to our audiences as the author of the 1995 classic Kindly Inquisitors: The New Attacks on Free Thought and the 2021 The Constitution of Knowledge: A Defense of Truth.

[00:03:06]

Renee and Jonathan, welcome on to the show. Thanks for having me. Happy to be here. Jonathan, as the precipitator of this conversation, let's start with you. What is the general framework for how you think about social media content moderation?

[00:03:20]

Well, let's begin, if it's okay, with what I and, I think, many of your listeners and folks at FIRE agree on, which is that it is generally not a good idea for social media companies to pull a lot of stuff offline because they disagree with it or because they don't like it. That usually only heightens the visibility of the material that you are taking down, and it should be treated as a last resort. So one reason I wanted to do this is I noticed a steady disagreement I was having with friends in the free speech community, including a lot of FIRE folks, who apply a First Amendment model to social media platforms, X and Facebook and Instagram and the rest. And they use the term censorship as a way to describe what happens when a company moderates content. The lens they apply is: unless this is an absolute violation of the law, it should stay up, because these companies shouldn't be in the business of quote, unquote, censorship. Well, there are some problems with that framework. One of them is that social media companies are a hybrid of four different kinds of institutions. One of them is, yes, they are platforms, which is what we call them.

[00:04:49]

They are places for people to express themselves. And in that capacity, sure, they should let a thousand flowers bloom. But they are three other things at the same time. The first is they're corporations, so they have to make a profit, which means that they need to attract advertisers, which means that they need to have environments that advertisers want to be in and that therefore users want to be in. Second, they are communities, meaning that they are places where people have to want to come and feel safe and like the product. And third, they are publishers. Publishers aggregate content, but then curate it in order to assemble audiences to sell to advertisers. Now, in those three capacities that are not platforms, they're not only allowed to moderate content and pick and choose and decide what's going to be up there and what's not and what the rules and standards are going to be. They must do that. They are positively obligated to do that. And if they don't, they will fail. So that means this is a wicked hard problem, because on the one hand, yes, free speech values are important. We don't disagree about that.

[00:06:02]

But on the other hand, just saying content moderation, bad, boo, hiss, that will never fly. So we are in a conversation where what we need to be talking about is getting content moderation right. Doing it better, not doing away with it.

[00:06:17]

Renee, I'd love to get your thoughts on content moderation, generally. Presumably, you think there's some ethical imperative, like Jonathan, to moderate content. That was part of your work with the Stanford Internet Observatory, right?

[00:06:31]

Yes, though not in the way that some of you all have framed it. So let's just dive right in with that. So I agree with John and what he said. Content moderation is an umbrella term for an absolutely vast array of topical areas that platforms choose to engage around. Some of them are explicitly illegal, right? CSAM, child sexual abuse material, terrorist content. There are both rigorous laws and platform determinations where they do take that content down. Then there are the things that are what Daphne Keller refers to as lawful but awful, things like brigading and harassment, pro-anorexia content, cancer quackery, things that are perceived to have some harm on the public. That question of what is harmful, I think, is where the actual focus should be: what defines a harm, and how should we think about that. There are other areas in content moderation that do refer to particular policy areas that the platforms choose to engage in, oftentimes in bounded ways. I think it's also important to emphasize that these are global platforms serving a global audience base. And while a lot of the focus on content moderation here in the US is viewed through the lens of the culture wars, the rules that they put in place must be applicable to a very, very broad audience.

[00:07:48]

So for example, the set of policies that they create around elections apply globally. The set of policies that they created around COVID applied globally. I don't think that they're always good. I think that there are areas where the harm is too indeterminate, too fuzzy. Like the lab leak hypothesis: the decision to impose a content block on that during COVID, I think, was a very bad call, because there was no demonstrable harm, in my opinion, that was an outgrowth of that particular moderation area. When they moderate, as John alludes to, they have three mechanisms they can use for enforcement. There is remove, which he alludes to as the takedowns. I agree; for years I've been writing that takedowns just create a backfire effect. They create knowledge. They're largely counterproductive. But then there are two others, which are reduce, where the platform temporarily throttles something or permanently throttles it, depending on what it is. Spam is often rigorously throttled. And then inform is the last one. And inform is where they'll put up an interstitial or a fact check label or something that, in my opinion, adds additional context. That, in my opinion, again, is a form of counter speech.

[00:08:54]

These three things, content moderation at large, have all been reframed as censorship. That's where I think you're not having a nuanced conversation among people who actually understand either the mechanisms or the harms or the means of engagement and enforcement around it. You're having what we might call a rather propagandistic redefinition of the term, particularly from people who are using it as a political cudgel to activate their particular political fandom around this particular aggrievement narrative that they've spun around it.

[00:09:26]

Well, what I want to get at is the normative question. Why do you think content moderation is an imperative, Jonathan? Because you talk about how it's essential for creating a community, for maintaining advertisers, for other reasons. But you can build a social media company around a model that doesn't require advertisers to sustain itself. For example, a subscription model. You can build your community by professing free speech values. For example, Twitter, when it first got started, said it was the free speech wing of the free speech party. I remember Mark Zuckerberg gave a speech at Georgetown, I believe it was in 2018, talking about how Facebook is a free speech platform, making arguments for the imperative of free speech. So they can define their communities. It seems like they're almost trying to define them in multiple ways. And by trying to please everyone, they're not pleasing anyone. So for a platform like X now, where Elon Musk says that free speech must reign, and we've talked extensively on this podcast about how sometimes it doesn't reign, do you think it's okay to have a platform where pretty much any opinion is allowed to be expressed? Or do you see that as a problem more broadly for society?

[00:10:40]

Yes, I think it is good to have a variety of approaches to content management, and one of those certainly could be, I'm a gay Jew, so I don't enjoy saying this, but if one of them is going to be a Nazi, white supremacist, anti-Semitic content platform, it can do that. Normatively, I don't like it, if that's what you're asking. But on the other hand, also normatively, it is important to recognize that these are private enterprises. And the First Amendment and the ethos of the First Amendment protects the freedom to edit and exclude just as much as it does the freedom to speak and include. And that means that normatively, we have to respect both kinds of platforms. And the point that I think Renee and I are making is that the big commercial platforms, which are all about aggregating large numbers of users and moving large amounts of dollars and content, are going to have to make decisions about what is and is not allowed in their communities. The smaller platforms can do all kinds of things.

[00:11:50]

Renee, I mean, presumably you have a normative position on topics that the social media companies should moderate around. Otherwise, why would the Election Integrity Partnership or your Virality Project be moderating content at all or submitting tickets to these social media companies? Not moderating content yourself, of course, but submitting tickets to these social media companies, identifying posts that violate the companies' policies. Presumably, you support those policies. Otherwise, you wouldn't be submitting URLs to the companies, right?

[00:12:24]

So the Election Integrity Partnership, let me define the two projects as they actually are, not as they've been framed; unfortunately, your blog had some erroneous information about them as well. So the Election Integrity Partnership was started to look at election rumors related to the very narrowly scoped area of voting misinformation, meaning things that said, vote on Wednesday, not on Tuesday, text to vote, suppression efforts, that kind of thing. It did not look at what candidate A said about candidate B. It did not have any opinion on Hunter Biden's laptop. It did, in fact, absolutely nothing related to that story. The other big topic that the Election Integrity Partnership looked at was narratives that sought to delegitimize the election absent evidence, so preemptive delegitimization. That was the focus of the Election Integrity Partnership. The platforms had independently set policies, and we actually started the research project by making a series of tables. Again, these are public in our 200-and-something-page report that sat on the internet for two years before people got upset about it. We coded, basically, here's the policy, here are the platforms that have implemented this policy. You see a lot of similarities across them.

[00:13:34]

You see some differences, which, to echo John's point, I think that in an ideal world we have a proliferation of marketplaces, and people can go and engage where they choose, and platforms can set their terms of service according to, again, moderating explicitly illegal content, but they can set their own speech and tolerance determinations for other topic areas. Within this rubric of these are the policies and these are the platforms that have them, as we observed election narratives from about August until late November of 2020, every now and then students would... This was a student-led research project. It wasn't an AI censorship death star super weapon or any of the things you've heard. It was students who would sit there, and they would file tickets when they saw something that fell within our scope, meaning this is content that we see as interesting in the context of our academic research project on these two types of content, the procedural and the delegitimization content. When there were things that began to go viral that also violated a platform policy, there were occasionally determinations made to tag a platform into them.

[00:14:43]

So that was us as academics with free speech rights, engaging with another private enterprise with its own moderation regime, if you will. We had no power to determine what happened next. As you can read in our Election Integrity Partnership report, after the election was over, in February or so of the following year, we went and looked at the 4,000 or so URLs that we had sent in this escalation ticketing process. What wound up happening was 65% of the URLs, when we looked at them after the fact, had not been actioned at all. Nothing had happened. Of the 35% that had been actioned, approximately 13% came down, and the remainder stayed up and got a label. So overwhelmingly, the platforms did nothing, which is interesting because they have policies, but they don't seem to be enforcing them uniformly. Or we got it wrong; that's possible, too. But when they did enforce, they erred on the side of, again, what I consider to be counter speech: slapping a label, appending a fact check, doing something that indicates it's disputed. And oftentimes when you actually read the text, it says this claim is disputed. It's a very neutral, This claim is disputed.

[00:15:56]

Read more in our election center here. And that was the project, right? The ways that people have tried to make this controversial include alleging that it was government-funded. It was not. That I was a secret CIA agent.

[00:16:11]

But the government was involved in it in some respects, right?

[00:16:14]

The government was involved in it in the sense that the Global Engagement Center sent over a few tickets, under 20, related to things that it thought were foreign interference. Now, keep in mind, we also exercised discretion. So just because the Global Engagement Center sends a ticket doesn't mean that we're like, Oh, let's jump on this. Let's rush it over to a platform. No, that didn't happen at all. And what you see in the Jira, which we turned over to Jim Jordan and which was very helpfully leaked, so now anyone can go read it, is you'll see a ticket that comes in. And again, just because it comes in doesn't mean that we take it seriously, because we are doing our own independent analysis to determine whether we think, A, this is real and important, and B, this rises to the level of something a platform should know. So there are several degrees of analysis. And again, you can see the very, very detailed discussion in the Jira. Again, mostly by students, sometimes by postdocs, or I was an analyst, too, on the project. I was a second-tier analyst. Then a manager would have to make a determination about whether it rose to the level of something that a platform should see.

[00:17:12]

The other government-adjacent entity that we engaged with was a nonprofit, actually, that engaged with state and local election officials. When state and local election officials are seeing claims in their jurisdiction, and again, this is all 50 states represented in this consortium, the Election Integrity ISAC, when they see something in their jurisdiction, for example, a tweet saying, I'm working the polls in Pennsylvania and I'm shredding Trump ballots, which is an actual thing that happened, that's the kind of thing they get very concerned about. They get very upset about that. They can engage with CIS, who can file a ticket with us. Again, the ticketing was open to the RNC. It was open to the DNC. It was a very broad invitation to participate. What they could do when they sent something to us is, again, we would evaluate it and we would decide if we thought that further action was necessary. There is nothing in the ticketing in which a government agency sends something to us and says, You need to tell the platforms to take this down. So again, for a lot of what FIRE wrote about in the context of jawboning, we had no power to censor.

[00:18:16]

We're not a government agency. We weren't government-funded. I'm not a secret CIA agent. I don't think we said that. No, but the other people that, unfortunately, have been boosted as defenders of free speech very much did say that. And that's why, when I'm trying to explain what actually happened versus the right-wing media narrative and the Substackerati narrative of what happened, it's not borne out by the actual facts.

[00:18:43]

I think what people have a problem with is something different. Telling people to vote on a certain day and doing it with the intent to deceive is not First Amendment protected activity. The First Amendment makes some exceptions for this. And of course, I'm just talking about the First Amendment here broadly. I know these are private platforms. They can do what they please. CSAM, child sexual abuse material, is not protected under the First Amendment. I think what people have a problem with is the policing of opinion, even if it's wrong-headed, it's dumb, and it can lead to deleterious effects throughout society. It can destabilize things. So when you're talking about your work in the Election Integrity Partnership, and you start by saying people were deceiving about voting locations or the dates you vote, or saying text in your vote here, that makes sense. But submitting tickets about attempts to delegitimize the election before it happens, that's an expression of opinion. Now, we all in this room, I suspect, think that that's a bad idea and it's dumb, but it's still the expression of opinion. I think that's where folks get most frustrated.

[00:19:54]

Can I go back to you on this for a second? Yeah. In your amicus brief in NetChoice, you have a sentence describing the states' theory in NetChoice: First, it confuses private editorial decisions with censorship. So let's be totally clear. We had no power over Facebook. I have no coercive power over a tech platform. If anything, as you've seen in my writing over the years, we're constantly appealing to them to do basic things like share your data or be more transparent. So first, there is no coercive power. Second, the platform sets its moderation policies. The platform makes that decision, and you, in your... Not you personally, but you, FIRE, have acknowledged the private editorial decisions, the speech rights of the platforms, the right of the platforms to curate what shows up. So if the platform is saying, We consider election delegitimization, and again, this is not only in the United States, these policies are global, we consider election delegitimization to be a problem. We consider it to be a harm. We consider it to be something that we are going to action. And then we, as a private academic institution, say, Hey, maybe you want to take a look at this.

[00:21:05]

But you agree with them. Presumably, otherwise, you wouldn't be coordinating with them on it.

[00:21:09]

Well, it wasn't necessarily coordinating with them.

[00:21:12]

Well, I mean, okay, so you're an academic institution. You can either research something, right, and learn more about it and study the trends. But then you take the second step, where you're flagging- Where we exercise our free speech.

[00:21:22]

Yes.

[00:21:22]

And nobody's saying that you shouldn't be able to do that.

[00:21:25]

Many people have been saying that I shouldn't be able to do that. That's why I've been under Congressional subpoena and sued multiple times.

[00:21:30]

I'm not saying you shouldn't be able to do that. And in fact, we have reached out to certain researchers who are involved in the project who are having their records FOIA'd, for example. And we've always created a carve-out for public records.

[00:21:44]

Can I try a friendly amendment here to see if we can sort this out? You're both right. Yes, people get uncomfortable, especially if there's a government actor somewhere in the mix, even in an advisory or informal capacity, when posts have to do with opinion. You used the phrase policing opinion. I don't like that, because we're not generally talking about taking stuff down. We're talking about counter speech. Is that policing opinion? But on the other hand, the fact that something is an opinion also does not mean that it's going to be acceptable to a platform. There are a lot of places that are going to say, We don't want content like Hitler should have finished the job. That's an opinion. It's constitutionally protected. And there are a lot of reasons why Facebook and Instagram and others might want to take it down or disamplify it. And if it's against their terms of service, and we know it's against their terms of service, it is completely legitimate for any outside group to go to Facebook and say, this is against your terms of service. Why is it here? And hold them accountable to their terms of service. All of that is fine.

[00:22:55]

It's protected. If I'm at FIRE, I'm for it. If the government's doing it, it gets more difficult, but we can come back to that. So, question for you, Nico: how much better would you and your audience feel if everything that a group like Renee's, say an academic group of outsiders, does in calling things to platforms' attention were done all the time in full public view? There's no private communication with platforms at all. Everything is put on a public registry as well as conveyed to the platform, so everyone can see what it is that the outside person is calling to the platform's attention. Would that solve the problem by getting rid of the idea that there's subterfuge going on?

[00:23:37]

Well, I think so. I might take issue with the word problem, right? I don't know that academics should be required to do that, right? To the extent it's a voluntary arrangement between academic institutions and private companies. I think the confusion surrounding the Election Integrity Partnership- The problem is the confusion, not the practice. And the fact that there is a government somewhere in the mix. Now, of course- We'll set that aside, separate issue. Yes.

[00:24:03]

But just in terms of people doing what the Stanford Internet Observatory or other private actors do in bringing stuff to platforms' attention, would it help to make that more public and transparent?

[00:24:16]

I'm sure it would help, and I'm sure it would create a better sense of trust among the general public. But again, I don't know that it's required or that we think it's a good thing normatively. I think it probably is, but I can't say definitively, right?

[00:24:31]

On the government front, I think we're in total agreement, right? You all have a bill or bill template, which I agree with, just to be clear. I'm not sure where it is in the legislative process. And as I think I had a... Yeah, me too. It's the right framework. I think I argued with Greg about this on Threads or something. No, I think it's the right framework. Look, the platforms have proactively chosen to disclose government takedown requests. We've seen them from Google. You can go and look. There are a number of different areas where, when government is making the request, I think the transparency is warranted, and I have no problem with that, with it being codified in law in some way. The private actor thing, it's very interesting, because we thought that we were being fairly transparent, in that we had a Twitter account, we had a blog, we had... I mean, I was constantly communicating- Yeah, you had a 200-page report. A 200-page report. I mean, the only thing we didn't do is release the Jira, not because it's secret, but... now that Jim Jordan has helpfully released it for you, you can go try to read it, right?

[00:25:36]

And you're going to see just a lot of people debating, Hey, what do we think of this? What do we think of this?

[00:25:41]

The Jira is your internal log?

[00:25:42]

The Jira is just an internal ticketing system. It's a project management tool. And again, you can go read it. You can see the internal debates about, Is this in scope? Is this of the right threshold? I'll say one more thing about the Virality Project, which was a different type of project. The Virality Project sought to put out a weekly briefing, which, again, went on the website every single week in PDF form. Why did it happen? Because I knew that at some point we were going to get some records request. We are not subject to FOIA at Stanford, but I figured that, again, the recipients of the briefings that we were putting up included anyone who signed up for the mailing list, and government officials did sign up for the mailing list. So people at Health and Human Services or the CDC or the Office of the Surgeon General signed up to receive our briefings. We put them on the website. Again, anybody could go look at them. What you see in the briefings is we're describing the most viral narratives of the week. It is literally as basic as, here are the narratives that we considered in scope for our study of election rumors.

[00:26:48]

There they are. We saw the project as, how can we enable officials to actually understand what is happening on the internet? Because we are not equipped to be counter speakers. We are not physicians. We are not scientists. We are not public health officials. But the people who are don't necessarily have an understanding of what is actually viral, what is moving from community to community, where that response should come in. We worked with a group of physicians that called themselves This Is Our Shot. Just literally a bunch of frontline doctors who decided they wanted to counter-speak, and they wanted to know what they should counter-speak about. So again, in the interest of transparency, the same briefings that we sent to them sat on our website for two years before people got mad about them. And then this, again, was turned into some, Oh, the DHS established the Virality Project. Complete bullshit. Absolutely not true. The only way that DHS engaged with it, if at all, is if somebody signed up for the mailing list and read the briefings.

[00:27:49]

So there wasn't any Jira ticketing system associated?

[00:27:52]

The Jira ticketing system is so that we internally could keep track of what was going on.

[00:27:56]

But it wasn't sent on to the platforms, the tickets?

[00:27:59]

There were. For the Virality Project, I think there were 900-and-something tickets. I think about 100 were tagged for the platforms, if I recall correctly.

[00:28:06]

And what was it?

[00:28:06]

And those are also out there.

[00:28:08]

And what were those tickets associated with?

[00:28:09]

So one of the things that we did was we sent out an email to the platforms in advance, as the project began, and we said, these are the categories that we are going to be looking at for this project. And I'm trying to remember what they are off the top of my head. It was things like vaccine safety narratives, vaccine conspiracy theories, metal, it makes you magnetic, the mark of the Beast, these sorts of things. What were the other ones? Narratives around access, who gets it and when. Just, again, these big overarching long-term vaccine hesitancy narratives. We drew the narratives that we looked at from past academic work on what sorts of narratives enhance vaccine hesitancy. What we did after that was we reached out to the platforms. We said, These are the categories we're looking at. Which of these are you interested in receiving tickets on when something is going viral on your platform that, again, seems to violate your policies? Because you'll recall they all had an extremely large set of COVID policies. Lab leak was not in scope for us. It was not a vaccine hesitancy narrative. In that capacity, again, there were a hundred or so tickets that went to the platforms.

[00:29:20]

And again, they were all turned over to Jim Jordan, and you can go look at them all.

[00:29:24]

This conversation is coalescing around can versus should. I think we're all in agreement that the social media companies can police this content. The question is, should they? Should they have done what they did?

[00:29:35]

I prefer moderate to police. Okay.

[00:29:37]

But they are out there looking for people posting these narratives or violating their terms of service. We could debate semantics, whether police is the right word, but they're out there looking and they're moderating surrounding this content. Should they? Should they have done all the content moderation they did surrounding COVID, for example?

[00:29:56]

Well, as I've said, some of it, I think, like the lab leak, was rather pointless. Again, I didn't see the risk there, the harm, the impact that justified that particular moderation area. The Facebook Oversight Board, interestingly, wrote a very comprehensive review of Facebook's COVID policies. And one thing that I found very interesting reading it was that you really see gaps in opinion between people who are in the global majority or the global south and people who are in the United States. And that comes through. This, again, is where you saw a desire for, if anything, more moderation from some of the people who were representing the opinions of Africa or Latin America, saying, no, we needed more moderation, not less, versus here, where moderation had already, beginning in 2017, become an American culture war flashpoint. The very idea that moderation is illegitimate had been established in the United States. That's not how people see it in Europe. That's not how people see it in other parts of the world. So you do see that question of should they moderate, and how, in there. I want to address one other thing, though, because for me, I got into this in part looking at vaccine hesitancy narratives as a mom back in 2014.

[00:31:17]

My cards have always been on the table around my extremely pro-vaccine stance. And one of the things that I wrote about and talked about in the context of vaccines specifically, for many, many, many years, it's all in Wired, you can read it, was the idea that platforms have a right to moderate. In my opinion, there's a difference with what they were doing for a very, very, very long time, which was they would push the content to you. You as a new parent had never searched for vaccine content. They were like, You know what you might want to see? You might want to join this anti-vaccine group over here. So the ways in which platforms curate content have an impact further downstream. The rise in vaccine hesitancy online over a period of about a decade, actually six years, give or take, before COVID began is something that people, including platforms, were very, very concerned about because of its impact on public health, long before the COVID vaccines themselves became a culture war flashpoint. So do I think that they have an obligation to establish policies related to public health?

[00:32:29]

I think it's a reasonable, ethical thing for them to do, yes. And where I struggle with some of the conversation from your point of view, I think, or maybe what I intuit as your point of view based on FIRE's writings on this, is that you both acknowledge that platforms have their own free speech rights, and then I see a bit of a tension here with, well, they have their own free speech rights, but we don't want them to do anything with those speech rights. We don't want them to do anything with setting curation or labeling or counter speech policies. We just want them to do nothing, in fact, because then you have this secondary concern, or maybe dual concern, about the speech rights of the users on the platforms. And these two things are in tension for the reasons that John raised when we first started.

[00:33:09]

Well, do you worry that efforts to label, block, or ban content based on opinion, viewpoint, what's true or false, create martyrs and supercharge conspiracy theories? You mentioned, Jonathan, the forbidden fruit idea, or maybe that was you, Renee. I worry that doing so, rather than creating a community that everyone wants to be a part of, creates this erosion of trust. I suspect that the actions taken by social media companies during the COVID era eroded trust in the CDC and other institutions. I think if the goal is trust, and if the goal is institutional stability, it would have been much better to let the debate happen without social media companies placing their thumb on the scale, particularly in the area of emerging science, Jonathan. I remember we were at a University of California event right as COVID was picking up, and we were talking about masks, just regular cloth face masks. And I think it was you or I who said, Oh, no, those don't actually work. You need an N95 mask. And then that changed. Then the guidance was that you should wear a cloth mask, that they do have some ameliorating effect.

[00:34:28]

Andrew Callaghan created a great documentary called This Place Rules, about January 6th and conspiracy theory movements. And he said in his reporting that when you take someone who talks about a deep state conspiracy to silence him and his followers, and then you silence him and his followers, it only adds to his credibility. Now, here we're not talking about the deep state, we're talking about private platforms. But I think the idea surrounding trust is still there. So I'd love to get your guys' thoughts on that. Sure, we may all be in agreement around COVID or the election, for example, but the moderation itself could backfire. Or the ways we address it could.

[00:35:11]

I'll make a big point about that and then a narrower point. The big point is I think we're getting somewhere in this conversation, because it does seem to me, correct me if I'm wrong, that the point Renee just made is something we agree on: that there are tensions between these roles that social media companies play. The first thing I said is there are multiple roles and tensions between them. That means that simple pat answers, like they should always do X and not Y, or Y and not X, are just not going to fly. If we can establish that as groundwork, we're way ahead of where the game has been until now, which has been all about what they should or should not ever, ever do. I'm very happy with that. Then there's the narrower question, which is a different conversation: not what should they be able to do in principle, but what should they do in practice. That's Jonathan, Nico, and Renee all sitting here and saying, Well, if we were running Facebook, what would the best policy be? How do we build trust with our audience? Did we do too much or too little about this and that?

[00:36:19]

The answer to those questions is, I don't know. This is a wicked hard problem. I will be happy if we can get the general conversation about this just to the point of people understanding this is a wicked hard problem, and that simple bromides about censorship, freedom of speech, policing speech, won't help us. Once we're in the zone of saying, Okay, how do we tackle this in a better way than we did during COVID? And I'm perfectly content to say that there were major mess-ups there. Who would deny that? But once we're having that conversation, we can actually get started on understanding how to improve this situation. And thank you for that.

[00:36:59]

Well, let's take a real-world example, right? The de-platforming of Donald Trump. Do you think that was the right call? That's the first question. The second question is, was that consistent with the platforms' policies?

[00:37:15]

That was an interesting one. That was an interesting one because there was this question around incitement that was happening, right? In the context in which it happened, it was as January 6th was unfolding; as I recall, maybe it was 48 hours later that he actually got shut down.

[00:37:33]

I should add that the question as to whether Donald Trump's speech on the Ellipse that day met the standard for incitement under the First Amendment is the hottest debate topic.

[00:37:44]

Where are people coming down on it?

[00:37:46]

All sides. I mean, both sides. Interesting. Yeah. There's not a unity of opinion within the First Amendment community on that one. You have really smart people.

[00:37:53]

Is that true at FIRE as well?

[00:37:55]

Yes, it is.

[00:37:56]

That's interesting. So within the tech platform, sorry, tech policy community, I mean, there were literally entire conferences dedicated to that. I felt like, as a moderate, I maybe punted. I wrote something in Columbia Journalism Review as it came out, about how with great power comes great responsibility. And one of the things, when I talk about moderation enforcement, again, one of the reasons why, with the Election Integrity Partnership, we went and looked after the fact to see whether they had done anything with these things that we saw as violative and found that 65% of the time the answer was no. This was an interesting signal to us, because when you look at some of the ways that moderation enforcement is applied, it is usually the lower-level, nondescript, ordinary people who get moderated for inciting-type speech. Yeah, you talk about- Or borderline speech. Yeah. When the President of the United States does not, right? There's a protection. You're too big to moderate past a point. Yes.

[00:38:50]

You got public figure privilege.

[00:38:52]

Yeah, yeah, yeah.

[00:38:53]

I believe actually some of the platforms had that, that they wouldn't take down world leaders. Absolutely, they did.

[00:38:56]

Yeah. This was one thing where, every now and then, something interesting would come out of the Twitter Files, and it would be things like these internal debates about what to do about some of the high-profile figures, where there were questions about whether language veered into borderline incitement or violated their hate speech policies or whatever else. So there is this question. I think, if I recall correctly, my argument was that if you're going to have these policies, you should enforce them. And it seemed like this was one of the areas where there was, you have to remember also in the context of the time, very significant concern that there would be continued political violence. Facebook had imposed what it calls the break glass measures. I think I talk about this in the book, too. Yeah, you do in the book. Yeah. And that's because this is, I think, something also worth highlighting, which is that the platforms are not... Curation is not neutral. There is no baseline of neutrality that exists somewhere in the world when you have a ranked feed, and there just can't be, right? This is a very big argument in tech policy around how much transparency they should be obligated to disclose around how they curate, not only how they moderate, but how they curate, how they decide what to show you, because that's a different function.

[00:40:14]

Well, you have some states that are trying to mandate by law that they release their algorithms. That, from FIRE's perspective, would be compelled speech.

[00:40:23]

But see, this is an interesting thing, right? I was going to raise that with you. Like the AB 587 court case is an interesting one, right? The California transparency disclosures. Platforms have their First Amendment rights, but they can't be compelled to actually show people what's going on. But also maybe they shouldn't be moderating. But if they moderate, they should be transparent, but they shouldn't be compelled to be transparent. We wind up in these weird circles here, where I feel like we just get nowhere. We just always point to the First Amendment. We say, no, we can't do that.

[00:40:51]

In the political sphere, you have some transparency requirements, for example, around political contributions and whatnot.

[00:40:57]

Right, exactly. And I think the question is, how should transparency laws be designed to do the thing that you're asking, the thing that John references also, which is, if you want to know how often a platform is actually enforcing, or on what topics, right now that is a voluntary disclosure. That is something that an unaccountable private power decides benevolently to do. In Europe, they're saying, No, no, no, this is no longer a thing that's going to be benevolent. It's going to be mandated for very large online platforms.

[00:41:28]

It's going to be a topic that's litigated quite extensively. And it again comes back to in a free society, there are things that the law shouldn't compel, but that we as individual citizens should advocate for. And where is that line? And we can feel uncomfortable advocating for things that the law doesn't require. But I think that's just part of living in a free society as well. Jonathan, I would like to get your take on the Trump de-platforming.

[00:41:58]

Oh, my take is I don't have a take. It depends on the platform and what their policies are. My general view is a view I got long ago from a man named Alan Charles Kors.

[00:42:09]

Our co-founder.

[00:42:10]

Yeah, you may have heard of him. And that's in reference to universities, specifically private universities. What you promise. Yeah, private universities ought to tell us what their values are and then enforce those commitments. So if a university says, We are a free speech university, committed to the robust and unhindered pursuit of knowledge through debate and discourse, they should not have a speech code. But if they want to say, We are a Catholic university and we enforce certain norms and ideas, fine. So the first thing I want to look at when anyone is deplatformed is, okay, what are the rules? And are they enforcing them in a consistent way? And the answer is, I don't know the particular rules in the particular places relating to the Donald Trump decision. People that I respect who have looked at it have said that Donald Trump, now we're talking about Twitter specifically, as it was then called, that Donald Trump was in violation of Twitter's terms of service and had been multiple times over a long period, and that they were at last coming to enforce what they should have enforced earlier. I can't vouch for that view.

[00:43:25]

Reasonable people said it. Okay, so what do you think?

[00:43:27]

Well, I think there needs to be truth in advertising. If you're looking at some of these social media platforms, we had talked about Mark Zuckerberg before. I think I have a quote here. He says, We believe the public has the right to the broadest possible access to political speech, even controversial speech. He says, Accountability only works if we can see what those seeking our votes are saying, even if we viscerally dislike what they say. It was a big speech at Georgetown about free speech and how it should be applied on social media platforms. Then, at the same time, he gets caught on a hot mic with Angela Merkel, who's asking him what he's going to do about all the anti-immigrant posts on Facebook. This was when the migrant crisis in Europe was really at its peak in the mid-2010s. I think what really frustrates people about social media is the perception, and maybe the reality, of double standards. And I think that's what you also see in the academic space as well. So you have Claudine Gay going before Congress and, I think, giving the correct answer from at least a First Amendment perspective, that context does matter anytime you're talking about exceptions to speech.

[00:44:33]

In that case, they were being asked about calls for Jewish genocide, which was immediately preceded by discussion of chants of intifada or from the river to the sea, which I think should be protected in isolation if they're not part of a broader pattern of conduct that would fall under, for example, harassment or something. With the social media companies and Twitter, for example, you have Donald Trump get taken down, but Iran's Ayatollah Khamenei is not taken down. You have Venezuelan President Nicolas Maduro, who's still on Facebook and Twitter. The office of the President of Russia is still operating a Twitter account. Twitter allows the Taliban spokesperson to remain on the platform. You have the Malaysian and Ethiopian prime ministers not being banned, despite what many argue were incitements to violence. So I think it's these double standards that really erode trust in these institutions and that lead to the criticism that they've received over the years. And I think it's why you saw Mark Zuckerberg responding to Jim Jordan and the House committee saying, We're going to stop doing this. We're going to get out of this game.

[00:45:42]

Well, first, I would retreat to my main point, the one I want to leave people with, which is that this problem is wicked hard, and the simple templates just won't work. But in response to what you just said, I would point out that the efforts to be consistent and eliminate double standards could lead to more lenient policies, which is what's happened on Facebook, or less lenient policies. They could, for example, have taken down Ayatollah Khamenei or Nicolas Maduro or lots of other people. I'm guessing FIRE would have said, bad, bad, bad, leave them up. I don't know. But the search for consistency is difficult. If you take your terms of service seriously, and if you're saying we're a community that does not allow incitement or hate, or we're a community that respects the rights of LGBT people and defines that to mean a trans woman is a woman, and to say differently is hate, well, then that means they're going to be removing or disamplifying more stuff.

[00:46:39]

Well, it depends how they define some of those terms.

[00:46:42]

Well, that's right. But the whole point of this is that these are going to be very customized processes, and what I'm looking for here is, okay, at least tell us what you want, what you think your policy is, and then show us what you're doing, so at least we can see to some extent how you're applying these policies. Therefore, when we're on these platforms, we can try to behave in ways that are consistent with these policies without getting slapped in seemingly capricious or random or partisan or biased ways.

[00:47:16]

Do you think the moderation that social media companies did during the pandemic, for example, has led to vaccine hesitancy?

[00:47:25]

That's a really interesting question. I don't think I've seen anything. I don't think that study has been done yet. It's very hard, I would say, to say that this action led to that increase. One of the things that has always been very fuzzy about this is the question of whether it's the influencers you do hear that are undermining the... The vaccines became very partisan, with very clear lines you can see expressed in vaccination rates, conservatives having a much lower uptake. Is that because of some concern about censorship, or is that because the influencers that they were hearing from were telling them not to get vaccinated anyway? I think it's also important to note that there was no mass censorship, in terms of actual impact writ large on these narratives. You can open up any social media platform and go and look, and you will find the content there. If you were following these accounts, as I was during COVID, you saw it constantly. It was not a hidden or suppressed narrative. This is one of the things that I have found curious about this argument: the idea that somehow every vaccine hesitancy narrative was summarily deleted from the internet is just not true.

[00:48:47]

The same thing with the election stuff. You can go and you can find the delegitimization content up there because, again, most of the time they didn't actually do anything. So I have an entire chapter in the book on where I think their policies were not good, where I think their enforcement was middling, where I think the institutions fell down. I don't think that COVID was... Nobody covered themselves in glory in that moment. But do I think that the backfire effect of suppression led to increased hesitancy? It'd be an interesting thing to investigate.

[00:49:16]

Do you have any insight into whether content that was posted on YouTube, for example, or Facebook that mentioned the word COVID during COVID was deamplified? Because there was a big narrative.

[00:49:26]

I don't. We can't see that. And this, again, is where I think the strongest policy advocacy I've done as an individual over the last seven years or so has been on the transparency front, basically saying we can't answer those questions. One of the ways that you find out why Facebook elected to implement break glass measures around January 6, for example, comes from internal leaks from employees who work there. That's how we get visibility into what's happening. And so while I understand First Amendment concerns about compelled speech in the transparency realm, I do think there are ways to thread that needle, because the value to the public of knowing, of understanding what happens on unaccountable private power platforms, is worthwhile. It, in fact, in my opinion, meets that threshold of the value of what is compelled being more important than the First Amendment prohibitions against compulsion.

[00:50:25]

Incidentally, footnote, some of the momentum around compulsion for transparency could presumably be relieved if these companies would just voluntarily disclose more, which they could, and which they've been asked to do many times, including by scholars who made all kinds of rigorous commitments about what would and would not be revealed to the public.

[00:50:47]

Doesn't X open source its algorithm?

[00:50:51]

That doesn't actually show you.

[00:50:53]

How they might. Yeah, sure. I mean, it shows how the algorithm might moderate content, but it presumably wouldn't show how human moderators would get involved, right?

[00:51:02]

Well, there are two different things there. One is curation, one is moderation. The algorithm is a curation function, not a moderation function. So these are two different things that you're talking about that are both, I think, worthwhile. Some of the algorithmic transparency arguments have been that you should show what the algorithm does. The algorithm is a very complicated thing. Of course, it means multiple things depending on which feature, which mechanism, what they're using machine learning for. So there are the algorithmic transparency efforts. And then there's the more basic thing, which I think is what John is describing: transparency reports. Now, the platforms were voluntarily doing transparency reports. I know that Meta continues to. I meant to actually check and see if Twitter did last night, and I forgot. It was on my list of- You're still calling it Twitter, too. Sorry. Sorry. I know. I know.

[00:51:57]

Some people refuse to call it that.

[00:51:58]

No, no, it's not a refusal. It's just what comes to mind. Twitter actually did some... Sorry, X, whatever, did some interesting things regarding transparency, where there was a database, or there is a database, called Lumen. You must be familiar with Lumen. Lumen is the database where, if a government or an entity reached out with a copyright takedown under the DMCA, for a very long time, platforms were disclosing this to Lumen. So if you wanted to see whether it was a movie studio, a music studio, or the government of India, for example, requesting a takedown, using copyright as the justification, those requests would go into the Lumen database. Interestingly, somebody used, I believe, that database and noticed that X was responding to copyright takedowns from the Modi government. This was in April of last year, I think.

[00:52:53]

It was a documentary about Modi, for example.

[00:52:55]

Yes. Twitter, X, did comply with that request. There was a media cycle about it. They were embarrassed by it because, of course, of the rhetoric around free speech relative to what they had just done, or what had been revealed to have been done. And again, they operated in accordance with a law. This is a complicated thing. We can talk about sovereignty if you want at some point. But what happened there, the net effect of it, was that they simply stopped contributing to the database. So what you're seeing is the nice-to-have of, well, let's all hope that our benevolent private platform overlords simply proactively disclose. Sometimes they do, and then something embarrasses them or the regime changes, and then the transparency goes away. I actually don't know where Twitter is on this. We could check, put it in the show notes or something. But it is an interesting question, because there have been a lot of walkbacks related to transparency, in part, I would argue, because of the chilling effect of Jordan's committee.

[00:53:58]

Maybe for now, the best crutch is going to be private outside groups, universities, and nonprofits that do their best to look at what's going up on social media sites, compare that with the platforms' policies, and report that to the social media companies and the public. That's exactly what Renee was doing.

[00:54:19]

Look what happened.

[00:54:20]

Look what happened. What happened next. Incidentally, if we want to talk about entities that look at what private organizations are doing regarding their policies, looking for inconsistencies between the policies and the practices, and reporting that to the institutions and saying, You need to get your practices in line with your policies, we could talk about FIRE, because that's exactly FIRE's model for dealing with private universities that have one policy on speech and do something else. It's perfectly legitimate, and it's in many ways very constructive.

[00:55:00]

Well, that's one of the reasons that we criticize these platforms normatively, right? It's because you do have platforms that say we're the free speech wing of the free speech party, or we're the public square. Or you have Elon Musk saying that Twitter, now X, is a free speech platform. But then censorship happens, or, since you guys don't use the word censorship in the context of private online platforms, moderation happens that we think would be inconsistent with those policies. And we will criticize him, as we have, and we'll criticize Facebook, as we have in the past.

[00:55:31]

Yes. I jumped on your use of the word policing earlier. You mentioned it's a semantic difference, and I don't think it is, because I think it would be unfair and inaccurate to describe what FIRE is doing, for example, as policing. I think that's the wrong framework, and that's really the big point I'm trying to make.

[00:55:50]

So you have an interesting... One of the things that I always think about is that you have private enterprise and you have state regulators, and everybody agrees that they don't want the state regulators weighing in on the daily adjudication of content moderation. It's bad. I think we all agree on that.

[00:56:07]

The DOJ just came out with a whole big website, I believe, with best practices for its... Yeah, for how government should engage.

[00:56:13]

And I think that's a positive thing. Then the other side of it is that private enterprise makes all those decisions. And a lot of people are also uncomfortable with that, because normally, when you have unaccountable private power, you also have some regulatory oversight and accountability. That isn't really happening here, particularly not in the US. The Europeans are doing it in their own way. So nobody wants the government to do it, and nobody wants the platforms to do it. One interesting question then becomes, well, how does it get done? When you want to change a platform policy, when somebody says, hey, I think this policy is bad... I'll give a specific example. Early on in COVID, I was advocating for state media labeling. That was because a lot of weird stuff was coming out from Chinese accounts and Russian accounts related to COVID. And I said, hey, again, in the interest of the public being informed, don't take these accounts down, but just throw a label on them, right? Almost like the digital equivalent of the Foreign Agents Registration Act. Just say, hey, state actor, so that when you see the content in your feed, you know that this particular speaker is literally the editor of a Chinese propaganda publication.

[00:57:15]

That, I think, is, again, an improvement in transparency. Those platforms did, in fact, for a while, move in that direction. I've never seen those. Yeah, and they did it in part because academics wrote op-eds in the Washington Post saying, hey, it'd be really great if this happened. Here's why we think it should happen. Here's the value by which we think it should happen. So that wasn't a determination made by some regulatory effort. That was civil society and academia making an argument for a platform to behave in a certain way. You see this with advertiser boycotts. I've never been involved in any of those in any firsthand way. But again, an entity has some ability to say, hey, in an ideal world, I think it would look like this. The platform can reject the suggestion. Twitter did impose state media labels and then walked them back after Elon got in a fight with NPR. So now you can see the labels on Facebook and you can't see them on Twitter. These are things that happen because different participants in the market make arguments, and the platform either does it or doesn't do it in accordance with what it thinks is best for its business and its role in society.
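As a purely illustrative sketch of the labeling idea Renee describes, the snippet below matches account handles against a curated list of state-affiliated outlets and attaches a label to their posts before they are rendered in a feed, without removing anything. The handles, label text, and data shapes are hypothetical and are not any platform's actual API.

```python
# Illustrative-only sketch of state-media labeling: keep the content up,
# but attach a label when the author is on a curated state-affiliated list.
# All entries and field names here are invented for the example.
from dataclasses import dataclass

STATE_AFFILIATED = {
    "@example_state_daily": "China state-affiliated media",   # hypothetical entries
    "@example_ru_press": "Russia state-affiliated media",
}

@dataclass
class Post:
    author_handle: str
    text: str
    label: str | None = None

def apply_state_media_label(post: Post) -> Post:
    """Attach a label if the author is on the state-affiliated list; never remove the post."""
    label = STATE_AFFILIATED.get(post.author_handle.lower())
    if label:
        post.label = label
    return post

feed = [Post("@example_state_daily", "New report on the virus origins...")]
for post in map(apply_state_media_label, feed):
    print(f"[{post.label}] {post.text}" if post.label else post.text)
```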

[00:58:31]

Sure.

[00:58:31]

I know, for example, that they don't always listen to us, but we'll reach out to them on occasion. But I think we're all in agreement that the biggest problem is when the government gets involved and does the so-called jawboning. I just want to read from Mark Zuckerberg's August 26th letter to Jim Jordan and the Committee on the Judiciary, in which he writes in one paragraph: In 2021, senior officials from the Biden administration, including the White House, repeatedly pressured our teams for months to censor certain COVID-19 content, including humor and satire, and expressed a lot of frustration with our teams when we didn't agree. Ultimately, it was our decision whether or not to take content down, and we own our decisions, including COVID-19-related changes we made to our enforcement in the wake of this pressure. I believe the government pressure was wrong, and I regret that we were not more outspoken about it. I also think we made some choices that, with the benefit of hindsight and new information, we wouldn't make. Then in some of the releases coming out of this committee, you see emails, including from Facebook. There's one email that shows Facebook executives discussing how they managed users' posts about the origins of the pandemic that the administration was seeking to control.

[00:59:46]

Here's a quote: Can someone quickly remind me why we were removing, rather than demoting or labeling, claims that COVID is man-made? asked Nick Clegg, the company's President of Global Affairs, in a July 2021 email to colleagues. We were under pressure from the administration and others to do more, responded a Facebook vice president in charge of content policy, speaking of the Biden administration. We shouldn't have done it. There are other examples, for example, of Amazon employees strategizing for a meeting with the White House, openly asking whether the administration wanted the retailer to remove books from its catalog: Is the admin asking us to remove books, or are they more concerned about the search results order, or both? one employee asked. And this was just in the wake of the When Harry Became Sally incident, where Amazon removed a book on transgender issues and got incredible backlash. And so they were reluctant to remove any books, even books promoting vaccine hesitancy. So I think we're all in agreement about the activity being talked about there.

[01:00:49]

It's in a different category, and this is frankly not a difficult problem to address. FIRE's bill is a step in the right direction. There are lots of proposals, and they're all versions, as I understand it, you guys can correct me, of the same thing: instead of having someone in some federal agency or the White House pick up the phone and yell at someone at some social media company, there should be a publicly disclosed record of any such actions, which should be done in a systematic and formal way. And those records should be, after some reasonable interval, inspectable by the public. And that's it. Problem solved.
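A minimal sketch of the kind of systematic, publicly inspectable record Jonathan is describing might look like the snippet below: every government-to-platform request is logged in a structured form and only becomes publicly visible after a fixed interval. The field names and the 90-day delay are assumptions for illustration, not the text of FIRE's bill or any actual proposal.

```python
# Sketch of a jawboning disclosure log: requests are recorded as they arrive
# and become publicly inspectable after a set interval. All fields and the
# 90-day delay are illustrative assumptions, not any real statute or policy.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

DISCLOSURE_DELAY = timedelta(days=90)  # assumed "reasonable interval"

@dataclass
class GovernmentRequest:
    agency: str           # e.g. a federal agency or office (hypothetical)
    platform: str         # the company that received the request
    description: str      # what action was requested, in plain language
    received_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def publicly_inspectable(self, now: datetime | None = None) -> bool:
        """True once the disclosure delay has elapsed since the request was received."""
        now = now or datetime.now(timezone.utc)
        return now - self.received_at >= DISCLOSURE_DELAY

# Usage: the platform appends every request to the log; a public endpoint
# serves only the entries older than the disclosure delay.
log = [GovernmentRequest("Example Agency", "ExamplePlatform",
                         "Requested demotion of posts about topic X")]
public_view = [r for r in log if r.publicly_inspectable()]
```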

[01:01:24]

Yeah, I would agree with that also. I mean, this is where we wrote amicus briefs in Murthy, because my residual gripe with that case is that this is an important topic. This is an important area for there to be more jurisprudence. And yet that particular case was built on such an egregiously garbage body of fact and so many lies that, unfortunately, it was tossed for standing, because it just wasn't the case that people wanted it to be. And my frustration with FIRE, no offense, and others was that there was no reckoning with that. Even as Greg was talking about the bill, and I absolutely support that type of legislation, it was like, well, the Supreme Court punted and didn't come down where it needed to come down on this issue. It didn't give us guidance. That happened because the case was unambiguously garbage. I thought it was very interesting to read FIRE's amicus in that case, where it points out quite clearly and transparently, over and over again, the hypocrisy of the attorneys general of Missouri and Louisiana and the extent to which this was a politically motivated case. I wish we could have had both things, that we could hold both ideas in our heads at the same time: that jawboning is bad, that bills like what you're broaching are good, and also that we could have had a more honest conversation about what was actually shown in Murthy and the lack of evidence, where there are very few, I think none, in fact, for many of the plaintiffs, most of the plaintiffs, actual mentions of their names in a government email to a platform.

[01:03:02]

That through line is just not there. I think we had a lousy case and it left us worse off.

[01:03:11]

Bad cases make bad law. Just restating my earlier point, this decision should be made in Congress, not the courts. This could be solved statutorily. And by the way, I don't necessarily believe jawboning is bad if it's done in a regularized, transparent way. I think it's important for private actors to be able to hear from the government. I'm a denizen of old media. I came up in newsrooms. It is not uncommon for editors at the Washington Post to get a call from someone at the CIA or FBI or National Security Council saying, if you publish this story the way we think you're going to publish it, some people could die in Russia. And then a conversation ensues. But there are channels and guidance for how to do that. We know the ropes. It is important for private entities to be able to receive valuable information from the government. We just need to have systems for doing it.

[01:04:05]

I'm going to wrap up here soon, but I got an article in my inbox that was published on September 10th in the Chronicle of Higher Education, I believe, titled Why Scholars Should Stop Studying Misinformation, by Jacob Shapiro and Sean Norton. Are either of you familiar with this article? I don't know. Let's see if there's a byline here at the bottom. No, I don't have it here at the bottom. Jacob Shapiro, anyway. The argument is that while the term misinformation may seem simple and self-evident at first glance, it is in fact highly ambiguous as an object of scientific study. It combines judgments about the factuality of claims with arguments about the intent behind their distribution and inevitably leads scholars into hopelessly subjective territory. It continues later in the paragraph: it would be better, we think, for researchers to abandon the term as a way to define their field of study and focus instead on specific processes in the information environment. Now I'm like, okay, this is interesting. We just so happen to have a misinformation researcher here.

[01:05:04]

No, I'm not a misinformation researcher. I hate that term. I say that in the book over and over and over again. This is echoing a point that I've made for years now. It is a garbage term because it turns everything into a debate about a problem of fact, and it is not a debate about a problem of fact. One of the things I emphasize over and over again, the reason I use the words rumors and propaganda in the book, is that we have long had terms to describe unofficial information with a high degree of ambiguity passed from person to person. That's a rumor. Propaganda is politically motivated, incentivized speech by political actors in pursuit of political power, often where the motivations are obscured. That's another term that we've had since 1300. So you wouldn't call the vaccine hesitancy or the anti-vaccine crowd that got you into this work early on as engaging in misinformation? Because you did study that, right? I did. Here's the nuance that I'll add, right? Misinformation, the reason I think it was a useful term for that particular content, is that in the early vaccine conversations, the debate was about whether vaccines caused autism.

[01:06:11]

At some point, we do have an established body of fact, and at some point, there are things that are simply wrong. Again, this is where, and I try to get into this context in the book, the difference between some of the vaccine narratives around routine childhood shots versus COVID is that the body of evidence is very, very clear on routine childhood vaccines. They are safe, they are effective. Most of the hesitancy-inducing content on the platforms around those things is rooted in false information, lies, and a degree of propaganda, candidly. With COVID, the reason we did those weekly reports was to say people don't actually know the truth. The information is unclear. We don't know the facts yet. Misinformation is not the right term. Is there some sloppiness in terms? Have I used it? Probably. I'm sure I have in the past. But I have to go read Jake's article, because, again, why do we have to make up new terms? Malinformation was by far the stupidest. Malinformation is true but insidious information, right? I mean, you can use true claims to lie. This is actually the art of propaganda going back decades, right?

[01:07:34]

You take a grain of truth, you spin it up, you apply it in a way that wasn't intended, you decontextualize a fact, right? I don't know why that term had to come into existence. Again, I feel like propaganda is quite a ready and available term for that thing.

[01:07:50]

I want to end here, Jonathan, by asking you about your two books, Kindly Inquisitors and the Constitution of Knowledge.

[01:07:55]

Available at fine bookstores everywhere.

[01:07:56]

Yes, and in FIRE's bookshop here.

[01:07:58]

And in the show notes.

[01:07:59]

And in the show notes, which Sam will make sure to include. In Kindly Inquisitors, you have two rules for liberal science: no one gets final say, and no one has personal authority. My understanding is that The Constitution of Knowledge is an expansion on that, right? Because we're talking here about vaccine hesitancy and the claimed connection between vaccines and autism, I guess I'm asking, how do those two rules, no one gets final say and no one has personal authority, affect a conversation like that? Because presumably, if you're taking that approach and you want to be a platform devoted to liberal science, you probably shouldn't moderate that conversation, because if you do, you are having final say or giving someone personal authority.

[01:08:47]

Well, boy, that's a big subject. So let me try to think of something simple to say about it. Those two rules, and the entire constitution of knowledge that spins off of them, set up an elaborate, decentralized, open-ended public system for distinguishing, over time, truer statements from falser statements, thus creating what we call objective knowledge. I'm in favor of objective knowledge. It's human beings' most valuable creation by far. As a journalist, I have devoted my life and career to the collection and verification of objective knowledge. I think platforms in general, not all of them, but social media platforms and other media products, generally serve their users better if what's on them is more true than it is false. If you look up something online, a fact, and the answers are reliably false, we normally call that system broken. And unfortunately, that's what's happening on a lot of social media right now. Now, these platforms are all kinds of things, right? They're not truth-seeking enterprises. They're about those other four things I talked about. Would it be helpful if they were more oriented toward truth? Yes, absolutely. Do they have some responsibility to truth?

[01:10:20]

I think yes, as a matter of policy. One of the things they should try to do is promote truth over falsehood. I don't think you do that by taking down everything you think is untrue, but by adding context, amplifying stuff that's been fact-checked, as Google has done, for example. There are lots of ways to try to do that. Unlike Renee, I'm loath to give up the term misinformation, because it's another way of anchoring ourselves in an important distinction, which is that some things are false and some things are true. It can be hard to tell the difference. But if we lose the vocabulary for insisting that there is a difference, it's going to be a lot harder for anyone to insist that anything is true. And that, alas, is the world we live in now.

[01:11:10]

All right, folks, I think we're going to leave it there. That was Jonathan Rausch, author of the aforementioned books, Kindly Inquisitors and The Constitution of Knowledge.

[01:11:19]

Available in fine bookstores everywhere.

[01:11:21]

Available in fine bookstores everywhere.

[01:11:22]

I got to get better at doing that.

[01:11:23]

Well, I'll do it for you, Renee. There you go, thank you. Renee DiResta has a new book that came out in June called Invisible Rulers: The People Who Turn Lies into Reality. Jonathan and Renee, thanks for coming on the show.

[01:11:34]

Thank you. Thank you for having us.

[01:11:36]

I am Nico Perino, and this podcast is recorded and edited by a rotating roster of my FIRE colleagues, including Aaron Ries and Chris Maltby, and co-produced by my colleague Sam Lee. To learn more about So to Speak, you can subscribe to our YouTube channel or Substack page, both of which feature videos of this conversation. You can also follow us on X by searching for the handle Free Speech Talk, and you can find us on Facebook. Feedback can be sent to sotospeak@thefire.org. And if you enjoyed this episode, please consider leaving us a review on Apple Podcasts or Spotify. Reviews help us attract new listeners to the show. And until next time, thanks again for listening.