Transcript

[00:00:01]

The Pat Kenny Show on Newstalk. Here's a question: did Bill Gates' polio vaccination program lead to the paralysis of almost half a million children in India? Could colloidal silver cure Ebola? Do doctors really think that quarantine harms public health? Well, the answer to all of these questions is an emphatic no, but that doesn't mean you won't see these and other stories like them popping up on your Facebook timeline.

[00:00:30]

In fact, research by an outfit called Avaaz shows that three point eight billion people were exposed to misinformation last year. Avaaz is a global web movement to bring people-powered politics to decision making everywhere. Campaign director at Avaaz, Christoph Schott, joins me on the line to tell us more about these findings.

[00:00:48]

Christoph, good morning. Good morning. It's good to be here.

[00:00:52]

Now, we are told that Facebook have all sorts of methodologies in place to stop this kind of thing. Are they simply not working, or is Facebook not doing enough? So I'd say Facebook isn't doing enough. Especially during this pandemic, we've seen them actually step up: they've shown more information from the WHO and from our national health authorities. But what we see as the major problem at the moment is that, while they do this, these efforts are basically being undermined behind their back by the algorithm, which is still boosting health misinformation, and actors sharing health misinformation, far too widely.

[00:01:28]

And so they've reached three point eight billion views over just a year. And that is actually really dramatic, especially now that we are in maybe one of the biggest, or the biggest, pandemics of our lives.

[00:01:39]

So three point eight billion people got what is now termed fake news. Can you explain to us how the algorithm works, so that it automatically puts stuff out in spite of the best efforts of, if you like, the policemen and women that Facebook is employing to stop it? So to put it very simply, Facebook is the algorithm. Whenever you open your phone, you open your Facebook app, and what you see is curated for you by an algorithm, or multiple algorithms, within Facebook, based on what you've looked at before, what kind of pages you follow, or what's just being interacted with a lot.

[00:02:15]

And that is one of the major problems here: this kind of misinformation content is often written in a way that is provocative and sensationalist, and a lot of people interact with it. And then the algorithm thinks, oh, that's an interesting piece of information, let's show it to more and more people. And so it keeps spreading until, at some point, maybe a fact checker will find it, fact check it, and then Facebook will put a kind of label on it.

[00:02:39]

So you would know. But until then, it spreads almost freely in many cases. And we think that needs to be reined in, in a way that shows people more correct information, so that a lot of this kind of misinformation is also not shown as much anymore to those millions and billions of people.
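The mechanism Schott describes can be sketched in a few lines of code. The example below is purely illustrative and is not Facebook's actual system: the Post structure, the engagement weights, and the scoring formula are all invented for the sketch. It simply shows how a feed ranked only on engagement keeps boosting a provocative post regardless of whether it is true.

# Illustrative sketch only: a toy, engagement-only feed ranker.
# The data model and weights are invented for this example and do not
# reflect Facebook's real algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    shares: int
    comments: int
    reactions: int
    fact_checked_false: bool = False  # set once a fact checker flags it

def engagement_score(post: Post) -> float:
    # Provocative content tends to draw shares and comments,
    # so a purely engagement-based score rewards it.
    return 3.0 * post.shares + 2.0 * post.comments + 1.0 * post.reactions

def rank_feed(posts: list[Post]) -> list[Post]:
    # Nothing here asks whether a post is true; it only asks
    # whether people interacted with it.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Local council meeting minutes", shares=2, comments=5, reactions=40),
    Post("MIRACLE CURE doctors don't want you to know!", shares=900, comments=1200, reactions=5000),
])
for post in feed:
    print(engagement_score(post), post.text)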

[00:02:58]

So the question about the way Facebook will select materials that you might be interested in: obviously, if you're an anti-vax campaigner, you're against vaccinations and that is known to the algorithm, so if more stuff comes up, you're going to be exposed to it.

[00:03:17]

Isn't that it? Yes. And one of the major problems, the one that actually makes me most afraid, is that a lot of these groups, especially the activist groups, and there are other studies on this, are very good at reeling in people who are not yet sure, who just have concerns about vaccinations, and those are good to have. They reel them in with these kinds of stories about parenting and kids, and then at one point they start to show more and more of this misinformation.

[00:03:42]

And that's why I think it's actually kind of dangerous, especially now that we're in the middle of this pandemic and maybe by the end of the year, maybe early next year, we might have a vaccine. But if in the end nobody takes it, we'll never reach herd immunity. And we don't know the influence of misinformation on that specifically, but we've seen it is growing generally. And that makes me really nervous about whether we'll ever get there, and whether people get the good information that they need to make good decisions about their own health and their children's health.

[00:04:09]

Now, there are so many Facebook users and so much information emanating from each and every one of those. So there's a colossal amount of information.

[00:04:17]

So it depends on, you know, the design of the algorithm, effectively robotic, perhaps A.I.-driven machinery, to do all this. And can it really do it? Because even, you know, something that appears to be anti-Trump might, on the face of it, be anti-Trump, but then contain all sorts of pro-Trump stuff within. How can an algorithm detect this stuff and say that this should be marginalised but other stuff is not to be marginalised?

[00:04:51]

And we don't think it should be the algorithm that decides; there should be clear guidelines. Right? It's like road traffic: there are clear guidelines for people driving cars, we don't just let people do whatever they want. And one of the major things is that we have two ideas. One is to correct the record. So the idea would be to work with independent fact checkers, and if they flag a specific piece of misinformation, then you don't just put a label on it and stop its distribution as much.

[00:05:16]

But you also go back to all the people who saw the false information. Right? So we tell the algorithm: you've shown this piece of misinformation to a million people, now also show the corrected fact check to those million people. That's the idea, that the platforms take responsibility for what the algorithm is boosting out to the world. So they don't take responsibility for the content, but once it's out and the algorithm has boosted it that far, they should also be responsible for showing the correct information to people.

[00:05:39]

And that is just one way of, we believe, at least giving people more information to make a better decision on these crucial issues right now.
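The "correct the record" idea can also be made concrete with a minimal sketch. All names and structures below are invented for illustration, assuming the platform keeps a log of who was shown each post: once independent fact checkers flag a post as false, the same audience is sent the correction.

# Illustrative sketch of the "correct the record" proposal: everyone the
# algorithm showed a flagged post to later gets the fact check.
from collections import defaultdict

exposure_log: dict[str, set[str]] = defaultdict(set)  # post id -> user ids shown it

def record_impression(post_id: str, user_id: str) -> None:
    exposure_log[post_id].add(user_id)

def correct_the_record(post_id: str, fact_check_url: str) -> int:
    """Notify every user who was shown the flagged post; return how many."""
    exposed_users = exposure_log.get(post_id, set())
    for user_id in exposed_users:
        # In a real platform this would be a notification or feed item.
        print(f"To {user_id}: the post you saw was rated false. See {fact_check_url}")
    return len(exposed_users)

record_impression("post-123", "alice")
record_impression("post-123", "bob")
reached = correct_the_record("post-123", "https://example.org/fact-check")
print(f"Correction shown to {reached} users")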

[00:05:48]

Doesn't it depend, though, on the fake news being reported to a human being somewhere along the chain? The human beings then have to instruct the algorithm to do this, so the stuff can be dispersed, it can go viral, long before a complaint is made. Yeah, that is one of the, that's one of the major issues here: many of these pieces of misinformation spread really fast, in a day or two, and fact checking takes time.

[00:06:18]

If you actually do it well, and you might know as a journalist, it's not easy to really get it right. And therefore, one, it is really important that people get the correct information if something is found to be untrue. And the second one is what we found in our study: it wasn't actually three point eight billion views of health misinformation, but three point eight billion views of pages and groups that have repeatedly shared health misinformation. And so by now, Facebook should know a lot of these actors, and they keep spreading the same or similar misinformation.

[00:06:47]

And Facebook does not really downrank them. And that's what we think is the second most important solution: if an actor, a page or a website, is found to be systematically and repeatedly sharing misinformation, it should not be boosted out as much anymore, because we know that it's often actually the same pages that do this kind of stuff. And then often it's the uncles and aunts who share it at some point, but often we have a finite set, I would say, of these kinds of actors who spread disinformation to reach many people.

[00:07:16]

And that's something you can actually tell the algorithm, a simple rule: this page has shared ten pieces of misinformation just in the last month, so maybe don't boost it out to billions of people anymore.
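Schott's second idea, downranking pages that repeatedly share flagged misinformation, amounts to a simple rule like the one sketched below. The threshold, the penalty factor, and the function names are assumptions chosen for the example, not a description of any real system.

# Illustrative sketch: demote pages with repeated misinformation strikes.
# Threshold and penalty values are arbitrary choices for the example.
STRIKE_THRESHOLD = 10          # e.g. ten flagged posts in the last month
REPEAT_OFFENDER_PENALTY = 0.1  # reach multiplier for repeat offenders

def distribution_multiplier(strikes_last_month: int) -> float:
    """Return the factor applied to a page's reach before boosting its posts."""
    if strikes_last_month >= STRIKE_THRESHOLD:
        return REPEAT_OFFENDER_PENALTY  # "maybe don't boost it out to billions"
    return 1.0

print(distribution_multiplier(2))   # ordinary page: full reach (1.0)
print(distribution_multiplier(12))  # repeat offender: demoted (0.1)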

[00:07:27]

And let's say they do start to come to terms with people posting false information, but those people simply open another Facebook account under a different identity, whether it's an email address or a mobile phone number, a burner phone or whatever. They can keep doing this if they're stopped.

[00:07:44]

And sometimes money talks, though. And President Trump has been muttering about the idea of labelling and designating all of these social media platforms as publishers, in which case, if you publish something that's damaging or untrue, you actually could be taken to court and it would cost you.

[00:08:06]

And that works in some jurisdictions. Some people say it's too draconian, and the libel laws in Ireland, they say, are too draconian. But still, it does stop people from saying things that are untrue in the printed press and on radio and television. I want to be honest: freedom of speech is a very high good in our societies, especially on the Internet and on social media. So what we believe is that, at this point at least, Facebook and other platforms are not publishers, because they don't publish the content, but they're also not neutral hosts of content, right?

[00:08:39]

They do curate it: the algorithm does decide what content to accelerate. And so we believe there should be legislation that holds Facebook accountable for what content is being accelerated, especially misinformation. And if it does accelerate misinformation, they should have a responsibility to also issue corrections and to downgrade those who shared it multiple times. And that is, I think, the way that we can still allow people to speak; it's your right not to say the truth all the time.

[00:09:05]

But if it is then being shared to millions of people, it should be corrected and people should be informed. And that's how I think we can actually both protect freedom of speech and protect our democracies. Because I'm actually worried that there are now some laws in countries like Turkey that go strongly after the people who spread false information, and that could easily turn into a censorship type of bill. And that, I think, could be even more dangerous than the opposite.

[00:09:29]

And therefore, democracies like Ireland, like the European Union, have to step forward and find ways to actually regulate this without overstepping and censoring people.

[00:09:39]

Campaign director at Avaaz, Christoph Schott, thank you very much for joining us.