[00:00:00]

This episode of Rationally Speaking is brought to you by Stripe. Stripe builds economic infrastructure for the Internet. Their tools help online businesses with everything from incorporation and getting started, to handling marketplace payments, to preventing fraud. Stripe's culture puts a special emphasis on rigorous thinking and intellectual curiosity. So if you enjoy podcasts like this one and you're interested in what Stripe does, I'd recommend you check them out. They're always hiring. Learn more at stripe.com. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and it is my pleasure to introduce you to today's guest, Peter Eckersley.

[00:00:53]

Peter was, until recently, the chief computer scientist for the Electronic Frontier Foundation, which is a non-profit focused on promoting privacy, free speech, and autonomy on the Internet. Now he is the director of research at the Partnership on AI, which is an organisation that includes a lot of the major tech companies, focused on developing best practices around artificial intelligence as it develops. And his Ph.D. is in computer science and law from the University of Melbourne. Peter, welcome to Rationally Speaking.

[00:01:25]

Thank you, Julia. So you focus on a number of topics, including privacy, and artificial intelligence, and safety and regulations around artificial intelligence. So we're going to cover a lot of that. But I thought we'd start with privacy. There has been an increasing amount of public attention and scrutiny of privacy, and of the ways that tech companies have been failing to protect our privacy, especially since the Cambridge Analytica scandal. And I was wondering what you think about how well the public's attention on this issue is allocated.

[00:02:00]

Like, do you think that we are basically most concerned about the most important privacy problems? Or are we, you know, overreacting to some things and underreacting to others? What's your take? I think it's hard enough to answer the question theoretically for ourselves, as experts, about how important privacy is. The way we think about privacy comes from a very animal place. I think of it as being: you're out on the savanna, around a campfire, and when you see eyes in the darkness watching you, that feels really dangerous and really bad.

[00:02:35]

And I think that type of psychology is the mechanism that's at work amongst people who care a lot about privacy. They therefore, you know, are very cautious about what they share with whom, and try to get control of the way that technology collects data about them. But of course, that technology is so complicated that it's really hard for most humans to have any notion, as they're using an app or a phone or a website, of what the real implications might be of the data they're sharing, or that's being revealed about them invisibly by their devices.

[00:03:14]

So I think we struggle psychologically with our animal reaction, and it's a very complicated technological world. And then you add an extra layer over the top, which is the societal consequences of failures to protect privacy in various ways. And those seem to be really very diverse. You can argue about how important they are, but they include things like, in the United States, where the criminal justice system is massively overreaching and incarcerates, you know, on the order of millions of people who probably shouldn't be incarcerated.

[00:03:53]

There's a consequence to privacy violation, which is that it leads to more arrests and more imprisonment. And if you do some math on that, it looks very serious. There's a different consequence in people's personal lives when their partners or their families learn things about them and then have problematic power relationships with those people. You know, maybe you don't want your conservative family to know that you're trans, OK? Maybe you don't want your partner who's kind of domineering to know about all of your social life, et cetera.

[00:04:29]

And those are very high-stakes problems that certain people face. They're very different from the criminal justice case. And then I think in your comment about Cambridge Analytica, you were getting at the third and perhaps craziest example of this, which is that we built the Web, the first time around, to reveal almost everything that people were reading and thinking, all of the time, to websites and third-party tracking companies. And there was sort of this theoretical concern about this.

[00:05:00]

But it has finally kind of come home to roost. And I have a story to share about this. Maybe 10 years ago, when I worked at EFF, I had this amazing colleague, Cory Doctorow, who people may have heard of; he's both an activist and a science fiction writer. And he came to me and one of my colleagues, the technologists, and said, I want to write a story about how Google turns evil. This was like two thousand and seven, maybe two thousand and eight.

[00:05:34]

What would Google do if it were evil? That is a great generator for a story, that general kind of question: what happens if this thing turns evil? What would that look like? That's right. In fact, everyone, you should all just pause your podcast right now and go think about your own answer to this question. But come back to the podcast. Yes. If you don't come back, you won't get to hear the answer that I gave Cory, which he turned into a short story, actually.

[00:05:59]

So the answer I came up with was: oh, Google would mess with politics. It would figure out how to swing elections and totally gain control of the political system. Peter, how sure are you that your idea wasn't the inspiration for Google turning evil? It seems like an information hazard, potentially. I don't think you should just assume that you caused really large effects in the world by answering a question from Cory Doctorow, though it's hard to rule that out.

[00:06:29]

I was just last night looking through lists of inventions that were inspired by science fiction. And so I'm primed to suspect you in this case. But anyway, go on. So Cory wrote the story. Yes. And in fact, he wrote the story a little differently. My suggestion was, well, Google could read all the candidates' email and, like, understand the motives of all the humans, and mess with their campaigns or help them. But Cory's version was, oh, it's the way that the candidates are perceived by the public that's totally shaped by search results.

[00:07:01]

And so if they want to help you, they'll show you all these great search results about you. And if they don't, they'll surface sordid things, perhaps even fictitious sordid things, about you. And, you know, it turned out it wasn't Google that led the charge on this. It was Facebook, and it wasn't deliberate. It perhaps happened by accident, to a large degree. So there may be certain people at Facebook who had some notion of what was happening, but we wound up in that world.

[00:07:30]

And I have an apology to make, which is that after that conversation, I didn't do anything to try and stop it.

[00:07:38]

Great. So you're outlining various categories of consequences of what happens if privacy isn't protected. Is the example of Google hypothetically, or Facebook actually, swaying the results of elections or swaying public opinion on various topics really about privacy? I mean, it seems like without collecting a bunch of private data, they could still, you know, decide to curate your news feed to prioritize things that are favorable to one party versus another. How does that relate to privacy?

[00:08:12]

Well, I think the dimension that turned out to be privacy-oriented here, that's driven by a lack of privacy, is that the platform sees what you're interested in and engaged with, and it can A/B test the user interface and then do machine learning on each human specifically to figure out, well, what type of stories is Julia interested in, in particular? And it then turns out to be way more effective to radicalize people around propaganda if you can customize that messaging to each individual, right?
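To make that mechanism concrete: below is a minimal, purely illustrative sketch in Python of per-user engagement modeling, learning a smoothed click-through estimate for each user and story category and then ranking candidate stories by it. The log, the categories, and the scoring rule are all invented for the example; no real platform's system is assumed to look like this.

```python
from collections import defaultdict

# Hypothetical engagement log: (user, story_category, clicked_or_not)
engagement_log = [
    ("julia", "science", 1), ("julia", "celebrity", 0),
    ("julia", "science", 1), ("julia", "politics", 0),
    ("bob", "politics", 1), ("bob", "science", 0),
]

def learn_preferences(log, prior=1.0):
    """Estimate each user's click-through rate per category (Laplace-smoothed)."""
    clicks, shows = defaultdict(float), defaultdict(float)
    for user, category, clicked in log:
        shows[(user, category)] += 1
        clicks[(user, category)] += clicked
    return {key: (clicks[key] + prior) / (shows[key] + 2 * prior) for key in shows}

def rank_stories(user, candidate_stories, prefs):
    """Order candidate stories by the user's estimated engagement, highest first."""
    return sorted(
        candidate_stories,
        key=lambda story: prefs.get((user, story["category"]), 0.5),
        reverse=True,
    )

prefs = learn_preferences(engagement_log)
stories = [
    {"title": "Celebrity feud escalates", "category": "celebrity"},
    {"title": "New exoplanet discovered", "category": "science"},
]
print(rank_stories("julia", stories, prefs))  # science story ranked first for julia
```

The point of the sketch is just the shape of the loop being described: observe engagement, update a per-person model, and feed the model back into what gets shown.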

[00:08:46]

Yeah, that makes sense. Interestingly, also, I think Google has been accused of having done this in certain ways as well, especially YouTube, where the reports are that with the related videos or suggested videos tab over on the right-hand side, if you started with a reasonable educational video, it was really easy to wind up clicking through a series of videos that went through conspiracy theories all the way to radical Islam, or the world is flat. Right. And these things are just optimized for: well, it seems like someone who watched this thing might watch this next thing and be engaged by it.

[00:09:29]

And it turns out that documentary content making arguments for those things can be really compelling. And so you wind up just showing people a series of documentaries that tell them that they live in a world that's flat. Yeah, interesting. So I've been trying to think about the forces working against privacy, which in the case of a government might just be, you know, (a) increasing its power, and (b) wanting to increase security; like, the desire to catch criminals or terrorists is sort of in conflict with the privacy of citizens in a lot of ways.

[00:10:03]

And then for companies, it seems like the forces working against privacy are the desire to optimize ad revenue, and then also the desire to just make their services work better for people, in order to, like, give you an experience that you enjoy and will keep coming back to. And I guess we could certainly argue about whether apps that people keep coming back to are, in fact, apps that are good for them. But still, these are apps that people want to use, and in order to make that happen,

[00:10:29]

you need, or it's at least helpful to have, a lot of personal data about people and their usage habits. Those seem to me like the tensions, the forces working against privacy. Did I miss anything? Those are the big ones. And there are some ways in which we could imagine addressing some of these, and others where it seems more profound: we're just being confronted with the fact that it's really convenient to give up our privacy to giant technological platforms, and then really hard to verify that they're actually working in our interests.

[00:11:00]

Yeah.

[00:11:00]

So which of those anti-privacy forces feel most tractable to you? Conceivably, I don't want to say super tractable, but conceivably the advertising revenue incentive could be addressed. We could buy subscriptions, for instance. So, you know, more people could choose to buy products from companies like Apple, and part of me is very uncomfortable saying this for other reasons, because they're a very closed, proprietary technology company. But they do a slightly better job of representing the interests of their users, because the users are paying them a lot of money for those services, whereas other platforms tend to be free.

[00:11:41]

But it's not just subscriptions and paid models. We could also look at, and maybe the United States will never do this, but it's well within the European, Canadian, and Australian tradition to have public funding producing large amounts of important media content and supporting authorship. You know, every time you borrow a book from a public library in most countries in Western Europe, in Canada, or in Australia, the government will pay some amount of money, maybe a dollar, to the author, and sometimes to the publisher of the book.

[00:12:08]

We could look at models like that to decrease the necessity of relying on advertising, and therefore very intrusive advertising, for technology makers and authors and content producers and news sites. I think it's a little bit of an unfashionable idea in the era of neoliberal economics, but we should seriously consider those options, or at least encourage our European friends to seriously consider them. So you can sort of imagine a way past the advertising incentives to spy on everyone. But the piece where it's just more convenient for us?

[00:12:46]

Yeah, that's hard to fix. Yeah. So do you see part of your mission, I mean, you're no longer at EFF yourself, but your mission as just, like, generally someone working for a better Internet, do you see a part of that mission as being just convincing people that privacy is more important than they think it is, and that it's not worth making the trade-off of more convenience for less privacy? I don't think convincing people is necessarily the way I've thought of it. More

[00:13:14]

just making sure that they've heard the argument. Like, here's the choice you're making if you choose to not run a tracker blocker or an ad blocker on your browser; here's the choice you're making if you share information on Facebook. You might still make those choices, but make them in an informed way. Yeah. I tend to think a lot about the ways that our reasoning and decision-making faculties are not perfectly adapted for, you know, the modern context, or for our goals as individuals as opposed to our genes' goals or, you know, our ancestors' goals.

[00:13:50]

And it seems like one big category is these kinds of little decisions, each one of which is not that consequential, where the costs are not that noticeable. But over time they kind of add up, and probabilistically make us a lot worse off in the long run, but that's not very salient or visible to us, you know, each time. So this seems like why we, you know, have trouble saving money for the future, or dieting, or whatever, because on the margin, for each choice, the benefits of eating that piece of cake are very salient and immediate.

[00:14:20]

And the costs are kind of probabilistic and indirect and in the future. And so giving up pieces of privacy feels very much like it falls into this difficult-to-reason-about category. Absolutely. One of the things I would often find myself saying, you know, from my position at EFF, was that there would be much more of a market for privacy if you could buy it in retrospect. Yeah, that's a really good point.

[00:14:48]

If you realized, oh, I just lost my job, or I was just subject to this form of social ostracism, or identity theft in the extreme case, and you could time travel back now and take privacy decisions differently, I think people would take more of them. Going back for a moment to that distinction that I made between the incentives of governments to try to push the boundaries of privacy or violate privacy on the one hand, and the incentives of companies on the other, are you more concerned about one versus the other?

[00:15:22]

Do you think the consequences of companies versus governments being able to violate people's privacy are greater? We obviously have historical models of situations where governmental violations of privacy were so severe that they probably sit at the top of the list. You know, if you lived in East Germany, or the Soviet Union, or these other totalitarian states, in those situations your entire life and liberty were at stake all the time as a result of privacy violations. And people in those countries had to think about sharing their political views even in private with their partners, for instance, because their partners might be informants.

[00:16:09]

And so, the technologically assisted version of that: I think one way of thinking about it is that it couldn't be that much more intrusive than East Germany, but it could be much cheaper for nation-states in the future to be that intrusive. Yeah. You know, East Germany was effectively spending a huge fraction of its GDP on monitoring its people. And I think we'll see other states do that much more cheaply with video camera surveillance and phone monitoring and email monitoring, et cetera.

[00:16:39]

China is probably the laboratory for how far you can go with that, but maybe other states as well. And so I think that there's a real, like, very high variance here. If you ask how important these effects are in the United States, they look pretty large because of the criminal justice problems. And then there's another term in the equation for: what is the risk that we wind up in a totalitarian version of the United States? That's extremely bad.

[00:17:06]

Maybe there are enough institutional protections that we're going to hold off trends in that direction when they happen. I think it's a year when people probably wouldn't gamble super aggressively on that claim, and so those failure modes look really bad. If we look at the corporate failure modes, the corporate privacy failure modes, for the most part they don't go as extremely out in the bad direction. They're not as catastrophic, except for the swinging-elections thing, which then just turns around and feeds into the governmental failure mode.

[00:17:39]

Yeah. I want to ask you to do another comparison now, not between industry and government, but instead between different tech companies. So Facebook, Amazon, Google, maybe Twitter, WhatsApp: which of these companies do you feel are doing a pretty good job of protecting their users' privacy, and which not so much? I get cautious about trying to make categorical comparisons between these companies, for a few reasons. They often make different things. And then if you look inside their product lines, even when you can find comparable products, you'll see places where each of them is doing the best job in a specific way and then doing a poor job on other fronts.

[00:18:24]

So I think of them almost more as being like countries that have their own problems and governance structures, and they get certain policy areas right and certain public policy areas wrong. I can give random examples. Like, you know, having praised Apple before, I'd say one thing that Apple really doesn't do right is iCloud backups, which contain almost everything on your iOS devices. Those are totally readable by Apple and disclosed to governments in response to law enforcement or surveillance requests.

[00:18:57]

So all of the FBI drama about getting into the San Bernardino iPhone was kind of theater, because the FBI had full access to the very recent backups of everything on that phone. Well, wait, why did they bother with the theater then? Because they wanted a legal precedent that said they could compel Apple in the future. Interesting. And did they choose that case because it was an especially bad dude, and so they had more... there'd been a dozen cases that they'd chosen not to use before the one that they chose?

[00:19:26]

Uh, interesting. Anyway, in that specific product direction, Android recently launched an encrypted backup feature that's much stronger than iCloud backup. So for now, on that specific axis, Google is doing a better job, but we can name probably half a dozen other places where that's not true. I mean, this could just be my imagination, but I feel like some of these companies are just, like, sketchier or less trustworthy than other companies. Like, I get a bad vibe from Facebook, and I don't get as bad a vibe from, like, Google, for example.

[00:20:03]

And that could totally change. But in terms of, like, how hard I think they're actually working behind the scenes to do what they say they'll do... that seems like it's sort of a company-level variable that differs between companies. I think that's especially true in these companies that are still significantly run by their founders, where the personalities and inclinations of those humans have shaped the way the company operates. You know, in the case of Facebook, Mark Zuckerberg has a history of having done things himself that are not

[00:20:39]

super encouraging on the privacy front, and on the responsible exercise of the power that his platform gives him. Now, how much has he listened to all the interventions that have been targeted at him? Right. I was going to say, you know, I saw all these photos of him feeding cows in Oklahoma and, you know, putting his hand on his heart in Baptist churches across the country. So surely he is a changed man.

[00:21:07]

I'm persuaded. So switching back a little bit: I assume it must be the case that our laws in the US on privacy are kind of woefully outdated, just in the sense that they were mostly written before the era of big data and the Internet. And so there have just got to be a lot of ways in which they're not appropriate for the current landscape of privacy risks. Would you agree? And if so, what do you think are some of the main mismatches, where our law is just not appropriate for the current world?

[00:21:40]

Just jokingly, I mean, I think maybe I would disagree with the premise that there are privacy laws. That's way worse than I thought! Why did I think there were privacy laws? So...

[00:21:49]

There are some, if you break it down by government versus commercial privacy. In the commercial space, there's very little. There's enough law to say that if a company writes a privacy policy and then violates it, it can be held to that, and companies are required, as a result of California law, to write one. But it can be a vague motherhood statement. But it's a motherhood statement. "Motherhood statement" might be an Australian English term; it means, you know, a general reassuring declaration, as if from a mother.

[00:22:25]

We care deeply about your privacy. Facebook cares deeply about your privacy; Twitter cares deeply about your privacy. Yeah, I think it's statements like that that actually make me distrust companies more. Like, every time Facebook even gives me notifications like, Julia, we care about you and your friendship, that kind of thing, every time I see that, it just downgrades my trust in the company a little bit. And then they'll say things like, we may collect data, including the following.

[00:22:52]

Yeah, OK. Yeah. So, the promises. So companies are required to follow what they say they'll do, but they have the freedom to word what they say they'll do in, like, a very vague and non-binding way. And then if they violate those policies, which is already weird, there was no reason for them to have done that, they could have written permission for themselves to do the thing they wanted to do, you can't sue them. You can go to the Federal Trade Commission, which is a body in DC with limited resources that will triage and investigate a small number of these instances, and then potentially, in some cases, fine a company.

[00:23:31]

But those fines typically are very small compared to the amount of revenue that's at stake for the companies. Yeah. So their only incentive, really, is just PR backlash? Is that it? Yeah, well, essentially. The reason the FTC seems significant is largely because of the PR consequences of being investigated and fined by the FTC. So would you write new privacy laws? Like, you seem, I could be misreading you, but you seem, you know, generally wary of heavy-handed regulation, sort of as a rule.

[00:24:07]

But in this case, it seems like it might be necessary to preserve people's freedom. How would you balance that? I probably would have written a rule that says people need to have a way to really opt out. What would that look like? So, we spent a lot of time trying to make this happen. It could have looked like a setting in your browser or in your phone that says: when you turn this on, people you have no relationship with

[00:24:33]

can't just give themselves permission to track you. You have to specifically agree. Basically, something like the regime the GDPR has now created in Europe would have applied if you chose to opt out. Instead, it's this weird... I mean, we're probably not going to have enough time to dive into all of this. It's this weird situation where Europe, which was significantly more willing to impose serious penalties, four percent of a company's revenue, for failure to comply with its privacy rules, will get to write the privacy rules for the world because of that legislative willingness. In the United States,

[00:25:15]

there were, especially in the Obama era, a whole bunch of bills in Congress that were trying to create some basic rules of the road, but they were never going to get 60 votes in the Senate to pass. And why do you say that Europe will get to write the world's privacy laws? Well, if you're a company and you want to do business on the Internet, it's a lot easier for you if you can reach all of those European users. Is this a good thing as a rule? Like, maybe in this case it's good, but the generalization of this is that whichever country wants to be most restrictive, as long as it's a moderately big country, sort of forces the rest of the world to follow the same standards. It's a weird game.

[00:25:57]

And I don't know whether it's good or bad. I think it can be played out in constructive or less constructive ways in different spaces. As a policy person, working on policy or doing activism, you always want to just look at the game board like that and say, oh, if I get California to do something, I can shape U.S. policy; if I get Europe to do something, I can shape world policy. I think I read something, actually, on the EFF blog recently about Article 13, and I forget what the overall piece of legislation was in the EU, that would require large platforms to have a database of copyrighted images and censor things on their platforms that violated copyright.

[00:26:39]

This is terrible, terrible, sloppy thinking on the part of Europe. I mean, a lot of this thinking came out of the impact of the Internet on news organizations, traditional media organizations, and the Internet has really massively shrunk the amount of revenue that's available to those companies, for somewhat subtle reasons. There are just a lot more places to put an ad, and Google controls many of those places. And so the ads in your newspaper are no longer able to command the same premium that they used to.

[00:27:14]

And then the classifieds in your newspaper all disappeared and went to Craigslist or a local competitor. But in response to that, the newspapers, especially in Spain, said: we want to be able to control what Google shows in Google News, which, if you look at Google's products, is a tiny little piece of the stuff that Google does. And it's probably, we don't know, but it's really quite small revenue-wise. And so you've focused all this regulatory energy on: how can we control what

[00:27:42]

Google News displays from a Spanish newspaper. And they managed to get Google News shut down in Spain by trying to extract revenue there. And Article 13, I think, is trying to generalize this terrible theory of how European media outlets can claw back some revenue from Google. There are a lot of other ways of approaching that question that could be more constructive. You know, maybe we do need to turn to the tech companies and say, how are you going to fund a healthy media landscape?

[00:28:12]

But it shouldn't be tied to specific, absurd copyright claims. But is this the kind of thing that would force changes across the rest of the Internet, because tech companies have to conform to the law in the EU and they can't just sort of selectively conform?

[00:28:32]

It depends. There's always a cost associated with building two sets of products and having them work differently in different places, and sometimes the companies look at the cost of complying with a regulatory regime and say, actually, that's so annoying to us that we're either going to pull out of that country, or we're going to go to the trouble of building a different product there and not have it be the same thing that we do everywhere. So Facebook did that with the GDPR.

[00:28:59]

They just said, OK, there's a version of Facebook that's GDPR-compliant and there's a version that's not, and everyone else gets the non-compliant version. A lot of other companies will just make everything GDPR-compliant. We might see, with Australia and the UK seriously considering mandating backdoors in encryption, companies having to choose: do they put a backdoor in for those governments, or do they just stop offering their more secure messaging products and all the associated features in those countries?

[00:29:30]

Zooming out a little bit now: how have your views on privacy and the landscape of trade-offs and strategies evolved in the years that you've been working on it? There actually was one thing I wanted to add just before I answer that, which is that all of the things I said about privacy law are about commercial privacy law. There's a separate area of privacy law where, in the United States, we do have the Fourth Amendment, which in theory secures people in their homes against search and seizure by the government.

[00:30:01]

Right. But those protections have been massively eroded by the war on drugs and the war on terror. And so we still do nominally have a Fourth Amendment; it's just far flimsier and less useful as a protection against various forms of surveillance, whether it's law enforcement surveillance, NSA surveillance, et cetera. But actually, to jump in before you answer my own question: when I asked the question about our privacy laws' mismatch with the current landscape, I was thinking of things like email, which people now use as they used to use letters. And there were laws that you can't open someone else's mail, but, you know, your employer can read your email, or at least

[00:30:40]

it seemed to me emails weren't subject to the same kinds of strict privacy protections that letters were, even though emails are now the modern letters. Are there things like that that you're concerned about, or is that just sort of a trivial example? Workplace surveillance is almost an entirely third category from the two that I was talking about, because that's a relationship; it's almost more like a family relationship, where you have this very close relationship between one human and a surrounding group of humans, and the surrounding group of humans gets to create a lot of norms and conventions.

[00:31:14]

And in most places, the law will back employers up in setting policies that require surveillance of employees' communications. Whether that's actually good... I think a lot of us feel it probably isn't, a lot of the time. It does depend on the nature of the work and the nature of the roles that people have. And I feel like we haven't gotten the right outcome in a lot of settings, and a lot of people have workplaces that are pretty psychologically intrusive as a result.

[00:31:49]

And some of that is about the law, and some of it's about bargaining power. But it's also an area where I'm honestly less of an expert. OK, OK. So going back to the question of: have your views on privacy, and the laws thereof, changed in the years that you've been working on it? So, I think, weirdly, I found myself at first working on privacy because I'm one of those people for whom privacy is an emotionally salient topic, as in the eyes in the night.

[00:32:17]

That's right. Like, I get really bothered by the idea that people can see things on my computer. But I didn't necessarily think that the topic was as important as my emotional reaction to it. And I thought that other areas of Internet policy, like copyright law and access to knowledge, were probably much more important, until one day I did a back-of-the-envelope calculation on this question and realized: oh, there are some places where privacy, potentially, in certain societies, just becomes way more important.

[00:32:50]

They're all a little bit indirect. You know, it's never that privacy itself is the most important thing; it's just that sometimes it's a safeguard against something else terrible happening. And that then raises this weird question, which is: how strong can that safeguard be, especially in an era where technology has made privacy invasion so easy? Are there any significant disagreements or debates within the community of people, you know, EFF and its sort of sympathizers, are there any disagreements within that community about privacy?

[00:33:22]

I think that a huge one that people have been grappling with in the last few years is the relationship between privacy, freedom of speech, the political character of online debate, and the consequences of platform design. There are a lot of places in there where you see a tradeoff between privacy and competition. Competition? So, if you wanted to encourage the creation of alternatives to Facebook, the first thing you would do is require Facebook to make a lot of APIs that let people get their data out, that let competitors to Facebook get

[00:34:01]

Facebook users' data out, so that you can build something else. But those APIs are exactly the same sorts of APIs that Cambridge Analytica was able to exploit to exfiltrate vast amounts of user data. Why is it a threat to my privacy to give me the right to extract my own data? Unfortunately, with social media networks, social media tools, it's not enough to get your own personal data. Much of the data is things that relate to interactions with other people.

[00:34:31]

Oh, I see. Right. So I can't just extract my half of a conversation that I had with someone, because at the very least you'd know that I had the conversation with that person, even if you block out their comments. Yeah, exactly. And then, you know, one of the big pieces there is an address book. Can you get an address book out? Right. Because then it shows that they know me, not just that I know them, that commutative relationship.

[00:34:55]

Right. That's tricky. I mean, do you have a position on that issue, or is your position, in a nutshell, that it's complicated? This is a personal one; it certainly in no way represents the view of any organization. Actually, that's probably true of everything I've said today. For me, I think that the risks, the political risks, of having massive privacy invasion tied to machine learning algorithms in politics are really high.

[00:35:23]

And so I might err on the side of trying to get the platforms to protect privacy really strongly, over competition. But I'm not super optimistic that we can win that either. Like, I might be being irrational in thinking that this is the thing we should do, even though we can't accomplish it. And it also contains this question of: OK, so suppose you get Facebook to be better at protecting privacy, which means keeping all the data in a box itself and only using it itself.

[00:35:54]

How do we get Facebook to use that data in a way that's democratically constructive? And that's a super hard governance question. Yeah. All right, well, maybe that's a good point at which to shift over to discussing AI, which you've been working on for a while, but especially now that you've moved to the Partnership on AI. And I was just looking at an impressively detailed, comprehensive analysis that you did recently of progress in AI across different domains, from image recognition to speech recognition and so on.

[00:36:31]

Can you talk a little bit about, (a), does that project relate to your concerns about privacy, or is it a totally separate thing? And (b), how do you go about measuring progress? On the relationship: it definitely has informed some of my concerns and helped me to understand where we're at on the political implications of AI. The project that we did, and this is an EFF project, you can find it at eff.org/ai/metrics, is the AI metrics and progress measurement project.

[00:37:08]

The idea was to just get some high-level sense of, when we start as a policy organization working on AI, should we focus on these short-term issues that are obviously relevant, like the use of machine learning prediction systems for sentencing and pre-trial detention in the U.S. courts, where, you know, the machine learning algorithms are responsible for probably 500,000 people being incarcerated. So should we focus on that, or should we be thinking about the way that artificial general intelligence at some point might totally transform the planet?

[00:37:46]

What evidence is there, or isn't there, that that type of scenario is imminent? And the way we did this, we just, like, picked a methodology and tried it out. We said: let's make a list of problems that we know humans are able to learn to solve, and then some sub-problems under some of those problems. And then, for each problem, do we know any metrics or ways of measuring whether machine learning systems are yet able to do this thing, to learn to do this thing?

[00:38:17]

And we just made a list, and we got on the order of, you know, a couple of hundred different sets of metrics for on the order of a hundred problems. I should have those numbers in front of me, but it's roughly a hundred. And then we went out and looked at the literature and said: OK, so how well are the current neural network architectures solving this reading comprehension problem, or this game-playing problem, or this vision task? And what we found was

[00:38:53]

progress across a lot of fronts has been very fast, especially in the last five years. There are almost three buckets of problems. There are the ones where progress has been really rapid and humans have already been surpassed, in basic vision tasks and Atari game playing. In a handful of very basic reading comprehension problems we're starting to see human-level performance, but those are sort of second-grade reading comprehension tests. Then there's a second category of tasks where

[00:39:32]

the progress is happening, but it hasn't reached close to human level yet, things like answering questions about what's going on in a photograph. And that doesn't count as basic image recognition; it's a fairly advanced image recognition problem. So the basic task of, you know, can you tell me, is there a microphone in this photograph, or is there traffic in this photograph, the ones that we keep helping the algorithms solve every time we answer captcha questions: exactly, those look solved.

[00:40:07]

The problem of showing a picture to a neural network and getting it to write a correct long-form sentence description of what's going on there, which is called captioning, or the problem of being given a picture and a question and answering the question about that picture, so this is not just a vision task but a hybrid vision-and-language task, and that one's called visual question answering: on those problems we see rapid progress, but you wouldn't really say they're solved.

[00:40:44]

And then there's a third category of things that are just beyond the current state of the art. Like, we don't yet have anything that looks like a real neural network writing computer programs of a general sort, or answering questions about the behavior of computer programs. There are certain advanced kinds of language tasks where you really want to not just answer simple questions but kind of get the deeper narrative out of a story; for instance, some of those tasks look too hard for the current state of the art. Reading

[00:41:19]

scientific research papers would be another thing, where, yeah, maybe in specific domains you can do some feature extraction, but being able to critically read a complex text is way beyond the state of the art. Yeah. I've noticed this kind of disconnect when I talk with people about AI progress, between the people who are quite bullish or just enthusiastic about the amount of progress that's been made so far, versus the people who are kind of unimpressed. I guess there's two sources of the disconnect.

[00:41:51]

I'll name both, and you can just respond to both of them. One objection from the bears is that they'll point to specific examples of an AI giving a really dumb answer to what should be a really easy question, like, you know, showing it a photo of some sheep grazing on a hillside and the AI's like, that's a lady in a hat. And, you know, people post stuff like this on Twitter and they're like, yeah, I'm really not worried about a robot uprising.

[00:42:16]

So that's one type of objection. And the other is sort of a more general: yes, we may have made a lot of progress on these domains that we can measure, but these domains seem very narrowly defined, and pretty small pieces of the totality of what would constitute human intelligence. So, like, I don't know, being able to perform really well in a specific video game, or being able to accurately recognize images of things that you were trained to recognize with a certain set of images, like of dogs or sheep or whatever.

[00:42:55]

And so it often seems like the debate is: the bulls will say, look, we're, you know, ten times better than we were two years ago or something. And then the bears are like, yeah, but we went from, you know, half a percent of the way to AGI to, like, two percent of the way to AGI. And, like, yes, that's ten times better, but, like, we're still pretty far.

[00:43:15]

And I don't know, I've sort of been trying to resolve this debate, and it's tough, because, you know, there's no easy objective measure of what percent of the total way to general intelligence this type of image recognition constitutes. But do you have any thoughts on that? Like, if you're bullish about progress, can you address why these specific narrow tasks seem like an important cause for optimism? So, one thing I will say is that if you go to that Web page with all those metrics, you'll find a really, deliberately, hard-to-engage-with, almost incomprehensible document.

[00:43:55]

It's vast and hard to get a big picture from. And we thought about, and might still do, the idea of making something much more simple and digestible. The cartoon version of this could be a progress bar, like, you know, one of those Windows things where you can see we're 70 percent of the way there. And it could go backwards as we learn more about how hard the problem is.

[00:44:18]

That's right. It would be exactly like one of those Windows progress bars that gets to 99 percent and then just sits there and spins, yeah, for 15 years or so. For various reasons, I think people are cautious about the possible overinterpretation of that, and then a little bit also cautious about information hazards around making it super clear to everyone where we're at. But the intention of the project was to have the source materials for that view. And so, having worked on it closely, I kind of have an answer, which is that I'm moderately bullish.

[00:44:54]

And doing that project made me more so. And the reason for that is, you know, to address those arguments from the bears: these tasks are specific, but the interesting thing is that it was not that you built a system that could play a specific Atari game, or specifically play chess, or specifically play Dota 2. It's that you were able to teach a general-purpose system, like a reinforcement learning agent or some supervised neural network architecture, to do this thing.

[00:45:35]

And so there is some generality in the progress. And you see papers pop up that sometimes make significant progress in wildly different subfields of machine learning simultaneously, which is very interesting as well. It looks as though there's a generality to the types of neural networks that we have, which may not be sufficient to get us all the way to AGI, but we're starting to get some confidence that intelligence can be made from things like these deep neural networks.

[00:46:08]

So I think that's one reason for bullishness. And the other one is just that the progress that's happening is on so many fronts, and it's continuing. And so it's super dangerous to try and extrapolate any kind of line out from that. But if you do, and then you wave your hands a lot, you get something like 15 years. You could convince yourself that 15 years of this trend would close out the current list of problems that humans can learn to solve and machines can't right now.

[00:46:43]

Now, of course, you then should update to expect there will be new ones, and hard things you hadn't anticipated. So maybe you wave your hands and you go from fifteen to twenty-five years. Maybe that's an underappreciated forecasting technique. You said you worked on computer science and law; I know very little about the IP laws around machine learning algorithms. Have people been patenting these algorithms, and what role does patent law play in, like, encouraging or slowing down progress?

[00:47:22]

Well, the precedent we have is from the last 20 years of the software industry, where we saw massive patenting of all sorts of ideas, simple algorithms, complicated algorithms in computer science. And the total consequences of that giant wave of patenting seem to have been very negative for the software industry. I don't think there's total consensus, but there's sort of near consensus that these patents get stockpiled both by the companies that actually make products and by these other types of entities, non-practicing entities, or patent trolls as they're more accurately termed.

[00:48:07]

And what those companies do is just extract money from people who are trying to do useful things, either small companies or large ones, and then run off with it. So they're a tax on the software industry. There's no evidence that the patenting process actually causes invention in software. It seems that people invent things in order to accomplish goals, almost always. It's very rare to have the kind of "we're going to pour lots of money into this abstract R&D thing to make an algorithmic step forward" situation.

[00:48:43]

Probably the closest you could get to that would be video file formats and other things where there's just a vast amount of work that goes into making an MPEG file what it is. But those are so exceptional. Almost all of the work that computer programmers and computer scientists do is towards a specific thing they're trying to build right now, and they solve the problems that are in the way. And as a result of that, and probably a lot of other dynamics we could talk about that make software a little different from other fields of invention, patents have been a massive problem for that industry, and it has struggled, and it tried to get significant patent reform to shield itself from some of these problems.

[00:49:23]

But for various reasons, both political and dynamics within companies, that didn't succeed, or hasn't succeeded yet. And so the prior we kind of would have from the software industry is that machine learning would be awash with patents in the same way, and it would wind up with the same problems. Yeah, we've seen it awash in patents, but I have not heard as many reports of trolling and the huge problems we saw with the software industry in machine learning yet.

[00:49:54]

I only have wild speculation about that question. That's what we're all about here, wild speculation! So, one wild speculation is that it might not be as easy to tell when you have a good target for a machine learning patent

[00:50:06]

shakedown. Huh. I don't know, I mean, I haven't gamed that out. The other dimension might be that there aren't as many products that are obviously machine learning products; the machine learning in tech products is often quite subtle. Though that's not a very good explanation, because while ordinary people using them might not realize when machine learning is happening, I think experts probably do. Yeah. I mean, can companies be forced to share the algorithms that they used in a particular product during a lawsuit?

[00:50:40]

Potentially, yeah. So you can, like, look at some other company's product and be like, I bet they're using the algorithm that we developed, and then you could sue them? Yeah. Although the fact that there's guesswork in there... Yeah, well, it depends, right. So in some cases, when the model, the neural network model (for those who are unfamiliar with that term... I'm just trying to translate for the audience. They all know! A model is, like, this term for a trained neural network that does a specific thing), when the model for, say, you know, recognizing your friends in your photos lives in the cloud, like on, say, Google's computers or Facebook's computers, it can literally be quite hard

[00:51:21]

to get a copy of it. And so potential patent trolls might really have trouble knowing how to sue those companies. Whereas if the functionality lives on your device and works offline, so if your phone can recognize your friend in a photo when you're not connected to the Internet, then in that case, potentially, if someone roots the phone or jailbreaks the phone, they could extract the model and figure out: oh, it works this way. Cool. And I guess one last question about AI progress before we wrap up.

[00:52:07]

Have you been thinking at the Partnership on AI about what kinds of regulations might make sense, either official, like laws to regulate potential downsides of AI, or unofficial, like agreements between tech companies about what kind of safeguards they're going to put in place or, you know, ethical practices to follow? Because when I talk to people about work on AI safety and related topics, a big source of skepticism is, like: yeah, in theory, you know, it would be good to have safer AI instead of less safe AI.

[00:52:41]

I agree there could be some risks, but it just seems so futile to try to think about regulating a technology that doesn't exist yet, I mean, if we're talking about general intelligence, not just image recognition, and, you know, there aren't historical precedents for this, et cetera. How do you feel about that? I think it's too soon to regulate general intelligence, absolutely. But in specific domains of machine learning and AI, there are absolutely open, high-stakes, urgent regulatory questions.

[00:53:08]

Like, I can give you an example. Sure. So, California at the end of August passed a bill called SB 10, which started out as being probably a very constructive reform to the criminal justice system. It abolished the money bail system that is responsible for large amounts of incarceration in California. But at the last minute that legislation was amended to mandate that every county in California should purchase criminal justice risk assessment tools, which are essentially machine learning tools that predict whether someone's low, medium, or high risk, and then, not in a completely deterministic way, use those tools to make decisions about whether people are incarcerated prior to trial or conviction.

[00:53:53]

Yeah, and we know from excellent research from ProPublica, and following ProPublica from various academic groups, that those existing tools have massive bias problems, that they're hugely disparate in their labeling of especially African-Americans and other minorities as high risk. And that's a giant problem. And the question is, will this bill kind of cause the propagation of these biased tools in California? And so I think we're thinking, and this is a hard question, about what to tell the Judicial Council,

[00:54:29]

which is the review body under this legislation, about this new regime that's been set up. Is there a way to get the right sorts of standards that can be applied to these tools, to ensure that you're not purchasing tools that are massively biased in every county in California and then deciding 60,000 people's fates with them? All right. So that's one example. Another example, which we looked at at EFF, were proposals to mandate labeling of bots on the Internet.

[00:55:04]

And those proposals have a lot of failure modes to them. They can accidentally wind up forcing platforms to label a lot of humans as bots, because the platforms have to make a decision in a hurry, and the standards are super vague, and there are very, very high incentives to not fail to label a bot as a bot. Right. And the legislative drafts also didn't distinguish between bots and what I might call a cyborg, or a hybrid between a human and a bot.

[00:55:34]

And of course, many systems are not clearly either human or machine-learning chatbot, whether "not clearly" in the sense that we can't tell, or that they literally are a combination of a human and a bot. If you use the Gmail app on your phone, you'll notice that it starts to have suggested replies; it invites you to click one of those. Was that your voice or Gmail's voice? But presumably, if, you know, Russia or some company is going to be using bots on Twitter, say, you know, the whole point is to save time and do it at scale.

[00:56:08]

So you're not going to have humans doing this stuff. On the contrary: what you want is for the bot to be really effective, and so you build a giant lookup table of conversational paths that you've seen before, where you know what to say next. And then whenever you get a question or comment, or get to a situation that you haven't seen before, you don't make up an answer; it's probably going to be terrible and will reveal you to be a strange, inhuman robot.
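As a rough sketch of the hybrid "cyborg" setup being described, a lookup table of previously seen conversational paths, with anything unfamiliar queued for a human operator, here's a toy version in Python. The canned replies and the queue are invented for illustration; a real operation would be far more elaborate.

```python
# Canned replies for conversational paths the operators have seen before.
PLAYBOOK = {
    "who do you support?": "I just want what's best for ordinary people like us.",
    "are you a bot?": "Ha, I get that a lot. I'm just very online.",
}

human_review_queue = []  # unfamiliar messages waiting for a human operator

def reply(message):
    """Answer from the lookup table if we can; otherwise escalate to a human."""
    key = message.strip().lower()
    if key in PLAYBOOK:
        return PLAYBOOK[key]
    # Unseen situation: don't improvise (that's what sounds inhuman);
    # queue it for a person to decide what to say next.
    human_review_queue.append(message)
    return None

print(reply("Are you a bot?"))       # canned reply from the playbook
print(reply("What's 17 times 23?"))  # None: queued for human review
print(human_review_queue)            # ["What's 17 times 23?"]
```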

[00:56:40]

You hand it to a human and say: what should I say next? Well, since we're almost out of time, why don't we leave it at two examples, because I'm itching to ask you my classic end-of-the-episode Rationally Speaking question, which is: can you nominate a resource, like a book or a blog, or even a person, you know, an author or thinker or whatever, that you have substantial disagreements with but nevertheless think is valuable to read or engage with?

[00:57:09]

So I was thinking about this question, and there are many different directions to try and answer it. But the one I think I'm going to share is this book by David Graeber called Debt: The First 5,000 Years. And it's an absurdly ambitious, certainly overly ambitious, kind of grand narrative of history, and of the way that markets displaced the preceding cultural human economies that existed in tribal societies, and the inherent violence in that transition process and its implications. And this book

[00:57:49]

tells so many great yarns about so many dimensions of life. And it appears to back them up with a lot of strong citations, but I can't believe that it's all true. In fact, we've done a couple of spot checks, epistemic spot checks, maybe one or two, and they've been mixed. And so what I almost really want is to have a wiki-fied version of this book, where we go back and say: OK, of all of these beautiful claims, how many of them really check out, and how many of them turn out to be something else altogether?

[00:58:25]

Oh, nice. I do have a friend, I'll link to her blog, who does epistemic spot checks. It's just her personal blog, it's not a paid thing, but she'll take a book that makes a bunch of claims and just pick, you know, randomly ten of them to check or something, and use that as a barometer for how trustworthy the book is, which is a thing I wish was more widely done. That's an excellent idea.
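For what it's worth, that spot-check procedure is simple enough to sketch: randomly sample a handful of a book's checkable claims, verify them by hand, and treat the verified fraction, with a wide margin given the tiny sample, as a rough barometer. A toy version in Python, with invented claims and an assumed hand-check result:

```python
import math
import random

# Stand-ins for a book's checkable factual claims.
claims = [f"claim {i}" for i in range(1, 201)]

def spot_check_sample(claims, k=10, seed=0):
    """Randomly sample k claims to verify by hand."""
    return random.Random(seed).sample(claims, k)

def trust_barometer(verified, checked):
    """Verified fraction plus a crude 95% margin of error (tiny sample, wide bars)."""
    p = verified / checked
    margin = 1.96 * math.sqrt(p * (1 - p) / checked)
    return p, margin

sample = spot_check_sample(claims)
# Suppose hand-checking found that 7 of the 10 sampled claims held up:
print(trust_barometer(7, len(sample)))  # ~ (0.7, 0.28): a barometer, not a verdict
```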

[00:58:44]

The other thing I will say: I'm sure a lot of your audience has read Steven Pinker's Better Angels of Our Nature. I read those two books together, and I found the experience of reading them together to be really striking. How so? Well, maybe it's my politics, but Pinker really annoyed me. He just seems so wildly overly optimistic, and overconfidence is maybe the biggest thing. He's like: just, here's how it is, I'm here to tell you that violence is not a problem anymore.

[00:59:16]

And there are little subtle weaknesses in his argument that make me very skeptical of it. What's an example? So, his numbers on warfare are all about battle deaths, and he just excludes deaths from war outside of battle and says, well, they must be declining as well, in proportion, but it's hard to get data about them. And it just seems, oh, that's not obviously necessarily true. It could well be the case that modern warfare causes many more, proportionally speaking, non-battle deaths, because it ranges much more widely, because of mechanization, for instance.

[00:59:54]

Well, maybe that's not true, but it's the kind of doubt that should suffuse the book, and it doesn't. And then the thing that's striking about both of these books is that they address a lot of topics that seem to be off topic. You know, they'll go into the topic of honor, or witchcraft, because it's thematically related or sort of aesthetically related, or both authors seem to have claims about how these other concepts fit into

[01:00:27]

the main subject that they're arguing about. And the thing that really struck me, reading them together, was that Pinker is making this claim that violence is declining, and that, in particular, tribal societies had huge problems with violence. And then what Graeber does is go through and tell specific, contrasting anthropological stories about different tribal cultures and how they created norms that caused or mitigated violence. And you get this picture of: oh, wow, this was really complicated. There clearly was a huge problem with incentives to violence in hunter-gatherer and early agricultural societies.

[01:01:17]

But culture really responded to that set of incentives in some really creative ways, in some places and not everywhere. Does that contradict Pinker's thesis, though? It sounds like it could totally be the case that culture adapted to respond to the problem of violence, but not nearly enough to, you know, mess up the trend that Pinker is pointing out. I don't think it necessarily contradicts it. I just think it's grounds for a lot more caution than Pinker engages in.

[01:01:49]

Yeah. And, like, a lot more of a complex narrative than Pinker seems willing to tell. Well, this is fun. It's making me think that I should add, or substitute, the Rationally Speaking question to be: please criticize a work that we suspect many of our listeners will be a fan of. Because that seems like a good practice. Or mix it up each time. Mix it up, yeah, a shot and a chaser. Cool. Well, we'll link to both of those books, as well as to the blog posts and other articles that we've discussed during the episode, and to yourself and the Partnership on AI.

[01:02:28]

Peter, thank you so much for joining us on Rationally Speaking. It's been great having you. Thank you, Julia. That was a lot of fun. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.