[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:52]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and our guest today is my friend Helen Toner.

[00:01:03]

Helen is the director of strategy at a new think tank at Georgetown University called the Center for Security and Emerging Technology, or CSET, which researches and advises policymakers on the security impact of technologies like artificial intelligence. Before that, Helen was a senior research analyst at the Open Philanthropy Project, and she just got back from living in China for nine months getting to know the ecosystem there. We're going to talk about a bunch of things, including common misconceptions about China and its strategy, and just how to think about a topic like this: how to analyse complicated strategic decisions with a lot of uncertainty in them.

[00:01:39]

So, Helen, welcome to Rationally Speaking. Thanks so much.

[00:01:43]

So first, give us a little more detail on what you were doing in China, aside from the most important thing you did there, which was, at my request, paying a visit to Beijing's replica of the coffee shop from Friends. And I really appreciate you making that pilgrimage on my behalf. It was a pleasure, Julia.

[00:01:59]

And I feel like that visit in itself says a lot about China today. I think, as I showed you in pictures, the coffee shop is a beautiful replica. They serve only foods that were mentioned on the show. They have the couch.

[00:02:15]

And the lovely old albums on the walls are a perfect match for the show's strenuous attention to detail. Right.

[00:02:23]

And it's on the sixth floor of a (when I was there) basically abandoned shopping mall.

[00:02:29]

So it was sort of, in one way, beautiful and perfect, and in another way, sort of completely wrong.

[00:02:38]

And is that your encapsulation of China?

[00:02:43]

I mean, I think it's a little bit of an unfair encapsulation of China. But certainly there are many more shiny high-rises where it was a little bit unclear what a shiny high-rise was doing in that place than you'd find elsewhere.

[00:02:56]

So incongruity really is the theme. Yeah.

[00:03:00]

And also this sort of strange, techno-modern central planning was visible in various places. So, you know, this is an example, right? It's a shiny modern skyscraper, but it seems like someone just decided it should be there, rather than there being a really strong business case for many businesses wanting to invest a lot of money in it.

[00:03:23]

I think this was you, but it's possible it was another one of my friends who was in China recently. They were describing a similar experience of a sort of perfect replica of something in the West, but with some of the details slightly off. And I think it was a coffee shop, but this might have been you.

[00:03:38]

So this was, yeah, it was a really nice... I was in Shanghai. I'd been living in Beijing, which is a little bit less foreigner-friendly, and Shanghai has more Westerners, and so more Western stuff. They have the French Concession, which was, you know, previously under French control: pastries and things. So it was a lovely coffee shop with high ceilings, polished wood, a great espresso machine. You were able to choose where your coffee came from, which I've actually never encountered anywhere else.

[00:04:04]

Anyway, very hipster coffee shop. I think it had a bookstore attached, whatever.

[00:04:07]

And as an Australian who had been living in the U.S., I'm sure you were missing your well-made coffee. Absolutely.

[00:04:14]

And they were playing, if I remember right.

[00:04:16]

The Star Wars soundtrack, which was just incredible.

[00:04:20]

Well, exactly. Incongruity. Yeah. Well, anyway, I sidetracked us dreadfully there. My original question was going to be not which coffee shops you had gone to, but what were you doing day to day in China? What was your reason for being there?

[00:04:34]

Yeah, so my day job, as I thought of it, was being in an intensive Mandarin Chinese language program. This was based at Tsinghua University, one of the biggest and most respected universities in China; it's in Beijing. And that was 20 hours a week of small group classes, which I then supplemented, especially early on: for the first couple of months I did a lot of self-study as well. It was just super interesting.

[00:04:58]

I really love learning languages, and so you and I could chat for a whole hour just about learning Chinese and what a great language it is and how fascinating that was. So, as I said, I thought of that as my day job. And then my side hustle was trying to meet with people in China who were involved in AI and machine learning in some way.

[00:05:16]

I should just clarify for background that one of your main focuses, when you were a senior research analyst at the Open Philanthropy Project, was advising, I guess, grant makers or policymakers or researchers about AI development. So this wasn't coming completely out of the blue. That's right.

[00:05:32]

So I'd been getting interested in the policy side of AI at the Open Philanthropy Project, and, because AI policy is this very, very broad term, specifically the national security angle: what are the implications of this progress in machine learning that we're seeing? And China just immediately pops out in that field, you know, if you start asking how the US should be thinking about it from a security perspective.

[00:06:00]

China is sort of the first word on everyone's lips here in D.C. Right. And so it kept coming up in my conversations, and it became a big part of that trip. Part of it was personal interest, and just being at a good moment in my personal and professional life to go spend a year overseas. But part of it was very much the professional relevance. And so that was what I was trying to get out of my side

[00:06:20]

hustle, which included setting up meetings with machine learning professors and talking to them about how they were thinking about the longer-term future of this technology and the sort of social implications it might have. It also included, well, it's very easy to meet up with other foreigners, of course, and lots of people in China are really interested in thinking about AI and big ideas and the future of technology and the future of the geopolitical order and so on.

[00:06:49]

So talking to those people was also interesting. I also had a fun time with my two language buddies who I found there, two young women who I met once a week each and had lunch with, and, you know, we'd speak half an hour of English, half an hour of Chinese. They were both machine learning master's students. And so that was a really good way to just talk to them about what their lives looked like, how they thought about their careers, and all of that in a pretty low-key setting.

[00:07:16]

And so they became good friends, and that was a great opportunity as well.

[00:07:20]

That's great. So from your experience, both talking to scientists and also talking to, you know, young people in the field, and just getting to know how AI in China works: is there anything that surprised you, compared to your preconceptions or compared to media portrayals of AI in China? Yeah, I mean, I think I've seen you write about this before, Julia, that, you know, sometimes people ask you about big ways you've changed your mind, and there are just lots of little adjustments.

[00:07:47]

I think there are plenty of little adjustments, and plenty of cases where I had a vague view and it became much more concrete or much more detailed. An example of a very crisp change in my thinking: going in, I feel like the West has this portrayal of the company Baidu as, like, the Google of China.

[00:08:06]

And when you think Google, you think super high-tech, enormous, with many products that are really cool and really widely used. You know, Google has search, obviously, but it also has Gmail, it also has Google Maps, and it has a whole bunch of other things, Android. And so I feel like this term, "the Google of China," gets applied to Baidu in all kinds of ways. And, you know, it is sort of true that Baidu is the main search engine in China, because Google withdrew from China.

[00:08:33]

But all the other associations we have with Google don't fit super well onto Baidu, maybe other than that it is one of China's largest tech companies; that is true. But my overall level of how impressed I was with it as a company, or how much I expected them to do cool stuff in the future, went down by a lot, just based on, you know: Baidu Maps exists, but no one really uses it.

[00:08:59]

The most commonly used maps app is from a totally different company. There's no Baidu Mail, there's no Baidu Docs. There are just a lot of stories of management dysfunction, or a sort of feudalism internally. So that was one of the clearest updates I made. Interesting.

[00:09:16]

Why do you think that wasn't captured by the media? I feel like Baidu is very high-profile; journalists should, in theory, be uncovering stuff like that. Am I wrong? I don't know.

[00:09:29]

I mean, I feel like the Chinese Internet is a hard thing to get a sense of from the outside, because it is so walled off. So this is definitely an example of an area where I had a broad idea going in, and I now just have a much more concrete, day-to-day impression of, like, what is it like to use WeChat all day every day for literally everything you do, which is something that almost all Chinese people do. Or which companies are people talking about?

[00:09:55]

Which ones are they excited about or not excited about? And that's very much where I got this impression of Baidu, as opposed to there being some concrete, verifiable fact, if that makes sense.

[00:10:04]

Got it. Did you talk to your friends in the machine learning program, or any other Chinese people you got to know, about the social credit score phenomenon? That seems like another thing where there might be a lot of misconceptions, where it might be better or worse than we in the West think. Oh, yeah.

[00:10:23]

Yeah, that's actually a great example of an area where I can't say it was something I changed my mind about, because it just wasn't receiving much attention before I went to China; the story really blew up during 2018. And I do think it's one where the reporting in the West has been pretty overblown and misleading. Actually, I find it interesting to compare two stories. I feel like two of the biggest stories about China over the past year or so have been the social credit system story, and also the Uyghur imprisonment, the oppression of the Uyghur minority in China's far west.

[00:10:59]

And for me, that's an interesting contrast, because on the social credit system side, it's tricky: it's not that the reporting has been drastically factually wrong, it's just been kind of misleading in a whole bunch of ways. So the picture that most of my friends who haven't spent any time in China have is that the Chinese government is rolling out, on a massive scale, a system that is going to look at every single aspect of your life (who you're friends with on WeChat, who you message, what you're buying, how you're spending your time, how many kids you have) and give you a single unified numerical score.

[00:11:32]

And that score will determine all kinds of things about your life. For example: oh my goodness, did you know some people are banned from taking high-speed trains or planes? Is that kind of a, maybe slightly dramatized, version of the picture you have? Not dramatized at all, no.

[00:11:47]

And so this is kind of pulling together and mashing together all of these different threads and then hyperbolizing them, because when you mash them together, it gets more scary and more Black Mirror-esque. So there are two big things going on here. One is that China doesn't really have any kind of existing credit score system, like, straight up, how creditworthy is this person, should we give them a loan. And so there are now several commercial efforts to try and figure out ways of doing that, some of which do involve some pretty sketchy information.

[00:12:19]

So there's an app, which actually this Taiwanese-American venture capitalist talks about, that you download onto your phone when you apply for a loan. And it looks at basically all the data it can get from your phone, including how much battery you have left, what model of phone you have, and things like this, which seems maybe a little bit concerning as a way of deciding if you should get a loan. But so that's the commercial side, and that sometimes does involve having a numerical score, but that's really not that different from the US credit score system, right?

[00:12:49]

Mm hmm. And then on the other side, you have this big government push. So social credit is definitely a big idea that the Chinese government is really interested in promoting and using.

[00:13:01]

But this system of looking comprehensively at your whole life and giving you a single numerical score is really, as far as I can tell, not based in reality. So what's happening instead is, and again, I should clarify, I don't want to sound like I think there's nothing concerning about this. I think there's plenty that's concerning about this. I just think that the way it's portrayed is so misleading that it's very frustrating to try and talk about it, because people immediately go to the wrong place, if that makes sense, right?

[00:13:27]

Yeah.

[00:13:28]

I mean, after years of following the media coverage of concerns about AI safety just in the West, I'm already on a hair trigger for misleading representations. You know, you see enough images of the Terminator in response to much more nuanced positions and you start to get suspicious.

[00:13:53]

Right. So what the Chinese government is very interested in doing when it comes to social credit is thinking about how to apply existing laws in ways that are more suited to the digital age that we live in. So it is true that there have been a large number of Chinese citizens who have been barred from using high-speed trains or planes. But as far as I know, it's much more of a thing where, if you do something wrong, there is some clear punishment or clear reaction to that specific wrongdoing that you did.

[00:14:28]

That makes sense. So in the high-speed train or plane situation, I think it's usually either because you misbehaved on a high-speed train, or because you have some unpaid debt to a court, for example: you've been fined something, or you've gone to court and haven't paid your fees, or something like this. And so the idea then is, oh, well, if you are so poor that you can't pay your court fees, then it seems like you surely can't afford these expensive train and plane tickets.

[00:14:53]

So you have to just buy the slower trains which go to alternate places.

[00:14:56]

It's just slower and cheaper. And there are systems that are doing things like using facial recognition to check if you're jaywalking, or automatically recognizing which cars are parked in the wrong places using their license plates. That last one is really not that different from stuff that happens in the West. And again, I don't want to say that none of these are problematic. I think there is plenty about it that's concerning.

[00:15:22]

But the whole wrapping it up in this Black Mirror-like "the government gives you a number and that rules every aspect of your life" is just not at all an accurate picture.

[00:15:30]

Got it. Yeah. Oh, and sorry, before I lose my train of thought, I'll just say that it's been interesting, because I started talking about social credit in comparison to the Xinjiang situation, right. Something I found really valuable about the time in China (nine months is obviously way too little time to learn everything about a country or gain a really deep understanding) is that it did help me get really familiar with the community of Westerners.

[00:15:55]

You know, they sometimes call themselves China watchers. They have made a whole career out of this, have spent a lot of time in China, and speak much better Chinese than I ever will. And so it was really interesting watching them repeatedly get irritated by how the social credit system was portrayed, and how that was distorted and, you know, used for political purposes, in comparison to how they've reacted to the Muslim Uyghur oppression situation, which just seems to be reported completely accurately, as far as anyone can tell.

[00:16:24]

It seems to be really a horrible situation that is, in fact, going on. So that was interesting as a sort of sociological exercise as well.

[00:16:33]

Yeah, it's a useful kind of standard of comparison to have there, right?

[00:16:39]

I mean, so it isn't just this superiority complex of, oh my goodness, you non-Chinese could never understand.

[00:16:44]

Exactly. Yeah. Although I'm not glad to hear that the reporting on the internment camps is roughly accurate; that's not good news. Right. Another potential misconception about China that I'm curious if you agree with might just be the idea that we can know things about it with confidence. I've been reading some warnings lately that our information about China is just far more unreliable than we realize, everything from GDP figures to education, crime, and population statistics.

[00:17:18]

Is that your impression, too?

[00:17:19]

Yeah. I mean, I think my impression is that this is a bit of a debate that goes on within the China-watching community. I think there is rough consensus that any Chinese government data should be treated as at least a little bit suspicious. And there are some people who know much more about it than I do who definitely think, for example, the GDP figures are really suspicious, because they just show this continuing growth in a way that becomes increasingly implausible the longer it continues.

[00:17:48]

I have really wondered about that, just because China is such a big part of the overall decline-in-poverty story that everyone I know lauds and shares on Twitter. And I am quite confident that there has been a large decline in poverty. But the fact that China is such a big chunk of it, and the stats in China seem not totally trustworthy, just makes me worried.

[00:18:12]

Yeah, I mean, I think that's almost a separate issue, actually. For me, the skepticism I've seen about Chinese GDP numbers is about the last five or 10 or maybe 15 years of continuing growth. In terms of the reductions in poverty, there's both a sort of economic story you could tell for that, and there's also just the lived experience of people who got to go to China in, say, the seventies or the early eighties, and then go back to China now and see these massive, massive differences in prosperity levels.

[00:18:45]

Sorry, the latter was the lived experience case, what you can just actually see in China?

[00:18:50]

Yeah. Although we don't know how big it is, right? Like, there has been a large decline in poverty, but we just don't know the size of it.

[00:18:57]

There could be a wide range. Yeah, sure. I think, in terms of how China could be such a large proportion of that decline:

[00:19:04]

It really is notable how completely the Chinese Communist Party just totally ruined their own economy in the 50s and 60s. So, you know, there's a strong case to be made, or a strong causal story to be told, for how there could be such giant growth, just because they were starting from such a self-inflicted point of great weakness, if that makes sense. I do think that it does make sense to have general skepticism about any numbers coming out of the Chinese government, or to some extent also other sources in China.

[00:19:41]

Got it. In your conversations in particular with the AI scientists you got to meet in China, what did you notice? Did anything surprise you? Were their views different in any systematic way from the American scientists you've talked to?

[00:19:56]

Yeah, so I should definitely caveat that this was a small number of conversations. It was maybe sort of five conversations of any decent length.

[00:20:05]

Oh, you also went to at least one conference in China, I know. Yes, that's true, though it would be difficult, much more difficult, to have substantive in-depth conversations there. Sure. A thing that I noticed in general: in conversations with more technical people in the West, similar conversations that I've been a part of, there's often been a part of the conversation dedicated to, you know, how do you think AI will affect society?

[00:20:35]

What do you think are the important potential risks or benefits, or whatever? And maybe I have my own views and I share those views, and usually the person doesn't 100 percent agree with me, and maybe they'll provide a slightly different take or a totally different take. But they usually seem to have a reasonably well-thought-through picture of what AI means for society, you know, what might be good or bad about it.

[00:20:56]

The Chinese professors that I talked to (and this could totally just be a matter of relationships, and they didn't feel comfortable with me) really didn't seem interested in engaging in that part of the conversation. They seemed to want to say things like: oh, it's just going to be a really good tool, so it'll just do what people, you know, its users, want it to do.

[00:21:17]

And then I would kind of ask about risks, and they would say, oh, it's not really something that I've thought about. There's an easy story you could tell there, which might be correct, which is basically that Chinese people are taught from a very young age that they should not have, or that it's dangerous to have, strong opinions about how the world should be and how society should be, and that the important thing is just to fall in line and do your job.

[00:21:40]

So that's one possibility for what's going on. Of course, I might have just had a selection bias, or they might have thought that I was this strange foreigner asking them strange questions and they didn't want to say anything, as well.

[00:21:49]

I mean, another possible story might just be that the sources of the discourse around AI risk in the West just haven't permeated China. Like, there's this whole discourse that got signal-boosted by Elon Musk and so on. So there have been all these conversations in our part of the world that just maybe aren't happening there. Sure.

[00:22:09]

But I feel like plenty of the conversations I'm thinking of in the West happened, you know, before that was so widespread. And often the pushback would be something along the lines of: those kinds of worries are not reasonable, but I am really worried about employment, and here's how I think it's going to affect employment, or things along those lines. And that just didn't come up in any of these conversations, which I found a little bit surprising.

[00:22:29]

Yeah, interesting. OK, so you got back from China recently and became the director of strategy for CSET, the Center for Security and Emerging Technology. Can you tell us a little bit about why CSET was founded and what you're currently working on? Yeah, so: our name, Center for Security and Emerging Technology, gives us some ability to be broad in what kinds of emerging technologies we focus on. For at least the first two years, we're planning to focus on AI and advanced computing, and that may well end up being more than two years, depending on how things play out.

[00:23:06]

And the reason we were founded is essentially because of seeing this gap between supply and demand in D.C. in terms of the appetite for analysis and information on how the US government should be thinking about AI in all kinds of different ways. And the dimension that we wanted to focus in on was the security, or national security, dimension of that, because we think it's so important and we think that missteps there could be really damaging. So that's the basic overview of it, in terms of what we're working on.

[00:23:37]

And so it sounds like the reason that you decided to focus specifically on AI, out of all possible emerging technologies, is just because the supply and demand gap is especially large there. That's right.

[00:23:47]

That's right. And what we work on in the future will similarly be determined by that. Certainly on the scale of 10 or 20 years, I wouldn't want to be starting an organization that was definitely going to be working on AI for that length of time. Right. So depending on how things play out, we have room to move into different technologies where the government could use more in-depth analysis than it has the time or resources to pursue.

[00:24:12]

And when you talk about AI, are you more interested in specialized A.I., like the kinds of things that are already in progress, like deep fakes or drones, or are you more interested in the longer term potential for, like, general superintelligence?

[00:24:26]

Yeah, so a big input into what we work on is what policymakers and other decision makers in the government would find most useful. That kind of necessarily means that we focus a lot on technologies that are currently in play, or might be in play in the foreseeable future. More speculative technologies can certainly come into our work if we think that's relevant or important, but it's not our bread and butter.

[00:24:56]

I saw that one of the main areas of interest at CSET is how AI interacts with other technologies. Can you give an example of that? Yeah, I mean, there are several.

[00:25:05]

So a couple of obvious important ones would be how AI interacts with nuclear technology. And this could have several branches, right? It could be: how does AI interact with nuclear command and control? Should we be worried about cyber attacks on nuclear command and control, or does that not matter, is it all safe? I don't know the details of that. Or you could be interested in AI's effects on nuclear deterrence.

[00:25:35]

And is that going to change this extremely, unusually stable balance we've had at the international level for the past half century or so? Another area where AI overlaps with existing technologies is the cybersecurity area of cyber operations. This is how, for example, nation states (though not only nation states) basically hack each other. And to the extent that we think machine learning or reinforcement learning or what have you is going to make it possible to attack or defend computer systems in new ways, that could be relevant for that space.

[00:26:13]

In your interactions so far with American policymakers about AI, has anything surprised you about their views? Have there been any key disagreements that you find you have with the U.S. policy community? I mean, I think an interesting thing about being in D.C. is just that so many people here, especially people in government, have so little time to think about so many issues. There's so much going on, and they have to try and keep their heads wrapped around it.

[00:26:44]

So this means that, kind of inevitably, simple versions of important ideas can be very powerful and can get really stuck in people's minds. I see a few of these that I kind of disagree with, and I kind of understand why they got embedded, but if I had my druthers, I would embed a slightly different idea. An example of this would be, in terms of AI, the idea that data is this super, super important input to machine learning systems; that's step one of the argument. And step two of the argument is: China has a larger population and weaker privacy controls, so Chinese companies and the Chinese government will have access to more data.

[00:27:27]

And then, therefore, the conclusion: China has this intrinsic advantage in AI, right?

[00:27:33]

Yeah, I've heard that framed in terms of a metaphor where data is like oil, right, like a natural resource that will make it more powerful. Exactly.

[00:27:44]

And again, each step of the argument is not completely false. Certainly data is important for many types of machine learning systems, though not all. And certainly China does have a larger population, and it does seem to have weaker privacy controls, in some ways though not in others. Actually, an interesting comparison between China and the US seems to be that American citizens are very concerned about their privacy from the government, but are much more willing for companies to have access to their data,

[00:28:13]

if that means they can get better products or whatever. Chinese citizens are much more concerned about whether companies can access their data, and are much happier with the government having access.

[00:28:22]

Interesting. Do you know why? I don't really know. I mean, you can tell a story about the US in terms of, you know, the overall attitudes towards government here and so on. And similarly, you could tell a story about China with regard to the government being a big player in the massive boom in prosperity that they've had. But I can't get more detailed than that; I don't really have evidence for that.

[00:28:45]

Makes sense. Right. So why is the story flawed? Yeah.

[00:28:48]

So firstly, I think the first step of the argument, about the importance of data for machine learning, overstates how important data is, and certainly overstates how monolithic data is. And I guess that maybe gets to the second step, which is the idea that the key fact here is how many people you have in your country and how easily you can access data about those people. I think it's tricky, because the argument is often made with a sort of hand-wavy scary face at the end, like, "therefore all the bad things happen."

[00:29:19]

But if you're looking at what bad things might happen, it seems like there are plenty of other types of data where the US has a huge advantage. Anything to do with military data, whether it be satellite imagery or data from other sensors that the US government has: the US is just really going to have a big advantage. The whole Internet is in English. From what I've read, self-driving car input data tends to be much stronger in the US than in China.

[00:29:45]

There are just many, many types of relevant data, and what's relevant for any given machine learning system will be different from any other, depending on the application. So to go from consumer data to all data seems like it misses a lot, aside from the whole question of how the privacy controls actually work, and how well Chinese companies can actually integrate data from different sources, and so on.

[00:30:04]

Right, right. No, that's a good point. I read somewhere that, in addition, it seems like data has steeply diminishing marginal returns. So even setting aside the factors you mentioned, China might just have much less of an advantage than people think. Does that sound right to you?

[00:30:22]

Yeah, I'm not sure that I have a strong enough grasp of the state-of-the-art machine learning understanding of this to confidently say. I wouldn't be surprised if it were the case that data has diminishing marginal returns if you have a static amount of hardware that you're applying to training, but that you can continue to get value from data if you can add more hardware. So the story might be a little bit more complicated, but I'm not sure.
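[Editor's note: to make that intuition concrete, here is a purely illustrative toy sketch in Python. The functional form and every number below are invented for illustration, not taken from any real study: error falls with dataset size under a power law, so each extra order of magnitude of data buys less than the one before, while extra compute in this sketch simply lowers the floor that data can reach.]

# Purely illustrative toy model (made-up numbers, not from any real study):
# a power-law "learning curve" in which more data helps with diminishing
# returns, and extra compute lowers the error floor that data can reach.
def toy_error(n_examples, compute_factor=1.0, a=1.0, alpha=0.3, floor=0.05):
    """Hypothetical test error as a function of dataset size and compute."""
    return floor / compute_factor + a * n_examples ** (-alpha)

if __name__ == "__main__":
    for n in (1e4, 1e5, 1e6, 1e7):
        print(f"n={int(n):>10,}  error(1x compute)={toy_error(n):.3f}  "
              f"error(4x compute)={toy_error(n, compute_factor=4.0):.3f}")

[Each additional tenfold increase in data shrinks the error less than the previous one, which is the "diminishing returns" point; raising compute_factor shifts the whole curve down, which is the caveat Helen raises.]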

[00:30:45]

Yeah. So another part of this data-as-oil metaphor is that the insight and expertise of the programmers is, I guess, less important. Is that another part of the story that you think is flawed?

[00:31:01]

Yeah, I think that one is more something that we're just pretty uncertain about. It's inherently making a forecast about how interesting advances in machine learning are going to be and where they will come from. Someone who is very prominently associated with this, and has made the argument in his book, is the same Taiwanese-American venture capitalist I mentioned earlier; he's also a former AI researcher, I should say. He has made the claim that the returns to ingenuity are going down at this point, that it's really just about grunt work, and that China has a big advantage in putting in the grunt work. That's a highly condensed version of his argument.

[00:31:42]

And I just don't really see that. It kind of seems to me like things like the recent StarCraft result from DeepMind, where they had their system beat top professional players (it was a somewhat restricted version of the game, but it was not extremely restricted), or this release from OpenAI, GPT-2, which was a text-generating system where you give it a prompt, say the first sentence or the first paragraph, and it can then generate long text, however long you like.

[00:32:15]

That text is really much more plausible and much more humanlike than anything we've seen before. Those seem like non-trivial advances to me, and they both came out of two of the labs that are known for having the very best people in the world. So that says to me that the returns to excellent people seem to still be there, as far as I can tell. But predicting the future is so hard, it's hard to say.
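[Editor's note: for readers curious what that prompt-to-text behavior looks like in practice, here is a minimal sketch using the publicly released GPT-2 weights via the Hugging Face transformers library. This is just one common way to run the released model, not the tooling OpenAI itself used, and the prompt is invented.]

# Minimal sketch: prompt in, continuation out, using the publicly released
# GPT-2 model via the Hugging Face transformers library
# (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "In a shocking finding, scientists discovered a herd of unicorns living in"
# max_length counts the prompt's tokens as well as the generated continuation.
result = generator(prompt, max_length=100, num_return_sequences=1)
print(result[0]["generated_text"])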

[00:32:36]

Right. Going back for a moment to the U.S. government and their thinking about AI: it has seemed to me that the U.S. government has not been very agenty when it comes to anticipating the importance of AI. And by agenty, I mean planning ahead, kind of taking proactive steps in advance to pursue your goals. Is that your view as well?

[00:33:02]

I mean, I think, again, having moved to D.C. and gotten used to things here and so on, it seems like that's kind of true of the US government on basically all fronts.

[00:33:13]

I'm not sure if you disagree, but, you know, it's a gigantic bureaucracy that was designed two hundred and fifty years ago. That's not completely true, but the blueprints were created two hundred and fifty years ago. It's enormous, and has a huge number of rules and regulations, and different people in different agencies and different other bodies with different incentives and different plans and different goals. So to me, it's more like it's kind of a miracle when the US government can be agenty about something.

[00:33:44]

I can't really blame it. And I feel like it's kind of not fair to expect anything else of a structure that is the way that it is.

[00:33:52]

Have we been getting more agenty in any significant ways? Well, that's a really interesting question. I would have to ponder that and get back to you. Well, I mean, one thing I'm interested in is, I had heard, I don't know, a couple of years ago now, that the U.S. government doesn't have very many people who have both technical knowledge and also speak Chinese, which seems like a big gap. Do you know if there are any attempts to fill that gap?

[00:34:20]

Yeah.

[00:34:21]

So it's a tough thing to comment on, because I think the people who know this for sure usually know it by having a security clearance and access to classified information. It certainly does seem like the process that we have in place for how clearances are done makes it very difficult to have spent a long time in China, and certainly extremely difficult to go back to China while you hold your clearance, which seems like it would push in this direction. But I don't feel comfortable making any blanket statements, just because it's hard to say who is and isn't employed.

[00:34:54]

Sure, that's fair. Maybe one example of the US government trying to become a little bit more agenty actually is an interesting case. You've probably heard of the Project Maven situation. Basically, Project Maven was a project set up within the Department of Defense where the whole motivation was: AI is going to be really important, we need to start using it, and our existing procurement processes are not at all designed for anything remotely software-related.

[00:35:17]

You know, they take years and years and involve this contractor bidding process, and even becoming a contractor involves a huge amount of annoying and slow paperwork. So: why don't we try a totally different setup, one that's supposed to be inspired by the sort of Silicon Valley agile-type setup? Mm hmm. And so Project Maven was very deliberately designed to be: let's set up a project within the Department of Defense that uses machine learning, that uses a commercially mature application of machine learning.

[00:35:47]

So they chose computer vision. As far as I know, they deliberately chose an application that didn't involve killing people. It was analyzing imagery, basically, because the Department of Defense has this huge amount of imagery that it collects from satellites and drones and other sensors; I think in this case it was drone imagery that was being analyzed. Employing people to scan through that imagery and look for important things (interesting buildings, military bases, or other things that one might want to see in those images) takes a lot of manpower, a lot of time, and a lot of money, and they still can't get through all of it.

[00:36:23]

So they thought: great, this is an application of machine learning, the technology exists, it's not speculative, it doesn't involve killing people, perfect trial case. So I was giving that as an example of a relatively agenty thing that I think the Pentagon put together and got going. Of course, the reason that most people are familiar with it is that Google, after being somewhat involved for a while, faced a lot of employee pressure not to work with the Department of Defense on it, largely, I think, because it involved drones.

[00:36:54]

And the drone program is obviously pretty unpopular in many circles that overlap heavily with Google employee circles. So Google ended up pulling out of that project. So I don't know, maybe it wasn't so agenty after all. But I don't know, I don't really feel...

[00:37:09]

Give them credit. Yeah. Yeah, give them a couple of agenty points. I saw that another one of these planned areas of focus is researching which measures provide the clearest view of AI capabilities in different countries; I'm quoting the founding director there, Jason Matheny. So I'm curious: one thing that people often cite is that China publishes more papers on deep learning than the US does. And deep learning (maybe we explained that already) is like the dominant paradigm in AI that's generating a lot of, you know, powerful results.

[00:37:43]

So would you consider that, the number of papers published on deep learning, a meaningful metric?

[00:37:49]

I mean, I think it's meaningful. I don't think it is the be-all and end-all metric. I think it contains some information. The thing I find frustrating about how central that metric has been is that usually it's mentioned with no accompanying... I don't know, I feel like this is a very Rationally Speaking kind of complaint.

[00:38:08]

I think it's a good one. But it's always mentioned without any kind of caveats or any kind of context.

[00:38:17]

For example, how are we counting Chinese versus non-Chinese papers? Because often it seems to be done via "is their last name Chinese," which seems like it really is going to miscount.

[00:38:30]

There are a bunch of people with Chinese last names working at American companies. Correct.

[00:38:36]

Many of whom are American citizens. Yeah. So I've definitely seen at least some measures that do that wrong, which seems just completely absurd. But then there's also: if you have an American, sorry, a Chinese citizen working at an American university, how should that be counted? Is that a win for the university, or is it a win for China? It's very unclear. Right. And also, these counts of papers have a hard time saying anything about the quality of the papers involved. You can look at citations; that's not a perfect metric, but it's better, for sure.
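[Editor's note: as a toy illustration of the counting problem (every record and the surname list below are invented, and deliberately crude), attributing papers by author surname and attributing them by institutional affiliation can give quite different tallies.]

# Invented toy records: surname-based attribution vs. affiliation-based
# attribution of "Chinese" deep learning papers give different counts.
papers = [
    {"author": "Wei Li",      "affiliation": "Tsinghua University", "country": "CN"},
    {"author": "Fang Wang",   "affiliation": "Google",              "country": "US"},
    {"author": "Min Zhang",   "affiliation": "Stanford University", "country": "US"},
    {"author": "Alice Jones", "affiliation": "MIT",                 "country": "US"},
    {"author": "Jing Chen",   "affiliation": "Peking University",   "country": "CN"},
]

# The kind of crude surname list a naive metric might rely on.
CHINESE_SURNAMES = {"Li", "Wang", "Zhang", "Chen"}

by_surname = sum(p["author"].split()[-1] in CHINESE_SURNAMES for p in papers)
by_affiliation = sum(p["country"] == "CN" for p in papers)

print(f"Counted as 'Chinese' by surname:     {by_surname} of {len(papers)}")     # 4
print(f"Counted as 'Chinese' by affiliation: {by_affiliation} of {len(papers)}")  # 2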

[00:39:06]

And then lastly, they rarely say anything about the different incentives that Chinese and non-Chinese academics face in publishing. So actually, my partner is a chemistry PhD student, and he's currently spending a month in Shanghai. And he mentioned to me spontaneously that it's clear to him (maybe it even got mentioned explicitly) that his professor's salary is dependent on how many papers come out of his lab.

[00:39:35]

So that's just a super different setup. Obviously in the US we have plenty of, you know, maybe exaggerated incentives for academics to publish papers, but that's sort of another level.

[00:39:47]

It is. I mean, I don't know if you know this, but is the salary just a function of the number of publications, or is it like our system, where in theory they care about the quality of the journal that you publish in?

[00:40:03]

I don't know. I assume it's some kind of bonus and I have no idea how they account for quality.

[00:40:09]

I heard about a cafe, I think it was in Beijing, that gives you free meals for every paper that you get. Oh, I remember: it was that they take a discount off of your meal proportional to the impact factor of your last published paper.

[00:40:25]

That's amazing.

[00:40:26]

So many interesting incentives. Talk about distorting incentives. Yeah.

[00:40:32]

So a few minutes ago, we were talking about some of the flaws in the metaphor of data as oil. There's an even bigger metaphor, a framing device, that you hear in the discussion of AI in China, and that is an arms race. And you've talked a little bit, and written, about why thinking of AI development as an arms race is kind of flawed. Can you say more about that? Yeah, I mean, I have a few different issues with this framing, so I can try attacking it from a few different directions.

[00:41:05]

I think one important difference here is just that AI is not a single technology, right? It's a sort of underlying type of algorithm, or something like this, that can power many, many different types of applications in many, many different ways. And a lot of American thinking on defense strategy and the geopolitical order and so on is kind of naturally inspired by the Cold War and by the models that people are used to thinking in, based on our recent history. And, you know, nuclear weapons were a really, really massive feature of the Cold War.

[00:41:46]

Right. And there you could very much ask the question: does this country have nuclear weapons? How many do they have? What kinds? You can add them up. And that just really doesn't work with AI, firstly because, even in the military domain, I think it's much more like electricity, in that it will eventually power and seep through all possible domains.

[00:42:08]

Not just powering actual weapons, but also logistics and planning and transportation and all of these other domains. So it's going to be much more difficult to easily compare capabilities, even if you purely restrict it to the military domain. But then, of course, it also is not really reasonable to think about it as an arms race, because AI is, as I think Jack Clark, the policy director at OpenAI, has termed it, or at least popularized the phrase, "omni-use": it can be used in just every possible domain.

[00:42:40]

And again, I think the electricity analogy, while not perfect, is pretty good here at giving a sense of how broadly these technologies could spread. Any technology that can be used so broadly is surely going to have some applications that have this relative dimension, who is better, who has more, as with, for example, the number of nukes. But it's also going to have these massive non-zero-sum components.

[00:43:09]

For example, if Google builds some new app, like the Google Assistant or, whatever, Apple's Siri, and uses AI to make it better, that's just going to be a boon to consumers around the world, in a very absolute sense and not at all relative.

[00:43:24]

So that's certainly one big way in which I find the arms race framing misleading. Something I'm confused about with the arms race framing, especially when it's the US versus China, which is the usual context, is that most of the major development is happening in private companies, right? Not government, which is, again, a disanalogy with the Cold War.

[00:43:46]

So how does it even help the U.S. geopolitically if an American company is developing powerful AI? Yeah, so I think this is an area that we would really like to dig into at CSET, actually, because I think it's really interesting, and I think it is relevant how domestic industry is doing. To finish the thought: I think that is relevant to kind of military strength, or hard power, as it sometimes gets called.

[00:44:12]

There's a contrast sometimes drawn between hard power and soft power, soft power being more the cultural, fuzzier, influence-based side of things. But from my perspective: as well as arms races, a really common framing that I hear is competitiveness, or competition. And I think another thing that's going on there is that there is this underlying concern about the rise of China, for reasons that have very little to do with AI, really just the overall macroeconomic shifts that we're seeing, meaning that China is becoming larger and more powerful.

[00:44:52]

And there's a lot of anxiety about that, and about that displacing the United States from the kind of unusual position it's held for the last 30 or so years as basically the only world power. So AI sort of stepped in at this moment, which has made it possible for many of those anxieties to be mapped onto AI as a technology, and for AI to kind of bear the mantle of, well, as Putin famously said (he actually said it to schoolchildren at a science fair, trying to encourage them), whoever rules AI will rule the world, or whatever it was. It's so funny how that quote, in a very innocuous situation, got turned into a big thing.

[00:45:32]

Right.

[00:45:33]

I mean, I think it really does express perfectly this mapping of overall concerns about the geopolitical balance onto AI as this single technology that is the be-all and end-all. Got it.

[00:45:48]

Yeah. So it's just a much broader and fuzzier notion of arms race that includes economic strength and other things, too. Right.

[00:45:55]

And again, I don't want to make it sound like I think there's nothing to be concerned about here. I think there's plenty. For example, I think the Chinese government is extremely authoritarian, and is going to use these technologies to cement that, and will be perfectly happy to sell those technologies to other countries, if they'll pay them money, to also use on their own populations. I think that's extremely concerning. I just think it's different from "AI is this one technology, and you just have to be best at AI, and then you'll win" in some sort of

[00:46:23]

extremely undefined sense of winning. Are there any historical situations that strike you as being more usefully analogous to the development of AI than the nuclear arms race was?

[00:46:36]

Yeah, I mean, I'm interested to dig more into this electricity analogy, because I think it's a pretty good one. An implication I've been thinking about a little bit, especially as I've been moving in defense-related circles, is that if you're interested in how electricity can affect or improve your military, you're going to have to do this very wide-ranging, almost complete rebuilding of your infrastructure to make it compatible with how electricity works.

[00:47:08]

Right. And I think there's a similar thing that could be said about AI: if the Department of Defense, for example, is really serious about implementing AI, the first thing it's going to need to do is just improve all of its digital systems, which are extremely outdated and, you know, haven't been invested in. So I think the electricity example is a pretty good one, or at least I think you can get some juice out of it.

[00:47:32]

I don't know. The other one that is often tossed around, and that I think is not terrible, is just the industrial revolution as a whole, though it's a little bit less clear there what AI is like. Is it the same as the steam engine?

[00:47:44]

Yeah. It's tough finding good mappings and analogies that don't fall apart.

[00:47:51]

I mean, I guess I was thinking more specifically about the geopolitical ramifications of AI development. Like, the game theory in the Cold War was so simple.

[00:48:03]

And, you know, simple game theory never maps super well onto the real world. But still, it was kind of a clear framework to use to think about strategic considerations.

[00:48:13]

And I'm just wondering if we have anything like that for our current, very messy situation, which, as you described, has many different kinds of AI for different situations, and has companies and governments and so on and so forth.

[00:48:28]

Yeah, I don't know. Again, the best thing I can come up with in the moment is the electricity analogy, or the changes in technology that were happening at the start of the 20th century, roughly. Something interesting that I know some people are concerned about with AI is the risk of unintended escalation. So perhaps you have some kind of automated systems in some kind of battlefield context, and they interact with each other in an unexpected way and escalate a situation in a way that is not what the humans involved intended.

[00:49:06]

Right? Yeah.

[00:49:07]

And I think that's interestingly analogous to at least one story that I've heard about how the First World War got started. Are you familiar with this one? Why don't you tell it. Basically, it revolves around these various European nations having these modernized, or somewhat modernized, militaries that they hadn't really fought wars with before. And there were all these new considerations (the railroad, I think, was also quite new) about, like, if you start mobilizing your troops on the train at this time, what does that imply for when the other country needs to have already mobilized its troops, you know, in order to be ready?

[00:49:41]

So this sort of ends up with this strange dilemma where everyone kind of needed to start preparing before anyone actually really wanted to go to war, ending up, again, with this similar unintended escalation dynamic. I'm not a historian, so I don't want to stand 100 percent behind that causal explanation, but it's kind of interesting that it has that neat analogy.

[00:50:00]

Yeah, it really does. I have to admit that, personally, I've been feeling kind of pessimistic about the potential for cooperation around AI development, especially between countries, but also between companies. So I'm hoping you can help me. I'll give you a couple of reasons for pessimism, and then you can share your thoughts and hopefully counter some of my pessimism. The first reason is just that, you've probably heard this statement bandied about, AI is software, and it's impossible to regulate or control software.

[00:50:31]

So that makes any kind of treaty that relies on observation much harder, like, I don't know, the Montreal Protocol on ozone or the Kyoto Protocol on climate change, where you can kind of observe whether other parties are adhering to the treaty. It's just much, much harder when the technology is so much more invisible, as AI is. Sure.

[00:50:54]

Yeah. I mean, I think that is right about software. I think it actually does look like AI will be very difficult to monitor in that way. I know that Miles Brundage, a friend and colleague of mine who works at OpenAI, which is a San Francisco-based AI research organization that I've mentioned a couple of times without introducing it (Miles is on their policy team), is extremely interested in ways in which, for example, AI hardware, the

[00:51:23]

chips involved, could be used as some kind of input that could be monitored, in the same way that, for example, uranium is closely monitored in the nuclear case. So I think it's a difficult question, it's not obvious how you could do this, but that is a much more concrete, trackable thing that might be possible to build treaties around, or something like this, you know, if at some point in the future that seemed like something we wanted to do.

[00:51:47]

All right.

[00:51:48]

Reason for pessimism number two is that cooperation, especially in the A.I. case, I think just isn't very robust. Like, even if nine out of 10 major players in A.I. want to cooperate, it kind of still doesn't work if you don't have that tenth. And it doesn't work in the sense that the nine out of 10 may not be willing to cooperate unless you can get the tenth, and it also doesn't work in the sense that if the tenth player goes on and develops some powerful and unsafe system, then, you know, we all suffer, presumably.

[00:52:19]

Yeah, I don't know.

[00:52:20]

I guess I kind of want to ask at this point if there's a more concrete version of cooperation that you're thinking about, because I feel like my response would depend on the specific way in which we're trying to cooperate.

[00:52:31]

Yeah, I was lumping together a lot of different things in there. I was thinking about, like, if there were a set of safety standards that everyone agreed to adhere to, like, I don't know, intermittent testing of their systems in kind of constrained environments.

[00:52:46]

Yeah. I mean, I think you're probably right about that situation. In general, in most situations where you need 10 different players to agree to something, to cooperate in a prisoner's dilemma, for example, it's just going to be a heavy lift.

[00:53:00]

That's kind of what I'm picturing, if not exactly a prisoner's dilemma.

[00:53:03]

Right, right. And that's why I kind of pushed back a little bit, because I think it does depend on what exactly the situation is and what exactly you're asking the actors to do. And maybe this is a reason that I've kind of turned away from talking about cooperation in this broad sense, because it's just not clear to me that it's that helpful as an overall category of actions that people could take. It seems to me like, for example, investing in safety is a pretty concrete, still fairly abstract, but pretty concrete, action that companies or universities or whatever could take.

[00:53:39]

And that seems like something where there is some amount of coordination needed. If you really felt like you were going to be missing out by investing in safety rather than just pressing full speed ahead, that would be a tough situation. But I think in real life, it's a bit more complicated than that. There are several different subfields that you could lump together under safety, things like interpretability.

[00:54:04]

So, how easy is it to understand what a machine learning system is doing and why? Or robustness and security: how easily can you trick the system, and how well will it work in settings it wasn't designed for? Or things like value learning: how do you ask a machine learning system to optimize for something that is as complex and nuanced as human values? You know, those subfields have become reasonably well established, sort of respected, normal machine learning research to do.

[00:54:34]

And so now it is not necessarily such a cost if a lab has a wing of people thinking about those problems and then publishing their results for anyone else to use. That doesn't have very much of a prisoner's dilemma dynamic to it. We could totally end up in a more prisoner's-dilemma-like situation in the future, but I don't think it's obvious that we will. OK, interesting.

[00:54:57]

Yeah. So I'm now feeling a little bit more optimistic that there might be avenues or solutions that weren't salient to me before, ones that involve reducing the costs of cooperation.

[00:55:10]

And requiring less trust, which could be cheering. Yeah, do you want to add any additional reasons for your optimism?

[00:55:21]

I mean, at a very high level, I think it's kind of nice that the machine learning community is so international and open and committed to trying to do good things. You know, I think there could be more where that came from. So I think there could be more connection between Chinese and American researchers, or Chinese and Western researchers in general, and I think there could be more thought put into what exactly the good things machine learning could do for the world are and how researchers could promote those.

[00:55:51]

But it seems like we're starting from a pretty good baseline. It seems like there are plenty of fields that would be much worse on those dimensions. I think that's an additional reason that comes to mind. To be clear, I'm not sure, on balance, if I feel optimistic or pessimistic about cooperation.

[00:56:07]

I did kind of slot you into the optimistic spot in this conversation, so that's not your fault. But, yeah, I mean, if it were super easy, we wouldn't really need CSET.

[00:56:17]

So before we move on to the Rationally Speaking picks, I wanted to ask, is CSET still hiring? Because if so, we should put in a plug for that.

[00:56:29]

Oh, yeah, we are hiring. We are starting to slow down on hiring, so by the time this airs, we may just have a thing on our website saying that you can send us your resume, but please do. We may also still be hiring at full tilt. We're hiring for research fellows, who lead our research projects, and for research analysts, a slightly more junior role. We're also looking for, we'd love to have, at least one person on staff who really knows A.I. and machine learning really, really well.

[00:56:59]

So we have a post for an AI and Machine Learning Fellow on our website. We're also hiring for a data scientist and a senior software engineer. I think that's basically all the roles at this point.

[00:57:09]

OK, great. And the website address? The website is CSET.

[00:57:13]

So, cset.georgetown.edu. OK, great.

[00:57:17]

And we'll add a link to that on the podcast website too. So, Helen, before I let you go, do you have any recommendations for our listeners, for ways to keep abreast of developments in China, or ways to get background on China, resources that you think are particularly trustworthy or interesting? Yeah, so I have a couple of books that I'd love to recommend, which are part of the series of books I read and other resources I looked into as I was moving to China.

[00:57:45]

So the two that I enjoyed the most: one is called The Beautiful Country and the Middle Kingdom by John Pomfret. Those are the translations of "America" and "China" from Chinese. It's a broad history of the US-China relationship, starting in the 18th century and going up to today, and it's very comprehensive and interesting. The other is called Age of Ambition by Evan Osnos. It's also nonfiction, but it's a sort of potpourri of stories of modern Chinese people, what their lives are like, how they think about what they want out of life, and so on.

[00:58:21]

And so I found those two provided a really nice pairing: one, a broad background overview, and the other, a colorful set of pictures of what life is like for different kinds of people.

[00:58:31]

Excellent. That's a really good pair. Well, Helen, thank you so much for being on Rationally Speaking. This was very enlightening.

[00:58:38]

Thanks. I had a great time. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderland between reason and nonsense.