
Today's episode of Rationally Speaking is sponsored by GiveWell. They're dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does. For example, how many lives does it save, or how much does it reduce poverty, per dollar donated? You can read all about their research, or just check out their short list of top recommended evidence-based charities, to maximize the amount of good that your donations can do.


It's free and available to everyone online. Check them out at givewell.org.


Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here today with Ben Buchanan. Ben did his PhD at King's College London in War Studies, and he's now a postdoctoral fellow at Harvard University's Kennedy School of Government, where his research focuses on cybersecurity and statecraft. He just published his first book on the topic, and he's testified to the Senate. Ben, welcome to Rationally Speaking.


Thanks for having me.


So, Ben, your book is on something you call the cybersecurity dilemma. That's the title of the book. Tell us about that. What is the dilemma?


Well, the title, The Cybersecurity Dilemma, comes from this older idea we have in international relations called the security dilemma. And the security dilemma, in one form or another, has been around since the ancient Greeks. The notion of the security dilemma is that as one nation defends itself, doing things that it genuinely thinks are just for its own self-defense, it unintentionally threatens other nations. These other nations see the action, and it precipitates a response. And the first nation sees that and responds in its own way.


And you can have an escalatory spiral, sometimes even towards conflict. And it's a conflict that no one wants. So what I do is I look at how this notion applies in cyberspace, how nations defend themselves in cyberspace and what fear that causes in other nations.


So in a traditional security dilemma, if I'm understanding it correctly, you have something like, you know, a nation building up its stockpile of missiles so that it can defend itself against potential attacks. But other nations see that and worry that, oh, maybe they're stockpiling missiles because they plan to attack us. And so there's this kind of feedback loop.


How does this apply in cybersecurity? Like, what is a thing that one could do for defensive purposes in cybersecurity that could look like an offensive move to other nations?

One of the examples that I write about is how the United States, for genuinely defensive reasons, hacked the People's Liberation Army in China in order to better prepare its cyber defenders against what it perceived to be a Chinese threat. So they hacked the PLA. They looked at how the PLA was preparing to hack other targets, including the United States, and they used that information to inform American defenses.


The problem, of course, is that had the Chinese discovered this American intrusion, this American hacking, it's deeply unlikely they would have known that it was for genuinely defensive reasons.


What does the hack look like to the recipient? Is it just like we can tell someone has been in our system, but we don't know who it was and what they were doing? Or is it more specific than that?


It's often very challenging, if you're on the receiving end of a hack, to understand what the intentions were behind it. In some sense, I think the security dilemma in general is always about the impossibility of knowing with certainty, or with a strong degree of confidence, what's in another actor's mind, what their intentions are. But the mechanics of cyberspace make that particularly acute and damaging: if you're on the receiving end of a hack and you're just looking at the computer forensics, it can be very difficult to know what the intentions were.


And not only that, but what the intentions will be in the future: whether that hack is a beachhead for something more.


Right. So how is the cybersecurity dilemma different from previous instantiations of the security dilemma with past technologies, like missiles, or even nuclear weapons?


Is it just the same kind of calculus, but now applied to cybersecurity? Or is the calculus fundamentally changed by the new technology?


Certainly there are what we might call conceptual constants, old ideas of the security dilemma that are still relevant in the cybersecurity context. But one of the things that interested me was the ways in which the mechanics of cyberspace, the physics of cyberspace, make the security dilemma worse. And one of these is the very strong linkage between intelligence collection, which often is done with defensive intent, and attack. It's very difficult for one human spy in a country to launch a full-out invasion.


So the linkage between the intelligence collection by one spy and an attacking army is usually not too great; there are some intermediary steps required to go from the spy to the full-on invasion. But in cyber operations, in nation-state hacking, oftentimes on the receiving end the intelligence collection looks very similar to preparation for an all-out attack. And indeed, that kind of intelligence collection is required if you want to launch a high-end cyber attack. That's one of the things I think is particular to cybersecurity that makes the security dilemma in this instance more acute and more dangerous.


You pointed out in your book that there's this inherent imbalance in cybersecurity that's not present, or not present in the same way, in previous geopolitics, which is that the advantage in cybersecurity goes to offensive moves instead of defensive moves. Can you say a little bit more about that? And why does that matter?


Sure. What's really important here is not just the reality of where the advantage lies between offense and defense, what we call the offense-defense balance. And I think in cyberspace, it's a little bit too soon to say how that's going to play out in the long run. What's important is the perception: what do policymakers think is better, offense or defense? And the reason why is that if they think offense is better, historically, in past instances of a security dilemma, we've seen that this is more likely to cause conflict, because they want to seize the initiative and claim the offensive advantage.


And what's concerning to me is that that's what we see in cyberspace, where policymakers, up to and including folks like President Obama, talk in almost exactly these terms about the offense-dominant nature of cyberspace, how it's hard for defenders to keep up with skilled hackers. And the concern is that in a crisis or in a conflict, that would be a spur to greater action and escalation. Whereas if they thought defense was dominant, they'd be more likely to wait.


They'd be more likely to wait and to try to let the other side act first, in order to claim the advantages that accrue to the defense.

Right. So if you're a state and you've noticed that your system has been hacked, as you were saying a few minutes ago, it can be quite tricky to figure out who the culprit is or what they were there for. Does that uncertainty help or hurt?


Like, one argument you could make is that the uncertainty, the difficulty of attributing a particular intrusion to one actor versus another, could mitigate the problem, in that, you know, if I get hacked and I don't know who hacked me, it's not clear who I should retaliate against. Whereas, you know, in previous security dilemmas, if a state, like, adds forces to a particular border, even if it's for defensive reasons, it's clear whose forces they are.


That's not a mystery. And so I could imagine the retaliatory feedback loop being stronger in that case. But how does that play out with attribution in cyber?

That's right.


This is the famous attribution problem, of which much is made in the cybersecurity context. And I think there are two ways we could think about this. The first is that I don't think attribution is nearly as challenging as it's often made out to be. I don't think there are that many examples of significant cyber attacks that we've seen where we don't know, with pretty strong confidence, even in public, who was responsible.

How do we figure that out?


Well, sometimes it's a matter of the computer forensics. So, for example, when Russia hacked the DNC in the summer of 2016, the computer forensics were very clear very quickly. The cybersecurity community has a very robust research component to it. A lot of these folks are former members of the intelligence communities of the United States and the United Kingdom, and they do computer forensics and incident response, and oftentimes will publish the data. So there was no doubt, for anyone looking at this data by July or so of 2016, that it was Russia that hacked the DNC.


So in some cases, the evidence can be quite clear. What's also interesting to me is one of the ways in which nations, so not the cybersecurity research community, but nations, resolve the attribution question: they hack each other to see what the other side is up to, sometimes in advance of being hacked themselves. The New York Times reported, for example, that one of the reasons the United States was so certain so quickly that North Korea was the one that hacked Sony in 2014 was not just the forensic evidence, which was pretty clear, but that North Korea had itself suffered an intrusion from the United States: American intelligence hackers were watching North Korean networks, and watching North Korean hackers, and saw them carry out the attack on Sony.


Now, the challenge there, of course, is that when you're hacking for attribution purposes, it activates a lot of the same risks that the security dilemma does: a nation suffering the intrusion doesn't think that you're hacking just for intelligence collection purposes or defensive purposes. It thinks you're preparing an attack. So it gets us back to square one, unfortunately, when it comes to the cybersecurity dilemma.

Right. Is the attribution problem going to get easier or harder?


Like, I can imagine technological innovation pushing in both directions: people getting more innovative at figuring out who did it, and also people getting more innovative at cloaking who did it. Which of those forces do you think is going to dominate?


Sure. The cat-and-mouse game in cyber operations continues in so many ways, and it will certainly play out in attribution. I don't know how it will resolve. If I were to make a best guess, it's that attribution will be harder for low-end activities. So if you're using off-the-shelf tools, for example, you might be better able to hide in the noise. And for more sophisticated operations, unless you're truly prioritizing operational security, that is, not getting caught and not being attributed, those might become a little bit more obvious, because when they're found, there are not too many actors that could have conducted them.


And there's enough data out there about how different actors operate that if they're taking any shortcuts, if they're reusing any code or reusing any techniques, they're more likely to be attributed. It's the same way that police look at, you know, a string of bank robberies and look for patterns between the robberies to identify who might be carrying them out, whether the same group of robbers is carrying out the string of robberies, or serial killings, whatever it is. That's what defenders and analysts do when they look at cyber incidents.


So to avoid that, you really have to maintain a very high level of operational security.


What are some of the strategies that we, we meaning, you know, states around the world, have historically used to try to mitigate the dilemma, to try to prevent these unfortunate retaliatory feedback loops? And to what extent do those measures help with the cybersecurity version of the dilemma?


Conceptually, there are at least two categories of activities we can think about that in the past have mitigated the security dilemma. One of these is shifting what we call the offense-defense balance. I mentioned that before: if you can change the perception of who has the advantage, and make everyone think the defense has the advantage, and maybe even give the defense a genuine advantage, then that has a tendency to reduce conflict.


One of the great examples here is that the Russian railroad system uses a different gauge than the European one. And this has the effect of making it very hard to move into Russia by train, and very hard to move out. So this makes it pretty clear that the border is reasonably stable, because to overcome that border would require a lot of effort. It gives the defense the advantage on both sides, and that sort of reassures everyone. Other points of not artificial geography but real geography, whether it's the placement of islands or the location of certain cliffs and so forth, can yield more stability in the system.


That's often why international borders correspond to these geographic features. And sometimes, as in the Washington Naval Treaty of the 1920s and '30s, this was a key component of the agreement: the United States and other seafaring nations tried to give everyone just enough ships to protect their borders, given the geography, but not enough to actually attack. So that's one category. The second category of what we call mitigations is offense-defense differentiation: making it very clear that the technologies you're building are purely defensive and shouldn't be seen as threatening.


So you can imagine that technologies like walls and mines and fortifications are pretty clearly defensive in nature. You can't take territory with a wall the way you can with tanks or fast attack jets. So on balance, if states prioritize building defensive weapons over offensive or dual-use weapons, that has the effect of mitigating the security dilemma, reducing the tension that everyone feels.

Great. So what are the cybersecurity versions of those two strategies?

The challenge is that those two mitigations, or categories of mitigations, which have worked pretty well for a long time, don't translate very well into cybersecurity.


As I said, the perception already on offense-defense is that offense has the advantage, and that could be destabilizing in a crisis. And in terms of offense-defense differentiation, as I mentioned, it's very hard if you're on the receiving end of an intrusion to know if it's an offensive intrusion setting up for an attack; if it's something in the middle, setting up for an attack but just for deterrence, not one they actually plan on using; if it's some kind of intelligence collection that maybe you're not happy about but isn't going to be an attack; or if it's something that's genuinely defensive, so still intrusive, still not something you're happy to see, but done with defensive intent and nothing more. It's very hard on the receiving end to distinguish those.


And therefore, we say that offense-defense differentiation in cybersecurity is pretty hard to do.

But surely there are things that, you know, the United States, for example, could do that would be purely defensive, kind of analogous to, you know, building walls or mines or something like that.


Like, we could train our officials in better security hygiene. We could build better firewalls. I guess I don't know a ton about security technology, but something in the firewall space. And I'm sure that, you know, if we had unlimited money to spend on security, then it would be best to spend it on everything: all the defensive measures, all the offensive measures... not offensive, but all the ambiguous measures in addition to the purely defensive ones.


But if the benefit of these measures is kind of fungible, then couldn't we just, in theory, decide to spend twice as much as we were spending on cybersecurity, in order to purchase the same total amount of risk reduction on our end, but all of the safe, non-ambiguous kind, not the kind that might trigger a backlash if discovered?


Sure. It definitely is the case that what I call baseline defenses, which are non-intrusive defenses, the firewalls we're talking about, the training, can and should be purchased. And for individuals like you and me, or for organizations, those are the only defensive tools available to us. It's not legal for us to hack someone else, even if we think someone else is going to hack us soon. We've got to make do with our own perimeter-based defenses and network-based defenses.


The real question is, in the sometimes dog-eat-dog world of geopolitics, are those baseline defenses sufficient?


And so basically what you're saying, in my framework, is that there might be a limit to how much risk reduction we can purchase through baseline defenses alone?


I think that certainly is what you'd hear if you brought this up to a U.S. policymaker. And we should be clear that the United States does hack other nations for non-defensive purposes. It likes the capacity to have offensive options if it wants them. It likes doing intelligence collection that is intrusive, and many other nations do the same. So all I suggest in The Cybersecurity Dilemma is that even if nations didn't have this maybe intrinsic need to compete with one another, there are structural challenges that come from the physics of cyberspace that make it very difficult to get to stable outcomes.


And I think that's quite discouraging, and something that, you know, in the short run and also in the long run deserves attention.

Right, right. So, yeah, the dilemma is kind of premised on the idea that, in the pure form of the situation, states ideally would want to cooperate with each other, like, they would want to abstain from offensive cyber attacks if they could be confident that other states were also abstaining.


And then the problem, the dilemma, is just that we can't be confident of that fact. But it could also be the case that states have no desire to cooperate with each other, and that even if they could be confident that other states were being good and abstaining, they would still be like, cool, let's go do some cyber attacks ourselves. And if that's the world we lived in, then would the dilemma be all that relevant? Like, if states were going to do the same thing regardless of their impressions of what other states were doing?


And how certain are you that we're not just in that world?


You're touching on something which is a long-held debate, almost a fundamental debate in international relations, which is: is every nation greedy? Is every state greedy? Do they always want to try to gain more territory, to get more for themselves? This is a debate that's going to go on for decades, and I don't have any particular answer to it. I think if you believe, or if anyone believes, that states are intrinsically greedy, then they tend to worry less about any kind of security dilemma, because it doesn't matter as much when nations are always trying to seize more from one another.


If you don't believe in the intrinsic greed of nations, or at least not of all nations, then that's when the security dilemma comes more into play. And that's always been the case. There's a branch of international relations that goes by the name of defensive realism, which focuses on things like the security dilemma, as opposed to (and I'm simplifying a little bit here) the more greed-focused side, which is known as offensive realism.

Got it.


So your position would basically be: look, the situation is probably somewhere in the middle. Probably nations aren't purely greedy, and so the cybersecurity dilemma seems like it applies to at least some extent, and we just don't know exactly how much.


I would be somewhere in the middle, yeah. And I think it has applicability now, already. But I also think that in the future, even if we get to a point where we can show nations that greed does not pay or doesn't make sense, we have the structural challenge that, even if we could remove greed from the equation, we've got to find some way to bring about stability in line with what a defensive realist would want: to deal with cooperation, to deal with the cybersecurity dilemma as it comes to the fore, right.


So in cooperation problems like this, where everyone in theory wants to cooperate, but only if other people are cooperating, one of the most important things you can do, I think, is to find ways to credibly, reliably signal to the other players that you intend to cooperate. Are there ways for states to do that in the cybersecurity dilemma?


At best, those forms of signaling are nascent, at an early stage. We've tried to work on ways that enable communication in a crisis, to try to ratchet down the tension. Think about, you know, how the United States and the Soviet Union after the Cuban missile crisis adopted the idea of the red telephone, so that there could be communication flows in a crisis. The United States and China, and the U.S. and Russia, have tried similar mechanisms in cybersecurity. The challenge, I think, is that nuclear crises are what we call strategic.


They immediately go to the president or the senior leaders in a country. Very often, cybersecurity crises or cybersecurity operations don't make it to that level. They're what we call operational. They sometimes have strategic effects, but they're at a lower level of abstraction. And I think we haven't yet adapted some of our signaling mechanisms to account for that operational reality. But certainly there are cases of bilateral cooperation, bilateral agreements that do try to build credibility between nations so that they can trust one another, and eventually move towards a world in which cooperation is seen as more tenable than maybe it is right now.


Well, so one thing that I was thinking of when I asked that question was that there are various things, some of which I think you mentioned in your book, that countries could do that would show that they care about security and defense, that are kind of costly signals, in that, like, states could have not done this thing and they would have gained an advantage for themselves. And so they're sort of willingly sacrificing an advantage for the sake of showing that they care about cooperation, basically.


One example I was thinking of was announcing when zero-day exploits are found. Is that relevant to the cooperation problem? And if so, could you explain what it is?


It certainly is relevant. A zero-day exploit, simplifying a bit here, is a software exploit that no one else knows about, or that the vendor doesn't know about. So defenses are much less likely to catch a hacker using a zero-day exploit than they are to catch a hacker using a regular exploit that's already known. And so zero-day exploits are fairly rare and quite valuable for hackers. And the United States and other nations have very ambitious programs both to find zero-day exploits and to buy them from vendors, to enable intelligence operations.


One of the things that I posit a nation could do is, when it finds some zero-day exploits, it could burn them, that is, expose them to public view so that defenses can fix them and will be better. Defenders will be able more swiftly to block hackers that are using them. And this would have some potential for credible signaling, because the United States would be giving up the capacity to launch intelligence operations with these exploits, and would also be shifting the balance toward the defense.


The question, as ever, is how much of this do you need to do in order to gain that credibility. And that's a matter of enormous debate right now.

Right. I also feel like the U.S. has sacrificed some credibility by now, by being kind of deceptive about how honest it was being. But I guess that cancels out. But by sort of purporting to be working for a sort of collective security, when actually it's only working for its own security, and when that's exposed, that's sort of a huge hit to our ability to credibly claim things.


Well, United States policymakers would certainly tell you that the United States seeks to gather the kind of intelligence that all nations try to gather, and that the U.S. might just be better at it than some other nations. But I think you can certainly point to cases where the United States sacrificed long-run credibility for potentially short-run gains. One example: the U.S. government runs a program, and has run a program for a very long time, to ensure that encryption, which we use every day to protect the communications we send online, is secure.


So the U.S. has a program to make sure encryption is secure, and it verified as secure an encryption implementation that was not, in fact, secure, one that had a backdoor in it that would, it seems, enable the NSA to decrypt it. And when this came out, it obviously was a tremendous hit to U.S. credibility, though in the interim it probably enabled intelligence operations. So there certainly are tradeoffs. And some would argue that that was an example of a long-run hit to credibility.


It's going to be very hard to undo, for short-run intelligence gains. And my guess is that that operation, if it was done as I described, was done because they never thought it would come out. They never thought the backdoor would be found.


Yeah. There's some line from some TV show I watched recently, where a character had just been caught doing something duplicitous, and his response was, well, in my defense, I never thought you would find out.

Quite frankly, that's often how the world of international politics works.


Yeah. So, (a), do you have any other ideas for how to tackle the cybersecurity dilemma? And/or, (b), what do you think is the most promising of the ideas we've talked about?


I think it's very challenging to make much progress on cybersecurity issues without thinking quite seriously about the long-run state a nation is trying to build towards. And, you know, there certainly are folks in the United States who would want to establish some series of norms, or some code of behavior, to regulate how nations conduct themselves in cyberspace, saying things like: you can't interfere in other nations' elections, or you can't steal intellectual property for the benefit of your own corporations.


The challenge is that very rarely are the United States or its Western allies willing to admit what kinds of operations they're willing to give up in order to get China and Russia to give up certain kinds of operations that are valuable to them. And I don't think it's realistic in the international system, such as it is, to expect that nations like China and Russia are going to bend themselves to the American will without something in return. So I think that the biggest priority in the broader cybersecurity agenda, beyond important things like improving defenses, which are absolutely vital, the biggest strategic priority, is figuring out: what's the end state we're trying to get towards, and what hard choices are we willing to make in order to get there?


Are we willing to constrain some kinds of intelligence collection? Are we willing to constrain some kinds of offensive preparation? Are we willing to maybe give some ground on something called Internet sovereignty, or Internet freedom, saying that the Chinese and the Russians can have more control over their own Internet? These are all very challenging questions, and they involve giving up things that, you know, are important to the United States and important to human rights more broadly. But I don't think there's an easy way out of the cybersecurity dilemma.


On that note, how seriously is the U.S. government taking cybersecurity? Like, on the one hand, my experience is that the government tends to take national security concerns very seriously. But on the other hand, there are all these cases of U.S. government departments getting hacked sort of stupidly easily, where precautions were not taken that they knew should have been taken. And it's just a little hard to look at all those cases and think that the government is taking these threats seriously.


Like, you had a story in your book about the Office of Personnel Management getting hacked in 2015. That was disturbing, to say the least.

That's right.


There certainly are many cases that could prompt worry among taxpayers and among citizens that a lot that should be done is not getting done. That said, the cybersecurity threat in general is definitely recognized for the severity it poses. I think every year since 2013, so going on four or five years now, the director of National Intelligence has listed cybersecurity threats as a number one threat to United States interests. And the way I often phrase it is:


The United States has the nicest rocks when it comes to cyber offense, but we still live in an exceptionally glassy house when it comes to cyber defense.

And the challenge there, the glassiness of our house, is that, like, government not taking precautions?

It's more that it's maybe, I should say, the glassiest mansion, because it's just a really big enterprise: a place people want to throw rocks at, because there's good stuff inside, and a place where there are just so many windows that we don't have enough defenders to board them up.


So, you know, you mentioned the Office of Personnel Management, which is a fairly small office that happens to hold security clearance information for millions of Americans, and the records of tens of millions of Americans who are government employees. And this was breached, apparently by China. And that's, you know, a real problem. But I don't know that this would have been thought of as something that needed tons of cybersecurity defense. I don't think OPM hired their first cybersecurity employee until something like 2013.


So this is just one example of many of, you know, some glass windows that weren't protected, to continue our metaphor. And when you've got a big mansion, there's a lot of those.


Yeah, the thing that I remember being especially disturbing about that story was that, (a), some experts had been urging the department to take these measures for a long time, and nothing had happened. And, (b), there had been similar hacks in the past, and no precautions were taken as a result of those. And, (c), there had been previous hack attempts that had failed only because the systems that were running at OPM were so outdated that the hackers were stymied and didn't know how to deal with these super outdated systems.


And that's why they failed to hack the department.


You know, when you're writing a book like this, you always try to figure out how you can explain things in as accessible a way as possible. And you look for telling details. And I think that last one was a pretty telling detail about the state of OPM cybersecurity.

There was this Onion article a few years back with the headline "Smart, Qualified People Behind the Scenes Keeping America Safe: 'We Don't Exist.'" And I think about that headline a lot.


How much has your view of, sort of, general institutional governmental competence evolved as you've studied cybersecurity?


It's important to recognize there really are a lot of people who show up to work in the US government, for less money than they would make in the private sector, in cybersecurity and elsewhere, and work very hard. So I think the challenge is that we're not necessarily hiring as many of those people as we need, and we have a very hard time retaining them. And things like government shutdowns make it challenging, as do the general problems in terms of salary and compensation. So it's not that these smart, competent people don't exist; it's that there aren't enough of them.


And frequently they're not empowered to do what they need to do in order to achieve the mission. And the mission we ask them to achieve is very challenging. What we're talking about, securing that glassy mansion, that's a very challenging mission, and it probably requires many more people, with more empowerment, than what we've got right now.


So that's a very good and very diplomatic answer, and it sounds right to me, but really what I was wondering is, like, if you had to estimate, you know, what is the probability that...


That's a vague question that would have to be made more concrete, but something like: if you had to estimate the probability that the U.S. government is going to do the sensible thing in a crisis, or do the sensible thing to prepare for a crisis, has your estimate of that probability gone up or down as you've learned more about the government, from whatever your baseline was?


I think what I've come to appreciate, the more I look into any kind of government apparatus, is the old Washington adage that where you stand depends on where you sit, and that oftentimes the missions folks are asked to achieve are parochial, parochial to their agency or to their unit. And what I don't see enough of in cybersecurity is a broad vision for how these different missions are going to fit together. So certainly, I think it's difficult to criticize the NSA for collecting foreign intelligence; that is their mission. But there should be some adjudication at the highest levels of government: what kinds of foreign intelligence collection carry so much risk of blowback that we shouldn't do them? And I don't think we have yet come upon a broader national cybersecurity strategy that puts the pieces together in a coherent fashion.


That's not for lack of trying. Every one of the last several presidents has said they want to make cybersecurity a priority, and a number of folks have held those senior jobs; some of the work those folks have done has been very good. Cybersecurity is exciting to study because it's so crosscutting. But it's very challenging for governments to make policy, to make strategy, in this area precisely because it is so crosscutting.


Yeah, I have a question about methodology in your field. To what extent does game theory, like the formal field of game theory, help you analyze situations like this, like the cybersecurity dilemma? And I ask because it definitely seems like the kind of field that game theory is designed to help with. And I've always found game theory intriguing, but I worry that it may not be that useful in the real world, because real-world situations that involve game theory are either, A, simple enough that a smart person with no formal theory training could probably just figure them out on their own, like, "oh, this is a cooperation problem."


Or, B, the game theory models that do yield sort of interesting, non-obvious answers are so stylized and oversimplified that the conditions that would make the model relevant would never actually obtain to any significant degree in the messy real world. So, yeah, I'm wondering how much formal game theory helps with issues like the ones you study.


I think in broader American political science, game theory certainly has enormous purchase these days. I tend to share some of your concerns about it, particularly as applied to things like cybersecurity, where uncertainty is very high and misinterpretation is very likely; I think there are limits to how useful those models are. So I'm not a game theoretician myself, and I don't use a lot of that methodology. But I could certainly see, if someone were interested in game theory and in how game theory works with the security dilemma...


...that it would be interesting to see what they came up with when they looked at something like the cybersecurity dilemma as I've written it up. Their conclusions may be similar, or they may use a game-theoretic model and come to a different conclusion than I've come to. And certainly I'd be interested in seeing whether game theory leads to a different outcome, or a different predicted outcome, and, in the long run, who gets closer to the mark.


Hmm. Do you think that your approach to thinking about cybersecurity differs from other academics'? Like, I have the impression that the way you choose topics to research and focus on uses maybe some different selection criteria from most academics', that maybe, for example, you're more focused on real-world importance or something. I'm not sure. What's your impression, though?


It's hard to compare oneself to others, but I think I'm comfortable saying that I am less quantitative and less game theoretic than the median in American political science. But, this being the Rationally Speaking podcast, I would say that with something like cybersecurity, you can be qualitative and not game theoretic and still be very rational. So I work very hard to make sure that I am drawing on technical sources that tell me, and tell everyone, what actually is happening in cyberspace.


I draw on many computer forensic reports by incident responders and by cybersecurity researchers. These are not meant for political scientists; they're meant for technical researchers who are trying to improve cybersecurity defenses. But they do provide an enormous window into what happens between nations in cyberspace, how and why they hack one another. And I do think that American political science, and American international relations scholarship, to the extent it wants to write about cybersecurity, should take those reports very seriously and try to get over some of the technical barriers to entry, because they provide an unprecedented window into what's actually going on out there.


And if our theories and our models and our quantitative methodologies don't match up with what actually is going on out there, I don't think they're terribly useful. So I try very hard to start with what's happening there. And hopefully the more formal-modeling folks can build on this research that I've done, apply their models, and see where they come out.


A couple of times throughout this conversation, you've touched on a disagreement or debate within the field. For example, there's the debate over how greedy nations are, and how greedy we should model them as being. And I guess there was also a debate about the attribution problem: how easy is it to attribute an intrusion to one actor versus another? Are there any other controversies or disagreements within the community of experts who study these things?


One of the big ones that I think is still evolving is how the notion of deterrence applies to cybersecurity. And deterrence in some ways is what helped make international relations a field, because with the advent of nuclear weapons, the weapons were so terrible and so powerful that the key question of policy, and also of scholarship, became: how can we have these weapons but design a system in which we never have to use them? And that was the era of deterrence. And I do think that folks like Thomas Schelling and other academics in the '50s and '60s and '70s did enormously impactful work at the dawn of nuclear weapons in getting humanity, and the United States national security establishment in particular, to a workable strategy of deterrence.


The challenge, I think, is that I'm not sure nuclear deterrence strategy applies terribly well to cyber deterrence. We often hear the policy question: how do we deter China and Russia, or other adversaries, from using their hacking abilities against the United States in ways we don't like? And getting back to what I said before about how cyber capabilities are often not strategic capabilities the way that nuclear weapons are, it's a challenge, I think, to model cyber deterrence in a new way without hewing too carefully to this well-established notion of nuclear deterrence.


So one of the debates that we're having in the field, and that I participate in strongly, is what other kinds of deterrence concepts could be useful when we talk about this new area of cybersecurity.




Well, Ben, before we wrap up, I want to give you an opportunity to introduce the Rationally Speaking pick of the episode, which is a book or article or blog or something else you've consumed in your career that has influenced the way you think. So what would your pick be for this episode?


A book that I've certainly thought about a lot is a book called Rise of the Machines, by someone named Thomas Rid, who was a big influence on my thinking on cybersecurity. And what Thomas does in Rise of the Machines that's quite interesting is he focuses on the history of cybersecurity. We think of cybersecurity as a field that in many ways is new and different, and in some respects it truly is, but it also has an enormously rich history going back to the 1930s, and basically every decade from then until now. And it is always a great reminder to read something like that, something that ties together that history, because we can learn a lot from it. And it's a reminder that as we work on the problems of today, we shouldn't discard really valuable ideas from the past.


Certainly, I think the security dilemma has been called "an old and brilliant idea for new and dangerous times." And I think the more we can focus on history, the better off we'll be in solving the cybersecurity challenges of the present and the future.


Great. Well, Ben, thank you so much for joining us on the show today. We'll link to Rise of the Machines and also to your new book, The Cybersecurity Dilemma, on the podcast.


It was my pleasure. Thanks for having me. This concludes another episode of Rationally Speaking.


Join us next time for more explorations on the borderlands between reason and nonsense.