
Transcript

[00:00:00]

I think one of the big complaints that you get from school kids is like, oh, I'm never going to use this stuff. What's the point of it? It doesn't apply anywhere. And I think really showing just how dramatically important maths is to virtually every aspect of our modern world, I think that's something that can really make the subject come alive.

[00:00:35]

Hello and welcome. I'm Shane Parrish, and you're listening to the Knowledge Project, a podcast dedicated to mastering the best of what other people have already figured out. This podcast and our website, fs.blog, help you better understand yourself and the world around you by exploring the methods, ideas and lessons learned from others. If you enjoy this podcast, we've created a premium version that brings you even more. You'll get ad-free versions of the show, so you won't hear ads like this, and early access to episodes.

[00:01:00]

You would have heard this episode last week. Plus transcripts and so much more.

[00:01:04]

If you want to learn more now, head on over to fs.blog/podcast or check out the show notes for a link.

[00:01:10]

Today I'm talking with the incredible Hannah Fry, a mathematician and author of Hello World and The Mathematics of Love. We talk math, how schools can promote better engagement, human behavior, and how math can help you date.

[00:01:22]

And we explore what it means to be human in the age of algorithms. It's time to listen and learn. The Knowledge Project is sponsored by MetaLab. For a decade, MetaLab has helped some of the world's top companies and entrepreneurs build products that millions of people use every day. You probably didn't realize it at the time, but odds are you've used an app that they've helped design or build. Apps like Slack, Coinbase, Facebook Messenger, Oculus, Lonely Planet and many more.

[00:01:54]

MetaLab wants to bring their unique design philosophy to your project. Let them take your brainstorm and turn it into the next billion-dollar app, from ideas sketched on the back of a napkin to a final shipped product. Check them out at metalab.co. That's metalab.co. And when you get in touch, tell them Shane sent you.

[00:02:12]

This episode is also brought to you by 8020. 8020 is a new agency focused on helping great companies move faster without code. The team at 8020 can build your next app or website in a matter of days, not months. Better yet, they can do it at a fraction of the cost. You walk away with a well-designed, custom-tailored solution that you can tweak and maintain all by yourself, without the need to hire expensive developers.

[00:02:37]

So if you've got an app or website idea, or you're just ready for a change of pace from your current agency, let the team at 8020 show you how no-code can accelerate your business. Check them out at 8020.inc. That's eight zero two zero dot I N C. The Knowledge Project is also sponsored by Coramba Furniture.

[00:02:56]

Coramba is a new flat-pack furniture company started by two stay-at-home dads with a shared love of great design. Their latest collection is modern and affordable and will fit right into your home. All of Coramba's pieces are ethically manufactured on the West Coast of Canada and made with sustainable European birch plywood. You get to choose between a white or natural wood finish, and a soft cloth is all you need to keep your furniture looking fresh. Coramba makes chairs and benches, nesting coffee tables, wall hangers and everything else you need to fully furnish your home in a cozy, minimalist aesthetic.

[00:03:29]

Go check out Coramba.store. That's Coramba.store, like ay caramba. Use code KNOWLEDGE for free shipping and fifty dollars off your first order over two hundred dollars. Life's better when you love your furniture. That's Coramba with a C, not with a K. I'm from the East Coast, so if it's hard to tell, use the C. Hannah, I'm so happy to have you on the show. Oh, well, I'm very excited that you have asked me.

[00:03:53]

Thanks. Thanks for having me on. Why are you interested in maths? Like I said, math, for starters. Thank you for Anglicising it. Yes, I appreciate that. I think partly I was born that way. So, OK, actually what happened was when I was about 11 years old, my mum, I think she just didn't know what to do with me over one summer holiday. So she bought me this maths textbook and she made me sit down every day and do a page of this textbook before I was allowed to go out to the garden to play.

[00:04:25]

And then when I went back to school that September, after the summer, I was just so much better at the subject. I just understood everything. I'd seen everything before, and I was just really well practiced at it. And I think it's inevitable that if you're good at something, you find it all the more enjoyable. And the more enjoyable you find something, the less it feels like hard work. So I think that's it, really.

[00:04:46]

I think that before then, I mean, I didn't dislike it at all, but I wouldn't have said it was my thing. But there was really a stark change after that. Then it became my thing. And then, you know, the more and more I got into it, the more and more it became almost part of my identity. Math is such a tricky subject for students.

[00:05:06]

I mean, they seem to have this very love-hate relationship with it, with most people hating it. What are some of the things that schools could do to promote better engagement with students over maths?

[00:05:16]

So it's a tough thing, because, I mean, on the one hand, if you're ever going to be able to reach the most beautiful elements of the subject, if you're ever really going to be able to properly put it to use, you can't have your working memory swamped by remembering all of these rules, all of these really fundamental basics of the subject. So it's slightly unfortunate that that inevitably means that when you're starting out, when you're in the early stages, it has to be dominated by essentially learning the basics of the subject, and that's difficult.

[00:05:52]

It's not particularly inspiring, or at least if it's taught in a very dry fashion, it's not particularly inspiring. So in terms of what schools can do, I mean, I think for me, I've really seen a difference when teachers really put in the effort to demonstrate just how useful this stuff is. I think one of the big complaints that you get from school kids is like, well, I'm never going to use this stuff. What's the point of it?

[00:06:17]

It doesn't apply anywhere. And I think really showing just how dramatically important maths is to virtually every aspect of our modern world, I think that's something that can really make the subject come alive.

[00:06:32]

Do we see that sort of manifesting itself now in kids' attitudes, because they're surrounded by algorithms and machines? Does that change how they perceive maths?

[00:06:41]

Well, yeah, but I think that, unfortunately, the maths is invisible, right? Because, I mean, for this stuff to work, for a mobile phone to work, I mean, the amount of maths involved in getting your mobile phone working, you know, me speaking to you now from many thousands of miles apart, the amount of maths involved is phenomenal. I mean, it's easily PhD-level stuff. But for this to work effectively, it has to be invisible.

[00:07:04]

It has to be hidden completely behind the scenes. You as the user can't really be aware that any of it is there. So even though, as you say, algorithms are dominating more and more of the way that we're communicating with each other, how we're accessing information, you know, what we're watching, who we're dating, everything. Even so, I think the maths is so behind the scenes that I don't think it's necessarily clear that it's driving so much of the change.

[00:07:31]

As you were saying that, I was sort of thinking of a Formula One car. The driver gets all the attention, but there's this big, huge team of engineers behind them, and we don't know their names. We don't know who they are or what they do.

[00:07:42]

That's a perfect analogy. It's a perfect analogy. I always think so. I'm a big fan of Formula One, and the reason why I like it mostly is because I think of it as a giant maths competition, just with a bit of glamour on top.

[00:07:57]

I have this idea where they should do a driverless version of the cars, too, because you have this closed track, right? It would be super easy to do an autonomous version, and then the engineers are actually the ones competing. There's no human element. And then you could celebrate the engineers. And I think by celebrating the engineering and the people behind the scenes, you get kids more interested in that work.

[00:08:19]

Oh, see, I don't know if I agree with you there, actually. Sorry for pushing back so early on. So, OK, so partly there are examples of that already. There's, I think it's called Roborace, which is the fastest autonomous vehicles in the world. There's different teams, because it's like Robot Wars right on the track, and it's all very fun, it's all very interesting. But for me, I think part of the problem with why

[00:08:48]

education is difficult is this: really, we care a lot about stories, and we care a lot about stories of people. And I think that in many ways, the thing that makes Formula One racing so fascinating to watch is that you have a person sitting in that gigantic engineered machine, with so much science and technology going into it, a person who cares so much about what happens in that race. You know, you live the whole emotional rollercoaster with them as the series progresses.

[00:09:21]

And I think if you take that out of the situation, then actually I think it dehumanizes it and makes it less interesting. That's really interesting.

[00:09:30]

So how do we make a better story around maths, then?

[00:09:33]

So I think, for me, it's humanizing. And that really is it for me. There's this massive book, certainly here and I think in the States, too, called Fermat's Last Theorem. Massive in terms of its sales, rather than its size. It was written by Simon Singh, and I read it when I was maybe 16 years old. It was one of the things that really, I guess, solidified the idea that I wanted to be a mathematician.

[00:10:02]

And in it, it's just a long story of hardcore maths throughout the centuries. But what he did was he anchored all of the stories to the people that were involved. And it is exactly like a race car driver: you care so much about the characters who are involved in this history of maths. Galois is a great example of a character that Singh tells the story of in the book. So he was French.

[00:10:29]

He was about 19 years old, I think. Someone, I'm sure, will know the facts better than me and will contact me and correct me, but he was about 19 or 20, and he'd been having an affair with a very important person in French society, a woman who was older than he was.

[00:10:47]

And her husband had found out about this affair and had challenged him to a duel. Now, of course, in France, and I guess in plenty of countries in the eighteen hundreds, at that time, if someone challenges you to a duel, you do not turn it down. You go to the duel. Except, unfortunately, Galois had been working on this incredibly important theory of mathematics, now known as Galois theory, and hadn't quite finished the maths. And so he knew that at sunset he had to go off and fight this duel and probably be killed.

[00:11:18]

And he was desperate, all the way into the night, drinking and poring over this, you know, with his quill and his paper, desperately trying to write down as much as he could. And the papers that he left there, left on his desk as he went off to the duel, are just incredible. Like, you can see photos of them, see images of them. They still exist. And it's lines and lines of equations, loads of scribbling. And then every now and then he's like, oh, my goodness, what's happening?

[00:11:48]

This lady, why did I do this? And asking that, after his death, someone finish it up for him. And I think for me, that's what makes the maths come alive. Because when you realise how important this stuff is to people, that they know that they're going to their death and still the only thing they want to do is finish the maths. So I think that's the stuff that makes it come alive. That's a great story.

[00:12:09]

I hadn't heard that before. Yeah, it just sort of pulls you in. What does it mean to you to be human in an age of algorithms and machines?

[00:12:23]

Wow. Goodness. I mean, I could write an entire book on the subject. Exactly.

[00:12:31]

So I think that actually that whole idea of humanizing maths, I think it sort of works both ways, actually. I think that you need to humanize maths to make people want to find out more about it. But I also think that the maths itself needs to be humanized if it's to properly fit in with our society, because I think this is something that's happened a lot actually in the last decade. So I think that people have got very, very excited about data and about what data can tell us about ourselves.

[00:13:04]

And I think that people have sort of rushed ahead and maybe not always thought very carefully about what happens when you build an algorithm, when you build something based on data, and just expect humans to fit in around it. And I think that has had quite catastrophic consequences. The most famous example of this is Cathy O'Neil's book, Weapons of Math Destruction, which I think homed in on one aspect of this really brilliantly, which is the bias that comes out when you don't think very carefully about taking this algorithm, planting it in the middle of society and expecting everyone to just fit in around it. You know, the sort of gender bias.

[00:13:48]

We see the racial bias, all of that stuff. I think that's very well documented and quite well known and understood. But I think there are slightly more subtle things as well. So the example that makes this a really personal story for me, and the reason, I guess, why I started thinking about this very clearly and very seriously, and the reason why I wrote a book about it, is something that happened to me, where I think I made the same mistake, where I got so tunnel-visioned about the maths that I didn't think about what it meant when you put it in the human world.

[00:14:21]

So this is back when I'd just finished my PhD, back in 2011. The first project really that I did was a collaboration with the Metropolitan Police in London. In 2011 we'd just had these terrible riots across the country that started off as protests against police brutality. But they evolved into something else, and a lot of looting. There was a lot of social unrest, really. And the police had been, I think, slightly stunned by how quickly this had taken hold.

[00:14:52]

I mean, it was four days, really, that the city was on lockdown. London was on lockdown. So we worked in collaboration with the police just to see if there had been anything they could have done earlier, just to calm things down, I guess, just to see if there were signatures or patterns in the data that would have given them a better grasp on how things were about to spread. So we wrote up this paper, and, you know, the academic community were really happy with it, whatever.

[00:15:24]

And a couple of years later, I went off to this big conference in Berlin and gave a talk. There were like fifteen hundred people there at this talk. And I'm standing on stage giving a talk about this paper. And I think I was being naive, really foolish, at the time, because when you're a mathematician, there's no Hippocratic oath for mathematicians, right? You don't have to worry about the ethics of fluid particles when you're running equations on them.

[00:15:55]

So I'm standing on stage presenting this paper, and I was giving this very enthusiastic presentation, essentially saying how great it was that now, with data and algorithms, we were in a world where we could help the police to control an entire city's worth of people. That's essentially what I was saying. And it just hadn't occurred to me that if there is one city in the entire world where people are probably not going to be that keen on that idea, it's going to be Berlin. I just didn't think it through.

[00:16:29]

So as a result, in the Q&A session, I mean, they destroyed me. And quite rightly so, they destroyed me.

[00:16:38]

They just destroyed me, and they deserved to. Yeah, there was like heckling and everything. It was amazing. It was amazing. I think for me that was really an important moment, because it just hadn't quite twigged with me. I know that makes me sound really naive, but it hadn't quite twigged in my mind that you can't just build an algorithm, put it on a shelf and decide whether you think it's good or bad in complete isolation.

[00:17:01]

You have to think about how the algorithm actually integrates with the world that you're embedding it in. And I think that's a mistake that sounds like it's really obvious, but I've seen lots and lots of people make it repeatedly over the last few years, and they continue to make it.

[00:17:16]

Can you give me examples of what comes to mind when you say that? Just as a silly example, a more trivial example, think of the way that sat navs used to be designed. This is less true now. But certainly the way these things used to be designed was that you would just type in your destination and it would tell you where to go. It wouldn't show you where you were going. Well, you could, if you wanted to, go in and interrogate the interface and find out exactly where the thing was sending you. But mostly you'd put in the address and it would just tell you where to go.

[00:17:50]

And that is an example, I think, of not thinking clearly about the interface between the human and the machine, because there are all sorts of stories of people just blindly following it.

[00:18:04]

My favorite example is a group of Japanese tourists in Brisbane, this is a few years ago, who wanted to go and visit a very popular tourist destination on an island off the coast of Brisbane. And they put it in, didn't look at the map, and off they went. They didn't realise the sat nav was essentially telling them to drive across the ocean. It's an amazing story. You think, OK, fine, right?

[00:18:32]

Like, you get to the edge of the ocean and you're like, well, no, it's obviously asking me to drive into the ocean, I'm not going to. They didn't have that moment. They carried on driving. They really trusted the machine, thought it would bring them to a path eventually, and in the end they had to abandon their vehicle, I think, like three hundred meters out into the ocean. This is amazing. Like half an hour later, the tide came in. It was crazy.

[00:18:56]

It sort of calls to mind, though: what role do algorithms play, then, in abdicating thinking and authority?

[00:19:04]

Well, that's it. That's it. So I think the shift in design that we've seen recently, and this is only very recently, is where you type in the address now, and I'm thinking in terms of Google Maps and Waze certainly, and perhaps others, and up pops a map which gives you three options. Right? So it's not saying, I've made the decision for you, off you go. It's saying, here are the calculations I've made.

[00:19:30]

Now it's down to you. It's giving you, I guess, just that last step where you can overrule it, where you can kind of sanity-check it if you like. And I mean, maybe I'm giving them a bit too much credit. They did drive out into the ocean. But I sort of think that if those tourists had been shown a map, maybe they wouldn't have done it.
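The sat nav redesign described here, where the system surfaces a few ranked options and leaves the final call to a person, can be sketched in a few lines. Everything below (route names, travel times, the ranking rule) is invented purely for illustration; it is not how any real navigation product is implemented.

```python
# Minimal sketch of the "algorithm proposes, human decides" pattern:
# instead of committing to one route, surface the top few candidates
# and require an explicit human choice. All data here is made up.

def propose_routes(candidates, k=3):
    """Return the k best candidate routes, ranked by estimated minutes."""
    return sorted(candidates, key=lambda r: r["minutes"])[:k]

def choose_route(proposals, pick):
    """The human makes the final call, including rejecting every proposal."""
    if pick is None:
        return None  # the override: no route is accepted
    return proposals[pick]

candidates = [
    {"name": "motorway",   "minutes": 42},
    {"name": "coast road", "minutes": 55},  # scenic, but is it dry land?
    {"name": "ferry",      "minutes": 48},
    {"name": "back roads", "minutes": 61},
]

proposals = propose_routes(candidates)
print([r["name"] for r in proposals])           # three options, not one command
print(choose_route(proposals, pick=1)["name"])  # the human picks
```

The design point is that the sanity check lives outside the algorithm: `choose_route` accepting `None` is the "no, I'm not driving into the ocean" moment.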

[00:19:53]

How does that work as algorithms become more and more prevalent? Is that the goal, then? I'm thinking about the integration between algorithms and medicine, where you're

[00:20:04]

scanning images, yeah. Is it always a human overruling? Are there cases where it isn't? Like, how do you think about that?

[00:20:13]

Yeah. So that, I think, is an incredibly, incredibly tough example. So, OK, the first algorithms that came through were machine learning algorithms designed to just tell you whether there were cancer cells within an image or not. Yes or no. And that's all very well. That's good. And they proved that they could perform well at that. But they were problematic. There were examples where, you know, they'd go into a hospital having performed incredibly well on a certain set of images, and then suddenly they'd perform incredibly badly.

[00:20:48]

These algorithms are so sensitive that they were picking up on things like the type of scanner that was used, which was making a difference to the decision process of the algorithm. Actually, the best example of that is a skin cancer diagnosis algorithm that was picking up on lesions on people's skin, from photographs taken by dermatologists in the training set. And it turned out that the algorithm wasn't really looking at the lesion itself at all. It was deciding whether or not it was cancerous based on whether there was a ruler in the photograph next to it or not. So this stuff can make some stupid mistakes.

[00:21:23]

So I think that was sort of phase one of these algorithms in medicine. I think phase two is about making them much more able to be interrogated. So, for instance, DeepMind, who I spent a long time working with on public outreach projects, one of their big systems, rather than just having an algorithm that tells you what the answer is, has two separate AIs, right, two separate agents: one of them highlights areas of interest within the image itself, and then a second algorithm goes in and labels them.

[00:21:58]

But it's just about opening up the box a little bit more, so that it's possible for pathologists or radiologists to interrogate that image. So that's stage two, right? And that's like the difference between the old and new types of sat navs. But I think there's a stage three in medicine that we're only just beginning to get into, which is, I think, even harder, which is that most cancer cells in people's bodies are actually nothing to worry about, which sounds like a mad thing to say.
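The two-stage design described above, one component proposing regions of interest and a second labelling them so a clinician can inspect the intermediate step, can be sketched with stand-in heuristics. Nothing below is DeepMind's actual system: the "models" are toy threshold rules on a made-up 2D image, kept only to show why exposing the intermediate regions makes the pipeline interrogable.

```python
# Sketch of a two-stage, interrogable pipeline: stage 1 proposes regions,
# stage 2 labels them, and both outputs are visible to the human expert.
# The image is a toy grid of pixel intensities; thresholds are invented.

def find_regions_of_interest(image, threshold=0.8):
    """Stage 1: flag (row, col) coordinates that look unusual."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value > threshold]

def label_region(image, region):
    """Stage 2: classify a flagged region (toy rule in place of a model)."""
    r, c = region
    return "suspicious" if image[r][c] > 0.9 else "benign"

image = [
    [0.1, 0.2,  0.1],
    [0.3, 0.95, 0.85],
    [0.1, 0.2,  0.1],
]

regions = find_regions_of_interest(image)
report = {region: label_region(image, region) for region in regions}
print(report)  # the pathologist can review both the regions and the labels
```

The contrast with the "phase one" yes/no classifier is that a wrong answer here points at the evidence: if the flagged region turns out to be a ruler rather than a lesion, a human can see that.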

[00:22:28]

But there was a study a few years ago, and you have to forgive me because I don't have all the numbers on the tip of my tongue, but there was a study a few years ago where a group of scientists performed autopsies on people who had died from a whole host of different causes, everything from heart attacks to car crashes, all these different kinds of things. And they looked deliberately to see whether they had cancerous cells in the body. And even though none of these patients had died from cancer, a huge percentage of them had cancer cells within their bodies.

[00:23:01]

And the reason for this is not that they all had really serious cancer that needed to be detected and treated. It's that actually this happens a lot, right? With breast cancer, for example, it's not a case of either you don't have cancer or you do have cancer. There's a whole spectrum in between totally fine and really, really nasty cancer cells. There are tumors that may turn out to be something bad, or the body may just deal with them, or they may just stay there untouched for essentially all of your life and be nothing to worry about.

[00:23:41]

And the real danger of relying too much on algorithms to detect those cancer cells is that if you are too good at detecting them, you're not just good at detecting the ones that then go on to be a problem. You're also going to be good at detecting the ones that are nothing to worry about, and hence potentially causing huge numbers of people to have very serious and very invasive treatments, like double mastectomies, for instance, life-changing treatments, right, that actually they never needed to have.
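The overdiagnosis point can be made concrete with invented numbers: if harmless tumours greatly outnumber dangerous ones in a screened population, then raising a detector's sensitivity mostly adds harmless detections. The counts and sensitivities below are entirely made up for illustration, not taken from the study Hannah mentions.

```python
# Toy illustration of the overdiagnosis trade-off. Suppose a screened
# population contains a few dangerous tumours and many more harmless
# ones (all numbers invented). A more sensitive detector catches more
# of both, so the extra flags are increasingly harmless cases that may
# still lead to invasive treatment.

def detections(sensitivity, dangerous=10, harmless=200):
    """How many of each kind of tumour a detector at this sensitivity finds."""
    return round(sensitivity * dangerous), round(sensitivity * harmless)

for sensitivity in (0.5, 0.9, 0.99):
    bad, harmless_found = detections(sensitivity)
    print(f"sensitivity {sensitivity:0.2f}: "
          f"{bad} dangerous found, {harmless_found} harmless flagged")
```

Going from 90% to 99% sensitivity gains one extra dangerous case here but flags eighteen more harmless ones, which is the "too good at detecting" problem in miniature.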

[00:24:16]

And that, I think, is another thing about that boundary of how much we trust our machines that is going to be a tricky one to resolve in the next few years. I think that's fascinating.

[00:24:27]

I hadn't really thought of it in that way before, but I like the way you put it. I think one of the interesting things going into the future is also going to be, if algorithms are involved in a decision, is there an obligation to make them open source? And that would be sort of like stage one, where, you know, you can critique and see the actual algorithm working. But stage two would be, maybe it's a machine learning algorithm.

[00:24:52]

And then each iteration that it runs is actually slightly different. Like, do we have to keep a copy of each algorithm, and would we be able to detect how it actually worked? I know.

[00:25:02]

I know. It's so hard. It's so hard, because I think it's very easy, you know, it's very easy to say there are definitely problems with algorithms that aren't open source. It's very easy to say there are huge problems with transparency. But finding the way around it, finding the solutions, is a lot

[00:25:24]

It's a lot harder. I mean, I'm actually sort of of the opinion that open-sourcing algorithms, at least the ones that are proprietary, at least the ones that have some sort of intellectual property tied to them, I think that is both too much and too little. So what I mean by that is: I think it's too little because, if you publish the code, if you publish the source code of something, the level of technical knowledge and time that it would take to interrogate it as an outsider, enough that you have a really good understanding of how it works, enough to be able to sanity-check it if you like,

[00:26:03]

it's just vast. And I just don't think it's realistic that you can ask the community at large to take on that load. But then, simultaneously, I think it's too much because, by releasing and making everything open source, I think you are going to stifle innovation. Because part of the reason why we've seen such acceleration of these ideas is that it's possible to make them commercially viable.

[00:26:34]

And I think that if you publish things as open source, then there's a problem that you risk slowing down innovation, which I don't think you'd want to do either. But the workaround, you know, OK, so what do you do instead? Because I think everybody sort of agrees that transparency is really important here, particularly when it comes to the more scientific end of algorithms. I mean, to be totally blunt, I think that unless you're doing science in the open, then you're not doing science.

[00:27:01]

But, yeah, I mean, some of the suggestions have been, and I think this is one that I broadly support, to copy the pharmaceutical industry's model, where you have a separate board, like the FDA, who have the ability to really interrogate these algorithms properly and can give a sort of rubber stamp of approval as to whether they are appropriate to use or not. And that's different from just open source, because an FDA-style body would be able to go in and stress-test them, test them for robustness, check them for bias, all of those types of things.

[00:27:35]

But I mean, there's no easy answer. There's no silver bullet for addressing some of the many problems that algorithms raise.

[00:27:42]

In general, then, when do we want algorithms making decisions and when do we want humans making those decisions?

[00:27:52]

Well, there are certainly some occasions where actually the further away humans are from it, the better. Humans are not very good at making decisions at all. We're not very good at being consistent. We're not very good at being clear. You know, with nuclear power stations, for instance, as much as possible you want to leave that to the algorithms, you want to leave that to the machines. Likewise in flying airplanes. I think you want to leave that to autopilot as much as you possibly can.

[00:28:19]

In fact, there's that really nice joke: to fly a plane, you need three things. A computer, a pilot and a dog. The computer is there to fly the plane. The human is there to feed the dog. And the dog is there to bite the human if it ever touches the computer. Right? So there's definitely some situations where you want the humans as far away as possible. But I also think that these machines,

[00:28:47]

especially the ones that are getting much more involved in social decisions, really are capable of making quite catastrophic mistakes. And I think that if you take the human out of the decision, even if on average you might have a slightly better, more consistent framework, if you take the human out of that decision process altogether, then I think you risk real disasters. We've certainly seen plenty of those in the judicial system, you know, where algorithms have made decisions,

[00:29:22]

judges have followed them blindly, and it's been really the wrong thing. Just to give you an example, there was a young man called Christopher Drew Brooks. This is actually a few years ago, but he was 19 years old, from Virginia, and he was arrested for the statutory rape of a 14-year-old girl.

[00:29:41]

So they had been having a consensual relationship, but she was underage, which is illegal, and he was convicted. But during his trial, an algorithm assessed his chance of going on to commit another crime in future. These are the sort of very controversial algorithms, exactly, but they've actually been around for quite a long time. And this algorithm went through all of his data and it determined that because he was a very young man, only 19 years old, and already committing sexual offences, he had a long life ahead of him, and the chances of him committing another one in that long life were high.

[00:30:18]

So it said that he was high risk, and it recommended that he be given 18 months of jail time, which, I mean, I think you can argue one way or the other, depending on your view. But I think what this case really does is highlight just how illogical these algorithms can sometimes be. Because in that particular case, if instead the young man had been, I think, thirty-six years old, this algorithm put so much weight on his age that it would have been enough to tip the balance, even though that would have put him at twenty-two years older than the girl, which I think surely by any possible metric makes the crime much worse.

[00:30:57]

But that would have been enough to tip the balance for the algorithm to believe that he was low risk, and to recommend that he escape jail entirely — which I think is just an extraordinary example of how wrong these decisions can go if you hand them over to the algorithm. But for me, the scary thing about that story is that the judge was still in the loop with that decision-making process. And you would hope in that kind of situation that they would notice that the algorithm had made this terrible mistake and step in and overrule it.

[00:31:28]

Well, it turns out that those Japanese tourists we were talking about earlier — I think the judges are a lot more like them than we might want them to be. Because in that case, and lots of other cases like it, the judge just sort of blindly followed what the algorithm had to say and increased the sentence of this individual. So, I mean, you've got to be really careful, right? You've got to be careful about putting too much faith in the algorithm.

[00:31:51]

But just on the flip side of that judge's example, I also don't agree with the people who say, well, let's get rid of these things altogether in the judicial system, because I think there is a reason for them being there, which is that humans are terrible decision makers. Right? Like, there's so much luck involved in the judicial system. There are studies that show that if you take the same case to different judges, you get a different response.

[00:32:15]

But even if you take the same case to the same judge, just on a different day, you get different responses. Or judges who have daughters tend to be much stricter in cases that involve violence against women. Or my favourite one, actually, is that judges tend to be a lot stricter in towns where the local sports team has lost recently — which has nothing to do with the case you're dealing with, right? There's just so much inconsistency and luck involved in the judicial system.

[00:32:41]

And I think if you do it right and carefully, there is a place for algorithms to support those decisions being made.

[00:32:50]

Do you think, in a way, we get to abdicate our responsibility if we defer to an algorithm? So if you're a judge and you defer to an algorithm, it's not like you're going to be fired for deferring to the algorithm that everybody agreed was supposed to inform or make the decision. Especially if you're, you know — especially if people vote you in. It's a way that you can absolve yourself of responsibility.

[00:33:15]

I completely agree. I think all of us do it, and that's the problem: this is a really, really easy thing to happen. It's very easy for us to just take a cognitive shortcut and do what the machine tells us to do, which is why you have to be so careful about thinking about this interface — thinking about the kind of mistakes that people are going to make and how you mitigate against them by designing stuff to try to prevent that from happening.

[00:33:44]

Can you talk to me a little bit about what we can learn about making better decisions from maths? I think the example of what's going on right now with the pandemic is a really tragic and chilling example of how important maths can be when it comes to making clear decisions. Because I think this is one situation where, in many ways, maths is really the biggest weapon that we have on our side.

[00:34:17]

We don't have pharmaceutical interventions yet. We don't have a vaccine yet. And all we have really is the data and the numbers.

[00:34:25]

This is March 18th, just for people listening today.

[00:34:30]

Yeah, exactly. So we're still at the stage where things are ramping up. I mean, who knows how bad it's going to get from here. But certainly in the last month, the epidemiologists, the mathematical modellers, are the ones who've been raising the alarm and driving the decision making, driving the strategy, driving government policies. Because at the moment, if you looked only at the numbers of where we are, I think there have been maybe one hundred and fifty deaths or so in the UK.

[00:35:00]

I haven't got the exact numbers at my fingertips, but something of that order of deaths in the UK — which, you know, every single one of those is a real tragedy, but it's not a huge, huge number. The reason why we know that we're in a bad situation, and the reason why we know we need to take these extreme measures to essentially shut down our borders, to shut down our country, is because the maths is telling us what is coming next.

[00:35:29]

We don't have a crystal ball to look into the future, but really, this is the only thing that's there guiding us.

[00:35:35]

It's really fascinating to me. Can you talk to me a little bit more about the pandemic and sort of like how you think about it through the lens of math?

[00:35:44]

Yeah. So actually in twenty eighteen I did a big project with the BBC, because we knew that a pandemic was coming. So we teamed up with some epidemiologists from the London School of Hygiene and Tropical Medicine and the University of Cambridge to collect the best possible data, so that we could be prepared for when something like this did happen. The big problem at that point — this is a couple of years ago — was that if you want to know how an epidemic of a flu-like virus will spread through a population, then you need really good data on how far people travel, how often people come into contact with one another and, crucially, who they come into contact with: the different age groups, the settings in which they come into contact with other people, and so on.

[00:36:40]

It's embarrassing to say, given that everyone's carrying mobile phones, but up until a couple of years ago the best possible data that we had, within the UK at least, for how people moved and how people mixed with one another, was a paper survey from two thousand and six, where a thousand people said: oh yeah, I reckon I travelled about this far, I reckon I came into contact with these people.

[00:37:06]

So what we did, with the help of the BBC, because they have such amazing reach, is we created this mobile app that would essentially track people. People would volunteer and sign up by watching the programme and so on, and let us track them around for twenty-four hours, record who they came into contact with, and tell us loads of things about their demographics and their age and so on. So now, less than two years later, we have this incredibly detailed data set that's feeding right into the models that are currently being used, making an enormous difference in terms of the accuracy of how well we can predict things.

[00:37:45]

And I just think it's the most pertinent and chilling example I've ever been part of — it just demonstrates how important the maths is if you're going to try and win a war with nature.

[00:38:00]

Essentially, it seemed to me — just to broadly generalize — there were two different types of people going into this pandemic: people who understood non-linear and exponential functions, and people who maybe had a harder time with that. And the people who did seem to grasp those concepts better seemed to take it a lot more seriously than the people who didn't. I would love to find a way to help people think better in exponential terms.

[00:38:28]

Yeah, of course. Part of the problem is that the word just gets thrown around. You know, people say, oh, this project is exponentially more difficult or exponentially more dangerous — and that's not what the word means. And it is really counterintuitive, because exponential growth doesn't just mean big, it doesn't just mean lots. It means something very specific: it means that something is changing by a fixed fraction in a fixed period.
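"A fixed fraction in a fixed period" is easy to make concrete in a few lines of code. A minimal sketch — the starting count here is an arbitrary illustrative number, and the five-day doubling time is just the figure quoted in this conversation, not a fitted epidemiological model:

```python
# Exponential growth: multiply by a fixed factor every fixed period.
def cases_after(days, start=100, factor=2.0, period=5.0):
    """Cases after `days`, growing by `factor` every `period` days from `start`."""
    return start * factor ** (days / period)

# Doubling every 5 days turns 100 cases into 6,400 within a month...
print(round(cases_after(30)))    # 6400
# ...and over 100 million within 100 days.
print(round(cases_after(100)))   # 104857600
```

The point of the sketch is that the growth rate compounds: each month doesn't add a fixed amount, it multiplies by the same factor again.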

[00:38:59]

So this virus, for instance, is doubling every five days. So doubling is the fixed fraction, and every five days is the fixed period. And I think it's just so counterintuitive. There's the really classic example of the rice on the chessboard. It's a classic story about an Indian king who was really impressed with the chessboard when it was shown to him.

[00:39:26]

And so he said, OK, I'll tell you what: I'll give you a grain of rice for the first square, and then we'll double the grains of rice on every subsequent square. Which sounds like, oh, that's not very much at the beginning — it's one grain and two grains and then four grains, like, OK, this is not going to cost me very much. The thing is that by the end of the chessboard, you need a lot of rice.

[00:39:51]

Essentially, you need 18 quintillion grains of rice, which — I worked this out — if you take Liverpool, the area of Liverpool, for American listeners it isn't easy to imagine, but it's essentially a whole city: it's an area that size stacked three kilometres high with rice. That's how much rice it is. So, I mean, exponential growth is just beyond imagining. It's just completely counterintuitive.
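The chessboard arithmetic checks out exactly — the total over 64 doublings is one short of 2 to the 64th, which a one-liner confirms:

```python
# One grain on the first square, doubling on each of the chessboard's 64 squares.
total_grains = sum(2 ** square for square in range(64))
print(total_grains)  # 18446744073709551615, i.e. 2**64 - 1 — about 18.4 quintillion
```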

[00:40:18]

One of the stories I loved in your book — switching gears a little here to Hello World — was the story of Kasparov playing Deep Blue. Everybody's told that story, but you had a unique angle to it that I hadn't heard anywhere else, which is that the machine was also playing mind games with Kasparov.

[00:40:35]

Yeah. So this goes exactly back to what I was saying earlier: it's not just about building a machine, it's about thinking about how that machine fits in with humans and with human weaknesses. Because the thing is, Kasparov — I mean, he's an incredible player. When I was researching the book, I spoke to lots of different chess grandmasters, and one of them described him like a tornado: when he would walk into the room, he would essentially pin people to the sides of the room.

[00:41:06]

They would kind of clear a path for him because he was just so respected. And he had this trick: if he was playing you, he would take off his watch and place it down on the table next to him, and then carry on playing. And then, when he decided that he'd had enough of toying with you, he would pick up his watch and put it back on, as if to say: that's time now.

[00:41:27]

I'm not playing anymore. And essentially, everyone in the room knew that was your cue to resign the game — which is just so intimidating, really, like, terrifying. The thing is that those tricks that Kasparov has, they're not going to work on a machine. Right? You've got the IBM guy sitting in the seat, but he's not the one making the moves.

[00:41:50]

He's not the one playing, so it's not going to affect him at all. So none of that stuff works in Kasparov's favour. And yet, the other way around, the IBM machine could still use tricks on him. There are a few reports that the IBM team deliberately coded their machine so that, in the way it worked, it would sort of search for solutions, and how long that search took determined how quickly the answer came back.

[00:42:18]

But they deliberately coded it so that sometimes, in certain positions, the machine might find the answer very quickly — but rather than just come back with the response, they added in a random amount of time where it looked like the machine was ticking over, thinking very carefully about what the move was, when in reality it was just sitting there in a sort of holding pattern. And Kasparov himself, in his latest book and in several interviews, has said that he was sitting there trying to second-guess what the machine was doing at all times.

[00:42:52]

So he was trying to work out why this machine was stuck, crunching through very difficult calculations, and essentially got psyched out by the machine. Because I think all of the chess grandmasters are pretty much uniformly in agreement that at that moment in time, when the machine beat Kasparov, Kasparov was still the better player. But it was the fact that he was a human, the fact that he had those human failings, that meant he was outsmarted by the machine.

[00:43:19]

That's such an amazing and incredible story. Thanks for sharing that. Your first book, The Mathematics of Love, explains the math underlying human relationships. How can applying maths concepts to romantic situations be helpful to people?

[00:43:35]

Well, so this was sort of a private joke that got totally out of hand, that book. You know, when I was in the dating game, or designing the table plan for my wedding, or any of those things — I just generally apply math to everything and try to calculate as much as possible. I was trying to game it as much as possible.

[00:44:03]

And so in the end, I, like, wrote these off into a book and it's all very tongue in cheek. But the thing is, is that while I totally believe that you cannot write down an equation for for real romance, you can't write down an equation for that sort of that spark delight that you get when you meet someone and you know, you really like them. It's kind of there's no real math in that. But there's still loads of math in lots of aspects of your love life.

[00:44:25]

So there's math in how many people you date before you decide to settle down. There's math in the data of what photographs work well on online dating apps or websites. There's loads of math in designing the table plan for your wedding, to make sure that people who don't like each other don't have to sit together. And my favourite one, actually: there's even maths in the dynamics of arguments between couples in long-term relationships. There are lots of little places where you can find a way to latch on and use the maths. — So how many people should we date before we settle down?

[00:45:08]

This is the one that got me in the most trouble. So, OK, here's the problem, right?

[00:45:13]

What you don't want to do, I guess, in an ideal world, is just latch onto and settle down with the very first person who shows you any interest at all, because actually they might not be that well suited to you, and if you hold out a little bit longer, maybe you'll find someone who's better suited. But equally, you don't want to wait forever and ever, because you may end up missing the person who was right for you — turning them down because you think someone better is around the corner, and then finding out that actually they were the right person all along.

[00:45:46]

So what you can do is set this up as a mathematical problem. You've got a number of opportunities lined up in a row, chronologically, and your task is to stop at the perfect time — you want to stop at the moment with your perfect partner. So it's essentially a problem in what's called optimal stopping theory. The rules are that once you reject someone, you can't go back and say, actually, I want you after all, because people don't tend to like that.

[00:46:15]

And the other rule is that once you decide you've settled down, you can't look ahead to see who you could have had later in life. So if you frame it like that, with those assumptions, then it turns out that the mathematically best strategy is to spend the first thirty-seven percent of your dating life just having a nice time and playing the field. — Thirty-seven percent? — Yeah, spend the first thirty-seven percent of your dating life just playing the field, having a nice time getting to know people, but not taking anything too seriously.

[00:46:48]

And then, after that period has passed, you settle down with the next person who comes along who is better than everyone you've seen before. So, yeah, that's what the math says — but I should tell you, there's quite a lot of risk involved in this.
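This 37% rule is the classic "secretary problem" result from optimal stopping theory, and it's easy to check by simulation. A sketch — the candidate count and trial number here are arbitrary choices for illustration, not figures from the conversation:

```python
import random

def look_then_leap(n=20, cutoff_frac=0.37, trials=100_000):
    """Estimate how often rejecting the first ~37% of candidates, then
    taking the first later candidate better than everyone seen so far,
    lands on the single best candidate (rank 0)."""
    cutoff = int(n * cutoff_frac)
    wins = 0
    for _ in range(trials):
        ranks = random.sample(range(n), n)   # random arrival order; 0 = best
        best_seen = min(ranks[:cutoff], default=n)
        # settle for the first candidate who beats everyone seen so far,
        # or be stuck with the last candidate if no one ever does
        chosen = next((r for r in ranks[cutoff:] if r < best_seen), ranks[-1])
        wins += (chosen == 0)
    return wins / trials

print(look_then_leap())   # ≈ 0.38, close to the theoretical 1/e ≈ 0.368
```

The "risk" she mentions shows up directly in the simulation: the strategy still misses the best candidate well over half the time — it's only optimal relative to the alternatives, not a guarantee.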

[00:47:09]

Is that what you tell your husband? You're the best after the thirty-seven percent.

[00:47:15]

Yeah, yeah. Yeah. Marginally better. Yeah.

[00:47:19]

So how can we use — I don't want to say "argue better", but I'll use your language — how can we use math to argue better in our relationships?

[00:47:27]

This is my favorite, favorite one. So this is some work that was done by the psychologist John Gottman. He's done some amazing work with couples in long-term relationships, and the way he does it is he gets couples in a room together, videotapes them, and gets them to basically have an argument with one another. Officially, they say the couples are asked to have a conversation about the most contentious issue in their relationship.

[00:47:54]

But basically they lock a couple in an argument. And what they've done is work out a way to score everything that happens during that conversation. So every time someone's positive, they get a positive score. Every time someone laughs, or gives way to their partner, a positive score. But even gestures count, right? So if you roll your eyes, you get a negative score; if you stonewall your partner, a negative score; that kind of thing. Anyway, the thing that's kind of neat about this is that it then means you can look at a graph of how an argument evolves over time.

[00:48:23]

The really nice thing about this is that John Gottman then teamed up with a mathematician called James Murray, who came up with a set of equations for how these arguments ebb and flow — the dynamics of those arguments, essentially. And hidden inside those equations there's something called the negativity threshold. Essentially, this is how annoying someone has to be before they provoke an extreme response in their partner. And I mean, they've got the data on hundreds, if not thousands, of couples here.
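The setup Fry describes is, roughly, a pair of coupled difference equations: each partner's next score depends on their own baseline mood and inertia, plus an "influence" from the other person, and that influence function changes character below a negativity threshold. This is only an illustrative sketch of that structure — the function shape and every number here are made up for the example, not Murray's fitted parameters:

```python
def influence(partner_score, threshold=-2.0, warmth=0.3, blowup=-3.0):
    """Toy influence function: a partner's positivity nudges you up a bit,
    but anything below the negativity threshold triggers an outsized
    negative reaction. (Illustrative shape and numbers only.)"""
    if partner_score < threshold:
        return blowup
    return warmth * partner_score

def argument(w, h, steps=20, mood=0.2, inertia=0.5):
    """Iterate the coupled scores for two partners, w and h."""
    for _ in range(steps):
        w, h = (mood + inertia * w + influence(h),
                mood + inertia * h + influence(w))
    return w, h

# A mildly positive start settles at a positive fixed point...
print(argument(1.0, 1.0))    # ≈ (1.0, 1.0)
# ...while a start below the threshold spirals down to a negative one.
print(argument(-5.0, -5.0))  # ≈ (-5.6, -5.6)
```

Even this toy version shows the qualitative behaviour she goes on to describe: where the threshold sits determines whether small annoyances get absorbed or tip the whole exchange into a negative spiral.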

[00:48:55]

My guess would always have been: all right, negativity threshold — surely the people who've got the best chance at long-term success, the people who end up staying together, are going to be the ones with a really high negativity threshold. That would have always been my guess. You know, the couples where you're leaving room for the other person to be themselves, you're not picking up on every single little thing.

[00:49:18]

And you're compromising, right? That would have been my guess. It turns out, though, when you actually look at the data, the exact opposite is true. The people who have the best chance at long-term success are actually the people with really low negativity thresholds. So instead, it's the people where, if something annoys them, they speak up about it really quickly — immediately, essentially — and address the situation right there and then.

[00:49:49]

But they do it in a way where the problem is dealt with, and then they go back to normality. So this is couples where you're continually repairing and resolving very, very tiny issues in your relationship. Because otherwise you risk bottling things up, not saying anything, and then one day coming home and being totally angry about something left on the floor — a reaction totally at odds with the incident itself, because, you know, you've been bottling things up.

[00:50:18]

And then it all comes spilling out.

[00:50:19]

Yeah, I think that's really fascinating, right? Because if you look at what it takes to bring things up in a relationship when they happen, or pretty close to when they happen, it means you have a lot of security and comfort. You know that bringing this hard thing up might make somebody angry or hurt them, but it's not going to be the end of the relationship. And not letting it fester actually makes the relationship stronger long term.

[00:50:44]

Exactly. Exactly. Now, of course, the language that you use is really important as well. Right?

[00:50:48]

So you can't just say it any way you like. But I really love those stories — stories where there's something about humans that is written completely in the numbers. I think that's really wonderful. — This has been an amazing conversation. I want to thank you for your time. Thank you.

[00:51:09]

Thank you very much. Hey, one more thing before we say goodbye: The Knowledge Project is produced by the team at Farnam Street. I want to make this the best podcast you listen to, and I'd love to get your feedback.

[00:51:27]

If you have comments, ideas for future shows or topics, or just feedback in general, you can email me at shane@fs.blog or follow me on Twitter at @ShaneAParrish.

[00:51:38]

You can learn more about the show and find past episodes at fs.blog/podcast.

[00:51:43]

If you want a transcript of this episode, go to fs.blog/tribe and join our learning community.

[00:51:49]

If you found this episode valuable, share it online with the hashtag #TheKnowledgeProject, or leave a review. Until the next episode.