[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. GiveWell takes a data-driven approach to identifying charities where your donation can make a big impact. GiveWell spends thousands of hours every year vetting and analyzing nonprofits so that it can produce a list of charity recommendations that are backed by rigorous evidence. The list is free and available to everyone online. The New York Times has referred to GiveWell as, quote, "the spreadsheet method of giving." GiveWell's recommendations are for donors who are interested in having a high altruistic return on investment in their giving.

[00:00:30]

Its current recommended charities fight malaria, treat intestinal parasites, provide vitamin supplements, and give cash to very poor people. Check them out at GiveWell.org.

[00:00:52]

Welcome to Rationally Speaking, the podcast where we explore the borderland between reason and nonsense. I'm your host, Julia Galef, and my guest today is Clive Thompson. Clive is a journalist covering technology and culture. You may have seen his writing in Wired, among other places. And he's the author of two books, including, most recently, Coders: The Making of a New Tribe and the Remaking of the World, which is a fun and interesting intro to the psychology of coding and the people who love it, and how that psychology, that culture, ends up influencing what they make and their impact on society.

[00:01:29]

Clive, welcome to Rationally Speaking.

[00:01:32]

It's good to be here.

[00:01:33]

So I don't know if you remember this, but I have to tell the story of how I ended up having this podcast with you. A few months ago, you wrote an article for, I think it was The New York Times... the New York Times Magazine. That's right. Yeah, the New York Times Magazine. And in it you mentioned some study, and your article got shared widely on Twitter. And then a friend of mine, Kelsey Piper, who is also a recent guest on the podcast, a journalist...

[00:02:03]

A journalist. Perfect. Yeah. She responded to that thread saying, actually, the study in this article is terrible and shouldn't be cited. And her tweet criticizing the study in your article went viral. And you replied very graciously and said, this is a good critique of the study, I'm going to issue a, you know, retraction or revision. And I'm also going to revise the part of my book that talks about the study.

[00:02:32]

And I was so pleased with your response that I instantly bought your book and encouraged other people to buy your book, too, for two reasons. One, because I simply have more trust in, and more interest in, the things someone has to say if I know they're the kind of person who corrects misinformation when they find it in their writing. And two, because I want to incentivize that kind of behavior, because it's good for the epistemic commons.

[00:02:58]

So thank you for that. Oh, not at all. I mean, honestly, I was very thankful to Kelsey for speaking up. Because, you know, when you're a magazine writer, there's a fact-checking process at the magazine, but everyone knows that sometimes things get through, and you're trying as hard as you can to get things right. And if you make a mistake, you really want to correct it quickly, particularly if it's for a magazine where it's going to be on the web for as long as the Internet endures, or until the sun explodes, or something like that.

[00:03:28]

So getting it right is good. And I'm very, very grateful to people who pick things up and let me know.

[00:03:34]

Yeah. Although I have to say, you know, I ordered the book, and then months went by, and I had it on my shelf and forgot that I had ordered it for this reason. I guess I thought that the publishing company had sent me a review copy or something. So when I started reading it, I thought, oh, I should interview Clive. I had no memory that this was how I ended up with your book.

[00:03:55]

So my decision to interview you was sort of independent of this... The episode came to you twice. It came to you twice. Exactly.

[00:04:04]

Two reasons. So anyway, Coders. One of the things I liked most about the book is the way you really delve into the psychology of coders and how that affects society. Probably the most emphatic generalization you were willing to make about coders was their love of optimizing and efficiency, which really resonated with me personally, actually. Can you start to explain what's behind that love?

[00:04:30]

Sure. Sure.

[00:04:32]

Well, the reason this cropped up over and over again when I talked to developers is that when you're learning to program a computer for the first time, one of the things you quickly discover is that computers are really good at the things humans are really bad at, and vice versa. Humans, you know, we're great at intuition and at synthesizing different pieces of information in a very organic way. But we are terrible at meticulous work.

[00:05:05]

You know, if you ask us to do something repetitively, over and over again, we drift. We space out, we get it wrong. And we're terrible at doing things at the precise time we're told to do them. In contrast, computers are amazing at doing things repetitively, over and over again, with great precision and precise timing. And so one of the things that dawns on you when you're learning to program is: wow, there are all these repetitive tasks I have to do in daily life that actually could be better done by a piece of software.

[00:05:38]

You know, like, you've got some job that requires you to take all these different fields from a Word document and dump them into a spreadsheet, and it normally takes you four hours to do that. And you're like, this is an insulting waste of my time. And when you learn a bit of programming, you're like, wow, I can actually write a little routine that will do that in a matter of milliseconds, and it'll do it perfectly every time for the rest of my life.
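
As an aside for readers: a minimal sketch of that kind of one-off routine, in Python, might look like the following. The file names and the "Field: value" layout of the document are hypothetical stand-ins; it assumes the python-docx library for reading the Word file and the standard csv module for writing the spreadsheet.

```python
# A minimal sketch of the kind of one-off automation described above:
# pull "Field: value" lines out of a Word document and write them to a
# spreadsheet-friendly CSV. File names and the "Field: value" layout
# are hypothetical; a real document would need its own parsing rule.
import csv
import docx  # pip install python-docx

doc = docx.Document("report.docx")

rows = []
for para in doc.paragraphs:
    text = para.text.strip()
    if ":" in text:  # treat "Name: Alice" style lines as field/value pairs
        field, _, value = text.partition(":")
        rows.append((field.strip(), value.strip()))

with open("report.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["field", "value"])
    writer.writerows(rows)

print(f"Extracted {len(rows)} fields.")
```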

[00:06:05]

And actually, I can give that to all my colleagues, and suddenly save not just myself but the entire corporation hundreds of hours a year. So you discover that, and you start automating things in your everyday life, and it becomes very addictive. There's something incredibly delightful about optimizing all these dull, boring tasks. And it starts to become almost, the way coders would describe it to me, an aesthetic: inefficiency and unoptimized, repetitive behavior sort of grosses them out, in the way that someone else would be appalled by a bad smell.

[00:06:44]

Yeah, I remember this metaphor that you mentioned in your book. It wasn't your metaphor; you were talking about how coders talk about code. And you said that when code is efficient, they talk about it as clean, beautiful, elegant, kind of like a visual arts metaphor. Yeah, a visual metaphor. And then when it's a terribly written mess, the metaphor is that it smells bad. And you had a nice analysis of what's underlying that metaphor.

[00:07:13]

Do you remember? Yeah, yeah, yeah.

[00:07:15]

They shift from the pleasures of the eye, the apprehension of gorgeous proportions in a painting, you know, wow, that code is beautiful, to, like, oh my God, that code smells. There's just something wrong with it. It's like someone put a fish in a paper bag beneath the floorboards and it's rotting.

[00:07:35]

And there are these very funny aspects to the way coders think about what they do, and to the metaphors that come up. But yeah, the efficiency thing is really interesting. I heard it, I wouldn't say universally, but very, very commonly, from a great array of coders I spoke to, from all walks of life.

[00:07:57]

So I was just thinking about that metaphor, because I love metaphors that work on multiple levels or reveal something deep about the thing they're describing. And it seems to me the reason that the bad smell, "this code smells bad," seems like such a good metaphor is that when you smell something bad, you don't know where the smell is coming from. Somewhere in this code is a problem. Right.

[00:08:25]

But also, it often indicates something growing, something out of your control, not planned. Right. You know, something rotting, or a colony of insects or bacteria or something like that. It's not ordered. And it can be contagious: if you don't take care of it, this bad code could mess other things up, or the problem could spread.

[00:08:49]

Right.

[00:08:50]

And that is exactly what happens when code is a real mess and there's something wrong with it. That's exactly why the metaphor is so powerful, I think: those are exactly the emotions the coder faces. They're looking at this and thinking, oh God, this is thousands of lines of spaghetti code and it smells terrible. Nothing is going wrong right now, but it could so easily go wrong, because I can't even understand how this thing works.

[00:09:16]

It was so badly written that no one documented any part of it. And so you're just aware that there's something terrible out there, and you could dive in and clean it up, but you might not have time to do that. You might have to tolerate this horrible smell. But it's really quite aggravating. I think that's why the smell metaphor works so well.
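
For readers who haven't seen one, here's a tiny, invented illustration of the kind of thing coders call a smell, duplicated logic with cryptic names and an unexplained magic number, next to a cleaned-up version. The function names and the tax rate are hypothetical, chosen just for the example.

```python
# Smelly: duplicated logic, cryptic names, an unexplained magic number.
def calc1(x):
    return x + x * 0.08

def calc2(y):
    return y + y * 0.08

# Cleaner: one well-named function, with the constant named and documented.
SALES_TAX_RATE = 0.08  # hypothetical flat rate, for the example only

def price_with_tax(price: float) -> float:
    """Return the price with sales tax added."""
    return price * (1 + SALES_TAX_RATE)
```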

[00:09:35]

Yeah. It's so perfect. You started learning to code in the process of writing this book, right? Yeah.

[00:09:41]

I mean, I'm of a generation where I did a little bit of coding when I was a kid in the 1980s, on the first generation of personal computers, the ones you could plug into your TV, like the Commodore 64. And, you know, honestly, I loved it a lot. I found it just incredibly joyful and fun. I made databases, and I made games, and I made chatbots. And I probably might have even become a programmer, except for my mother.

[00:10:11]

My father was a civil engineer, and he was totally into technology. But my mother was worried that I would just sit around playing games all the time and drop out of school. And so she said, no, there's no computer coming into this household. And so we never got one. And I would write code literally on pieces of paper, you know, and try to type it into a school computer when I could get access.

[00:10:33]

But, you know, if you don't really have the machine, you can't go very far. So I sort of decided to leave it behind, and I decided to become a writer instead. But I never really lost the fascination with it, which is why, the instant I started writing for a living, I immediately gravitated towards writing about technology and its effect on society. But then, about five or seven years ago, I decided I wanted to learn some of the new languages that were common and prominent, like Python and JavaScript.

[00:11:09]

And so I was poking around in them and teaching myself stuff. And that's all around the time I started thinking about writing this book. I also wanted to be able to talk more confidently with the people I was interviewing. So I wanted to do enough coding that I could really apprehend what was hard and challenging and amazing about the work they were doing. It's a useful reporting exercise.

[00:11:32]

But it turned out that, again, I discovered that I really loved it. In fact, I think I like it more than writing. I would procrastinate...

[00:11:37]

...on writing my book by coding, basically. So this is your mom kicking herself, that she pushed you out of what would have been a very lucrative career in tech and into a career in journalism.

[00:11:48]

You know, I think she notes the irony a little bit, for sure. But the truth is, I'm certainly happy being a writer. Things have worked out well for me. I think if I had hated my writing life, I would have an even more troubled relationship to coding not being my career. But as it is now, I'm kind of getting the best of both worlds, because I get to have fun doing this writing stuff, and I get to have this weird little extra power that I use in the service of my journalism.

[00:12:17]

I mean, one of the things I discovered that's kind of interesting, and that I think might be resonant for your listeners, is that there's an incredible usefulness in knowing a bit of coding when you're not a full-time programmer. When it's an adjunct power on top of something else you're doing, whether you're a nurse, or you're in marketing, or you're in journalism, because you can do these crazy little automation things.

[00:12:44]

Like, when my book came out, and you're going to find this funny: my book came out about a month ago, and what does every new author do for the first few days? We sit in front of our Amazon page, refreshing it over and over again to see whether or not the sales rank is rising at all. It's this totally neurotic does-anyone-love-me thing. And after doing this for a couple of days, I decided, OK, this is crazy.

[00:13:07]

A, it's not psychologically healthy. B, this is a repetitive computer behavior, and I can automate it. So I sat down and I wrote a little web scraper that goes to that page four times a day, retrieves exactly the information I'm looking for, formats it into a text message, and texts me. So I have a little automated bot that does that perseveration on my sales rank for me.
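
A bot along the lines Clive describes might look roughly like this sketch, assuming the requests, beautifulsoup4, and twilio packages. The book URL, the CSS selector, and the Twilio credentials and phone numbers are all placeholders, not details from the episode; Amazon's real markup changes often, and scraping it may be restricted by its terms of service.

```python
# A rough sketch of the sales-rank bot described above. Everything
# marked "placeholder" is hypothetical, not taken from the episode.
import time

import requests
from bs4 import BeautifulSoup   # pip install beautifulsoup4
from twilio.rest import Client  # pip install twilio

BOOK_URL = "https://www.amazon.com/dp/EXAMPLEASIN"  # placeholder
HEADERS = {"User-Agent": "Mozilla/5.0"}  # bare requests are often blocked

twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")  # placeholder credentials


def fetch_sales_rank() -> str:
    """Download the book page and pull out the sales-rank text."""
    page = requests.get(BOOK_URL, headers=HEADERS, timeout=30)
    page.raise_for_status()
    soup = BeautifulSoup(page.text, "html.parser")
    rank = soup.select_one("#SalesRank")  # hypothetical selector
    return rank.get_text(strip=True) if rank else "rank not found"


def text_me(body: str) -> None:
    """Text the result via Twilio (both numbers are placeholders)."""
    twilio.messages.create(body=body, from_="+15550001111", to="+15552223333")


while True:  # in practice, a cron job would be tidier than a sleep loop
    text_me(f"Amazon says: {fetch_sales_rank()}")
    time.sleep(6 * 60 * 60)  # four times a day
```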

[00:13:32]

Well, first of all, kudos on the coding in the wild. Is it better for your psychology than the clicking, or is it just that you're no longer tethered to your computer?

[00:13:44]

Yes, actually, it is. It broke my habit.

[00:13:46]

I swear to God, once I had this thing up and running, I just stopped going to the page. And so it's kind of nice, actually. I keep track of generally how it's doing, but I don't have to think about it. I think in some respects, writing the code was almost like a form of cognitive behavioral therapy, because in the process of writing it, I'm like, OK, I'm insane, I'm insane, I'm insane.

[00:14:07]

You ponder your insanity deeply as you turn it into an abstracted algorithm, and then you dispatch it and let this, you know, deathless computer execute it on a schedule.

[00:14:19]

So you talk in the book a little bit about the downsides of this love of optimizing and efficiency. But the examples that I remember, and I might be missing something or misremembering, were things that felt kind of like obvious mistakes. Like, there was the guy who was obsessed with reducing the amount of time that people spent making jokes in meetings. Right. And there was the guy who, like, invented an app that would automatically send love messages to your partner. Sure.

[00:14:49]

I'm like, those are both obviously dumb mistakes. Right? So that's true.

[00:14:53]

But the problem is there are also bigger ones. Like, think about this one: think about the like button. Right. OK, so originally the engineers and designers who made the like button conceived of it entirely as an optimization, as an efficiency ploy. They noticed that before the like button was there on Facebook, it was actually kind of inefficient and a little ponderous to indicate that you liked someone's photo.

[00:15:19]

You know, you had to write a comment. You had to go, hey, you know, great photo, Clive. And the truth is, most people are just busy and rushing, and they're not going to do that. And so the designers thought, well, we could probably unlock a lot of positive behavior if we made it one-click easy to approve a post, if we made it dramatically more efficient to indicate your approval.

[00:15:44]

And so they got together, Leah Pearlman, the designer, and Justin Rosenstein, the programmer, and they prototyped it and got it working. They originally called it the "awesome button": you were going to click on something and say that it was awesome. Anyway, they put it together and showed it to Mark Zuckerberg and the top team, and eventually, you know, they said, yes, this is a great idea.

[00:16:04]

We'll do this. They rolled it out. And the first-order effect was exactly what they expected. There was, quote unquote, as they said, an "unlocking of latent positivity." When you make something easier to do, more efficient to do, people will do more of it. But over the next year or two, they began to notice, rather uneasily, that they had also inadvertently created a bunch of kind of unsettling behaviors that they didn't particularly like and wished didn't exist.

[00:16:35]

For example, people started posting something and then sitting there, much like me with my refreshing of my ranking page on Amazon, constantly refreshing the page to see whether they were getting any likes. You know, once you quantify something, people will deform themselves around that quantification. This is Campbell's law: if you create a metric, people will alter their behavior to goose the metric. And so they realized that what they'd done is create a situation where people were now hustling for likes, deforming what they posted in the first place specifically to try to get more likes.

[00:17:14]

And in some respects, they really, really regret these knock-on effects that they did not expect were going to emerge. They successfully did what they set out to do, but they created a situation that they think actually kind of ruined aspects of Facebook. So, in fact, here's the thing. Think about it this way: every time there's a major collision between a large software company and the rest of the world, it's nearly always because they dramatically increased the metabolism of something in a way that broke other things.

[00:17:47]

Right. There are no pure winners in efficiency. Every time you increase something's throughput, you're going to cause some sort of knock-on effect. You know, Uber is a wonderful optimization ploy. The company identified a serious inefficiency in the way that cars are dispatched: the people don't know where the cars are, and the cars don't know where the people are. And they were like, that's crazy.

[00:18:15]

You know, we have these pocket computers and we can resolve that inefficiency. And they did an amazing job at it. They produced an enormous win for me, the rider: it is crazily easier for me to get around now using cars. And they also created a kind of cool new opportunity, because it became a lot easier to become a driver. You didn't have to go through the process of getting a hack license in this completely oligopolistic, slightly mobbed-up world of cab licenses and cab medallions.

[00:18:50]

You could now just download the app and become a driver. So if you wanted to earn a little bit of extra money, enormous upsides. But along the way, they also kind of broke the ability of a lot of people to make a full-time living, by distributing the work amongst a lot of part-time gig workers. This route to the middle class that had existed for decades, and it was not a great route, it wasn't as if anyone thought it was a fantastic way to make a living, but it existed, particularly for immigrants, is now more or less gone.

[00:19:21]

Right. That whole thing that was part of the civic life of cities is broken.

[00:19:28]

OK, so those seem like two very different types of optimizing, and effects of optimizing. The first one, the like button, seems like a genuine kind of unintended consequence. Although I guess it wasn't even quite making it easier to leave a comment; it was creating a whole new way of interacting with the post, one that had a different meaning. It was like voting for the post or something, as opposed to interacting with the person.

[00:20:04]

Right. Which is still kind of an interesting unintended, if not negative then at least double-edged-sword, consequence of optimizing. But the Uber case just seemed purely, straightforwardly like what innovation is supposed to do. It's supposed to find new and better ways to do things. There are always losers in that process. Yeah. And you could totally make an argument that society should have a cushion for the people who get displaced by new technology and so on.

[00:20:41]

But that doesn't seem like an argument against optimizing and efficiency.

[00:20:46]

Well, I mean, I'm not sure that I'm arguing against it entirely. I'm merely pointing out that this is the central trick of software, right? Over and over again, if you ask what the largest pattern is of what software does to society, it speeds things up. It gets rid of inefficiencies, more or less.

[00:21:10]

I mean, by and large, you know, Microsoft Word basically took the process of creating a document, which I'm old enough to have done on a typewriter, and it was ruinously slow, and made it really, really fast, and thereby created an explosion of utterances. Again, for good or for ill: people in the corporate world will tell you, on the one hand it's great, and on the other hand, now we're just drowning in useless memos, basically.

[00:21:34]

So, you know, I wasn't necessarily arguing that people should not pursue efficiencies, merely pointing out that if you're ever wondering what the overall pattern of how software works is, this is it. And certainly, I think you're quite right: the correct response is probably less to say we shouldn't have efficiencies. In the civic realm, frankly, I think it's probably best to let the innovators do whatever the heck they want, within reason, and have society try to organize its responses, rather than to say, don't do this in the first place.

[00:22:12]

Oh, yeah, I agree with that. I wasn't sure, reading your book, if that was your position or not. So it's nice to hear it.

[00:22:19]

Yeah. No. Well, I mean, I think that's partly because I wasn't trying to write a big didactic manifesto, you know. I mean, when it comes to the free market, I was raised in Canada, and I still have an essentially Canadian view of this, which is that what you want is a dynamic marketplace with an active state that organizes how to deal with the sort of problems that get thrown up by it, basically.

[00:22:46]

So that's the way I approach all these things. Actually, one of the things, less about efficiency, where I think I probably do come down more negatively, is the problem of scale in large technology companies. Which is to say, through a concatenation of influences, ranging from the dictates of how venture capital funds things and why it funds them, to the fact that software itself is just a really interesting new form of machine in that it can scale far faster than other machines could.

[00:23:28]

You know, if you and I put together a tractor company, there's only so fast we can make tractors. But when Instagram rolls out Stories, there's very little marginal cost between the thousandth versus the millionth versus the billionth person using it. So it can scale much more rapidly. For the whole bulk of these reasons that I point out in the book, there's a really interesting challenge to the marketplace, where you get these very large companies that grow very large very quickly and establish extremely dominant positions that are hard to unseat.

[00:24:02]

There are strong first-mover advantages. And I think actually one of the most interesting conversations now is whether classic antitrust actions need to be taken against the scale of some of these companies, because at this point in time they might actually be thwarting innovation. Because mostly, at this point, what innovation is in Silicon Valley is trying to create a company that one of the four big ones will buy up and then either kill or phagocytize, basically. At this point in time, there's no one trying to compete with any of the major four or five big companies.

[00:24:35]

It's not an evolving landscape, which is very interesting. To me, it's a gnarly, weird problem.

[00:24:42]

Yeah. So, talking more about unintended negative consequences of tech: you have a thread going throughout the book about how the engineers and designers who built the world of social media kind of naively failed to foresee some of the negative consequences of their platforms. Sure. And specifically, one of the reasons being that they had this kind of naive "it's communication, and more communication is better" mindset. And you quote an early Twitter engineer named Alex Payne making this point.

[00:25:21]

He said: I saw a lot of really smart people who were smart in a very narrow kind of way that didn't really intersect with humanity, folks who were just interested in math and statistics, programming, obviously business and finance, but who had no insight into human nature. They had nothing in their intellectual toolbox that would help them understand people. And so I feel skeptical that the math-y, engineering mindset is the problem here.

[00:25:47]

That's so interesting to me. Well, yeah. To me, it seems like the problem is just that any human designing a product or system that's going to be used by society as a whole is just going to have a bad time. Yeah, probably just because things are complicated; it's really hard to predict the...

[00:26:08]

...large, society-wide effects of a thing. And also just because, you know, we humans all have our own ideas about what's obviously good or fun or off-putting or something, and then we inevitably transmit some of that to our creations. But, you know, it seems to me that this is true of just about everything, like laws or social programs with unintended consequences. You know, DARE, the Drug Abuse Resistance Education program here in the US.

[00:26:34]

I don't know if you had it in Canada. Sure. But it was designed to keep kids off drugs, and it actually increased the rate at which kids started taking drugs.

[00:26:42]

Yes. So I don't know. To me, this just seems really hard.

[00:26:44]

And I'm not convinced that, like, people with humanities degrees would do a better job. What do you think?

[00:26:50]

Yeah, I mean, basically what you'd have to do is find counterexamples, right? And I think some counterexamples do actually exist. You know, take a look at Flickr. Flickr was essentially a social network, although people don't think of it that way anymore. In its early days, the whole thing was: you're going to post photos, other people are going to look at them, and there's going to be people talking to each other about their photos.

[00:27:17]

So they actually had, you know, comment threads and whatnot. And because a bunch of the people involved with Flickr were even just a tick older, kind of in their late 20s and early 30s, they had been around in the early days of blogging, and some had been around back in Usenet days. And so they knew that the tone set in the early days of how people use a tool has a lasting effect, and that companies can actively decide to create good...

[00:27:52]

...cultures or not-great cultures. And so a bunch of people took it upon themselves to do a lot of work trying to inculcate a civil culture, a sort of pleasant culture, on Flickr. And they worked really hard at it. Like, someone would post a photo, and they would be in the comments going, hey, that's awesome, and hey, you might check this other person's stuff out.

[00:28:14]

You know, there's this deep, complicated, unautomatable human work that went into that. And it was quite successful. As a lot of them went on to later describe it to me: most people think of moderation as discouraging bad behavior, but partly what they were also doing was encouraging good behavior, right? Yeah.

[00:28:39]

And how big did Flickr get?

[00:28:41]

It got pretty massive over time. Yeah.

[00:28:43]

And did the culture of Flickr manage to stay good?

[00:28:46]

It remained pretty good. Although, no, we don't know how well it would have held up in the long run, because Yahoo bought it and basically broke it.

[00:28:56]

Yeah, they kind of destroyed it. And it is entirely possible that that culture could have died over time, certainly. But the point being that these things are possible if you think about them. And absolutely no one was thinking about it in these companies, in part because they were so damn young. I mean, they hadn't even been around for blogging, let alone Usenet. Right.

[00:29:20]

You know, in fact, this is one of the interesting problems. People talk about Silicon Valley being demographically narrow, and they usually mean it's mostly guys. Yeah. And mostly white and Asian. Yeah, white and Asian young guys. And they talk about how that shrinks the space of intellectual talent. Yeah. And that's true. But honestly...

[00:29:44]

I talk a lot about that in the book. But one thing that's kind of funny, which I actually didn't talk about in the book and probably should have talked about more, is age.

[00:29:51]

It's like Logan's Run in Silicon Valley. When someone is, you know, in their 30s, or certainly in their 40s, they just get squeezed out of these companies that don't want to pay them what they're worth. You can ascend the ladder to be, like, a manager or something like that, but there's a finite number of those seats. If you want to just be an engineer, just make stuff, they shove you off to one side.

[00:30:12]

You know, maybe they can't squeeze 120-hour work weeks out of you anymore, or they don't want to pay you what it's worth.

[00:30:18]

So they lose all this rich knowledge, the people who have been at that rodeo twice before; they lose all that design knowledge and all that engineering knowledge. So, I'm not saying you're wrong. And this goes back to the fact that what I really identified as a problem with a lot of these social networks is scale. They're simply so big that I think they just become unmanageable.

[00:30:43]

Right. So I think you are correct that complex systems always develop wicked problems. You know, like the interstates: that seemed like a great idea at the time, it was a great idea at the time, it caused enormous economic activity in the country. And we also basically baked the planet with it, by encouraging an unreal amount of driving that it's really, really hard to step off the treadmill of now.

[00:31:05]

So, you know, complex systems are complex. You're not wrong about that.

[00:31:10]

You also wrote about a different aspect of coding culture, focused, I guess, mainly on the tech companies of the 90s and early 2000s: this super confrontational, blunt, no-bullshit, super intense culture, where people would throw chairs across the room when they were, you know, frustrated with their smelly code. Yes. And so I'm curious, would your position be that that's something to be avoided?

[00:31:41]

Like, should companies be pressured to not have cultures like that? Because it makes it really hard for people who are not comfortable in that culture, especially women or... I am absolutely in favor of that.

[00:31:55]

I don't see any damn reason to tolerate people being dicks. Absolutely zero reason. Nobody needs to behave that way.

[00:32:05]

I'm going to try to make a counterargument and see what you think. Go for it. So first of all, I want to distinguish between dicks in the sense of, like, actively harassing or backstabbing, like genuine jerks, and the kind of culture where it's just acceptable for people to raise their voice and say, you know, "this is bullshit" or "this code sucks" or whatever.

[00:32:34]

Where it's just understood in the culture, and the people who are comfortable with that are not bothered by it, because they get it; they don't perceive it as an attack on them. And so the counterargument I want to put forward, which is not mine, I've seen someone else make it online, is that to have a maximally inclusive tech world, what we need is a diversity of different kinds of office cultures. You know, for the people who want a very pleasant and civil and, like, neurotypical workplace, there are lots of places that have those.

[00:33:15]

And then the people who can't function well in workplaces like that, and are just going to get on other people's nerves because they have no filter, but who are still really valuable programmers and could contribute to society, also have workplaces they can go to where they can thrive. And so we just want to be maximizing the different kinds of workplaces out there, as opposed to trying to get every workplace to be the one that suits the most...

[00:33:41]

...people, or, like, works for the most people. Yeah.

[00:33:43]

I mean... I suppose that's a perfectly fine goal, and I guess there's nothing to argue with there, except that that's not remotely the situation we seem to have. Right? I mean, particularly in the area of software, there is this overly romanticized veneration for the sort of, you know, brilliant jerk. I think in some respects it's kind of funny. One of the things that comes up in my book a couple of times, you probably noticed, is a comparison between poets and coders, right?

[00:34:20]

Yeah. They both work with precision in language. They both prefer to have, like, ten solid hours where you don't bother them. And in fact, I make this case in the book: you really shouldn't bother them. You should let them just not be bothered while they do that work, as it's this deep, intense mental work that requires immersion. They're building a castle. But you also get the sort of self-aggrandizing, self-romanticizing bullshit about how tempestuous they are, and it's crap.

[00:34:55]

It's crap. I've organized my career around this; I've worked in places where I've been around people like that.

[00:35:01]

Oh, I can't stand people like that. I personally will avoid them.

[00:35:05]

Yeah. I've seen no evidence, I've never seen any evidence, that those places are uniquely productive compared to others.

[00:35:13]

Yeah, no, "uniquely productive" was not part of the claim. I would be willing to ponder the data. I'd like to see the data. I've never seen any data on that.

[00:35:20]

Oh, you're talking about the places that have a pretty normal culture, except they have one or two, like, brilliant jerks or something like that.

[00:35:27]

I mean, the truth is, when you talk to the really talented people, the Jeff Deans of the world, the ones who have tilted the universe on its axis with the quality of their software, they're incredibly awesome people. Partly because they understand that, at this point, software is completely a team sport. Almost no one does anything on their own. And so if you can't actually work with other people, you're going to dramatically limit your ability to have a serious impact on the world.

[00:35:58]

So, yeah, what can I say? I think there's an awful lot of people making excuses for their unwillingness to work with other people, and, unfortunately, management that buys into that romantic nonsense.

[00:36:15]

And by the way, in terms of neurotypicality: this is also something that does not appear to me to track neurotypicality. I have been writing about software developers for twenty years, many of whom are not at all neurotypical, and personally, I find a lot of them incredibly delightful to deal with. They're fantastic. They're incredibly perceptive and whatnot.

[00:36:40]

You know, one thing someone said to me, and I think this is probably true, is that there's a large chunk of people who are completely neurotypical but are just assholes, who claim they're not neurotypical, frankly giving people who are genuinely not neurotypical a bad name in Silicon Valley.

[00:36:59]

Yeah, that's really tough. So this relates to a confusion I had about the book. You mention this idea of a "10x coder," and the sort of debate over whether there is such a thing as the 10x coder. And I'm confused about what exactly the disagreement is, and why people care about this question. Can you explain the debate? Yeah, sure.

[00:37:23]

And I think in some respects it's probably one of the weaker chapters in my book, because I was struggling to figure out what to think about it myself.

[00:37:32]

OK, so you're confused, too? Yeah.

[00:37:35]

Yeah. No, no. I did my best to think through it, but I think the residual aspects of my confusion are evident in the writing. So here's the thing about the story of the 10x coder: there was this historic idea that some people were remarkably better at coding than others. It was originally observed in a series of studies in the 60s and 70s that were not terribly statistically valid.

[00:38:05]

But the idea that they demonstrated, that some coders were ten times better, kind of entered the lore. When I say better, I mean measured by how quickly they could write functioning code, or how efficiently their code worked, or how quickly they could find a bug. And this entered the lore partly because, on a practical basis, it certainly did seem like some people were more productive than others. Going back to the poetry metaphor: you know, if you need a really good poem written and it's not getting written, you don't solve it by adding ten more poets.

[00:38:43]

It's an insight problem. There's kind of one person who has to figure it out. And coding is sometimes like that: it's an insight-based thing, where the best move might be to find someone and just leave them alone to work on it. If you throw four more people at it, you're just going to add a whole bunch of communication complications that will actually slow down the work. So there are situations where one person can have a really large impact on solving a problem or creating something new, like when you're really starting a project, when something doesn't exist and someone just has an idea for it.

[00:39:18]

That's what I've sometimes heard called a "greenfield" situation: there's a green field, nothing is built yet. And so the person writing the prototype really punches above their weight, because they're creating all the stuff new themselves. And they can move very quickly, partly because they don't have to deal with existing code; they don't have to work with stuff that's already there. They're writing it all themselves.

[00:39:43]

And so, when you look at the origins of a lot of these epoch-making pieces of software, like Photoshop, which was, like, two people, or the first 3D graphics engines for video games, really one or two people each time, or BASIC for Microsoft, written by Bill Gates and a team of three...

[00:40:07]

...you begin to believe, and I think it's partly true, that identifying these core, super-talented people is the way to get good software written. So there's an aspect of that that's true. But as with all self-mythologized things, it gets blown out of proportion. And a lot of people who merely, for example, write a lot of code in a prototype get thought of as 10x coders.

[00:40:37]

But it turns out that when you actually have to make that prototype into something that can scale out to millions of people, you have to tear it apart and start all over again. And now you have to have a team of people working very slowly and patiently on it. And you might look at them and say, well, they're 1x coders, but they wind up producing the thing that works really well and robustly.

[00:40:56]

They had to slow down. They couldn't just jam things out in that frantic miasma of creativity. They wouldn't get the street cred and the plaudits for having created the fantastic prototype, but they produced something that's far more reliable. So I think part of my confusion came from the fact that the idea of the 10x coder seems true in certain situations, and very untrue in other ones, and even harmful.

[00:41:25]

Right. Because you can wind up thinking that software gets made by the heroic activities of one or two people, and not creating an organization that says: no, we need 50 engineers here, and they're all going to have to come at the code very carefully, and move slowly, and talk to each other about what the heck is going on. Because we can't have one person being the big hero; if they get hit by a bus, no one knows how to fix this stuff.

[00:41:48]

Right? I think the confusion in my book is that I didn't do as good a job as I should have at articulating the dynamics of how team-based a lot of modern software is.

[00:42:04]

I think I fell a little bit into the narrative trap, and the joy, of looking at individual people, you know what I mean?

[00:42:15]

OK, so to make sure I understand: it seems superficially like a debate about the distribution of programming talent, and whether there are these few superstars. But it's actually a debate about the nature of programming work: how far you can get with individual programmers as opposed to teams. Yeah. And the reason this is a more heated debate than it might seem like it should be is that it turns into a debate about whether we should valorize these, you know, brilliant jerks, whether we should hero-worship people.

[00:42:51]

And some people like that myth and other people don't like that myth, and that's what they fight about. Does that seem right? Yeah.

[00:42:56]

Yeah, I think you're probably doing a better job of that than I did in my book. You should go...

[00:43:01]

Well, I had help from your book. When the softcover edition comes out, I'm going to be like: actually, Julia did a great job of laying this out, let me just cut and paste what she said on her show. I think that's very well done, actually.

[00:43:14]

Oh, good. So I wanted to ask you about your earlier book, which is called Smarter Than You Think. It came out about five years ago. Yeah, that's right. And the thesis was basically that the Internet is making us smarter, in the sense of acting as a kind of auxiliary brain, enabling collaborative learning, and all these other things. And the book had a pretty upbeat, techno-optimist attitude. Yes. In contrast to some other books that were coming out at the time, like Nick Carr's The Shallows, whose thesis is basically that the Internet is making us stupider.

[00:43:48]

So I'm curious, first off, whether you still agree with that book, or whether your perspective has changed at all.

[00:43:53]

Yeah, the answer is: I actually agree with the book quite a lot. And I know, because I wondered, do I still agree with the book?

[00:44:03]

So I reread it recently. And there's an interesting reason why I agree with the book, which is that when I sat down to write it, I was interested in cognition, and in people's fears that using technology would make you stupider. And so I tried to define, in a way that made sense to me, some of the aspects of what our daily cognition really looks like: our ability to encounter information, our ability to retrieve it, our ability to make sense of it and connect dots, our ability to externalize our thinking and show it to other people, our ability to collaborate and to coordinate our thinking with other people.

[00:44:46]

Oh, and also to think in different modalities: to think using things other than just text, which had been the dominant mode of communication for a few thousand years.

[00:44:59]

And over and over again, I kept finding that, yeah, this is honestly very often a net good. Now, the thing I actively didn't tackle was a moral argument. I did not say this makes you into a better person. In fact, in my politics chapter, I very explicitly pointed out how, when autocrats figure out how to use technology, they become better autocrats: they become better at suppressing dissent and better at crushing people.

[00:45:29]

Right. And I began to realize, after the book came out and I'd been talking about it for a few years, that everyone would talk about the book but fundamentally ask me: but doesn't this make us worse people, or can't it also make people worse? My answer was always yes, of course, absolutely. Many, many very smart people are absolutely terrible.

[00:45:49]

Many people who don't, you know, do cognition intensely are delightful and wonderful.

[00:45:55]

And I would trust my children to them in a way that I would not trust them to many, many wickedly smart people. We treat "very intelligent" as something positive.

[00:46:03]

But intelligence has no innate moral dimension, right? And so, when you look at the online world right now, many of the problems we have come from the fact that some absolutely terrible people have become incredibly good at coordinating their activities, at connecting dots, at communicating in various media that are new and fantastically persuasive. It's quite alarming, actually.

[00:46:26]

So do you understand what exactly your disagreement is with the people who were writing the more worried, pessimistic books about the effect of tech on our lives and our psychology? Is it an empirical disagreement? Or is it just that there's both good and bad, and you're focusing on the good and they're focusing on the bad?

[00:46:46]

Well, there's an extent to which... I mean, the other thing about my book, and part of the reason I still agree with it, is that there are enormous amounts of caveats in it, which most people sort of ignored.

[00:46:54]

One of my favorite reviews was from Jeet Heer, who currently writes for The New Republic but back then was writing for The Walrus. He reviewed the book saying that Clive's prose so frequently includes moments of shade and complication that you could extract from inside his book a book with the exact opposite argument. And he's sort of right. But I think the disagreement was often that people like Nick Carr were convinced that the previous modes of expression, mostly print-based, were so salutary and positive compared to the modes of thought that occur with digital tools that the shift was a net decline.

[00:47:40]

And in a weird way, I was actually much less positive about them. The situation I had was: I would look at the history, at the past where all this print culture happened, and I'd see all sorts of huge problems. Like the fact that the tools for cognition and publication and dissemination were restricted to so few people. Well, sure, it seemed great if you were in the 18th century, if you were Alexander Pope.

[00:48:09]

Right.

[00:48:09]

But if you were any of the other people... it's like looking back on the 1950s and saying, weren't those great? Well, if you were, you know, white and...

[00:48:18]

Yeah.

[00:48:18]

And so the joke I often made is that someone like Nicholas Carr was looking at the glass and going, this glass is half empty, and I'm going, no, no, no, it's one-tenth full now. It was empty before; now it's one-tenth full. Right, right. Yeah, it's really bad, it's 90 percent empty, but it was 100 percent empty before.

[00:48:41]

Now we're 10 percent up. I simply had a much more dismal view of the past, actually, which made my view of the present more positive.

[00:48:50]

Now, the one thing I would rewrite, though, the one thing I would change: a persistent warning that goes throughout Smarter Than You Think is that these tools are great, but when they become highly centralized under corporate control, they almost always go off the rails. Because suddenly the tool is fighting what you want to do with it. It's saying: actually, I'm really just here to keep you engaged with clickbait so you'll click on ads.

[00:49:18]

Actually, we're going to algorithmically determine what you want to look at instead of letting you pick what you want to look at.

[00:49:24]

That was back when I wrote the book, basically in 2011 and 2012. That problem was there, but nowhere near as big as it is now. I mean, even to this day, if you were to ask me what the most interesting, healthy spaces on the Internet are, where you actually see people doing the things I describe in Smarter Than You Think, it's much less often on the big, highly centralized social networks.

[00:49:53]

It's on the much more distributed ones. Every once in a while, just for fun, I spend an afternoon going and looking at crazy old discussion boards, ones that are run on completely non-commercial software, because someone just wants to talk to the other 300 model-train builders around the world. In my case, I'm a guitar player and a musician, so I spend a lot of time on guitar-player boards, on guitar-pedal boards.

[00:50:19]

And they're just unbelievably delightful spaces, because everyone's there to talk about something they care passionately about. It's people from all over the world, from Russia and Texas and Ontario, and everyone's really funny and polite, and we're diving into gnarly, complicated stuff, having great conversations. And because it's being run for, I don't know, fifty cents a month on someone's server bill somewhere, there are no ads.

[00:50:45]

There's no one trying to get us to click on ads. It is a genuinely civil environment. So, you know, maybe this is the Canadian side of me speaking here, but honestly, what I've learned is that the free market tends to break a social online activity when it starts to squeeze it hard enough that money bleeds out, basically. And if I were to go rewrite the book, that's what I would say a little more directly.

[00:51:10]

Yeah, that's an apt metaphor. A super upbeat way to end the interview! Yes. I have one final question for you, which is: what book, or multiple books if you want, have been the most influential on your life or on your worldview?

[00:51:34]

A couple, let's see. One really big one, the one that I think literally made me become a technology writer, is called The Real World of Technology, by Ursula Franklin. What it really is, is a collection of talks she did. The Canadian Broadcasting Corporation has this thing called the Massey Lectures, where every year they get a public intellectual to do a bunch of lectures on a particular subject. And back around 1990 or '91, they invited her.

[00:52:04]

Franklin, who was a metallurgist by trade, a professor of metallurgy, had also been thinking a lot about the social and political implications of technology. So she did these lectures. And I missed the lectures, I didn't hear them. I just saw the book when it came out; I picked it up and I read it. And to me, it was incredibly interesting, because I had become a student journalist with the idea that I would go out and write about things that were, you know, quote unquote, important.

[00:52:31]

But for me, what that meant was politics. I'll go and write about politics, because that's what, you know, serious, important people write about. It was towards the end of my degree, and I was a news editor at the campus newspaper. And I read this book and my head sort of lifted off, because she talked in this really smart, intelligent way about the fascinating cultural and political implications of digital technologies.

[00:52:55]

And I was kind of a neophyte. I had not read the long literature that existed in this area, which goes back decades. She was introducing it to me. And that, more than anything else, was the book that made me realize: oh, I should take the nerdy stuff I already care about, computers and technology and all this weird BBS stuff, and I can devote my life to it.

[00:53:16]

And it will allow me to talk about all the other things that I also care about, like culture and the arts and politics and the economy and business and stuff like that. So that book is literally responsible for everything else I've done. That's so cool.

[00:53:37]

And I want to encourage listeners to consider generalizing that conclusion, to see what other fields you might not have considered that you could treat seriously, or devote serious study or analysis to, that you might actually be able to... anyway, go on.

[00:53:54]

Yeah, absolutely. So another book that was catalytic, I would say, was Northrop Frye's The Great Code, which is his book on the mythopoetic structure of the Bible, basically, and its impact on literature. I primarily studied poetry at college; I read an absolute crap-ton of it. And I was mostly interested in pre-20th-century stuff: 19th, 18th, 17th, 15th, you know, 13th-century stuff.

[00:54:22]

And one of the things I loved about the book was, first off, that he's just a wonderful prose stylist. It sort of taught me: wow, this is what confident, intelligent writing is like, basically. But also, he had such a wonderful command of history and culture, and the history of culture, that it made me realize: wow, whenever I write, I want to have one eye on history and culture. He has this great phrase where he talks about mythology and why we have myths.

[00:55:00]

He says, you know, news and facts are the things that are happening or that happened. Mythology is what happens: it is the template for the things we all go through all the time. It is the sort of platonic shape that lurks behind our individual experiences and helps us make sense of them. And so that was an amazing book to read. I had always been interested in the literature of antiquity, and it sort of gave me permission to wallow in it, because I began to realize its deep relevance to understanding the modern condition.

[00:55:40]

So to this day, I still like to read ancient Greek tragedy and stuff like that, because I love it. I find it revealing. I find it fascinating to think about the similarities and differences between people over thousands of years. And I think his book was enormously catalytic in giving me permission to be interested in that for the rest of my life.

[00:56:02]

That's great. I particularly like how, with both of your suggestions, what you got out of them was both on the object level of what the book was about, and also on the meta level of how the author was approaching the subject, how they were writing about it and thinking about it. You got value out of both of those. Yeah, yeah.

[00:56:24]

Both of them had a sort of metacognitive aspect to them. They helped me think about my own thinking in a clearer way. And sometimes I think that's one of the most valuable things you can do for a young kid: to encourage them to think not just about the subject matter of the thing that impassions them, but about the way they think about it, and the seriousness or the levity with which they approach it.

[00:56:49]

Right. You know, that's crucial.

[00:56:52]

That's great. Well, Clive, thank you so much for coming on the show. We'll link to your new book, Coders: The Making of a New Tribe and the Remaking of the World, which I encourage listeners to check out if you want more exploration of coding culture and history, or if you want to, you know, reward epistemic virtue. It's been a pleasure having you on the show.

[00:57:13]

Thanks so much, Clive. It's been wonderful. Thanks so much to you, too. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.