[00:00:00]

Today's episode of Rationally Speaking is sponsored by GiveWell. They're dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does. For example, how many lives does it save, or how much does it reduce poverty, per dollar donated? You can read all about their research, or just check out their short list of top recommended evidence-based charities to maximize the amount of good that your donations can do.

[00:00:28]

It's free and available to everyone online. Check them out at GiveWell.org.

[00:00:45]

Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and today I'm talking to Zach Weinersmith. Zach is the author of one of my all-time favorite webcomics. It's called Saturday Morning Breakfast Cereal, and it sort of lives in the intersection of philosophy, dark humor, and silliness, which is, like, my happy place. And Zach's been a guest on the show before, several years ago. But the reason he's returning today is that he has a new book coming out with his wife, Kelly Weinersmith.

[00:01:23]

It's called Soonish: Ten Emerging Technologies That'll Improve and/or Ruin Everything. Zach, welcome back.

[00:01:31]

Yeah, I'm excited to be here. It's fun to talk about crazy nerd stuff. Excellent. Yeah. This book, just to give a little more context on it for our listeners: reading it is like sitting at the bar with your two nerdiest friends, who are slightly drunk and slightly hyperactive, and are friends with a lot of top scientists and have talked to them for months about the hottest new technologies, and are explaining it all to you while simultaneously doodling cartoons on a bar napkin.

[00:02:04]

Yeah, that was my experience reading it. It was great. I wish I'd had that for a blurb. That's almost exactly what we were going for. So, excellent. Good to hear.

[00:02:14]

So let's start by talking about how you chose this list of 10 technologies. What criteria were you using? I mean, I guess you sort of give the criteria in the subtitle: technologies that will improve and/or ruin everything. But, you know, I could name two dozen more technologies that theoretically could have made that list. So, yeah, it's not meant to be exhaustive. It's messy. It's stuff we were interested in.

[00:02:39]

But it's also... so we actually originally started with a list of 50. Oh, wow. And the very short version is that as we got into writing it, we found the longer we made the chapters, or I should say, the more in-depth we made the chapters, the more we enjoyed them, and the more it felt like we were bringing something to the table beyond what you can get from a cursory look at Wikipedia or a popular science article.

[00:03:04]

And so we just kept drifting towards longer, more in-depth, more humor, just more fun, until there was only room for 10 chapters after a lot of hacking. In terms of the particulars we chose, I mean, some of it's because we explored certain technologies and, for whatever reason, they just didn't fit the format, like they were going to be way too hard to explain well in the allotted space, or they just seemed like kind of not a good idea, a little too implausible even for a book like this.

[00:03:32]

We talk about that a little in the conclusion. So you wanted a sweet spot. Yeah, yeah. Not so definite that everyone is sort of used to it and has already incorporated it into their model of the world, but not so pie in the sky that... Yeah. The way I'd say it is, we didn't want to do, like, a chapter on self-driving cars. Not because we're not totally freaked out about self-driving cars, but one, there's probably already 80 books on that topic.

[00:03:56]

And two, as we found when we researched some chapters, if you talk about a technology that's already kind of far along, it's really hard to give people the details, because the details get really, really technical. Whereas, interestingly, if you talk about something that's not established, like a space elevator, you can still talk in a somewhat abstract way about the sort of parameters. And I think as a reader, that's a more satisfying experience.

[00:04:21]

The one huge chapter we ended up cutting, that we don't even mention in the conclusion, was... we did an entire chapter, completed, on nuclear fission technology, like advanced nuclear fission reactors. And I think we did a good job. But it was definitely the hardest chapter, and I'm sure it would have been the hardest chapter for a reader, because there's a lot of, here's the difference between a fast and slow neutron reactor, and between a light water reactor and a heavy water reactor, and all this stuff.

[00:04:47]

And the more established the technology, the harder it is to have a good time explaining the basic deal, I think.

[00:04:55]

Right, right. That makes sense. So maybe a good way to get a sample, a feel for some of the technologies on your list, is to ask you... well, it's a two-part question. First, I want to know, out of the 10 technologies on your list, which of them do you think is the most likely to happen? And then second, which of the 10 technologies do you think would be the most transformative if it did happen?

[00:05:19]

Sure. I mean, it's a little tricky because, you know, these chapters, none of them are talking about, like, a specific machine. They're all about, you know, dozens of different approaches to a problem, I guess you'd say. So it's like, you know, if you're talking about... we have one chapter on cheap ways you might get to space, and there are probably several dozen new ways you might do that. We weren't even able

[00:05:44]

to do it exhaustively in the space allotted, although I think we got pretty close. But I'd be a little careful answering that, because some of this stuff really already exists in a rudimentary form, such as renewable, I'm sorry, reusable rockets, or augmented reality technology. Those already exist in some form. So it would be silly for me to say, well, one day we'll get them. In terms of the stuff that's a little ways off that we might get...

[00:06:09]

Yeah, I'm thinking of, like, unsolved problems that it's plausible enough we could solve, that made the cut for your book, but that we haven't already solved. Yeah. I think, for one, I would say bioprinting organs is something that will almost certainly, inevitably happen. I mean, it might not happen the exact way we talk about it in the book, and it might be combined with other technologies.

[00:06:33]

But organ printing, or organ manufacturing, would just save so much money and so many lives. It's an extremely valuable technology. And it's also, I think, something that can be somewhat iterated. I think it's very important to know whether a technology can be iterated or not, in the sense of, like, does a slight improvement matter? And I think in the case of at least some organs, you don't have to have it perfect to get someone off dialysis.

[00:07:02]

Right.

[00:07:02]

Sorry. When you say "does a slight improvement matter," you mean over our current methods, right?

[00:07:09]

Yeah. So, to give an example from the book, actually, we talk about space elevators, and in order to have a space elevator, you need to make the cable, obviously. Oh, should I explain the space elevator? Yeah, go ahead. OK, yeah. So if you want to imagine a space elevator, imagine you are in a boat and you are going towards something that resembles an oil rig, only it's probably got some boats around it with a lot of guns to stop people from doing anything bad to it.

[00:07:35]

And up from the middle there'll be this cable or a ribbon that goes up into the sky to the point where you can't see it anymore. And then, in fact, it goes very far out into space, about a hundred thousand kilometers in some designs. And there it attaches to a counterweight. And the counterweight is very probably a captured asteroid, or maybe some space junk we've thrown up to use as a counterweight. And to a rough approximation, it works the way a sling with a rock on it works when you spin it around your head.

[00:08:05]

It keeps the cable taut. That's why there's this asteroid. It's also awesome.

[00:08:11]

Is the counterweight orbiting sort of in sync with the Earth?

[00:08:15]

Right. So you want it in geosynchronous orbit. So the counterweight, you know, without getting too detailed, the counterweight is there to keep the center of mass geosynchronous, so the ribbon remains sort of pointing straight up from the Earth, roughly speaking, instead of starting to veer. Well, yeah, because if you imagine it even slightly drifting around Earth in one direction or another, it's like a thread winding around a ball. Right.
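A minimal sketch of the orbital mechanics being gestured at here, for the curious. The constants are standard textbook values, not numbers from the book:

```python
# Why the elevator's center of mass wants to sit at geosynchronous altitude:
# an orbit whose period matches Earth's rotation. Illustrative constants only.
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24     # mass of Earth, kg
T_SIDEREAL = 86164.1   # one sidereal day, s

# Gravity supplies the centripetal force: G*M/r^2 = (2*pi/T)^2 * r
r = (G * M_EARTH * T_SIDEREAL**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = (r - 6.371e6) / 1000  # subtract Earth's mean radius

print(f"geostationary altitude is about {altitude_km:,.0f} km")  # ~35,800 km
```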

[00:08:42]

It's going to be bad real fast. Right. For people on the cable. Yeah. So I can get into the nuts and bolts of this. But the reason you want it, just very briefly: the example we use in the book is the difference between... oh, I need to be careful about this, because I don't want to get the physics wrong.

[00:09:01]

You can just... well, just imagine you're waving your hands while you talk, so we know not to anchor too hard on your language. Yeah. Let me just do it this way, because maybe we can get into the details later if it flows naturally. But the basic deal is, if you have a space elevator, reasonable estimates that scientists have made say it'll cut the cost of launching an amount of stuff to space by something like ninety-five percent. It would be a huge cost savings over the conventional rocket methods we currently employ.

[00:09:30]

And so anyway, the point I wanted to get to was that the material you're going to have to make this cable out of is going to be very exotic. It's going to have a very high amount of what's called specific strength. For the physics people, that's something like how hard you can thwack it, how much force you can put on it before it breaks, divided by its density. For the less physics-y people: essentially, you want something like Superman's hair, right?

[00:09:53]

It weighs nothing and is really strong. The reason it needs to be super strong is obvious, because it's under a lot of forces of all sorts. But it also needs to be lightweight so it doesn't pull itself apart, because it's holding up its own weight. So because you're going to need this really exotic material, you look around at what's going to work. You might think Kevlar would work, but Kevlar is an order of magnitude off from having enough specific strength.

[00:10:19]

And in fact, no material you've ever interacted with will work as the cable. But there is this substance called carbon nanotubes that might work, might just be enough, which is a problem in its own right. You know, just barely enough might not be good enough from an engineering perspective, but set that aside. So what's holding us up from making this cable out of carbon nanotubes? Well, you need the carbon nanotubes to be one solid tube, you know, one tiny molecular tube made of carbon, and it needs to go the whole hundred thousand kilometers, because the moment you start using shorter chunks and weaving them together, you lose specific strength.
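A rough sketch of the specific-strength comparison being made. The strength and density figures are ballpark textbook values, and the 48 MJ/kg requirement is one commonly cited estimate for an Earth elevator cable, not a number from the book:

```python
# Specific strength = breaking stress / density, the figure of merit described
# above. All numbers are approximate, illustrative values.
materials = {
    # name: (tensile strength in GPa, density in kg/m^3)
    "steel wire":      (2.0,  7800),
    "Kevlar":          (3.6,  1440),
    "carbon nanotube": (63.0, 1300),  # measured only on tiny lab samples
}

REQUIRED = 48e6  # J/kg, a commonly cited ballpark for an Earth space elevator

for name, (strength_gpa, density) in materials.items():
    specific = strength_gpa * 1e9 / density  # J/kg
    print(f"{name:16s} {specific/1e6:6.1f} MJ/kg "
          f"({specific/REQUIRED:.0%} of requirement)")
# Kevlar lands an order of magnitude short; nanotubes are "might just be enough."
```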

[00:10:56]

Right. And if the cable breaks anywhere, it doesn't matter where it breaks, you're in trouble. Right. Nothing good happens when it breaks. And so you need to have really long carbon nanotubes. And so the good news as of 2013 was, we were getting exponentially better at building carbon nanotubes. The problem is that we have not gotten any better since 2013. And in 2013 we were able to make them about a meter and a half long. I'm sorry, no, half a meter long, which is quite a bit shy of a hundred thousand kilometers.

[00:11:28]

Right. And so it makes it really hard to speculate about what will come soon, because if I had only had data up through 2013, I might have told you, hey, we'll have this, and I think I estimated something like thirty-five years: we'll have the stuff to make the middle part of the space elevator, the hard part. That turns out to probably be wrong. So, you know, trying to predict even medium-term stuff gets really tricky.
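A sketch of the kind of extrapolation described here. The doubling time below is a hypothetical figure, chosen only to show how an estimate on the order of thirty-five years could fall out of a pre-2013 exponential trend:

```python
# If maximum nanotube length had kept doubling on its pre-2013 trend, when
# would a 100,000 km cable-grade tube arrive? Pure what-if arithmetic.
import math

current_m = 0.5    # best length circa 2013, about half a meter
target_m = 1e8     # 100,000 km expressed in meters
doublings = math.log2(target_m / current_m)

doubling_time_years = 1.3  # assumed pace; tweak to see the sensitivity
print(f"{doublings:.1f} doublings needed, "
      f"~{doublings * doubling_time_years:.0f} years on that trend")
# About 28 doublings, roughly the thirty-five-year ballpark mentioned above.
# But the trend stalled after 2013, so the extrapolation fails.
```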

[00:11:53]

And because I could also be wrong in the other direction. Maybe someone discovers tomorrow that you can make really good airplane wings out of ultra-long carbon nanotubes, and suddenly a market develops and we're off to the races. But that seems to me a bit implausible. So, ironically, I would have been more enthusiastic four years ago than I am now.

[00:12:14]

Yeah, interesting. So I'm going to bookmark for the moment the second part of my two-part question, about which technology is most transformative, which I still want to get to. But I also wanted to ask you something that this is a nice segue into, about what you see as the main obstacles or bottlenecks standing in the way of some of these technologies. Like, if we have reason to believe that some particular technology should be logically or physically possible, what is usually the most common reason why we don't have it yet? Is it lack of insight, like we just haven't figured out a way around the technical challenges?

[00:12:53]

Is it lack of economic incentive, like if enough people were willing to pay for this thing, we probably would have invented it by now? Or is it a legal thing, like this would pose a threat to governments, or we couldn't get regulation? Yeah, it's definitely all three in some regards. My bias is that at its most fundamental, it's economics. I think when the economics of something get irresistible, the regulations almost always go away or they get loosened.

[00:13:22]

That's not always true. I think you could argue that didn't work out for nuclear, but that's a whole thing that's really not worth digging into. That feels like a bit of a special case. Yeah, that's a good way to say it. It's a special case. But yeah. So it's economics, and we try to talk about this a bit in the book. So, you know, there's this question you might ask, which is, why don't we have a colony on the moon, for example?

[00:13:46]

And I think to an astronomer, that's maybe a tough question. I think to an economist, it's quite obvious: there's no reason to go to the moon. I mean, there are sort of Carl Sagan-ish, we-are-nomads sort of reasons to go to the moon, but there's not really much of value there. There's, I think, a specious argument made that, well, there's a lot of helium-3 on the moon that maybe could someday work in some sort of fusion reactor.

[00:14:10]

But now you're talking about a fusion reactor that's not even one of the popular fusion reactor designs, and you're going to have to get its fuel from the moon as opposed to the ocean. So I think if you look into a lot of technologies that we thought we'd have and that we don't, usually, if it's not that people just had a physics misunderstanding about what was possible, it's because the economics didn't materialize. And specifically, it's because either there's no good economic reason, as is probably the case for going to the moon to build a colony, or, as we said earlier, there's not a sort of iterative way to improve the technology, like with the carbon nanotubes.

[00:14:47]

Right. So, meaning, you know, there might be some benefit to small carbon nanotubes, maybe for some sort of composite materials. But we don't all get better off if you make it twice as long. You know, there's not double the benefit. Right. Whereas with a computer, five percent better specs and we want it, right? It reminds me, if I'm understanding you correctly, it reminds me a little bit of evolution. Like, as you're making incremental movements in this landscape of organism design...

[00:15:16]

And by you, I mean evolution. You need to be getting some benefit even from intermediate changes before you get all the way to an entirely new feature. Like, there has to be something that the intermediate stages, the proto-feature, do for the organism in terms of survival advantage, in order for it to stick around. Yes. And I think in principle, public funding of science is kind of supposed to act as that, as a...

[00:15:43]

Yeah. In theory, that's how it's supposed to work. But with some of this stuff, like, you know, a cable to space, or a really good example would be quantum computing... I shouldn't say who, because I don't know if they want this repeated, but I was talking to a prominent quantum computing guy, and he said, probably, you know, we won't have a real quantum computer until some government wants to pay whatever it is, one hundred billion dollars, to make it happen, and then you can have it.

[00:16:06]

That's probably true for a fusion reactor too, for example. There are all sorts of things we could throw money at, but the money is enormous for some of these technologies. So there are limits on what I think the public will bear in terms of trying to bridge those fitness landscapes with public funding of science. Right. OK, well, then let's go back now to the transformative question. Yeah.

[00:16:26]

Which of the things you encountered in the book do you think would have the biggest impact? You can sort of pick what you want the outcome measure to be: GDP, human welfare... In terms of transformativeness, what I think of is the brain-computer interface stuff. To me, that's the most, I also want to say, upsetting technology. That is a measure of transformativeness, let's be honest. Yeah, I mean, transformativeness

[00:16:51]

is upsetting. Like, the older I get, the more I know I hate change. And that's not really true, but you know what I mean. Like, I'm all of thirty-five and I'm already seeing cultural things on Facebook that I don't understand. And, you know, the moment we're actually thinking of tinkering with the ways our brains work, you're talking about a pace of change that means we won't be recognizably us anymore.

[00:17:17]

And I don't know, I find that very troubling, very unsettling. To me, it's very existential. It's like, you know, if all humans died out suddenly and there was, you know, like a race of armadillo people and they went to the moon... I mean, I guess all humans are dead, so we don't care. But if I were the one human left, it wouldn't do much for me to know that the armadillo people went to Alpha Centauri or something, because, maybe this is chauvinist or something, but they're not me.

[00:17:46]

They're not us. And when you start tinkering with human brains, you know, the first things we'll do, if we succeed at this, will probably be things like, well, we'll be a little smarter, we'll have a little bit better memory. But, you know, over time, it's going to become stuff that we're not even able to consider right now. It'd be like trying to tell someone from the nineteen forties about the Internet.

[00:18:06]

There's just too much you couldn't anticipate. And if we do that to our brains, too, we're going to end up as entities we couldn't anticipate. We're not going to be recognizably human anymore, I don't think. I know you just said that these changes will be things that our current selves can't anticipate, but could you sketch out an example of what such a change could look like? Sure. I mean, there's a couple of ways you can imagine it.

[00:18:31]

So, I don't know why this springs to mind, it's just a random example. But OK, so suppose you're in a future where you're completely interfacing your brain with a computer. That means "you" don't exist just inside your skull, like it has been for humans forever, for all animals that have skulls, I guess. So what does that mean? Well, that means, for one thing, you can't really be killed. You're immortal. And it's hard to imagine such an individual has a thought process that's recognizably human, or at least completely recognizably human.

[00:19:01]

I feel like that would drastically change the way you look at your own life and what's valuable to you. There's other stuff, too. For example, comedy is my job. I think the basic way comedy works is it's kind of a trick you pull on your brain, where you sort of set up a logical expectation and then you twist it in a way that resolves into some other sense. And I feel like it might be the case that a super intelligent future brain just doesn't appreciate a joke.

[00:19:28]

And, you know, I feel like... And that's bad for us creators. Yeah, yeah. No, it'd put me out of a job, which is depressing. But maybe it won't be depressing, because I'll just eliminate depression with my computer interface. It is both the problem and the solution. And going back to the armadillo people thing, it's like, a version of quote-unquote "us" is doing cool stuff in space, or whatever futuristic thing we're excited about.

[00:19:52]

And it can't understand a joke. It can't appreciate a pretty song or a poem or something. It doesn't do anything for me. It feels like it's all pointless. So in terms of transformativeness, making all of human existence feel pointless? That's pretty transformative. I say that as someone who's excited about this technology, and furthermore is excited about some of its specifics. The idea of being able to boost your attention span or your focus or your intelligence or memory: I want those things.

[00:20:23]

And I know if they came, I would want them. I wouldn't want to be the first one getting the surgery, but I might want to be the fiftieth. Yeah. So it would be transformative in a way that I think would ultimately be kind of depressing.

[00:20:36]

And where do you think brain-computer interfaces score on the sort of weighted score, including both how transformative they would be and, I guess, how much of an economic incentive there is for them, since that's our proxy for likelihood?

[00:20:50]

Yeah, so I would say, I mean, there's a huge economic incentive. I actually feel like the economic incentive is the scary part. So let me give two examples of that. One is just the sort of obvious example of the arms race of intelligence. I'm sure, as you're aware, I read recently something like a fifth or a quarter of elite scientists will admit to taking nootropic drugs, brain enhancers, right? Like Modafinil, Adderall. I've heard cocaine, you know. And so there's already an arms race happening.

[00:21:20]

BCI would just really take it to another level if it were perfected. So there's incentives in that direction. And, you know, if you believe Tyler Cowen that you can't even be average anymore, BCI creates weird dynamics where, you know, if you can't be average, that means you have to have this technology in order to compete, and maybe at some point even to have, like, an OK job. And the problem is, much like with the smart drugs, once twenty-five percent of people are doing it, you're pretty highly incentivized to do it, too.

[00:21:52]

And not just peer pressure, economic pressure. And another example... I mean, frankly, that's kind of the nice version. That's the version where you take Modafinil and you discover the secrets of the universe. A more depressing version is one proposal we read about, which I think was meant positively. We heard the word "electroceuticals" used, so let's use that. Electroceuticals, meaning something that acts like a drug on your brain but is done via, say, electric or magnetic fields.

[00:22:24]

Suppose there were an electroceutical method that results in increased focus, or detects when you're drifting and refocuses you. So there's a nice version of you wanting this, which is, say you have a crummy... I shouldn't say crummy job. Say you have a dangerous, low-paying job, like you work in a meat factory. It would probably be really helpful to you to have a machine that says, hey, you're drifting, and you're holding a machete, you know, an ultra-sharp knife.

[00:22:49]

Yeah.

[00:22:50]

Or if you're doing surgery, and, you know, jobs like that, it'd be good. Or a truck driver, well, probably at that point we won't have truck drivers anymore. But just for example.

[00:22:59]

Yeah, yeah. But jobs like that, or flying a jet plane into a war zone, you know, there are jobs where possibly it would be a good thing for you to have. But the scary thing to me is, let's suppose you're working an office job and it's possible to detect when you're, like, drifting. I don't know. I mean, maybe that's good. Or maybe there's a nice version where you work fewer hours because you're just so focused.

[00:23:19]

But that seems like an ugly direction this could take. As a general way to think about it, there's an extent to which you're offloading metrics and control over your own brain. You know, that obviously isn't going to be a bowl of cherries, let's say.

[00:23:35]

So a different way to look at the transformative question is, well, to zoom out a little bit. You may already know this, but I did not realize until pretty recently just how uniquely transformative the Industrial Revolution was. Like, if you look at graphs of GDP or productivity or life expectancy, or percentage of the world not living in absolute poverty, any sort of metric of, you know, human well-being, and you look at that over centuries, the graph is pretty flat until just about the 18th century, when it just rockets upwards in this sort of hockey stick shape.

[00:24:22]

And we've sort of been on this steady upward clip ever since then, thanks to technology developed in the Industrial Revolution, like the steam engine. And the technology that we've invented in the last, say, 50 years has kept that growth going, but still sort of at the same rate. Like, even computers and the Internet, none of that produced another kink in the graph such that we're rocketing upward at a, you know, significantly higher rate than we were before.

[00:24:54]

So I guess I'm curious whether you think any of the technologies you looked at and talked to scientists about have the potential to spark another Industrial Revolution, something analogous, that could be like a phase shift for our growth?

[00:25:10]

Yeah, potentially. This gets into difficult-to-predict future stuff, of course. Sure, totally. The way to say it is, if you went to someone in the 18th century and you said, you know, the technology that makes looms work a little faster, that's going to result in people starting to cure cancer two hundred fifty years from now, that would have been a non sequitur. Which is to say, there might be something now that seems trivial that ends up being the most important thing, and we're just not thinking about it.

[00:25:39]

That said, we have a chapter about fusion reactors. It's by far the most well-known technology we talk about, but we did try to get sort of into the nitty-gritty of what's holding it back and what's going well. And something I like to think about when you talk about increasing GDP: up until, I want to say, the late '60s, increasing GDP was deeply tied to energy use. And it wasn't until that period that the two kind of disentangled.

[00:26:06]

And I don't necessarily think that was just technology. I think a lot of that was the environmental movement, concern about efficiency, that type of stuff. And, this is totally speculative, I have no science, no evidence, nothing, but it's something I think about from time to time, which is: what if we were in a world where we could just be completely profligate with energy? Like, not just us as people, but people running factories or, you know, people working on the future?

[00:26:30]

You know, one of the technologies we talked about, which is almost certainly completely implausible except under particular circumstances, is a potentially better way of going to space in a rocket, where you use an incredibly high-powered laser to shoot energy into the back of the rocket to get more acceleration per unit of fuel. But I think we calculated it requires the equivalent output of 50 large nuclear power plants at the same time. So that in and of itself, never mind the technical hurdles, is a huge barrier, because energy is dear.
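A quick scale check on that figure. The plant output and US generation numbers below are rough public ballparks, not calculations from the book:

```python
# How big is "50 large nuclear power plants at the same time"?
large_plant_watts = 1e9            # ~1 GW, a typical large nuclear plant
laser_watts = 50 * large_plant_watts

us_avg_electric_watts = 5e11       # ~500 GW, rough average US electric output
print(f"laser draw: {laser_watts/1e9:.0f} GW, "
      f"about {laser_watts/us_avg_electric_watts:.0%} of average US generation")
```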

[00:27:02]

So I wonder about that: if we got into a scenario where energy cost almost nothing, whether there would be surprising new things we would do. And I say that not knowing what they would be. But I just wonder, given the history of human life improving as we get more access to energy on an individual basis. To say it this way: suppose nuclear fission power went the way people thought it would go in the '40s.

[00:27:32]

That it made energy incredibly cheap. I don't think anyone ever seriously thought, as is sometimes claimed, that it would be too cheap to meter. But I think something like that could have happened, or it could have gotten so cheap we wouldn't think about it. We wouldn't be concerned with efficiency. We wouldn't be concerned with pollutants or CO2 issues. It might be a very different world. It's hard to say. I don't know. It might be different in ways we aren't thinking about.

[00:27:56]

Yeah, yeah.

[00:27:57]

I agree. I had actually thought of fusion as a potential answer to that question as well. Although I wonder if brain-computer interfaces could also fill that role, if they could make us significantly more intelligent, just because I could see that being the catalyst.

[00:28:14]

Yeah, I think there's probably some basic economics. I mean, whenever I say this I feel like a jerk, but there's just very good evidence that increased IQ equals increased productivity. So if you could twiddle that knob for everybody... I mean, there might be weird social effects we're not anticipating that would be terrible. Maybe nobody would want to work if we were all super geniuses. But then I guess economics would say that the janitor would make millions of dollars a year, so it would work out.

[00:28:41]

Yeah, it's complicated. This is actually a good segue into another two-part question I wanted to pose to you. Of the technologies you looked at, which do you think is least risky and which do you think is most risky, sort of to society or, you know, civilization as a whole?

[00:28:58]

I would say least risky has got to be organ printing, or maybe, like, precision medicine, any of the medical stuff. To the extent we could come up with some way it's bad, the good so outweighs it. With organ printing, you can say, well, it's going to change the way we think about our bodies. And that's probably true. I don't know if it's negative, but it's weird. On the other hand, there are, whatever it was, one hundred twenty-two thousand people in the US waiting for an organ. Like, you know, it's very hard to say...

[00:29:23]

Well, there's an ethical conundrum with giving you, you know, an immediate exit from dialysis, because of, like, our conception of ourselves.

[00:29:30]

Yeah. Yeah. So I think it's hard for me to imagine a serious downside to that. I mean, you can get a little philosophical about how it's going to make culture, I think, strange to people of our generation, even to generations that have this sort of stuff. But that's their problem, you know. Right. In terms of dangerous to society, not counting the stuff that might make us inhuman: something that we thought about,

[00:29:54]

and this is part of the space launch chapter but also part of asteroid mining, is there's a sort of physics problem with bringing stuff home from space, which is that if you have an oopsie, you've just got, like, a ballistic hunk of metal coming at the planet. And it's not terribly different from dropping a nuclear bomb. Oh. And so there's a sort of fundamental problem, you know. So we came to the conclusion that probably the utility of asteroids is not to bring stuff home. But there are scenarios where you might want to bring stuff home.

[00:30:27]

But if you did, think about it like this: suppose you wanted to bring back a relatively small hunk of metal, like, say, a thousand tons of iron. Do you trust another country? I mean, I don't know if you trust your own government to prosecute bringing that home somehow. But do you trust, like, Vladimir Putin to bring that home? Bearing in mind that, you know, the physics of this stuff is pretty Newtonian. If you wanted to deorbit it into a city... not with a bomb, but with an object with the energy to deliver a bomb-like explosion, it really wouldn't be that hard.
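A back-of-the-envelope version of that comparison. The mass comes from the example above; the entry speed is an assumed round number near Earth escape velocity:

```python
# Kinetic energy of 1,000 metric tons of iron arriving at roughly
# atmospheric-entry speed, expressed in kilotons of TNT.
mass_kg = 1_000 * 1_000            # a thousand metric tons
velocity_ms = 11_000               # ~11 km/s, near Earth escape velocity

energy_j = 0.5 * mass_kg * velocity_ms**2
KILOTON_TNT_J = 4.184e12           # standard TNT-equivalent conversion
print(f"~{energy_j / KILOTON_TNT_J:.0f} kt TNT equivalent")
# Roughly 14 kt, in the range of the Hiroshima bomb. Illustrative only:
# real entry speed and ablation losses would change the number.
```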

[00:31:00]

And as far as I can understand, there's no way around that. Whereas, you know, with an ICBM with a nuclear warhead, the nuclear warhead is kind of a ticklish mechanism. You know, things have to go just right or it just blows itself apart and doesn't react properly. So, literally, if you could shoot a cannonball hard enough at the tip of an ICBM and actually hit it, and actually hitting it is the hard part, of course...

[00:31:24]

But if you can actually hit it, you could disarm it. You would either get a minor explosion, or you'd just, you know, get some nuclear junk scattered around, but you wouldn't get the real danger of a nuclear bomb. But if you have a thousand tons of tungsten falling toward New York, the solution to that problem might be worse than just letting it hit. You know, you might be able to deflect it with an enormous amount of energy.

[00:31:47]

You'd probably need nuclear weapons to do it. So this problem of how gravity works is a little freaky. I don't know that there's a good solution. I mean, you'd have to have really stringent laws about what people would be allowed to do if space is cheap to navigate.

[00:32:02]

Right. But the laws would have to be between countries; we would all have to have some way to follow and enforce the laws, which we've never successfully done. Yeah. And the other thing to consider, and we talk about this very briefly, is it's not just that we'd have to cooperate. It's that the first nation that solves this problem, the first nation that builds a space elevator or some other ultra-cheap method of going to space, has the greatest military advantage in the history of humanity.

[00:32:25]

It's like you get to fight every war from the top of a mountain, you know, almost literally, right? Yes. You know, the greatest and least creepy Heinlein novel is The Moon Is a Harsh Mistress. And one of the plot points is that the moon people are able to rebel because they can just launch rocks from the moon. It's one-sixth gravity and no atmosphere, so it's very easy to shoot stuff down at Earth. It's almost literally like being at the top of a mountain, so you may lose people, but you still win, right?

[00:32:54]

So this compounds the risk problem, because the more incentive there is to be first, the more you've got kind of an arms race dynamic, where, you know, taking your time to find safe ways of doing things and to build up a system of international governance and cooperation... there's a huge temptation to toss that by the wayside, because you just really need to get there first. Right.

[00:33:16]

And also, some of these concerns I think are embedded in the technology in interesting ways. So we read a lot about space elevators, even though the amount of space they occupy in the book is relatively modest. And a common proposal is that we'll build it at sea. And there are physics-y reasons to do that, because it might help you a little if you need to dodge something:

[00:33:37]

you can move the cable very slightly. And also there are spots in the sea that are very peaceful in terms of weather. But the other thing is, scientists tend to be very cosmopolitan, and in the case of space elevators, there's a really good reason to be, which is it would really be nice if we all went in together, so it wasn't just one group that had it. You know, that might be the least bad solution to the problem. It wouldn't solve it permanently, and it certainly wouldn't solve the problem of terrorism, of, you know, small groups of actors just trying to cause mayhem.

[00:34:07]

But something like that might be plausible: if at some point we realized we can do this, the best thing for the world would probably be for as many nations as possible to go in on some sort of agreement to build the thing together in international waters. Whether that's likely, I don't know. We did a little research on the early space stuff. The 1967 Outer Space Treaty supposedly governs a lot of space, but it's kind of a silly thing to even talk about.

[00:34:32]

We're talking about governing the rest of the universe, you know. And so the stuff in the '60s is very utopian. It's very Kennedy-ish. I think Kennedy laid out in his famous speech at Rice University that, well, we should use all of space to benefit mankind. Like, come on, we're all just going to agree to use all of space that way? We can't agree to use Earth to benefit each other, but we're going to use the rest of the universe that way?

[00:34:56]

It's just completely implausible. So I think cheap spacefaring opens up a whole world of problems. It's hard for me to imagine it doesn't open up great wars, unless we somehow have evolved our ethics to get beyond squabbling and distrust.

[00:35:13]

Since we're talking about risk, there's a different kind of risk I'm also interested in your take on. We've been talking about kind of macro-scale civilizational risk, but there's also this issue of the risk to the users of the technology, like space missions that come with some non-negotiable risk of, you know, the passengers dying. And I wonder... earlier in this conversation, we were talking about bottlenecks or obstacles that prevent technology from being developed.

[00:35:43]

I wonder if another bottleneck is just that we're not tolerant enough of risk at the small-scale level. It's kind of ironic, actually, because personally, I feel like we're a little too cavalier about risks at the society-wide level, like risks that could destabilize or wipe out civilization. But then at the same time, I feel like we're too uptight about risks at the level of, like, a handful of people potentially dying, which is, of course, terrible.

[00:36:09]

But we accept that level of risk all the time for much more mundane stuff that has less potential to expand the frontiers of our knowledge, like driving trucks, something that also kills a bunch of people every year. So, yeah, I guess I'm wondering if you think that being too averse to risk is a bottleneck for some of the technologies you've looked at?

[00:36:27]

Yes. I just want to bring up the name of a book that I think we reference briefly, by Rand Simberg, who wrote the book with the greatest title of any book in history, I think.

[00:36:37]

Oh, I know the one you're talking about. It's Safe Is Not an Option: Overcoming the Futile Obsession with Getting Everyone Back Alive That Is Killing Our Expansion into Space. Yes. "Overcoming the futile obsession with getting everyone back alive." I should say, it's a funny title, but I think he's basically right. And one of the arguments he made was, essentially, a lot of times risk aversion doesn't actually mitigate risk. So he discusses some cases, I think with the space shuttle, where basically escape hatches were built into the design that actually made it substantially more dangerous, because it's just one more thing to break.

[00:37:10]

And so there's that. But, yeah, I take the general point that we're probably more risk averse than is good. But I think at least in some regards, those are humane concerns. For sure. For example, if you look up, you know, what was done early on to find, like, the polio vaccine, there was a lot of testing on children. This is pre-FDA. Oh, yeah.

[00:37:30]

I'm talking about risks that adults opt into knowingly, just for the sake of discovery and exploration and adventure, like the explorers of ages past, the Ben Franklins who flew kites into electrical storms. And, you know, people took on risks because it was exciting. Yeah. Honestly, I see society maybe getting more risk averse, or at least more bureaucratic. But on the individual level, I don't know that I see that. So, like, if you look at the shuttle disasters, NASA funding was affected by this sort of thing, by people getting killed.

[00:38:04]

I personally believe that if we had a spaceport anyone could go to and launch a ship from, there'd be, like, Richard Branson people who would be like, oh, there's an even chance of survival? I guess I'll go. I think there are a lot of people like that. I really do. And I think a lot of the risk aversion is about covering yourself, and so it ties into bureaucratic or governmental stuff.

[00:38:26]

But I think if we had a system, you know, where you could just go if you had the money, I'm sure there would be rich crazy people, as was true in the age of exploration, who would just pay to outfit a ship with adventurers, as long as the government wasn't prohibiting it. Yeah. My wife is a parasitologist, for example, and there are all sorts of stories about parasitologists who, I probably shouldn't repeat this, but who want to bring some species home from South America or Africa, and they bring it in their bodies to get it through customs.

[00:38:57]

Oh, yeah. I wouldn't do it. I kind of wish I didn't know that. But yeah, I don't know that I agree that on an individual level there's that risk aversion. I think it's just, you know... I mean, up until the early 20th century, there were people quite dangerously going to Antarctica or exploring the Amazon, and that was as dangerous as going to space, I'm sure. And I think that dried up just because, you know, there's not much to really explore like that anymore.

[00:39:23]

I think that impulse is still probably quite available. It's just we don't have a place for those people. But if you could say, hey, do you want to take... I mean, a good example: there's a project called Mars One, which was, do you want to take a one-way trip to Mars, kind of for a reality show, I should say, run by kind of sketchy people. It's not like it's by NASA. And they got 4,000 people to sign up for a one-way trip.

[00:39:44]

And surely you must know, unless you're just not thinking hard, that you'll probably die on Mars, away from your family. But people are willing to do this. I don't know that we're risk averse on the micro level to that extent.

[00:39:58]

Yeah, I guess I did mean on the societal level. Like, is the government willing to fund things that have a non-negotiable risk of death?

[00:40:06]

And I think, I mean, that's what's exciting about cheap space travel. The government kind of has to be risk averse, right? They're representing our assets, to an extent, you know what I mean? So it's reasonable that they're risk averse. But if you have, like, Elon Musk, who is planning to become king of Mars or something, I guess, he's free to risk his life doing that. I think there is a quorum of people of that sort who, for better or worse, once it's an option, will go around exploring the solar system.

[00:40:34]

Cool. Well, last question about Soonish. I'm just curious how, for you personally, your attitude about technology changed from doing all of this research. Like, if you compare yourself now to two years ago, or whenever before you started compiling your list and investigating these technologies, do you feel more or less optimistic now than you were before?

[00:40:58]

Yeah. So there are kind of two parts to that: one, how has our impression changed, and two, are we more or less optimistic? Our impression totally changed. One thing that probably shouldn't have been shocking, but was, is that every time we dug into a technology, pretty much universally, it turned out our preconception, wherever we had gotten it, was totally wrong. What we thought was the hard part turned out to just not be that interesting, and conversely, there would be things that were totally impossible, or really difficult, that we just hadn't even thought about.

[00:41:34]

It kind of messed with my worldview, almost. I feel like I got a little more reticent to have political opinions, if that makes sense, because I'm like, oh my God, I was just learning about how rockets work, and it turns out I totally didn't know what I was talking about. So how do I think I know how tax policy should work, you know?

[00:41:52]

Have you heard of Gell-Mann amnesia? No. So, Murray Gell-Mann, who's, um, I think he was a physicist. Yeah. He, uh, commented once that there's this funny thing that happens where, if you read anything in the popular press about a subject that you personally happen to be an expert in, you discover how off-base it is. Oh, God. They're just misrepresenting everything and misunderstanding everything. And then when you read about anything that you're not an expert in, you kind of forget that.

[00:42:19]

And you just sort of take it as truth. And, you know, you forget that there's no reason to expect that your particular field should be an exception. And, you know, maybe you should be more uncertain about everything.

[00:42:31]

Yeah, it's totally messed with my worldview. So I like to think it's made me a little bit of a better skeptic. You know, I'm ever more reticent to think anything, which is hopefully not too paralyzing, but it's probably at least a good impulse. I appreciated that you guys were willing... like, despite acknowledging the perils of making predictions right up front in the book, I appreciated that you were nevertheless willing to say, here's a thing that could happen.

[00:42:53]

Here's a reason to think it might happen. You know, I think it's good to speculate and to put very rough levels of confidence on things, instead of just being sort of compulsively agnostic. Yeah, I totally agree. I think the way we say it is, we are skeptical but optimistic. So there are things we want to have and would like to happen, and also things we're scared of. And with each chapter, it's sort of like holding the universe steady:

[00:43:21]

if this one technology changes, what might it do? Because, you know, that act of holding the universe steady is kind of a cheat, but it's what we always do when we're predicting the future. We don't think of all the different things that are going to happen, because you can't. You can barely predict one thing; you certainly can't predict 50. In terms of optimism about things getting done, I would say not too different, I think, because when you learn a lot about technology, it kind of gets you more excited.

[00:43:47]

But it also informs you of all the perils and difficulties.

[00:43:54]

So on net, about the same? Yeah, yeah, I would say about the same, but because of balancing, not because of a lack of change. Interesting. Well, actually, I take that back. I seem to be a little more optimistic about some things. Like, the precision medicine chapter, to me, was almost shocking, the new technology that's coming out. I think we briefly touched on something called circulating tumor DNA, which is the apparent fact that, at least for some cancers, you can detect solid tumors via blood tests. Which is incredible, because, people may not know this, a big difference between very dangerous and non...

[00:44:30]

I shouldn't say non-dangerous, but between more and less dangerous cancers, is just how hard they are to detect. So part of why it's easier to deal with leukemia is that it's in the blood, it's bloodborne. There's not, like, a secret tumor hiding somewhere in your body. So if we have a diagnostic that just says, hey, there's a tumor with this genome in your body somewhere, go look for it, that's potentially enormous. That whole paradigm is just really exciting.

[00:44:53]

You know, I'm not one of these "we're going to live forever starting next week" type of people. But I am optimistic that maybe within my lifetime there'll be something like a tricorder. It'll be a much more painful tricorder, it'll take like eighteen samples from different tissues, but it'll give you a relatively quick readout on what might be killing you right this second. So I guess I'm optimistic about some things and pessimistic about others as well.

[00:45:17]

Zach, before I let you go, I'd like to invite you to nominate a pick for this episode. So some book or blog or article or movie, something that has influenced you in some way. What would it be?

[00:45:30]

There are two books. Can I give one fiction, one nonfiction selection? I'm kind of a humanities guy on the sly, so I hate to leave out literature.

[00:45:37]

You're like a parent who doesn't want to choose between his favorite children. I know, I really do feel that way. So let me just say, with the caveat that there are too many to choose from, and I worry I'm going to be insulting a friend who wrote a great book by not mentioning it. But there's a delightful and underappreciated book, and I wanted to select a book maybe your audience hasn't heard of, by Jonathan Dowling, who's a quantum computing guy. He wrote a book called Schrödinger's Killer App, which is kind of a goofy title, but it's the closest thing I've found that, if you're a layperson who can do a little math and logic, can teach you what quantum computing is.

[00:46:17]

I'm sure there are other quantum computing people who would be like, well, this part isn't quite right, according to me. But from my perspective, as someone who is not a quantum physicist, it was just delightful. And incidentally, Dowling's also a great storyteller, and that's in there, too. And most shocking of all, it's a book by CRC Press, which is a group I love, but they don't usually publish,

[00:46:38]

I don't think, books like this. This is like a pop-sci book, a very thick pop-sci book, but there it is. You usually expect to just get sort of thick technical books from them, and this book has a bit of that, but it's very accessible to a nerd. I don't want to oversell it.

[00:46:52]

Well, I think my audience is disproportionately nerds, so that's not a problem.

[00:46:56]

I think to a person who enjoyed a little discrete math at some point in college, this is accessible. Accessible to someone who flirted with discrete math and topology.

[00:47:05]

Exactly. Exactly. Yeah. May I just recommend: there's a somewhat forgotten piece of fiction, but it's one of the best books ever written. It's hard for me to say it changed my life, because the way in which, I think, you know, great books change your life is kind of subtle, or happens by accumulation or reflection. But it's an almost forgotten book by a woman named Beryl Markham, called West with the Night.

[00:47:30]

It's a sort of collection of fictionalized personal stories. And it's really the only book she ever wrote. She put out one other book, which was kind of cobbled together and was, I think, done to make money, and it's mostly garbage, so it's not worth reading. But West with the Night is the kind of book you pick up and you can't stop reading. And it's not because it's suspenseful; it's because it's just so beautifully, perfectly executed.

[00:47:53]

And I found it because I was reading an old book of Hemingway's letters, not his complete letters, but a collection of Hemingway letters. And he mentioned it as sort of this book that makes all of us look like garbage. And I'd never heard of it. I'd never heard of Beryl Markham. So I went and got an old copy of this book. And, you know, sure enough, the Hemingway quote was the blurb on the back.

[00:48:14]

So I guess it must have had a resurgence after his letters got put out. But this is my little rant: I think as nerds, we often overlook literature as either sort of a froufrou thing or a luxury that we don't have time for, because we're in the serious business of being nerdy. But it's a book that's just good because it's beautiful.

[00:48:33]

And I think it'd be nice if people in the sciences, people with a bit more logical bent, made a little time for that sort of thing. Cool. We'll link to both of your favorite children, the fictional and the nonfictional ones, as well as, of course, to Soonish and to SMBC. Zach, thank you so much for joining us, and congratulations on your book launch. I also feel like congratulations are in order on making it through writing a book with your wife, which must be up there in the grand list of relationship trials by fire, like trips to IKEA.

[00:49:10]

I've been surprised by how surprised people are. But I totally get it. You know, I normally love picking on my wife, but let me just say, having a reasonably mature person who can communicate and think rationally to work with is invaluable when doing anything. That actually came through in the little comic illustrations of your working relationship sprinkled throughout the book.

[00:49:38]

So I'm not surprised, but glad to hear that. Yeah, cool. Well, thanks so much for joining us. It's always a pleasure.

[00:49:46]

I'd love to come back again sometime.

[00:49:48]

This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.