[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:47]

Massimo, today we're going to discuss the philosophy of transhumanism, which is the idea that we should be pursuing science and technology to improve the human condition, modifying our bodies and our minds to make us smarter, healthier, happier and potentially longer-lived.

[00:01:05]

So there are a couple of separate but intertwined issues here, the first of which being: how feasible are the technologies that we would need in order to accomplish the transhumanist goals? And the second being: even if we could accomplish those goals, is it a good idea? Are these goals that we should have?

[00:01:22]

But before we get to the actual issues: are you saying that transhumanism, therefore, is more than just the idea that we should be using technologies to improve the human condition? Because that seems a pretty uncontroversial point. I mean, everybody who takes medicines, or goes to the doctor, or flies in an airplane, or anything like that, wouldn't object to the idea that, yes, technologies should be used to improve the human condition. Right?

[00:01:47]

Well, that's actually the argument that I and many other at least somewhat pro-transhumanist people would use to defend transhumanism: that it's really just an extension of what we've always done.

[00:01:59]

But what makes it feel different from the sort of thing you're describing is that it's transforming us to the extent that we no longer feel like we are actually human, recognizably human.

[00:02:10]

Right.

[00:02:10]

So, the demarcation line between transhumanism and humanism. Consider that I consider myself a humanist, as you know, and the former did grow out of the latter.

[00:02:24]

Well, actually, I'm going to question that even historically. But OK. So I consider myself a humanist, in particular, of course, a secular humanist, although these days religious humanists are few and far between. And you are saying that the major distinction between, say, a humanist perspective and a transhumanist perspective is not the use of technology to improve the human condition per se, but the use of technology to radically alter the human condition, as in, say, altering the human genome, the germline, or finding ways to, you know, transfer our consciousness somewhere else, or things of that sort.

[00:03:01]

We're talking about that kind of radical solution, right?

[00:03:04]

Yeah, that sounds about right, including modifying our bodies to become part cyborg; modifying our minds in some way, either through genetic engineering or through implants, to improve our mental abilities to make us far smarter, or to improve our emotional abilities to allow us to take more pleasure in things, or to get rid of some of the emotional instabilities that humans seem to have built in. And then there's life extension, which I think is not really traditionally a part of humanism.

[00:03:32]

Well, see, that's where things get, again, pretty fuzzy, because I think few people today have objections to, say, using chemicals to alter our mood. If you're depressed, you take antidepressants, things like that. But I wouldn't consider that a transhumanist thing to do. I mean, this is something that a lot of people do, regardless of what their philosophy is. Or, for instance, your other example was life extension.

[00:04:02]

Well, again, we have extended human life significantly over the last several decades, and the last couple of centuries, and nobody seems to have a particularly strong objection to that. We're talking about radical extension. We're talking about radical alterations, right? We're talking about a fundamental rearrangement of what it means to be human.

[00:04:22]

Right. I think that's spot on. And what's interesting is that when you put it that way, it really does sound very uncontroversial.

[00:04:30]

Of course we want to extend our lives. And of course we want to create technology to make us happier.

[00:04:37]

But there is a spectrum. And I think that people start dropping out at certain points, where they say, no, we're no longer in the realm of acceptable fixes for problems.

[00:04:48]

And now we're getting into hubris, into playing God, into, well, changing what it means to be human. Right. And so on the life extension issue, they're not talking about increasing lifespan from, you know, an average of 85 to 90 or 95 years. They're talking about: can we figure out what actually causes our cells to die, what are the mechanisms that cause us to die, and how can we fix that?

[00:05:14]

We're talking about immortality. Well, I mean, it's not like we would be invulnerable to wounds or, you know, getting hit by a car.

[00:05:21]

But I guess indefinite lifespan is the term they would use.

[00:05:25]

But again, let me go back one more step, which is: it's not quite fair to say that even much less radical, much milder improvements in the human condition are uncontroversial. I mean, for instance, we live in a society where there is a lot of discussion, and I think for good reasons, about the use of chemicals, you know, mind-altering or mood-altering chemicals. Right. I mean, there seems to be a general agreement that, yes, they need to be used, certainly in very specific cases.

[00:05:56]

But should they be used on millions of children just because we want our children to behave slightly more calmly than they would otherwise, and that sort of stuff?

[00:06:04]

So even in the non-radical area, there is actually quite a bit of discussion. That explains why, when you get to the radical applications, of course people are going to start raising objections. Now, let me ask you a question. You've read quite a bit, I assume, about transhumanism. You consider yourself a transhumanist.

[00:06:23]

Is that correct? Or sympathetic to it, anyway? Yeah, that's good enough.

[00:06:28]

So transhumanism derives historically from futurism; it is a type of futurism. And one of the things that always struck me about futurism is that futurists are really, really bad at predicting what's going to happen in the future. What makes transhumanists think that they actually are going to be any better?

[00:06:44]

Well, what I would say is that what they're actually saying is that these are desirable goals to pursue, and that based on what we currently know, none of these goals seems impossible, with potentially the exception of uploading, which is sort of a separate issue, I think. Mind uploading.

[00:07:00]

I'm sorry? Mind uploading. Mind uploading. Sorry.

[00:07:03]

Uploading one's mind to a computer. There are some philosophical issues involved there about whether that would actually be you, or whether a consciousness could exist in a computer. But setting aside that, which is sort of the most radical of their proposals, and we'll probably do a separate episode on it.

[00:07:17]

Yeah, we should actually; there are some really interesting issues there.

[00:07:19]

Not now. But for all of the other technologies that they propose that I'm aware of, there's nothing that makes them seem a priori impossible, based on what we know now. And in fact, in a number of those dimensions we have actually made some amount of progress already. And so all that they're saying, at least most of the transhumanists I've talked to, is that these are desirable goals to pursue, and we should be pursuing them faster and with more effort and funding than we currently are.

[00:07:45]

So to the extent that any of them are actually making concrete predictions about when something will happen and exactly in what form it will arrive, I would say, yeah, that's pretty unfounded.

[00:07:57]

And you're right that futurists have a pretty terrible record of making concrete predictions about things more than a year off in the future.

[00:08:03]

Right. Futurists also have a terrible record of underestimating the problems with nanotechnology, but maybe we'll get to that later in the show. So the issue is: if you define it that way, the way you just did, that is OK.

[00:08:19]

Well, then this is about what sort of technological research agendas we should be pursuing. To some extent, then, it may be much ado about nothing, because some of these technologies are way far into the future, if in fact they're coming at all.

[00:08:37]

We still need to start now if we're going to achieve them. We need to start talking about it, and we need to start doing something that is preliminary to it. I mean, for instance, let's say they want to do that, or they're trying to do that, right? But I don't see any actual research program coming directly out of the transhumanist effort. I mean, you know, let's take an example, OK? People have been talking about artificial intelligence for a long time.

[00:09:00]

And we're going to pursue it regardless of any transhumanist who says that we should be pursuing it, simply because it's interesting in and of itself, simply because it has obvious potential applications that a lot of people are interested in. So transhumanism doesn't seem to be adding anything there, other than sort of telling people that this is a good idea. But nobody's really disagreeing that it's a good idea. Or am I getting this wrong about transhumanism?

[00:09:25]

Well, I think artificial intelligence is somewhat tangential; it's part of some of the proposals they've come up with. But even in the case of artificial intelligence, a lot of the researchers are actually transhumanists who are pursuing AI because they think it could be really valuable for transhumanist goals. So in that sense, the reason that AI is continuing, a large part of that, is due to the transhumanist agenda. Another example would be... Well, wait a minute, let's stop for a second.

[00:09:51]

I don't know what the numbers actually are. I guess it's hard to come up with numbers in these cases. So I don't know whether it's true or not that the majority of researchers are transhumanists. Certainly some major figures in the strong AI program are. Yeah, but then again, the strong AI program is the one that, interestingly, has failed abysmally so far. We've made a lot of interesting advances in weak AI, but strong AI, you know, people like Marvin Minsky have been talking for decades about certain things that were supposedly just around the corner. It's hard.

[00:10:19]

Well, I don't doubt it's hard, but they've been talking about things happening just around the corner for decades, and they simply haven't. In fact, if you're outside of that community, a lot of people that I talk to consider the strong AI program dead, because it hasn't actually made any significant breakthrough in decades.

[00:10:41]

All right.

[00:10:41]

Well, I mean, I don't want to turn this into a referendum on artificial intelligence in particular, but I don't think that's even necessary, just because there are so many other research programs that they're pursuing, like Aubrey de Grey's program to pursue radical life extension.

[00:10:59]

And this is really based on... based on what?

[00:11:04]

No, no. Yeah, I think that would be desirable. No, I'm talking about the technology. What sort of technological avenues are they pursuing?

[00:11:13]

I don't understand it well enough, but his basic point is that we can think of aging as this horrible disease that afflicts all of us at some point or another.

[00:11:24]

And the fact that there hasn't been more research to actually understand why it happens and see if we can prevent it from happening to us is appalling.

[00:11:32]

And so he's a perfect example. First of all, it's simply not true that there hasn't been research on aging. There are large areas of both molecular biology and evolutionary biology where people have done research into aging for decades. We know a lot of the molecular mechanisms that cause aging in animals, for instance. You know, a lot of this research is done on things like flatworms and fruit flies.

[00:11:56]

But nonetheless, there is a large research program, relatively large, really not that large, but significant, with, you know, a good number of millions of dollars spent over a period of a couple of decades, and a good number of papers published in that area. I've actually even been tangentially concerned with it, not because I've done any research on it directly, but because I've been peer reviewing some papers by colleagues in genetics and molecular biology that deal with these things. So that, in fact, seems to be exactly the point I was trying to make earlier.

[00:12:32]

That is, some of this research is being done entirely outside of transhumanism. It's not done for the same reasons. I mean, none of those colleagues that I'm referring to necessarily thinks it's a good idea to become immortal or anything like that. But they do think that aging is an interesting question at a fundamental scientific level; I mean, we want to know why and how it happens.

[00:12:56]

And it's interesting in terms of consequences, but the kind of consequences we're talking about are actually much more immediate: not necessarily life extension as much as extending life quality. So getting us to a point where, yes, we are aging, but aging more slowly and maintaining a higher quality of life, both physically and mentally, for as long as possible, and that sort of stuff. So these things are already being done, and as far as I can tell, with no input whatsoever from the transhumanist community.

[00:13:26]

Well, a number of the aging researchers are transhumanists. So, again, as with AI, they are actually furthering the field.

[00:13:32]

Again, I'm not a gerontologist, but I've read a little bit of what Aubrey de Grey has said.

[00:13:36]

And the case that he's making is that they're not looking at radical enough solutions, that the drugs they're trying to develop are going to extend life by, you know, a few years at best. And what we really should be doing is looking for a cure for aging, not for a way to forestall it a bit.

[00:13:54]

So I can see that there are two possible objections to that sort of line of reasoning. One, of course, is that to talk about a cure, about curing aging, assumes that aging is a disease. And that's an interesting loaded term right there. So we could have an interesting discussion about that, because normally most biologists don't think of aging as a disease. They think of it as a natural process.

[00:14:16]

Do we have reason to believe now that it's impossible to prevent aging, that we can increase our lifespan a little bit but not by a lot? I've never read anything saying that's the case.

[00:14:25]

There's no reason in principle, of course. But we don't have any particular reason to believe that we will succeed in doing anything like that, either. And I'm not sure that it's worth trying, but we'll talk about that. Right, that gets into the ethical area.

[00:14:38]

But before we get to the ethics, if we get there at all, it seems like saying, well, we're not doing things radical enough... Let me come up with an analogy. It would be like saying, well, you know, the space program isn't trying hard enough to get to the stars, because we haven't gotten even to Pluto.

[00:14:54]

Well, you have to get the technology to get us that far within the confines of the solar system first, and then you can seriously start thinking and pushing toward interstellar flight. In fact, the analogy, I think, is particularly good, because interstellar flight, as far as we understand right now, would require radically different technologies from interplanetary flight within the solar system. So it seems to me that, again, this is sort of like jumping the gun: you know, how are you going to get to the stars if you don't even know how to get to the next planet?

[00:15:27]

I mean, again, I don't understand the exact technologies that are being talked about here. But from what I understood from his statements, there are different avenues, and if you actually believed that it was possible to accomplish the radical agenda, then you would be pursuing different avenues of research. And he talks about the kinds of social logjams that perpetuate this suboptimal line of research. The public thinks nothing can be done to cure aging, so politicians aren't willing to be seen allocating a lot of money to something that their constituents see as an ambitious pipe dream.

[00:15:59]

And so scientists know they're not going to get funding for more radical anti-aging research, and so they don't work on it, they don't submit proposals for it, and they don't talk about it, which just reinforces the public's idea that it's impossible, that sort of thing.

[00:16:11]

I can see that for a lot of radical-sounding... Yes.

[00:16:15]

...technology, I can see that reasoning. But again, I can exactly turn that reasoning around and say, well, that's actually a rational way to proceed, because we need to have an idea of where to go before we start investing millions of dollars in a particular research program that nobody, at this point, seems to have a particularly good idea where it goes. But now it seems like we're pretty much at the point where we should be talking about the consequences of this stuff.

[00:16:38]

Yeah.

[00:16:38]

So, you know, one of the obvious objections that has been raised to the idea of radically altering, you know, the conception of what it means to be human, and so on and so forth, is that if history teaches us anything, it tells us that any new technology of that sort is more likely than not, at least in the beginning, to increase the disparity and level of injustice between people, because some people are going to have access to these technologies, which presumably are going to be very, very expensive, especially initially, and others are not.

[00:17:12]

How do transhumanists deal with that sort of objection?

[00:17:15]

Well, just like with all technologies, they start out really expensive and only a few can afford them. But we don't treat that as a reason not to develop them. We develop them.

[00:17:24]

And then as they become more widespread and it becomes easier to produce them, they become cheaper and more people can afford them. I mean, the washing machine and dryer used to be a luxury for the rich, and that didn't stop us from developing that technology. Most people in America can afford them.

[00:17:40]

Right.

[00:17:40]

But that's fine if we're talking about technologies that are not actually necessary for life, or that are not dramatically altering, you know, your very conception of what it means to have a life. It seems like in this case it would be very easy to envision a situation where essentially you create two classes of human beings that are radically different from each other. And whenever we create classes, we also create conflict. How do we deal with that?

[00:18:08]

It's a good point, and I've seen it raised before, but I think the dichotomy of the haves and the have-nots is probably a little bit oversimplified. I could see a continuum.

[00:18:17]

I could see a spectrum of people, depending on their level of means and what they're able to afford, determining what enhancements they can buy for themselves.

[00:18:26]

But, you know, we have a spectrum of wealth and power and strength and intelligence now.

[00:18:33]

And we don't have people turning on each other any more than... well, we do have people turning on each other. But that's a fact of inequality of any kind, right?

[00:18:41]

I was going to say, I mean, have we ever heard of revolutions? I mean, they start exactly when people perceive that there is a huge gap, a huge inequality, that takes root in a society. Now, it's hard to imagine a larger degree of inequality than having, say, a small number of people who are essentially immortal, and the rest of us who have to be content with living, you know, 70 or 80 years, or whatever it is that we can now achieve in Western societies.

[00:19:09]

That seems to me like a huge disparity. This is not like, well, you know, I have a yacht and you don't, or I have a private airplane and you have to take Delta Airlines. It's much, much more radical than that. And it seems like it's hard to imagine how that wouldn't translate immediately into conflict.

[00:19:27]

OK, so just to be clear: if this were available to everyone, would you support it then, or do you have other objections to it? No, there are other objections to it. But this one seems to be prior, logically prior at least, because obviously we wouldn't get the technology to be universally accessible unless we actually had the technology to begin with. Right. So it's hard to imagine a situation where you develop the technology and it becomes immediately universally accessible.

[00:19:52]

So just to be clear, Massimo, you're not objecting to the idea of radical enhancements to the human condition or to human nature on the grounds that they're unnatural or that they would destroy human dignity. These are common objections. I don't know if you agree with them or not.

[00:20:05]

No. I'm not sure about that, but there's not much dignity in being human anyway, so that will not be one of my objections. But there is an objection that is related to that. I suppose I would phrase it differently, and that is that we really don't know what we're talking about at that point, if we're talking about radically altering what it means to be human, as in, you know, either radical life extension, or uploading your consciousness somewhere else, or changing the way our minds work.

[00:20:29]

And we're changing that, right? Exactly, changing our minds, or dramatically changing our genetic line, and so forth. I don't think these are things that one can necessarily have objections to in principle, though plenty of people do; I know people who do. But that wouldn't be my line of questioning. The line of questioning is: well, do you have any idea, not you personally, obviously, but transhumanists in general, do they have any idea what they're playing with?

[00:20:52]

Because it seems like they don't. It seems like one of the underlying assumptions here is what I sometimes refer to as techno-optimism: the idea that if something can be done technologically, it's going to be for the better, and things are going to work out somehow in a good way. Now, I am certainly not a Luddite. I'm not against technology, far from it. But on the other hand, I do realize that new technologies, even in the recent past, have transformed our lives in disastrous ways.

[00:21:23]

And so I'm not ready to buy the idea that a technology that would radically alter what it means to be human is intrinsically a good thing without further discussion. And I don't see how we could possibly get enough data to get that discussion going. Right.

[00:21:38]

So I would definitely agree that there's always some element of risk when you're tampering with a system as complex as human nature. But I think that there are still things that we can say about how likely a proposed modification is to result in negative unintended consequences.

[00:21:52]

There's a great paper by Nick Bostrom, who is a transhumanist, and he argues, I think convincingly, that when you're deciding whether the modification you want to make is safe, you want to ask yourself: if this proposed modification would result in an enhancement, why have we not already evolved to be that way? And that's really the question that determines whether we should trust in the wisdom of Mother Nature on this one, or try to take matters into our own hands.

[00:22:18]

But even given that reasoning, there are plenty of cases in which we can pretty confidently say that just because our nature is a certain way, that doesn't mean that's the best way to be. So, for example, a modification might be an enhancement from our perspective as individuals, but not from our genes' perspective. And that would be a reason why it wouldn't have evolved, and yet it would still be good for us to create.

[00:22:39]

Or it could be that it was evolutionarily adaptive in the environment in which we evolved, but not in the environment in which we currently live.

[00:22:47]

Or it could just be that evolution was incapable of producing that enhancement because it reached a local maximum. That's why my spine isn't perfect.

[00:22:55]

Or, you know, or evolution takes a long time, especially with a recent development like our brains.

[00:23:00]

Right. But that's a great example of how my mind works very differently from a transhumanist's mind. OK, it would never occur to me to bring evolution into this, because I don't think for a moment that just because something hasn't been done by evolution, therefore it's not good. I mean, that's a classic example of the naturalistic fallacy: something is not natural, therefore it's not good. No, they're arguing the opposite.

[00:23:24]

But what I'm saying is, it never occurred to me as a sensible objection or criticism of transhumanism to argue along those lines, that, well, no, we shouldn't do it because it's unnatural. Actually, the point I was making here is not that the critics are committing the naturalistic fallacy, but that there is actually a grain of truth in what sounds like the naturalistic fallacy: that it is risky to tamper with human nature and to do things that are unnatural, just because we don't know what the consequences are going to be.

[00:23:51]

Yeah, we may simply disagree on how large that grain is. I think the grain is as large as a mountain. But I heard before the show that you have some objections or criticisms of your own about transhumanism. I'm curious to hear what those might be.

[00:24:03]

Oh, well, so this was actually one of them. Another problem that I have is with the technology of cryonics.

[00:24:15]

Actually, it's not with the feasibility of cryonics, which is the idea of freezing people, when they seem to be what we would now consider clinically dead, so that they can be revived at a future date, hopefully fixed by some future technology and resuscitated.

[00:24:32]

I actually don't have as big of a problem with the feasibility of it as I thought I would when I started researching it. It was surprisingly hard to find scientific rebuttals to cryonics. The majority that I found were popular rebuttals that relied on really outdated evidence; like, Penn & Teller did a Bullshit! episode about how the freezing would turn your brain to mush, and Michael Shermer wrote an article to the same effect. And the current technology that cryonics uses doesn't rely on that anymore.

[00:25:01]

Well, but there is another possibility there, which is that when scientists don't bother responding to an idea, it may be because they think it's so absurd that they don't even spend their time responding.

[00:25:13]

And I suspect that that is the case in this instance.

[00:25:15]

But anyway, if you can explain to me why it's so ridiculous, I would love to hear it, because I couldn't find anything. Because what we know about, for instance, the human brain is that it is incredibly complicated and very, very prone to decay essentially instantaneously, as soon as something stops working. Anyway, there are technical reasons to think that cryonics is not going to work, but we're not going to get into that. What was your objection?

[00:25:38]

Oh, my objection is just that I'm very risk-averse, and I'm kind of terrified of the future.

[00:25:44]

So, I mean, I'm terrified of the idea of being revived and being in horrible pain and unable to communicate it, or being revived in a dystopian society. I mean, you're essentially putting yourself at the mercy of whoever happens to be in charge of the cryonics facility.

[00:26:02]

And when they revive you.

[00:26:04]

And I'm not confident enough that they are going to be kindly future beings who revive me. Right.

[00:26:11]

Well, so you're not necessarily a future optimist? Well, maybe not at all.

[00:26:15]

Maybe there is a pill they can give you to fix that. Before we wrap up, and we're running out of time, I want to bring up one of the comments from our teaser. Harry S. Pharisee rhetorically asked: why is transhumanism not considered an offshoot of religion? And several other commenters, including Ian Pollock and Michael Riggs, objected that that was an unfair and all too common comparison, which I would agree with. But I know you've made that comparison before, Massimo, at least about libertarians.

[00:26:43]

I don't know if you feel the same way about transhumanists.

[00:26:45]

No, I'm not sure that they would qualify as a religion. I mean, my definition of religion has to include a belief in a supernatural being of some sort. So, no, I wouldn't consider anything like that a religion. I would consider it much closer to, say, Scientology, which I don't consider a religion; I consider it a techno-optimist, science-fiction-inspired cult. Now, I'm not saying that transhumanism is like Scientology, but if you had to come up with a derogatory analogy, because I imagine the listener's comment was derogatory, I wouldn't pick a religion.

[00:27:18]

I would pick a science-fiction-inspired cult like Scientology. But I'm not saying that that's what transhumanism is.

[00:27:24]

See, I think it's not a fair comparison. I think the reason they get associated is that they have similar goals. I mean, the transhumanists' goals are to live as long as possible, you know, ideally indefinitely, and to transcend their human form. And so these goals are very similar to religious goals. But what people forget is that what makes something a religion is not the goal you have.

[00:27:44]

It's what you think is the best way to pursue that goal. It makes a difference whether you're sitting down and praying to achieve your goal, or trying to invent new technologies. I would completely agree with that.

[00:27:52]

OK, all right.

[00:27:53]

We're out of time. We're going to move on now to the Rationally Speaking picks.

[00:28:13]

Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start with Julia's picks.

[00:28:22]

Thanks, Massimo. My pick is a book called Expert Political Judgment, by Philip Tetlock. It came out a few years ago, and I came across it while I was reading up for our previous podcast on expertise and deferring to experts.

[00:28:35]

So Philip Tetlock is a social scientist.

[00:28:38]

And a couple of decades ago, it occurred to him that we have all of these expert forecasters of political and economic events and there's very little actual record keeping about whether their predictions are accurate.

[00:28:52]

So he set up a database of close to 300 self-styled experts in punditry, and over the course of two decades he collected predictions from them about what would and wouldn't happen in the political and economic spheres, questions like: will South Africa's apartheid end nonviolently? Over that time, he collected something close to 80,000 predictions from them all.

[00:29:15]

And these were predictions that had actually been confirmed or falsified based on what had happened in the world. So his book is sort of a summary and analysis of what he found. Do they do any better than astrologers?

[00:29:26]

Maybe slightly; not a lot. The level of accuracy really is pretty appallingly bad. And one of the primary findings is that the amount of training you have doesn't seem to make that much difference. So having a Ph.D. in a field doesn't actually make you a much better predictor of what's going to happen in that field.

[00:29:48]

But the particularly interesting secondary finding, something that jumped out at him, was that there was one trait that seemed to determine whether your predictions were terrible or just sort of mediocre.

[00:30:00]

So he divided his experts into two groups, which he called foxes and hedgehogs.

[00:30:07]

It's a reference to an Aesop fable, but the gist is that the hedgehogs have one particular thing that they know, and they know it really well, and they make their predictions all based around that one thing.

[00:30:17]

So, for example, a Marxist theorist would try to explain everything in the world and make predictions all based around class warfare, or a classical economist might make predictions all based around the idea of people being motivated by profit and monetary incentives.

[00:30:32]

And then the foxes didn't have one particular area of expertise that they knew really well, but they knew a lot of little things, and they used sort of rough heuristics to guide their predictions. And the foxes were much better than the hedgehogs at actually making accurate predictions.

[00:30:45]

That doesn't surprise me, actually. And since we're talking about errors, my pick is along similar lines. It's a book called Being Wrong: Adventures in the Margin of Error, by Kathryn Schulz. Now, there are a couple of books that came out recently about error; I have not read the other one, so I'm not going to be talking about that one. But this one is very interesting, because she takes a very charitable sort of position about error.

[00:31:12]

She says that we learn from errors, obviously, and that error is an inevitable and, in fact, she claims, necessary component not only of our lives but of the way we learn. And the book is a really interesting exploration of the psychology of error: how people react to errors, what kinds of errors we're more or less likely to make. And more importantly, as I said, especially in the second half of the book, there's this positive message that we should embrace error, because in so doing we put ourselves in a much better position to learn from our experiences.

[00:31:47]

And it seems to me that that kind of approach is exactly what an ideal skeptic should take. The idea is not that skeptics are infallible, or that they always know better. The idea is that skeptics are people who adjust their beliefs in proportion to the evidence, and the evidence sometimes comes in the guise of errors. And you treasure those errors, because they're actually teaching you something. They're probably teaching you more, Schulz argues, than when it turns out that you're right.

[00:32:15]

I love that.

[00:32:16]

I love that message so much; it's something that I am personally trying to campaign for in my own life. And it reminds me of a quote that I used in a post I wrote way back on the Rationally Speaking blog.

[00:32:27]

And I said that one of the fundamental ironies of rationalism is that in order to actually be right as much as possible, you have to not care about being right in any particular disagreement you get into.

[00:32:38]

Sounds like a good thing.

[00:32:39]

Yeah, sounds like a good idea to aspire to, at least.

[00:32:42]

Great. Well, that's all the time we have. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:32:59]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, "Truth," by Todd Rundgren, is used by permission. Thank you for listening.