[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry, and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I am your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today?

[00:00:47]

Massimo, today we're going to discuss the phenomenon sometimes known as neurobabble.

[00:00:53]

So essentially this refers to people concluding things from neuroscientific evidence in a way that's either jumping to conclusions or misinterpreting the neuroscientific evidence, or even coming to logically incoherent conclusions about neuroscience, or about how our consciousness or our psychology works, based on neuroscience.

[00:01:15]

So we touched a little bit on that with a guest we had not long ago, Cordelia Fine. And that episode was focused on neurosexism, I suppose. Yeah.

[00:01:24]

So actually, there are, I mean, there are two main components to this phenomenon.

[00:01:28]

There's interpreting results wrong because of, you know, methodological sloppiness in how the studies were conducted. And then there's the conceptual problem of jumping to conclusions that are either not actually supported by what the neuroscientific evidence showed, or at least not supported uniquely.

[00:01:52]

There are other ways of interpreting it that you hadn't been thinking of, or jumping to conclusions that are just sort of philosophically incoherent.

[00:01:59]

So, yeah, I'd say it's partly methodological and partly conceptual.

[00:02:02]

Yeah, we could talk about both areas. And actually, sorry, I just realized that one of our other recent episodes was also relevant, the episode on free will. There's a lot of neuroscientific evidence that people want to interpret as either supporting free will or not supporting free will. Right.

[00:02:17]

And just to refresh people's memories, I think you and I were pretty strongly in agreement that the question of free will is really more of a philosophical question than an empirical question that could be answered by neuroscience. So that actually is a good example, right?

[00:02:31]

It is a good example, although I think my take, at least on that particular question, is that the neuroscience is certainly relevant. In fact, that's my take in general about the difference between philosophy and science: philosophy is about conceptual clarification of what we think and why we think it, and science provides the necessary empirical background for that conceptual clarification.

[00:02:53]

So if we're talking about free will, the philosophers can talk about what exactly people mean by it or what they might mean, and they can dissect the concept.

[00:03:03]

Some of the evidence from neurobiology will be relevant. In fact, the best thing that could happen, and it does happen occasionally, is for philosophers and neurobiologists to work together, so that the neurobiologist doesn't make claims that wade into the philosophical ideas without understanding the philosophy, and vice versa, so that the philosopher doesn't say things that are actually inconsistent with factual evidence from science. So then I'm actually somewhat misremembering our degree of agreement.

[00:03:31]

Yeah, yeah.

[00:03:32]

Well, actually, maybe this is why we get along so well. I just consistently misremember the degree to which we agree, and I think that we do. That's right. But by the next episode, you know, we will have forgotten this disagreement.

[00:03:44]

Yeah. And that may be fine.

[00:03:46]

But so, really briefly, my take in that episode was that there was no neuroscientific evidence that could possibly bear on the question of free will, because the only two logical possibilities are either that our brains are deterministic, in the sense that everything we think had a cause, that had a cause, that had a cause, and so on, and we don't have control over that deterministic process, from the first cause of our brain being created to what we are currently thinking now; or there's some randomness, potentially some quantum randomness, in how our brains make decisions.

[00:04:22]

But that also doesn't actually give us any free will, because obviously we can't have any control over random processes. So it's either one or the other, or some combination of the two, and none of those possibilities allows for what people colloquially like to think of as free will. So to me, it didn't seem like neuroscience actually bore on that question, and all of the neuroscientific evidence that people thought was relevant was irrelevant.

[00:04:45]

Yes. So I thought that was an example of neurobabble.

[00:04:47]

Sorry, that's okay. I mean, I don't want to revisit that discussion too much. But the interesting idea there, I think, is what you referred to as the colloquial understanding that people have of the concept.

[00:05:02]

Now, philosophers have something more sophisticated than the colloquial folk concept in mind. And in fact, they have several different things in mind when they talk about free will, somewhat different characterizations of it.

[00:05:14]

Of free will, that is. For some of which, I should say, the neurobiology is relevant, because one of the components of free will is certainly the human ability to engage in decision making, and to that, you know, the neuroscience is relevant.

[00:05:28]

Right. So this isn't actually a total tangent, because one of the inspirations for this episode was a recent article in Slate that you pointed me to, by Ron Rosenbaum, about what neuroscience can tell us about the question of who is evil, and whether evil is even a meaningful concept.

[00:05:48]

Right. So free will is relevant to that, because some people will say, well, if neuroscience shows that there's no such thing as free will, then, you know, you can't actually blame anyone for the actions that seem evil to us.

[00:06:01]

And so we can't actually judge any person to be evil.

[00:06:04]

Right. Well, before we get into the details, it's a very interesting article, which actually touches on other issues, not just evil per se, although that's the topic. But, you know, one of the classic objections to talk of evil, one that probably most skeptics would be sympathetic to, is that, oh well, evil is a metaphysical concept, and, you know, it's basically of religious origins and so on and so forth.

[00:06:29]

But Rosenbaum points out, actually, that even prominent new atheists use the word, like Christopher Hitchens.

[00:06:36]

So there's this quote from Christopher Hitchens, who invoked the word evil in his obituary of Osama bin Laden. Rosenbaum says that Hitchens admits wishing he could avoid using, quote, that simplistic but somehow indispensable word, unquote, but that Hitchens feels compelled to call whatever motivated bin Laden a force that, quote, absolutely deserves to be called evil.

[00:06:58]

So presumably he used that figuratively. Yes, we can assume.

[00:07:02]

I don't doubt that. Yeah, I don't think that Hitchens meant it literally. But it does mean that there is something to the concept that can be useful. Essentially, what Hitchens is referring to is moral responsibility. Right. You do want to keep moral responsibility.

[00:07:18]

And one of the dangers in this sort of neuronally based attack on the concept of evil is that essentially you define it away, and in the process you also define away moral responsibility. And that would be a problem.

[00:07:35]

Yeah, although. So, for example, one of the neuroscientists cited in the article is Simon Baron-Cohen. Correct.

[00:07:43]

That's where I was going. Yeah. Actually, I think we discussed him in our episode with Cordelia Fine, because he did a lot of research purporting to show innate differences in the way men and women think, of which Fine was pretty critical.

[00:07:54]

Yeah, she was. She pointed out some pretty serious methodological flaws.

[00:07:57]

But should we point out that he's the cousin of Sacha Baron Cohen?

[00:07:59]

I wasn't going to. I guess I just feel like that was unfair to him. But I liked how, in the article, Ron Rosenbaum pointed that out and said that Simon was the cousin of Sacha Baron Cohen.

[00:08:09]

And then he said, but he's a respected neuroscientist.

[00:08:14]

OK, anyway, so now we've said it. Yes, we've got it out of our system.

[00:08:18]

Simon Baron-Cohen basically argues that what we talk about as evil, what we understand as evil, is essentially the absence of a capacity for empathy, and that you can actually measure that in the brain.

[00:08:31]

So it's actually unclear to what extent he's making a claim about how evil works in the brain versus just redefining the word to mean the absence of empathy, because it didn't really seem to me like the absence of a capacity for empathy captures what people refer to as evilness.

[00:08:53]

Right, for a variety of reasons. So Rosenbaum, again, is, I think, very sharp in his article in Slate, because he points out a couple of things about Baron-Cohen's position. First of all, just to quote the article, it says: I am left with a non-empathetic feeling that Baron-Cohen's boast that he's replacing evil with non-empathy is more a semantic trick than a scientific discovery, that he's essentially just changing terms.

[00:09:20]

And while the lack of empathy may sound more scientific than evil, if that's all it is, if there is no more content to the concept, we're not in a very different position.

[00:09:32]

But more importantly, I think, Rosenbaum points out that if you define away evil, you also define away, you abolish...

[00:09:39]

Good, right? Yeah, that was a strong point. I liked that. And that is, again, one of the things we really don't want to do. I mean, we want to maintain some sense of moral responsibility, both in the positive and the negative.

[00:09:51]

Otherwise, we will have to come to a view of human action where nobody's responsible for anything because, you know, quote unquote, my brain made me do it. Well, what else could it possibly be that made you do it, right? Your brain is, in fact, where decision making happens. So, yes, your brain made you do it. Yeah.

[00:10:10]

You know, I think I said this at the end of the free will episode, but just to reiterate, actually defining away the idea of moral responsibility doesn't bother me too much, because I think that functionally we can continue on just as we always have, just by behaving as if there is such a thing as moral responsibility.

[00:10:24]

Because, you know, we want to incentivize helpful behavior and punish harmful behavior, and that doesn't change regardless of whether we think people are responsible for the helpful and harmful behavior; we can still do things to make them more inclined to be helpful. I disagree, because if we arrive at a societal... well, obviously this is partly a question of empirical evidence, right? Is there such a thing as moral responsibility?

[00:10:49]

Actually, no, let me rephrase that. It's both a philosophical and an empirical question. You know, is there such a thing as moral responsibility, and what do we mean by that?

[00:10:58]

But if we do away with it as a society, I do think that would have big consequences. Once you say that you think there is no such thing as moral responsibility, you can't then go on and pretend that there is. Well, by "we" I sort of meant, like, scientists. Oh, so it has to be kept secret, I see.

[00:11:19]

But the cat is out of the bag already, though. Right, exactly.

[00:11:24]

And that's the problem. And that sounds to me a lot like, you know, Pascal's reasoning in his famous wager, where he says, well, yeah.

[00:11:32]

Well, here's the analogy; it's not a direct one-to-one, you see.

[00:11:37]

I know, but there is something to it. You know, Pascal said, look, we cannot actually say whether there is a God or not.

[00:11:44]

So you have to pretend to believe, because it's a good bet. OK, I see the analogy. But he actually was talking about pretending. Now, do we have to pretend? If we agree that there is no such thing as moral responsibility, we have to pretend, because to say that we have to behave as if there is, is the same as saying that we have to pretend.

[00:12:01]

No, I just meant reward helpful and punish harmful behavior, just as we always have. That rewarding and punishing doesn't actually depend on us thinking that people have free will to choose helpful or harmful behavior.

[00:12:12]

But this is actually getting a little... Yes, it's going off track.

[00:12:15]

I think we should probably talk a little more concretely about what neuroscientific evidence we're referring to. One of the most common kinds is fMRI. A lot of the popular press reports of neuroscientific evidence are what I would call neurobabble. I should say that a lot of what I'm calling neurobabble, what people call neurobabble, occurs at the popular press level, when journalists are interpreting neuroscientific results.

[00:12:39]

But some of it occurs, I think, at the level of the neuroscientists themselves, who are just sort of not quite thinking clearly about their evidence.

[00:12:50]

There are several technical papers, even technical papers in the literature, where neuroscientists actually make claims about having eliminated the concept of morality, or the like, and when you consider what they're saying, it's pretty clear that, quite literally, they don't know what they're talking about. Right.

[00:13:06]

So, you know, it's not just the general press. Right. Of course, the general press is particularly bad at that. Yeah, but. Right. Which is to be expected.

[00:13:13]

But so one of the most common, maybe I should call them neuro-fallacies, one of the most common instances of neuro-fallacies that I see is something that happens at least every couple of months.

[00:13:24]

An article comes out in the popular press about how fMRIs can show whether you're really in love or not, or whether you really have empathy or not, because scientists have located the region of the brain in which love resides and the region of the brain in which empathy resides. And so it's very easy.

[00:13:39]

You just go into the scanner and, you know, look at a picture of your wife, and scientists can tell you whether you really love her; or you look at a picture of someone in pain, and the scan can tell you whether you really have empathy.

[00:13:50]

And this is problematic for a whole bunch of reasons. Although before I get into that, maybe I should talk a bit about what fMRIs are actually measuring.

[00:13:58]

Sure, because that is one of the issues that would be on the methodological side of the problems that I was talking about at the beginning of the episode.

[00:14:06]

So fMRIs are actually not directly measuring neural activity. Neural activity is chemical signals and electrical potentials.

[00:14:15]

fMRI measures changes in the oxygen content of blood in different parts of the brain. The reason that's relevant is that the more brain cells fire, the more oxygen they use, and so blood oxygenation in that area increases to compensate for the oxygen used by the brain cells. It's that increase that gets picked up by fMRI. But even that isn't quite the whole story, because fMRI can only measure relative activity in regions of the brain.

[00:14:41]

There's always sort of background activity in the brain. So when scientists say that there was increased activity in a region of the brain, what they're doing is comparing the brain scans, the fMRIs, of people, say, experiencing empathy to people not experiencing empathy; or people looking at a picture of their grandmother to people not looking at a picture of their grandmother.

[00:15:00]

The part of the brain responsible for facial recognition would show more activity in the people who are looking at the picture of their grandmother than in those who weren't.

[00:15:09]

But even that is sort of just statistical; it just means that, on average, there was a statistically significant increase in activity in that region for the recognition task.
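To make the statistical nature of that comparison concrete, here is a minimal sketch, with made-up numbers, of the kind of contrast being described: a region gets called "active" only because its average signal during the task condition is statistically distinguishable from its average during a control condition. The data and threshold here are hypothetical, and real fMRI analyses involve far more preprocessing plus correction for testing thousands of voxels at once.

```python
import numpy as np
from scipy import stats

# Hypothetical BOLD signal values for one voxel (or a region average),
# across many scans, in the task vs. the control condition.
rng = np.random.default_rng(0)
task_scans = rng.normal(loc=1.02, scale=0.05, size=30)     # e.g., viewing grandmother's photo
control_scans = rng.normal(loc=1.00, scale=0.05, size=30)  # e.g., viewing a stranger's photo

# "Activation" here just means the task mean exceeds the control mean
# by more than chance would plausibly explain, at some chosen threshold.
t_stat, p_value = stats.ttest_ind(task_scans, control_scans)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("voxel flagged as 'active'" if p_value < 0.05 else "no significant difference")
```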

[00:15:19]

So it's actually quite indirect. Let me make an analogy there, because science often runs into these kinds of problems: when there is a new cool technique or a new cool conceptual advance in science, people jump on it and start doing a bunch of research with it, and then they start overclaiming, they start making claims that go way beyond the evidence. The classic example is the concept of heritability.

[00:15:45]

Heritability. Yes. For decades there have been these discussions back and forth about what exactly it is that you can claim about heritability. So you hear things like, oh, you know, intelligence in human beings is 80 percent or 60 percent heritable. And most people, including, unfortunately, some scientists, but certainly a lot of people in the media and in science popularizing and so on, seem to think that that means that 60 to 80 percent of that trait is controlled by the genes and the rest is the result of environmental influences.

[00:16:20]

Well, in fact, the term heritability refers to a very specific statistical construct, which only measures the correlation between genetic variation in a population and variation in the trait in question, in this case intelligence. So really, heritability captures the amount of variation in intelligence that is correlated with the amount of variation in the genes. That's not at all the same thing as inheritance.
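In symbols, the statistic being described is just a ratio of variances within a particular population, a minimal sketch using the standard broad-sense definition (not anything quoted from the episode itself):

$$H^2 \;=\; \frac{V_G}{V_P} \;=\; \frac{V_G}{V_G + V_E}$$

where \(V_G\) is the variance in the trait attributable to genetic differences among individuals in that population, \(V_E\) the variance attributable to environmental differences, and \(V_P\) the total phenotypic variance. Because it is defined over variation in one population in one range of environments, it is not a fixed property of the trait, and it says nothing like "X percent of an individual's intelligence is caused by genes."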

[00:16:44]

Right. Do you think that mistake is made just by the public or by science writers, or are there scientists who make that particular mistake, too?

[00:16:51]

It seems to be limited mostly to science writers, journalists, and the public, although I do actually have examples of scientists, not population or quantitative geneticists, that is, the professionals in the area, but still biologists, who make that kind of mistake. The problem is that even these scientists make, you know, overconfident claims based on measures of heritability, because they don't seem to be particularly aware of the statistical assumptions that go into making the calculations, the intricacies of the experimental design, and therefore the kinds of things that you can and cannot control for, and so on and so forth, which is why there is a huge technical literature on that.

[00:17:28]

So it's a similar situation, where you certainly have an interesting technique, one that certainly tells you something interesting, in this case about how the human brain works, in the other case about how human characteristics are inherited. But because of the technical complexity and the conceptual sophistication of the underlying ideas...

[00:17:51]

...it's very, very easy to overstep your boundaries and just go on to say things that actually make no sense. Right. Right. Absolutely.

[00:17:57]

And while we're talking about some of the methodological problems with interpreting fMRI the way people tend to, we should mention that there are a whole bunch of other problems. First of all, all brains are at least a little bit different.

[00:18:10]

So even if we knew for certain that there's a single region responsible for, let's say, jealousy, the jealousy region in me is going to be slightly different in size and shape and location than the jealousy region...

[00:18:21]

...in you. So that's one source of variation.

[00:18:25]

Then there's the source of variation in just the background brain activity, because, you know, as I was saying, the measure of activity in a region is relative to the background brain activity.

[00:18:33]

And then people could also vary in how susceptible their neural patterns are to the experimental conditions, because these results, when we have people experience love or empathy or recognition, are all obtained under experimental conditions, inside this giant fMRI machine.

[00:18:46]

And this is not exactly a natural, in-the-field experience. So maybe you love your wife greatly, but while you're, you know, stuck inside several thousand pounds of MRI machine, you can't summon up the feeling as strongly when looking at her picture as another person could.

[00:19:03]

And so maybe scientists would then conclude from that that you don't love your wife as much as that other person loves his wife. Yes. And by the way, I don't know if you have actually had the experience; I have had the experience of being in an MRI machine, though not for brain imaging.

[00:19:17]

I'm really nervous about having to do it. Well, I had to do it, and it is, in fact, kind of interesting. I mean, I'm not claustrophobic, but it is an interesting experience. It's very difficult, actually, I would imagine, to concentrate on a task. In fact, the thing that everybody tells you when you go in the machine is simply not to concentrate on anything, just keep still, if it's possible to not think about it.

[00:19:37]

It's like trying not to think about a white bear.

[00:19:39]

Exactly. We talked about this in the meditation episode. You and I both have a difficult time, like, not thinking, right?

[00:19:44]

For instance, here's another thing. One of the problems with claims based on fMRI is, of course, that there's quite a bit to be debated about what exactly the claim is. Because if the claim is that certain areas of the brain are involved in, and "involved in" is a fairly neutral term, involved in X, where X could be, you know, being in love or making decisions or whatever...

[00:20:12]

Well, despite the caveat that you brought up earlier, which is that we're not really measuring brain activity directly, it's not in real time, it's a statistical construct over a bunch of individuals and so on and so forth, I think most scientists, I mean, most philosophers, would say, yeah, that's still a reasonable claim. Now, the claim becomes much stronger when one says, like Baron-Cohen does, well, there are certain areas of the brain that cause...

[00:20:35]

...yeah, you know, empathy or lack of empathy or something. First of all, it's thirteen areas of the brain; that's a large part of the brain.

[00:20:43]

Right. And we know that the effects are not additive; I mean, these areas interact with each other in complex ways which are not actually captured by an fMRI scan, number one.

[00:20:52]

But second of all, of course, the obvious objection there is that all you're showing is a correlation, not actually a causal connection. Though surely there is a causal connection at some level, because, again, of course, your brain made you do it.

[00:21:04]

What else could possibly make you do it? But it's much, much more difficult to get to the causal connection. In fact, classical studies in neurobiology that deal with brain injuries, either caused by operations or by accident, get arguably a little closer to causal connections, because there you are actually, in some sense, manipulating part of the brain: you're taking out a part of the brain, for instance, or cutting the corpus callosum.

[00:21:34]

The objection to those, of course, is that they are incredibly crude manipulations from the point of view of anatomy. I mean, you're cutting an entire area that probably does a bunch of different things, and so whatever the results are, it's very difficult to interpret them, to say how they would apply to a normally functioning brain under normal conditions.

[00:21:52]

Right. I was talking to a philosopher of mind a while ago about how difficult it is to conclude that a particular psychological function is performed by a particular brain region, and he had a great analogy that I'm going to share with you. He said, imagine you're running a desktop computer and you can observe which parts of its hardware are active during various procedures that the computer carries out. Say you're interested in which parts of the computer are responsible for displaying complex graphics.

[00:22:22]

The point I'm trying to make is that this is analogous to neuroscientists' questions about what parts of the brain are responsible for function X. Anyway.

[00:22:28]

So let's say you notice that one part of the computer is only active when complex graphics are displayed, and it's always active when those graphics are displayed.

[00:22:37]

And you observe that when you take out that component, complex graphics can't be displayed.

[00:22:42]

So this is like pretty strong evidence; this is better than what neuroscientists usually have.

[00:22:46]

So you might conclude that the only way to explain that experimental result is that that component is the part that does the complex graphics processing and displaying.

[00:22:59]

But another possible explanation, which throws a wrench in the gears, is that complex graphics could be done by some other component, maybe the same component that does all the other kinds of processing. But when complex graphics are processed, they're more energy intensive and so they produce more heat. And so the component that you have been observing all this time is actually just a cooling device that maintains a safe temperature in the computer.

[00:23:25]

And that would explain why it's always activated when graphics are being displayed.

[00:23:32]

And it also explains why, when you remove it, graphics can't be displayed: there are automatic safeguards that prevent the processing unit from displaying the graphics, yeah, recognizing that the cooling is gone and so it's not safe to actually produce graphics.

[00:23:44]

So, I mean, that may seem like a bit of a contrived example, but it's not at all crazy to think that the brain could have, you know, similar kinds of features that prevent us from drawing the conclusions we want to draw, even with such complete evidence.

[00:23:57]

Absolutely. And in fact, again, as a biologist, the analogy that comes to mind is with genetics.

[00:24:04]

So for a long time there have been all these claims about, you know, we found the gene for, you name it: for obesity, for homosexual behavior, for being, you know, not such a nice person, for being a skeptic. Just recently I read an article about the gene that separates conservatives from progressives.

[00:24:26]

Oh, my God.

[00:24:27]

You know, this is all very interesting, but in fact the claims go way beyond what is reasonable, first of all because, as usual, when people make claims like, oh, I found the gene for X, if you look at the details, what they found is a single allelic variant, that is, a single form of a gene, that actually accounts, statistically speaking, for, I don't know, something like two percent of the variation in the behavior or whatever.

[00:24:52]

Yeah, well, unfortunately, that's a long and not very catchy headline. No, exactly. That doesn't make for a good headline. But the more direct analogy to the example that you were bringing up earlier is this.

[00:25:02]

This discussion has been going on for decades, and I think it's actually finally tapering off in the primary literature, but certainly not as far as the public is concerned: how exactly do we find out what genes do?

[00:25:18]

And the classic idea in genetics is, well, you mutate a gene, so you basically turn it off artificially, or you damage it artificially, and you then look at the phenotype, you look at the traits of the organism, you look at the behavior of the organism, and you see which part of the phenotype is affected. Then you say, aha, that gene causes that trait. It's a very similar approach.

[00:25:41]

But of course, it has been pointed out over and over that that isn't the way genes work. Right. First of all, genes don't work in isolation. They don't work under what are called additive conditions, that is, where the effect of gene one simply adds to the effect of gene two, three, four, and so on and so forth. They interact with each other. We've known this for a long time.
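As a worked illustration of the distinction being drawn here (a sketch in standard quantitative-genetics notation, not anything quoted from the episode), a purely additive model of a trait versus one with an interaction term looks like:

$$P_{\text{additive}} = \mu + g_1 + g_2 + e \qquad\text{vs.}\qquad P_{\text{epistatic}} = \mu + g_1 + g_2 + g_{1\times 2} + e$$

where \(g_1\) and \(g_2\) are the separate effects of two genes, \(g_{1\times 2}\) is their interaction (epistasis), and \(e\) is the environmental contribution. Under the second model, knocking out gene 1 changes the trait by \(g_1\) plus whatever part of the interaction term depended on it, so the observed effect of the knockout is not a clean readout of "what gene 1 does."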

[00:26:02]

And in fact, we're finally starting to have both the mathematical tools and the empirical evidence to show how gene networks work.

[00:26:13]

And one of the things we're finding out now is that, in fact, if you turn off a particular gene, most of what that gene was doing is actually taken over by an almost instant rearrangement of the network.

[00:26:25]

So the other genes start compensating, which makes it incredibly hard to study. Of course, it's much more fascinating as a scientific issue. Oh, yeah. But I would be stunned if something like that were not also going on, probably to a much higher degree, in the brain, because after all, human beings only have about 20 to 30,000 genes, if by gene you mean sequences of DNA that code for proteins.

[00:26:50]

So the interactions among 20,000 genes are nothing compared to the interactions among billions and billions of neurons. Yeah, so you can imagine what that means; if you follow that analogy, it really makes for a big cautionary statement about what you can tell from neurobiology, especially because we can't at all be sure that the neurons work the same way in my brain as in your brain.

[00:27:13]

At least we do share the same genes. Yeah, exactly. You can draw a direct homology between my genes and your genes, but certainly not between what my individual neurons, or even fairly large chunks of my brain, do and what yours do.

[00:27:28]

Right. Now, one of the things we should talk about is why this is important, other than as just yet another cautionary tale about scientists overstepping their boundaries. But before we get there, I want to talk about something that is in the Slate article, in particular Rosenbaum's critique of David Eagleman's Incognito, which is one of these new books that talk about the brain, because it actually has very important practical consequences. But first, one more comment about, you know, what you can say given the evidence.

[00:28:01]

That's the third thing you've said you're going to talk about. I hope you actually get to this one. Sure. Why not?

[00:28:08]

OK, I was going to bring up one of my favorite villains in this area, and that's Sam Harris.

[00:28:13]

As you know, I have been very critical of The Moral Landscape and his ideas about, you know, neurobiology and morality.

[00:28:23]

But one of the worst parts of The Moral Landscape comes when Harris actually tells you in detail how he thinks, based on his own experiments, because he did a thesis in neurobiology, and based on his own experiments he tells you something about why he is suggesting that neurobiology will be the predominant way of looking at morality. And he basically says something along the following lines. Our listeners can actually find the specific reference on the blog, because I blogged about this particular argument.

[00:28:55]

There is a reference to the specific quote, as well as the page number in the book. But anyway, he makes the following argument.

[00:29:01]

He says, look, we know that the brain makes certain kinds of truth judgments in the same way, sorry, uses the same areas to make those judgments, regardless of whether the truth judgments are concerned with, say, mathematical propositions like two plus two equals four, or factual propositions such as the earth goes around the sun, or moral judgments, such as it's true that genocide is a bad thing.

[00:29:33]

And then he goes on, stunningly, to say that from that you can conclude that the brain does not make a distinction among those three kinds of judgments. Fair enough.

[00:29:43]

And therefore that there is no distinction among those three kinds of judgments, which is absurd.

[00:29:48]

It's a complete non sequitur. That would be like saying that because we know the same areas of your brain are turned on when you are having sex and when you're thinking about having sex, the two are exactly the same kind of experience. I don't think so. Sure.

[00:30:03]

I mean, yeah, well, that may not actually be true. I don't know.

[00:30:06]

But I've definitely seen evidence that the same area is activated when you're actually, like, swinging a bat versus when you're mentally simulating the experience of swinging a bat. Maybe that's also true for sex.

[00:30:19]

Yes, absolutely. OK, now we can go back to, I guess, David Eagleman.

[00:30:26]

So the reason Rosenbaum in Slate takes Eagleman to task is a good example of what can happen when you run away with these hyped interpretations of neuroscience. I'm going to quote directly from the Slate article here. It says Eagleman depicts an Orwellian future in which fMRI scans will be used to preemptively identify those who have the potential to commit acts formerly known as evil, and prescribes for such possible malfeasance a regimen of, quote, prefrontal workouts to, quote, better balance those selected.

[00:31:01]

Of course, it doesn't say how, or by whom, this brain remodeling would be done. And in fact, Eagleman apparently goes as far as saying, quote, some people will need to be taken off the streets, on the basis of their fMRIs, for a longer time, even a lifetime.

[00:31:15]

Wow.

[00:31:16]

So is Eagleman advocating this, or is he warning that this is what could happen? Apparently, he thinks it's a really good idea. And by the way, Sam Harris advocates a similar thing at some point in his book, absolutely, when he says that neuroscience will allow us to tell whether people lie or not, and therefore we can impose lying-free areas wherever we like, in a court of law or, you know, during a job interview or whatever it is, by turning on these scans and basically, you know, forcing people to tell the truth. That's a little different than taking someone off the street for their entire life based on their brain scan.

[00:31:52]

Yes, but it's the same principle. It's the same idea of using a science which, by the way, is far from being proven.

[00:31:59]

But even if it were, someday.

[00:32:01]

Yes, even if it were proven, you know, there are all sorts of issues about civil liberties, for instance, that need to be taken into account. You can't just say, oh, it's a great idea, because we can do it scientifically, so then, absolutely...

[00:32:14]

...right now let's go out and take people off the streets or enforce lying-free zones.

[00:32:21]

Yes. And even so, currently, today, as we were saying, the science is far from being perfect.

[00:32:28]

And so part of the danger of all of this neurobabble, leading people to believe that we can conclude more from the science than we currently can, is that neuroscience seems to be really persuasive to people, even when it doesn't actually have anything to do with the claim that it's allegedly supporting.

[00:32:47]

So there was this slightly disturbing study that came out maybe a year or so ago, in which the scientists gave people a claim, and the claim was followed by one of four different types of explanation.

[00:33:04]

Either people got a good explanation, a psychologically sound, strongly empirically supported explanation for the claim; or they got a bad explanation, one that was really logically incoherent and didn't have good evidence behind it; or they got the good explanation combined with some irrelevant neuroscience.

[00:33:19]

I think it was references to brain scans that just had nothing to do with the actual claim in question. And then the fourth group got the bad explanation combined with the irrelevant neuroscience. And everyone judged, OK, everyone judged the good explanations as being more satisfying than the bad explanations.

[00:33:34]

That's good. Yeah. So that's like a sanity check, right? Like if that weren't true, then we should just stop right there.

[00:33:40]

Yeah.

[00:33:40]

But the presence of the logically irrelevant neuroscience evidence made people feel more satisfied by both the good and the bad explanations. The only exception was neuroscience experts. Thank God, right?

[00:33:55]

I mean, yes, yeah.

[00:33:57]

But even students in neuroscience classes, students who knew something about neuroscience, were still more persuaded by explanations when they were accompanied by irrelevant neuroscience.

[00:34:06]

Right.

[00:34:07]

That's why we're doing these kinds of podcasts.

[00:34:09]

And that's why we are so often, certainly, you know, even on the blog, pointing out when science oversteps its boundaries, when the interpretation of scientific findings is oversold, and so on. Because there is exactly that danger: science has a very large and, of course, well-deserved social cachet, except when it comes to climate change and evolution, for instance, or vaccines and autism, or something like that.

[00:34:34]

That means two things. First of all, scientists themselves have an ethical responsibility to make claims that are congruent with their evidence and not to overstep their boundaries, certainly not when it comes to things that have a potentially large societal impact.

[00:34:50]

Yeah, this is directly relevant to things like court testimony, and we're going to go there in a second. But it also means that scientists need to be careful, because they are the beneficiaries of a high degree of trust. Now, if people start making bad decisions based on bad claims, you know, trust erodes over time. For instance, American science, and biology in particular, fell to a very low degree of trust around World War Two, right before and immediately after, because of the damage done by the eugenics movement.

[00:35:28]

Right. Right. It was very popular at one point; there were prominent scientists who were backing the ideas of the eugenicists. And when that started to be associated with, you know, the Nazis and what they were doing over in Europe, then obviously those scientists lost credibility, they lost standing with the public.

[00:35:46]

So you need to be careful. When you have trust, that is something that needs to be managed very carefully, not only from, obviously, an ethical perspective, I mean, it's just a bad idea from an ethical point of view to overstep your boundaries, but also even out of self-interest. If you start making claims that go way beyond what you can substantiate, at some point, you know, somebody's going to ask, why are you making these claims?

[00:36:11]

You mentioned, yeah, you mentioned earlier the judicial process, the legal implications of these things. Right.

[00:36:18]

Using neuroscientific evidence. Right. And how it has this really powerful, persuasive effect on people even when it's completely irrelevant.

[00:36:23]

That's right. So, first of all, the article in Slate cites the idea that this is now becoming the "my brain made me do it" defense in a bunch of individual cases.

[00:36:38]

But the example that Rosenbaum gets into in some detail is very recent, and it actually goes all the way up to the Supreme Court. This is a dissenting opinion by Justice Stephen Breyer about a ruling. If you remember, a few months ago there was this ruling denying California the right to ban violent video games. Right. And Breyer dissented from the majority opinion, which said that California cannot ban violent video games, even for children, and so on and so forth.

[00:37:11]

And Breyer cited cutting-edge neuroscience, quote unquote, in favor of his opinion, where he was basically saying that being exposed to certain violent imagery activates certain areas of the brain.

[00:37:29]

And that shows that there is an effect of the violence on the brain. Well, of course there is an effect of the violence on the brain, whatever you're watching. I have to say, I should disclose, by the way, that in that particular case I actually agree with Breyer's minority opinion that those video games should be banned. I don't know if you've actually looked at what these things contain. Those are the kinds of things where the main character...

[00:37:48]

At some point somebody literally gets ripped apart and cut into; you know, there's a woman who gets cut in two, from the lower part of the body to the head.

[00:37:59]

I mean, just really stuff that I frankly don't think children should be exposed to. And I won't get into the question of whether that decision should be in the hands of the parents or the government.

[00:38:08]

Right. We should do an episode on that.

[00:38:10]

We should do an episode about that. Yeah. But anyway, what I was saying was, I was disclosing my, in this case I guess political, bias, which was actually in favor of Breyer. Right. Right.

[00:38:21]

But what I'm saying is that that was not the argument to make.

[00:38:23]

That was not the way to argue this case, because everything that we're exposed to has an effect on the brain.

[00:38:30]

That would actually be an example of a fundamental conceptual or philosophical confusion: thinking that seeing an effect of something in the brain somehow proves something special about it.

[00:38:41]

That it's sort of a special case.

[00:38:42]

Like, no, that is every case. Exactly.

[00:38:45]

So there's one other interesting and common philosophical confusion, or what I think is a philosophical confusion, that I want to talk about before we close, and that's the conflation of description with explanation. I see this in a lot of neuroscience discussions: people seem to think that once you know which regions of the brain are associated with some particular behavior or psychological phenomenon, then you've explained that behavior or phenomenon.

[00:39:10]

Right. But you haven't. It's still purely descriptive information. It's not explanatory. I mean, it's still usually valuable to know.

[00:39:18]

I mean, it's valuable to have as complete of a brain function map as possible. But I don't think it's valuable in itself. It's valuable because it can help us in answering the explanatory questions that we really care about.

[00:39:30]

I can think of two ways it's helpful, actually. The first is that if we can say with confidence that a certain region is both necessary and sufficient for the experience of some emotion, say jealousy, then we can use that to do experiments on the causes or effects of jealousy, because we're confident about the location.

[00:39:48]

And then second, learning about whether certain functions are localized in one place or not is helpful just for our general understanding of how the brain works. So it's not really the particular location of the function that's helpful here; it's the fact of whether it does have a particular location or not that's really important.

[00:40:03]

Right. And in fact, let's take your case of jealousy. Even under those ideal conditions you're talking about, one cannot simply conclude that therefore those areas of the brain cause jealousy. What one can conclude is that, given a certain cultural milieu and certain behaviors from other people, those areas of the brain may be causally connected to the reaction and the emotion of jealousy. But it's very possible, and in fact we have examples of this, that if you change the cultural milieu, the same kind of behavior is not culturally supposed to trigger jealousy...

[00:40:34]

...and it will not, regardless of the fact that people have the same brain. I mean, there was a very interesting study recently that I cited on the Rationally Speaking blog about the cultural dependency of the effects of alcohol. You know, most people seem to think that, oh, alcohol makes people more prone to violent behavior and also to sexually overt behavior and that sort of stuff. Well, apparently that is actually true only in certain cultures and not in others.

[00:41:02]

It is true if you are in England; it's not true if you're in Italy, even though Italians drink just as much alcohol as the Brits.

[00:41:10]

So there's this interesting situation where the same chemical acts on the brain in the same way; there is no claim here that the chemistry is different or that the brain structure is different. But the brain interprets what to do in certain situations depending on the cultural environment, which means that you cannot limit yourself to just the brain; you have to take into consideration the entire cultural environment. Before we finish, I want to mention that there are several available sources of neuro-skepticism that people can...

[00:41:40]

...take a look at when they encounter these things. One of them is a blog called, appropriately enough, Neuroskeptic. It is run by a neuroscientist in the United Kingdom who, as the blog says, takes a skeptical look at his own field and beyond. I browsed through some of the entries and there are some really interesting things, including, pretty recently, a post on what "brain activation" on an fMRI actually means.

[00:42:04]

It goes straight at the question and it's very detailed; I mean, the guy actually goes through the primary literature and explains on the blog the meaning of tables and figures and images and that sort of stuff. So it's really well done. Another is an article that is actually cited in the Slate piece we have been discussing for most of this time, and it will also be linked from the podcast website. It is by Jonathan Marks, and it was published this last year.

[00:42:33]

I believe the title is A Neuroskeptic's Guide to Neuroethics and National Security. That one takes an interesting look at the abuses of neuroscience information that the government is engaging in, or may be preparing to engage in pretty soon. And finally, there's an interesting collection of essays, a book from MIT Press called Neuroethics: An Introduction with Readings, where there is a range of opinions on the subject and a range of topics covered.

[00:43:10]

Some of these include, you know, brain, self, and concepts of authenticity; brain reading and what it means to do neuroimaging; the neurobiology of intelligence, that sort of stuff. Then it gets into neuroscience and justice, which is something that we briefly talked about, as well as the idea of personhood, and even, in fact, animal neuroethics, because, of course, there's no reason why you couldn't do fMRIs on, you know, cats and dogs.

[00:43:38]

So now the podcast-ending spot in my brain is lighting up, and since I have no free will, I am compelled to do what it tells me to do. Or is that just what you say?

[00:43:48]

I was compelled to say that. But it's hard to distinguish. So we're out of time.

[00:43:53]

And we're going to move on now to the Rationally Speaking picks. Welcome back. Every episode, Julia and I pick a couple of our favorite books, movies, websites, or whatever tickles our rational fancy. Let's start with Julia's pick.

[00:44:21]

Thanks, Massimo. My pick is a book that came out last year. It's called Rationality and the Reflective Mind, by Keith Stanovich, who's a professor at the University of Toronto.

[00:44:31]

So this is a really great, comprehensive discussion of what current research to date has told us about rationality and about how the human brain diverges from rationality.

[00:44:43]

And one of the reasons I particularly appreciated it is that I've often read and talked to people about cognitive biases and logical fallacies, but the way they were stored in my brain, all that information about biases and fallacies was sort of in a big heap.

[00:44:59]

Or if you want to be more charitable, a list.

[00:45:01]

Yes, but I never really had any sort of more organized way of understanding the relationships between the biases and fallacies, even though I had the vague sense that some of them were, like, specific examples or manifestations of other biases, or that certain ones were, I don't know, generalizations of others. Anyway.

[00:45:21]

But so this book has this really great, comprehensive sort of bird's eye view taxonomy of how the biases and fallacies are related to each other.

[00:45:28]

And it's very nice; it's clear and it's convincing. So just to give you a brief taste.

[00:45:35]

There are roughly three types of ways that human decision making can go off the rails. The first is just due to cognitive limitations, basically efficiency.

[00:45:47]

You know, maybe the brain could actually do a better job of reasoning something out or making a decision, but because we have limited time and resources and energy, it just defaults to a quicker and imperfect process, whether that be, you know, drawing on one or two examples that come easily to mind, which may not be representative, and not worrying about it, or using intuitive and emotional decision-making processes instead of careful, deliberative reasoning.

[00:46:13]

And then the second category of biases and fallacies is what is sometimes called a mindware gap: reasoning incorrectly because you aren't actually aware of what the proper reasoning method is.

[00:46:24]

So a lack of understanding of basic probability theory could lead to that, or not knowing about certain logical fallacies; it's pretty common that you reason incorrectly just because you don't know that that way of reasoning is wrong.

[00:46:36]

And then the third category is sometimes called corrupted mindware. That's faulty reasoning, not as a result of the absence of knowledge about proper reasoning, but instead as the result of the presence of false knowledge about reasoning.

[00:46:50]

So that could be the result of either believing something from folk psychology, or just having sort of confused philosophical concepts, like believing in a soul, for example, or believing, you know, that you sort of exist in your brain, like...

[00:47:07]

...that everything your brain produces is being viewed and experienced by you, and you're sort of this homunculus sitting in your brain in the Cartesian theater. That would be an example of corrupted mindware that leads to faulty reasoning.

[00:47:17]

Anyway, I really appreciated having this theoretical framework in which to think about biases and fallacies. And of course, Stanovich gets into more detail than that, but that's a good starting place. Sounds good.

[00:47:26]

The only bad thing about the book is that it's not available for Kindle. Oh. So I just sent an email to the publisher.

[00:47:32]

Oh, good. You're so quick. Yes, either that or I'm very long winded.

[00:47:37]

No, no, I'm quick. So my pick is actually something called Hypothes.is. We will link to the introductory video on this thing. It is a new initiative that is not out yet, so I'm not going to endorse it; I'm just saying that I'm very, very curious, and I've heard from a couple of skeptic colleagues who were also very curious about this.

[00:48:00]

Basically, when it comes out next year, it will be a broad sort of social criticism and peer review of the entire Internet.

[00:48:14]

Well, yes, indeed. That's ambitious.

[00:48:17]

It is ambitious. Of course, it will be done in stages. And these people seem to have thought quite a bit about what they're doing and how to implement it, because other people have tried before.

[00:48:28]

You know, Google has ratings and so on for all sorts of things and Internet sites, Facebook has ratings and commentaries and all that sort of stuff. But the claim by the people at Hypothes.is is that none of this was actually put together taking into account the developing nature of the Internet, what is necessary, and how it should work. So the basic idea is that these people will provide a user platform, for instance Web browser extensions for a variety of browsers, that will allow you, when you read something on the Internet, and that could be anything, you know, watching a video or reading a blog or a newspaper article or something like that...

[00:49:10]

...it would allow you to interact with a broader community that basically does this constant peer review and annotation and correction of anything that people are interested in. The people at Hypothes.is claim that they will start with a broad base of experts to begin with, so this is not just anybody. It's supposed to be a combination of crowdsourcing and expertise, not just crowdsourcing, because otherwise, as you know, I'm pretty skeptical of simple crowdsourcing; what you get with simple crowdsourcing is Yahoo!

[00:49:43]

Answers. Exactly. Precisely.

[00:49:45]

And that doesn't work. These people are aware of that, so presumably they're going to be staying away from it. In fact, a call went out to the skeptic community to get involved and for people to volunteer as experts in a variety of areas. So did you volunteer? Not yet. I just looked at the video this morning, and I'm going to be reading a little more. The people at Hypothes.is are raising money; apparently they need something like half a million dollars to get started, and they are more than halfway there already.

[00:50:16]

So if people are interested, they should take a look at this. The idea sounds really good. It sounds like it would be a very interesting improvement of your Internet experience, which these days is most people's reading experience, period. Right. Right. So we'll see how it goes. Maybe we'll revisit this next year when it comes out.

[00:50:36]

And perhaps, if the people involved are interested, we can have them on the podcast and chat about it. They're going to have to work pretty fast to outrun the growth of the Internet.

[00:50:46]

Yeah, I don't know what kind of rate... Well, I don't think they're going to do it all themselves; the claim is that they're going to distribute, essentially, tools that allow people to do these kinds of things. So they'll probably start with a core. One of their examples was, for instance, you know, when you read the news, did you know The New York Times comment section is often closed entirely, or it's closed after a certain period of time?

[00:51:11]

So people cannot comment anymore on certain articles. And, you know, the question was, why not leave it up and open? But more importantly, the guy in the video asks, you know, why are the comments down at the bottom, where pretty much nobody reads them?

[00:51:24]

You'd think that if there is, for instance, something factually wrong with the article, there should be a note, an annotation, you know, right by the relevant passage, exactly where the problem is, something that says, OK, there's a flag here, something is not right.

[00:51:40]

Anyway, we'll see next year how these people are doing.

[00:51:43]

If it's going well, we should have them on the show. All right. We are more than out of time now.

[00:51:48]

And Massimo and I would like to remind all of our listeners that the fourth annual Northeast Conference on Science and Skepticism, also known as NECSS, is coming up.

[00:51:57]

It'll be on April 21st and 22nd of 2012. We encourage everyone to go to the website NECSS.org.

[00:52:06]

That's N-E-C-S-S dot org, for more information and to buy tickets.

[00:52:11]

So this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.

[00:52:26]

The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollack and recorded in the heart of Greenwich Village, New York. Our theme, Truth, by Todd Rundgren, is used by permission. Thank you for listening.