[00:00:14]

Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Massimo Pigliucci, and with me, as always, is my co-host, Julia Galef. Julia, what are we going to talk about today? Well, Massimo, today on Rationally Speaking we really are spanning the globe with this episode, because you're in New York, as you know.

[00:00:55]

Yes, I think I know that. And I'm in California. And our guest today is joining us from Adelaide, Australia. Professor Gerard O'Brien is a professor of philosophy at the University of Adelaide, where he specializes in consciousness, mental representation, the theoretical foundations of cognitive science, and other related fields. Gerard, welcome to the show. It's great to be here. Lovely to meet you both. Great.

[00:01:21]

Gerard, let me start out with the basic stuff. The general topic that we want to talk about today is the relationship between computer science, philosophy of mind, and the study of consciousness.

[00:01:33]

So let me start with what seems to be the most widely accepted idea, at least in philosophical circles, about how the mind works. It's called the computational theory of mind. And Steven Horst, a philosopher who wrote a really nice article about it for the Stanford Encyclopedia of Philosophy, defines the computational theory in this way, and I quote: it is a particular philosophical view that holds that the mind literally is a digital computer, in a specific sense of computer that he develops later in the article, and that thought literally is a kind of computation.

[00:02:09]

It combines an account of reasoning with an account of mental states. Does that sound about right to you? How would you modify it? Or would you say no, that's absolutely not what it is?

[00:02:20]

I think that's absolutely correct. OK, well, that's one version of the computational theory of mind, the version that many philosophers these days refer to as the classical computational theory of mind, because it's the version that was developed first. When cognitive science was first conceived and started operating as an enterprise in the 1960s and 70s, and even the 80s, it was the conception of computation in the brain that dominated the field.

[00:02:54]

Yeah. And so what's important about it is the qualification as to what kind of computation is thought to occur, that qualification being digital computation. My own view is that if one is talking about the computational theory of mind in general, it's much better to have a broader characterization than that, one that allows you to capture different ways in which computation can be physically implemented in general, and obviously physically implemented in the brain.

[00:03:29]

So there is an alternative, a kind of computation that's not included in the classical view?

[00:03:35]

OK, well, clearly everyone, I think, is aware of the distinction we operate with between digital computation on the one hand and analog computation on the other. And insofar as we have built physical machines that operate quite differently according to whether they're digital computers or analog computers, it seems that we already have a very good understanding of a distinction between two different ways that we can physically implement a complex computation. Now, if that's the case, then I suggest that the notion of computation itself is more generic than digital computation.

[00:04:13]

This introduces a different way, a different spin, if you like, on the whole topic of the computational theory of mind, and allows us to go down two different paths. And I would stress very strongly that these are two quite different ways of thinking about how neural computation actually operates.

[00:04:33]

What are these paths? Well, there's the view that Horst is describing, which is the path, as I said, that cognitive scientists spent a lot of time developing and talking about in the 1960s, 70s and 80s. That's the view that the brain literally is a digital computer. This is not an analogy. The claim at the time was that this is not something of a metaphor; we're not simply saying the brain operates a bit like a computer.

[00:05:03]

The claim was that the brain actually does engage in digital computation. And we can say more about what that exactly means.

[00:05:11]

Well, OK, let's stay there for a second, because, as you know, my background originally, before moving to philosophy of science, was in biology. And it seems to me that from a biological perspective, we know that a literal equivalence between the brain and a digital computer is in fact false; that is, the brain doesn't work digitally.

[00:05:34]

But perhaps there is a way of rescuing it. And by digitally, I mean that it doesn't think, it doesn't operate, in terms of, you know, binary sequences of any kind. So is there a way to recover the notion of digital computing, or not? Well, OK, two things there, Massimo. I mean, I'll use "classicists" to refer to people who support this idea of the digital computational approach to cognition.

[00:06:01]

And there are still plenty of these people around.

[00:06:04]

But I would dispute the claim you make that the brain doesn't literally operate as a digital machine, at least given the idea you gave of what a digital machine might be. You talked about operating in this binary way. Even if you think that's the basis of what digital computation requires, and a lot of people think this is very important for digital computation, because we need to distinguish between discrete states on the one hand, for digital computation, and continuous states for analog computers on the other.

[00:06:35]

And again, this is another distinction that we can get back to and talk about, because I don't think it's quite right. But even if one operated with that, there are lots of things one can point to in the brain that do seem to have a discrete, that is digital, flavor. So, for example, you've got spikes; we talk about spiking frequencies. There's a sense in which a neuron either has an action potential travelling down its body or it doesn't.

[00:07:03]

And so one could think about that as a first, very crude pass across the brain, in terms of how we could see some kind of discrete process that could eventually become the basis for a physical implementation of a digital machine. Yeah, that's an interesting thought.
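The all-or-nothing point can be made concrete with a toy sketch (purely illustrative; the potentials, threshold and bin sizes below are invented, and this is not a claim about how neurons actually encode information): the same recorded activity can be read either as a discrete, digital-flavored spike train or as a continuous firing rate.

```python
# Illustrative sketch: the same neural activity viewed two ways.
# A spike train is all-or-nothing per time bin (discrete, "digital" flavor);
# a firing rate averaged over a window is a continuous, "analog" quantity.

def spike_train(potentials, threshold=-55.0):
    """Binarize membrane potentials: 1 if an action potential fires, else 0."""
    return [1 if v >= threshold else 0 for v in potentials]

def firing_rate(train, window_ms):
    """Spikes per second over the whole window: a continuous quantity."""
    return sum(train) / (window_ms / 1000.0)

# Ten 10-ms bins of made-up membrane potentials, in millivolts.
potentials = [-70, -54, -68, -50, -65, -52, -70, -71, -53, -66]
train = spike_train(potentials)
print(train)                     # [0, 1, 0, 1, 0, 1, 0, 0, 1, 0]
print(firing_rate(train, 100))   # 40.0 spikes per second
```

The point of the toy is only that the discreteness is in the description we choose as much as in the tissue: binarizing gives the "digital flavor" Gerard mentions, while averaging gives a continuous quantity, which is why he flags the distinction as not quite right as stated.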

[00:07:17]

I mean, I don't want to go too far in that direction, because this is, you know, a broad topic, and I want to touch on other things. But, for instance, maybe there is a way in which that works. I'm thinking about the equivalent situation in a different field, quantitative genetics, which is closer to my expertise. In genetics there is a distinction between binary traits, which are called Mendelian traits.

[00:07:42]

These are traits that are inherited as a unit.

[00:07:45]

Basically, you either have, let's say, yellow color of your seeds if you're a pea, or they're green, or whatever.

[00:07:53]

So there are two states, basically. And then there are quantitative traits. Quantitative traits are affected by a number of genes; these are things like height and weight.

[00:08:03]

And now there is a way, mathematically, to treat quantitative traits as threshold traits.

[00:08:10]

That is, traits where the underlying biology is quantitative, but phenotypically they appear as if they were just binary, yes or no.

[00:08:21]

But they're not really binary; that's the crucial point. So, again, it seems like one can make that kind of argument in terms of the brain. Or perhaps one can make the argument that maybe individual neurons work in something that approximates a digital setup, but the brain as a whole depends on, you know, different thresholds.

[00:08:42]

There are different local aggregates of neurons that are activated at different thresholds, which makes the whole thing quantitative.
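The threshold-trait idea Massimo is describing can be sketched in a few lines: a continuous underlying "liability" (the sum of many small genetic effects) is mapped onto a binary-looking phenotype by a cutoff. All the numbers here are invented for illustration; this is the standard liability-threshold caricature, not a model of any real trait.

```python
import random

random.seed(0)  # reproducible made-up data

def liability(n_genes=50):
    """Sum of many small additive gene effects: a continuous quantity."""
    return sum(random.gauss(0, 1) for _ in range(n_genes))

def phenotype(liab, threshold=0.0):
    """The observed trait looks binary, even though the underlying biology is quantitative."""
    return "affected" if liab > threshold else "unaffected"

# Continuous liabilities in, binary-looking phenotypes out.
population = [liability() for _ in range(5)]
for liab in population:
    print(f"{liab:+6.2f} -> {phenotype(liab)}")
```

The analogy to the brain discussion is just this: observing the discrete output (affected/unaffected, spike/no spike) tells you nothing by itself about whether the machinery underneath is discrete or continuous.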

[00:08:47]

What exactly hinges on this, anyway? I mean, why is it important? OK.

[00:08:51]

In a sense, it's not all that important, because that way of thinking about digital computation and what it requires is a little bit simple, I think. What the distinction between digital and analog computation really comes down to is this, and here is where things get controversial, because whenever we talk about computation, we've got to keep in mind that this word is used in different ways across the cognitive science literature. When I talk about computation, I'm talking about a process that has to involve representation.

[00:09:26]

This is another concept that is sometimes difficult for a non-philosophical audience to get their head around. But basically, it means we're talking about states that in some way encode or carry information. And one way of thinking about the difference between digital and analog computation is the way the physical states of the machine, let's say the physical states of your physical implementation, actually encode or carry that information.

[00:09:57]

And here the crucial distinction is one involving what cognitive scientists think of as symbols, entities that are very familiar to us in many ways from our public forms of representation.

[00:10:12]

So when I talk about symbols, and cognitive scientists talk about symbols, it's best to think about language, about the way that language is put together from atomic symbols, like words, to form molecular symbols, like sentences. So when we think about symbols, we can think about language, or what is sometimes referred to as a language-like form of representation. And we can contrast that way of encoding or carrying information, or representing information, as philosophers say, with another way of doing it.

[00:10:47]

One where you don't use symbols as such, but representations that are more like other familiar things we have around us, such as paintings or maps or sculptures. And the crucial distinction between them is this: on the one hand, with symbols, with words and sentences, you have a representation that bears no clear relation between itself and the thing that it's about; on the other, you have certain kinds of representations that do have some kind of resemblance relation between themselves and the thing they're about, as when we think of a photograph, for example, and, if it's a portrait of someone, the person the photograph is about.

[00:11:33]

So there's this distinction between how certain states of the world can encode information, and there are states that do this quite differently: one in terms of this arbitrary relation, one in terms of this resemblance relation. How does this all relate to digital versus analog computation? Well, the suggestion is that digital computers are systems that use the first kind of representation, namely symbols, however those symbols might be physically encoded by the device. Analog

[00:12:06]

computers instead use representations that have a different kind of relation to the things they are about: they use some kind of resemblance relation. This is where the whole idea of an analog computer came from, using an analog; it comes from the word analogy.
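The symbol/resemblance contrast can be caricatured in a few lines of code. This is entirely a toy of my own construction, not anything from the literature: the same tiny scene is encoded once as arbitrary sentences and once as a grid whose own layout mirrors the scene, which is the resemblance relation an analog representation trades on.

```python
# The same scene, two toy encodings.
# Scene: a chair on the left, a table on the right, two steps apart.

# Symbolic encoding: arbitrary tokens. Nothing about the string "chair"
# resembles a chair; its relation to the scene is pure convention.
symbolic = ["chair at column 0", "table at column 2"]

# Analog-style encoding: a grid whose internal layout mirrors the scene.
# Distance between tokens IN the representation tracks distance between
# objects IN the scene: a (weak) resemblance relation.
grid = ["C", ".", "T"]

# With the grid, a spatial question is answered by reading structure off
# the representation itself, rather than by parsing descriptions of it.
distance = grid.index("T") - grid.index("C")
print(distance)  # 2
```

The design point is the one Gerard makes: with the symbolic list, answering "how far apart?" means interpreting conventional tokens; with the grid, the answer is carried by the physical arrangement of the representation itself.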

[00:12:23]

So, just to take an example, something like our visual processing system: when we see an object in the environment and we recognize it, is that digital? Is that analog? Is there controversy over that?

[00:12:40]

There's controversy over that, unfortunately, as with nearly everything in this area; it's very frustrating. But in terms of our phenomenology, that is, in terms of the way it feels to us as conscious subjects when we look around the world and have perceptual experiences, say visual perceptual experiences, it certainly seems to us as conscious subjects that we are experiencing the world, and this is so obvious it hardly needs to be said, in terms of representations of the world that have this property of resembling the world.

[00:13:15]

I mean, think about the following. Think about the fact that you might be sitting wherever you are sitting, and you form a mental image of your lounge room at home, and you use that mental image to count things off. So someone asks you: how many chairs have you got in your lounge room at home? Well, the way you do that, typically, is that you form a mental image, usually a visual perceptual image, of your lounge room, and you can literally count off the number of objects in that image, such as the number of lounge chairs.

[00:13:42]

Now, when you think about that from a first-person, phenomenological point of view, it feels as though you're counting something off, that you're experiencing something that has this property of resembling the lounge room at home, right? It's as if you're simulating something in your mind, basically. And here's the contrast: it's not as if you're running a whole lot of sentences through your head. It's not as if you're looking at a page of writing where there are a whole lot of symbols describing the lounge room.

[00:14:13]

You could do that. Some people might do that; some people, when they think about their lounge room, might run sentences through their head. But not many. I'm sure most people would form a mental image, something more pictorial or image-like, that resembles the lounge room.

[00:14:30]

So, just to go back to Julia's question: it feels like we're operating there with a representation which in some way resembles the world. But that can be very misleading.

[00:14:44]

Sorry, Gerard, just to make sure I understand: the digital way of thinking about that question of how many chairs are in my living room would be sort of like this. As we experienced our living room, we stored various features of the living room in our mind with, almost, tags: this thing is an object; it's a large object or a small object; it's soft or hard; etc.

[00:15:12]

And then, when you ask me how many chairs there are, I just sort of run a query in my memory for things tagged "object" or "chair".
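Julia's "query over tagged memory" picture is easy to render in code. This is a toy sketch of the classicist's symbolic data structure, with made-up items and tags, not anyone's actual cognitive model: each remembered item is a bundle of symbolic tags, and "how many chairs?" becomes a filter over those tags.

```python
# Toy version of the classicist picture: memory as a symbolic data
# structure of tagged items, queried rather than "looked at" like a picture.

lounge_room = [
    {"kind": "chair", "size": "large", "texture": "soft"},
    {"kind": "chair", "size": "small", "texture": "soft"},
    {"kind": "table", "size": "large", "texture": "hard"},
    {"kind": "lamp",  "size": "small", "texture": "hard"},
    {"kind": "chair", "size": "large", "texture": "hard"},
]

def count(memory, **tags):
    """Count items whose tags all match the query."""
    return sum(all(item.get(k) == v for k, v in tags.items())
               for item in memory)

print(count(lounge_room, kind="chair"))                # 3
print(count(lounge_room, kind="chair", size="large"))  # 2
```

Nothing in this structure resembles a lounge room; it answers the question by matching arbitrary tokens, which is exactly the contrast with the pictorial, resemblance-based account discussed above.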

[00:15:19]

That's right. OK, but it doesn't feel like I'm doing that. It doesn't feel like you're doing that.

[00:15:24]

But yes, the classicist would say to you: it might not feel like you were running that kind of symbolic data structure, but in fact you are actually accessing a symbolic data structure. It's an incredibly detailed description of the world around you, and because it's so incredibly detailed, so rich in its informational content, it gives you the illusion that you are accessing something almost picture-like.

[00:15:49]

Of course, there is an obvious question there, before I go to my next one, which is a little broader: how would the classicists know that? Or are they simply assuming that that is what's going on because it fits their view?

[00:16:04]

Without looking at the brain? It's very much the latter.

[00:16:09]

OK, so we are operating here in the territory of very high-level conjecture, right?

[00:16:15]

About what's going on in the brain. And look, one interesting thing about the classical approach to thinking about this is that typically classicists, Jerry Fodor and the other famous philosophers who defend this sort of approach, think they are licensed to say that when we think about these issues, we can completely ignore the brain. It doesn't matter.

[00:16:41]

It doesn't matter to us how these things are actually physically implemented in the brain. They are implemented; we think we know they are, they say; we're not doubting that. It's not some mysterious ectoplasm in which all of this stuff is realised; of course it's realised in the brain. But as cognitive scientists, we can ignore the brain. Yeah. So there's no need to do brain science.

[00:17:02]

That's an interesting question, and actually perhaps we could bracket it for a moment, because I do eventually want to go back to this idea of, you know, can you do cognitive science without actually taking seriously what the brain actually is? But before we get to that point, let's zoom out for a second and get even more general. I read this kind of literature out of curiosity, from the outside (this is not my field), but I get the impression that a lot hinges on exactly,

[00:17:33]

or even perhaps approximately, what people mean by computation. And, you know, we talked about the difference between digital computing and analog computing, but there are some people who go even further; these are referred to, interestingly, as pan-computationalists. Yes.

[00:17:48]

They think that everything computes, that rocks are computers, that, you know, everything is a computer.

[00:17:53]

Now, first of all, I want to hear your take on that, and in particular what you actually think is a sensible definition of computation. I'm not saying that pan-computationalism is nonsense, but it seems to me to get very close to being useless, because if we start defining everything in the universe as a computer, well, then that doesn't tell me much about the difference between a human brain and a rock.

[00:18:21]

And that's what I want to know.

[00:18:23]

Right. Massimo, I think your instincts here are absolutely correct, in the sense that we introduce the notion of computation into this field because we think it's going to give us some kind of explanatory purchase, some kind of grip, a good explanatory grasp on the topic that we're interested in. And the overarching topic that we're interested in here is explaining intelligence. I mean, when you started the podcast, you talked about consciousness and other things, mental representation as well.

[00:18:52]

But really, when we think about cognitive science as an enterprise, it's about explaining intelligence. And when we look around the universe, we can divide it into two classes. There are those things, objects, systems that are intelligent, and that's a vanishingly small class; and then there is the vast majority of systems and things in the universe that lack that property. So there is a very important dividing line between intelligent systems and those that don't behave intelligently.

[00:19:24]

And obviously rocks, for example, I hope everyone would agree, would sit in that class of things that aren't intelligent.

[00:19:32]

I know a few human beings that fall into that category. But let's not go there. Right. It's very controversial to say where the dividing line is, obviously. But at least in general we've got these two quite different classes, and we know, in particular, which things fall into one or the other: rocks fall into the class of things that aren't intelligent, and most humans fall into the class of things that are intelligent.

[00:19:56]

And so then we wheel in this notion of computation. The computational theory of mind is supposed to be some kind of breakthrough. Jerry Fodor talks about it as the only idea we've ever had about how the mind works that has got a show of actually telling us something. Not everything, says Fodor, but at least it's got a show of telling us something about how the mind works. So if that's the case, if computation is going to be the concept that does some explanatory work for us, we'd better not characterize it in such a way that it then turns out that everything performs computations. And pan-computationalism, the view that you were describing before, is such a view.

[00:20:32]

There are different ways of developing pan-computationalism, because there are different ways of defining computation so broadly that everything either implements a computation, or implements some computation, or even implements all computations. That's how crazy this view can get. Sorry, Julia, go ahead.

[00:20:52]

Yeah. So I have a question that is perfectly relevant to this point, so I want to ask it now. Is there any evidence that we could acquire that would be evidence against the computational theory of mind?

[00:21:07]

Is it falsifiable? I absolutely think so. Depending on the kind of computational theory of mind that you're developing, it's absolutely crucial, and there are different ways of doing this, but you would need to show that a particular kind of process, a particular kind of mechanism, is at work. And so you'd have to show that various kinds of constraints are being satisfied.

[00:21:39]

So in a sense, Julia, to answer that question, we'll backtrack a bit. That's my view. If you're a pan-computationalist (just going back to finish off what Massimo was asking about), then I would say that the answer to your question, Julia, would be no, there is nothing one could show, because everything is a computation, and because everything is a computation, all constraints get thrown out.

[00:22:09]

And so the idea that something wasn't a computation is just something we wouldn't have a chance of showing.

[00:22:17]

But I also just want to clarify that my question wasn't meant as: well, they'd better suggest some potentially falsifying evidence, or else their theory is complete nonsense. Because, sure, there are philosophical viewpoints and arguments that don't hinge on empirical evidence and are actually sensemaking.

[00:22:36]

I'm not sure. So I was just sort of trying to distinguish which kind of theory this was.

[00:22:42]

It's supposed to be a more substantive empirical theory. So in that sense, it either continues to exist or it falls on the basis of empirical evidence. And in that sense, we'd better be able to talk about what kind of empirical evidence would be necessary to falsify this particular theory.

[00:23:03]

And before we let computation go, one more observation.

[00:23:08]

I don't necessarily object to the idea that everything in the universe has property X, because of course we do have very good evidence that, you know, everything in the universe is made of quarks, or strings, or whatever it is that will come up next.

[00:23:24]

And that is both interesting and, in fact, useful information. It's empirically testable, and so on and so forth. So that is not my objection to pan-computationalism. It's not so much that it makes a general claim; science is supposed to make general claims. It's that it strikes me as the kind of general claim that is in fact sterile, that doesn't tell me anything particularly useful. Once you've told me that, I don't know what to do with that kind of information, basically.

[00:23:54]

That's right. Yes, exactly. And that's what I was getting at when I was talking about explanatory purchase. We get no explanatory purchase, no explanatory value; nothing, no value is added by moving into that way of talking about things.

[00:24:08]

Which brings me back, then, to the question: so what do you think computation is?

[00:24:14]

So this is why I think we have these constraints that operate on us when we think about these sorts of concepts, any sort of concepts, really, in philosophy and in science: we've got to ask them to do some work. If they can't do any work, then we've got to wonder whether they're worth having. And so for me, it's very important that computation actually does what it was originally supposed to do: it provides us with explanatory purchase, and in particular with explanatory purchase in relation to this question of how one can distinguish between those physical systems in the world that lack intelligence and those that have it.

[00:24:49]

And so, operating on the latter side of that divide, we're thinking that computation has got something to do with what is required in order for a system to behave intelligently. So the characterization (to call it that; I don't think it's a definition, it's much broader than that), the characterization of computation that I favor is very generic, but it's one that says that computations are special, and they are special physical processes.

[00:25:18]

There's nothing mysterious about them in that sense. They are special because they allow information to throw its weight around in the world. Now, the trouble with starting to talk about information is that, like computation, it's another word that's used in all of these different ways. So I ought to immediately clarify what I mean by information. Because that's what I was going to ask you next.

[00:25:42]

What I mean by information is not the kind of quantitative analysis of information that we sometimes get in mathematics, and especially in engineering, deriving from people like Shannon and Weaver, who developed a quantitative notion of information, especially for purposes of communication. There the idea of information was simply a measure of the amount of information that a message might contain. I'm not talking about information in that quantitative sense. I'm talking about information in what could be called a more qualitative sense, where we're actually talking about what information is carried by a particular state, a message, an object, or some such thing, such that we can really take seriously the notion of encoding or representing information.

[00:26:36]

And so I'm saying that with computation, what we have is some physical process where the process itself is influenced in some way (and I accept that's very vague at this point) by the information that is encoded by the states of the device. So the device encodes information in a qualitative sense, and the information that it encodes in some way affects the trajectory of the process in which that device engages.

[00:27:14]

Now, that's, I think, a very broad notion of computation, but it captures the distinction; it allows us to preserve the distinction we were talking about before, between digital computers on the one hand and analog computers on the other, because they both satisfy, on my view, that characterization. It's just that how, in each case, the physical system goes about allowing information to throw its weight around is quite different.

[00:27:43]

And that's where things, I think, get very interesting for cognitive science. It's in those differences that two different pictures of how the mind might work start to emerge.

[00:27:52]

So now, you keep mentioning the word intelligence, which of course is another one of those Pandora's boxes.

[00:28:01]

That's right. Now, I don't necessarily want to open that particular box, but I do want to contrast it with another one, which is consciousness. Right. Because it seems to me that, however you want to define them, the two are different.

[00:28:19]

It may very well be that consciousness requires intelligence, or at least a certain level of intelligence. But certainly intelligence doesn't seem to require consciousness. And, you know, if you look at the animal world, for instance, there are definitely degrees of both intelligence and probably consciousness. Exactly. Now, here's the question.

[00:28:37]

Let me put my cards on the table here: I tend to be somewhat skeptical of at least strong versions of the computational theory of mind, because I come to the problem, again, as a biologist. And so I think that one actually has to take the biology seriously, which probably also means, at least to some extent, taking seriously the particular substrates of which human brains are made. But maybe we can talk about that later.

[00:29:03]

But the thing is, if you think of it that way, then a way to make sense of that objection might be to think in terms of intelligence versus consciousness. Here's what I mean.

[00:29:15]

I mean, artificial intelligence, for instance, which I think is appropriately named, by the way: it's not artificial consciousness. Not yet.

[00:29:24]

Artificial intelligence has made very big strides in the intelligence department, meaning that, you know, we're all impressed by Deep Blue and Watson and all those kinds of things. Yep.

[00:29:37]

But as far as we can tell, we've got nothing at all in the direction of consciousness. Right.

[00:29:43]

OK, so does that distinction help the debate?

[00:29:46]

I guess I think it does help the debate. And I think it's absolutely crucial to keep these two concepts separate in one's mind when thinking about this, because some of the major objections, or one in particular that I'm thinking of, a major objection to the computational theory of mind in general, and specifically to the classical computational theory of mind, is, I think, actually based on a problem about consciousness, not a problem about intelligence.

[00:30:16]

And here I'm referring to an objection that was put forward in the early 1980s by John Searle, involving a thought experiment that I think might be familiar to many listeners, because it became immortalized as the Chinese room thought experiment. It is responsible for a huge amount of literature, as people puzzled over it; some defended it, others attacked it, and so forth.

[00:30:48]

But what's interesting to me, when I look back on that debate that was taking place from the 1980s on, and it's still going today, is that it really was orthogonal to the original debate about the nature of the brain and computation, because in the original debate the issue was intelligence more than consciousness. And what happens with Searle, even though Searle himself doesn't actually explicitly mention consciousness, is that he talks about the notion of understanding, but what he's actually addressing in that thought experiment is the notion of conscious understanding. And for that reason the discussion starts to get difficult, because people talk past one another.

[00:31:28]

And so you have certain kinds of replies that assume they're talking about intelligence, not conscious understanding, and then replies come back from people who think they're talking about consciousness. And suddenly the whole thing gets very messy and does not get resolved. And so much of this is just a long-winded way of saying...

[00:31:46]

Yeah, I agree, getting back to your point, that this distinction is crucial. And so I think the computational theory of mind is often objected to, or at least jettisoned, far too quickly, because people think, well, it doesn't really touch consciousness, or it doesn't seem to touch consciousness.

[00:32:06]

So it's not really a theory of mind at all, is it? Well, we could talk about that. But I think the most important thing to say is that it is supposed to be a theory about intelligence. And, as you say, intelligence is very hard to define; it's one of those other difficult concepts. But we can at least provide very minimalist characterizations of intelligence, which really just say: what we're interested in explaining is how it is that some systems have this behavioural flexibility, this behavioural plasticity, that allows them to respond to the circumstances in which they find themselves in an adaptive or appropriate way, such that they can continue to operate in that domain and perform certain tasks and so forth.

[00:32:53]

And even with... Sorry, Julia, go ahead.

[00:32:56]

Finish your point.

[00:32:57]

I was just going to say: even with that very minimal notion of intelligence, we can distinguish between rocks, on the one hand, which do nothing, which don't behave appropriately or flexibly or with any plasticity whatsoever, and, on the other, even desert ants, which leave their nests, go out foraging for food, and have this remarkable dead-reckoning ability: even after pursuing a random trek through the terrain, once they've found some food and are heading back for the nest, they can turn and head straight back to where their nest is.

[00:33:37]

You know, it's a remarkable ability, given how few neurons ants have. And yet, and this is where we now use the language of computation, they seem to have this behavioural plasticity; that is, they seem to be able to perform some quite remarkable computations in order to perform that feat of dead reckoning. So... Julia? Sorry, no, that was excellent. I just wanted to lay my cards on the table and say that the context in which these topics tend to come up most often for me is in conversations with futurists and people working in computer science, thinking about the role that artificial intelligence is going to play in our future, the future of humanity.
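The ant feat described above is usually modelled as path integration: each leg of the outbound trek is summed as a vector, and negating the running total gives the bearing straight home. A minimal sketch of that computation (the legs below are made-up numbers for illustration, not real ant data):

```python
import math

def home_vector(legs):
    """Path integration ("dead reckoning"): sum each leg of an
    outbound trek as a 2-D vector; the negated total points home.
    Each leg is a (heading_in_radians, distance) pair."""
    x = y = 0.0
    for heading, distance in legs:
        x += distance * math.cos(heading)
        y += distance * math.sin(heading)
    # The vector pointing straight back to the nest:
    home_heading = math.atan2(-y, -x)
    home_distance = math.hypot(x, y)
    return home_heading, home_distance

# A toy outbound trek: 3 m east, then 4 m north.
heading, dist = home_vector([(0.0, 3.0), (math.pi / 2, 4.0)])
print(round(dist, 2))  # 5.0 -- straight-line distance back to the nest
```

However the ant's nervous system actually implements it, the point of the example is only that a few accumulated quantities suffice for the behaviour, which is why it invites the language of computation.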

[00:34:25]

And so I just want to make sure that I understand the relationship between all these different topics we've been talking about; stop me if you want to correct me at any point. I think that for the question of how powerful artificial or machine intelligence can become, and what its effects might be on the world, on society, et cetera, you don't need to worry about consciousness. You only need to worry about intelligence as you were defining it.

[00:34:52]

And then for another, maybe more far-fetched question that still gets discussed a lot in my circles, the question of uploading, of whether we could ever transfer consciousness from an organic human brain onto a machine, there you need both the computational theory of mind in some form and you need consciousness on a non-human-brain substrate. And we've spent most of this discussion on the computational theory of mind.

[00:35:35]

But, as you were alluding to, that's necessary but not sufficient for having a human consciousness on a machine. And the second part of that is addressed by the Chinese room thought experiment. So I'll briefly summarize the Chinese room thought experiment for our listeners who aren't familiar with it, and I was hoping we could hear your thoughts, because I think Massimo and I might have disagreements. And I'm curious.

[00:35:58]

I was going to warn you that Julia and I have very different ideas about this, so you're about to tap into an ongoing disagreement. Julia thinks she understands my position on this, but she also might be wrong about what I think.

[00:36:12]

But nevertheless, go ahead.

[00:36:16]

Sounds like a philosophical discussion, then; philosophical discussions are basically where everyone disagrees with everyone else.

[00:36:21]

So, yep, this is great. I moved from economics into, you know, amateur philosophy, and in economics the joke was that in any room of ten economists you have eleven opinions. So this is familiar territory to me.

[00:36:37]

Anyway, the very brief summary of the Chinese room thought experiment is that you have a room with a guy sitting in it who doesn't know Chinese, but he has a rule book for the Chinese language that gives him instructions. And there is a little slot in the door, and people pass pieces of paper through the slot written in Chinese. And he uses the instructions in the book to figure out what to write down on another sheet of paper, which he passes out through an output slot.

[00:37:09]

And from the outside, it looks like the room is responding to questions in Chinese. But in fact, there's only symbol manipulation going on; the man doesn't understand Chinese, by stipulation. And then the question is: does the room understand Chinese? Does the whole system of the man, the room, the book, and the slots understand Chinese? And I think the implication originally drawn by Searle was, well, no, of course it doesn't.

[00:37:36]

But some people bite that bullet, and Searle thinks that goes really badly for them.

[00:37:45]

Well, so the typical reason for biting that bullet, which I found pretty appealing, is that it seems like you could use the same logic to argue that the human brain doesn't understand anything, because none of the individual parts... you know, when I grab my water bottle, none of the individual parts of my brain understand what a water bottle is. It's the whole system together that understands it. And so I was wondering, Gerard, if you think that Searle's original argument holds weight, and whether that means we can't replicate a human conscious, understanding mind on a non-human-brain substrate, or whether there's some distinction between the Chinese room thought experiment and the human brain, et cetera.

[00:38:28]

Okay, so there's a lot there. And yeah. And we only have another six or seven minutes, so go ahead.

[00:38:35]

So, to be as succinct as possible: I think it's very important, when discussing the Chinese room argument, to step back and see why we're even discussing that idea of symbol manipulation in the first place, in the context of digital computation. And this gets back to what I was saying earlier about how digital computers perform their computations: they perform them, as Julia was just mentioning, in terms of symbol manipulation. But when we humans think about what it means to manipulate symbols, to the extent that we talk that way, we think: well, I've got language-like representations and I understand what they mean.

[00:39:14]

And so someone writes something on a piece of paper, "How are you today?", and I understand that, and I respond, writing it down on that piece of paper: "I'm feeling very well, thank you." But the crucial point here, and this is where the Chinese room thought experiment comes from, is that when it comes to digital computation, the standard line is that when the digital computer operates on those symbols, it doesn't have access to the meanings of those symbols.

[00:39:41]

It only has access to what are known as their syntactic properties, which is really just a technical term used in this area to mean their shape: how they're shaped, what they look like. And so the person in the Chinese room is taking this input, which is just a series of Chinese characters, and the point being that they do not understand Chinese, they are able to respond to those Chinese characters with an instruction book that tells them simply to play around with the shapes: match one shape with another shape, put this shape here if that shape appears, and so on and so forth, and then they produce as output a series of shapes.
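The purely syntactic rule-following described here can be caricatured in a few lines of code: a lookup table that matches input shapes to output shapes, with no access to meaning anywhere in the process. The table entries below are placeholder shapes invented for illustration, not real Chinese:

```python
# A toy "Chinese room": a purely syntactic lookup table. The rule book
# matches input shapes (here, opaque strings) to output shapes. Nothing
# in the table, or in the lookup, involves the MEANING of the symbols,
# which is exactly Searle's point.
RULE_BOOK = {
    "shape-A shape-B": "shape-C",
    "shape-D": "shape-E shape-F",
}

def chinese_room(input_shapes: str) -> str:
    """Respond by shape-matching alone; no access to semantics."""
    return RULE_BOOK.get(input_shapes, "shape-?")

print(chinese_room("shape-A shape-B"))  # shape-C
```

Of course, a table that could actually pass for a Chinese speaker would have to cover an astronomical number of possible inputs, which is relevant to the reply discussed later in the conversation.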

[00:40:19]

And in doing that, the claim in Searle's argument goes, they are duplicating the operations of a digital computer.

[00:40:28]

And John Searle, in his very characteristic way, will then say: well, look, it's obvious, isn't it? If that's all you're doing, if that's all the digital computer is doing, manipulating these shapes according to these instructions, that doesn't tell you what the shapes mean. It doesn't result in you having an understanding of the Chinese language. And so, Searle concludes, digital computers can't understand.

[00:40:59]

This manipulation of symbols according to their syntactic properties doesn't constitute an understanding of what the symbols mean. So they don't actually duplicate what it is like for us, as conscious, minded individuals, to perform the same kind of operations; what it would be like for a Chinese speaker, for example, to respond to these sorts of questions in Chinese with Chinese answers.

[00:41:27]

So that's where this whole discussion starts. And then there's the issue of: well, perhaps it's not the person in the Chinese room, perhaps it's the whole system. Just to cut through all of this: I don't find John Searle's Chinese room argument compelling as a story about the nature of intelligence, because there's a sense in which, from the outside, thinking about the behaviour of this Chinese room, it actually is able to behave intelligently, if intelligence in this situation means it's able to respond appropriately to the questions it's asked from the outside.

[00:42:06]

It does everything it needs to, and we can expand this whole discussion to get it to move around the world and behave appropriately in all sorts of ways. Even if it was all generated by these same kinds of processes, from the outside it would strike us as an intelligent system. Right. And so in that sense this particular objection to classicism doesn't touch the intelligence issue at all; that's why I was saying before that it's orthogonal to that debate. Where it does get more interesting, perhaps, and where I think it has exercised more people, is in relation to consciousness and the relationship between consciousness and computation.

[00:42:41]

Right. Even here, however, I think I'm with a philosopher whose name many people are familiar with: Daniel Dennett. Dennett is well known for sometimes talking about biting bullets; he does bite a lot of bullets and takes very seriously the consequences of positions that other people find very counterintuitive. And I'm sympathetic with Dennett's view, which is that, in these kinds of thought experiments, it often turns out to be impossible to actually perform them, because we're not given enough information.

[00:43:10]

And in the case of the Chinese room thought experiment, there's a well-known reply, what's called the systems reply, which is that the system understands, and in this case let's say even consciously understands, Chinese. Searle says, well, that can't be the case, because I can imagine memorizing all the instructions, so that there isn't any need for the Chinese room and slots and inputs and outputs at all. I would memorize all the instructions; people give me pieces of paper with Chinese written on them.

[00:43:39]

And, without understanding Chinese, I will simply manipulate them and give appropriate responses, because I've memorized all the instructions in terms of matching shapes with shapes. Right. The problem with that reply is that no one could perform that particular feat of memorization, because if you are to have a set of instructions sufficient to allow the Chinese room, to call it that, to respond appropriately to Chinese inputs of these kinds, then you would have such an incredibly complex instruction set that it would just be beyond the capacity of human beings to memorize it.

[00:44:22]

The result, and this is the Dan Dennett kind of line, is that you're being fooled by Searle into thinking you can imagine something that you can't imagine.

[00:44:30]

So does the argument fail because the human brain doesn't have enough computing power, or because memorizing such an instruction book would be equivalent to understanding Chinese?

[00:44:40]

That's what Searle says... I'm sorry, sorry, let me get that right. Dennett says... right, the latter.

[00:44:46]

If the latter: Dennett says that if you were to memorize that instruction set, who knows what your conscious experience might be? Who knows?

[00:44:54]

But if you were to do all of that work and memorize that whole instruction set, the result might be that you actually understand Chinese. And the point he's making here is: how would we know? We can't actually perform that thought experiment, so the intuitions that the thought experiment is supposed to drive, it just doesn't drive them anymore. It can't issue in those intuitions anymore, because it's beyond our imaginative powers.

[00:45:19]

So let me make one final comment, because I think Julia is about to wrap it up. Again, I must say I am actually more sympathetic to the Chinese room experiment than either of you, apparently, although I see exactly what you were saying and I understand Dennett's argument. But without getting back into the details of that, it seems to me that one way to move the discussion forward is to go back to something we were doing a few minutes ago, when we drew the distinction between intelligence and consciousness.

[00:45:52]

Right. Now, it seems like it would probably be helpful to draw even more distinctions. I have written down in my notes here, for instance, distinctions between computation, intelligence, understanding and consciousness. And here's what I mean by that. I don't think there is any reason at all to use the Chinese room, or any other such thought experiment, or any empirical evidence, to deny that we are building computers that are faster and better at doing computation.

[00:46:18]

I don't think that's a problem at all.

[00:46:21]

Now, certainly, something that is faster at doing computation may appear to satisfy, or may in fact even satisfy, some people's definition of intelligence, because, as you were saying earlier, it's behaving appropriately. So if one uses a behaviourist approach to defining intelligence, you look at the behaviour and see what happens, then Watson and Deep Blue and all those computers are, in fact, remarkably intelligent; maybe not as intelligent as a human being, but definitely intelligent.

[00:46:53]

But it's a type of intelligence that comes out of the fact that they're doing very fast computations.

[00:47:00]

Now, if we move further into this set of terms that I wrote down, we get to understanding and consciousness. And notice, you noted yourself that Searle actually uses the word understanding there.

[00:47:13]

Probably what he meant was consciousness. Really, the discussion that seems to exercise a lot of people is, in fact... Well, yes.

[00:47:21]

But is Watson, or the next iteration of Watson, intelligent in the sense of actually having an understanding of what it does? Or is it just that it's very fast at computing stuff? Certainly my definition of understanding involves consciousness; that is, you do not understand things unless you're actually conscious, unless you can reflect on what it means to respond in a certain way. Right.

[00:47:46]

So do you think that those kinds of distinctions may help pull the discussion out of these kinds of infinite loops it tends to get into?

[00:47:56]

Exactly. It does, and it therefore also allows us to draw some very important distinctions between, for example, the goals of certain fields. We've been talking about AI and we've been talking about cognitive science, and the point is that the goals of AI and cognitive science are quite different. The goal of artificial intelligence is, as the name says, to construct artificially intelligent systems. And it doesn't matter how you do it, in a sense.

[00:48:20]

To someone who's just narrowly interested in artificial intelligence, it really doesn't matter whether the system is doing it consciously or not consciously; what we want is the result. And so the whole AI industry can go off in a direction where it just uses digital computation in particular, because it's incredibly powerful, to build ever more intelligent machines, in this behavioural way that we've been characterizing intelligence, and that may have nothing whatsoever to do with consciousness.

[00:48:48]

I mean, some people will want to say, well, actually it does. But I think there are good reasons for thinking that it may have nothing whatsoever to do with consciousness. Cognitive science, on the other hand, has completely different goals. It's interested in understanding us, in understanding what's going on in our brains that generates our minds. And minds aren't just things that enable us to behave intelligently; we've also got this very important property of consciousness.

[00:49:13]

And that means the goals of cognitive science are quite different: cognitive science has to confront consciousness. And so then we have to talk about the relationship between the computational theory of mind on the one hand and theories of consciousness on the other. Right. And there we find that there are individuals who think that the computational theory of mind can do one and the same job: it can explain intelligence, and it can explain consciousness as well.

[00:49:35]

All you need is a very sophisticated kind of computational system to generate consciousness. Dan Dennett is exactly that kind of person.

[00:49:43]

And that's, I think, where I would part ways with Dennett, and I think a number of us would. Even though I'm very keen on a certain view about the computational theory of mind, I'm someone who actually thinks most of the computations that occur in the human brain are analog computations, and that's where I've done most of my own work: trying to understand what it means to talk about those analog computations, trying to understand how that involves the processing of information and so forth.

[00:50:09]

So I certainly am not opposed to the computational theory of mind, broadly considered. But I'm not a classicist; I don't think the brain engages in very much digital computation at all. On the other hand, when it comes to consciousness, I think you need to distinguish talk of consciousness from talk of intelligence in this way, and you need to be very careful about theories of consciousness and about what you're trying to explain.
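For listeners unfamiliar with the analog/digital contrast being drawn here: an analog computer represents a quantity with a continuous physical variable, such as a voltage, rather than with discrete symbol strings. A toy illustration, digitally simulating how an RC circuit's voltage "computes" a smoothed version of its input (the parameter values are arbitrary, chosen only for the demo):

```python
# Analog computation represents quantities with continuous physical
# variables rather than discrete symbol strings. An RC low-pass circuit
# "computes" a running average of its input because its voltage
# continuously tracks it. Here we merely simulate that dynamics.
def rc_average(inputs, dt=0.01, tau=0.1):
    """Euler-integrate dV/dt = (input - V) / tau."""
    v = 0.0
    trace = []
    for u in inputs:
        v += dt * (u - v) / tau
        trace.append(v)
    return trace

trace = rc_average([1.0] * 200)  # hold the input at a constant 1 volt
print(round(trace[-1], 3))       # the voltage settles near 1.0
```

The point of the contrast is that in the physical analog device the medium itself does the work, which is why, on this view, the substrate matters in a way it does not for digital computation.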

[00:50:37]

And unfortunately, consciousness is incredibly difficult. I think we've kind of cracked intelligence; I really think we have enough resources at our hands to say it's no longer a mystery why there are physical systems in the universe that behave intelligently. But when it comes to consciousness... it's not that I'm a mysterian, I'm not wallowing in mystery, I dearly want us to try and explain it. But I just think we are a long way short.

[00:51:06]

We're just not there. Well, so, without trying to begin cracking the hard problem of consciousness, I want to ask you one final, very short summary question before we wrap up, Gerard, which is: uploading a human mind onto computers. Is it definitely possible, although we may never achieve it; maybe possible; or definitely impossible?

[00:51:27]

I'm sorry.

[00:51:30]

Go ahead. My own view, for what it's worth, is that in principle it is possible. But when we talk about uploading onto a computer, we're talking about uploading onto a specific kind of computer, namely an analog computer. And analog computers depend very much on the physical media from which they're composed. So what I'm actually saying is that we'd have to upload it onto a physical medium that looks a lot like the brain.

[00:51:56]

And so in that sense it's definitely possible, because the human brain is an existence proof. Once you start talking about uploading onto a different kind of computer, well, then you're less sure whether it's possible. That's right.

[00:52:07]

Certainly, if it's uploading onto the standard digital computers that we have around today, just more and more of the same with added memory and all the rest of it, then my view is that's not possible. But that's because of the very important distinction I was drawing earlier between different kinds of computation. Right. Excellent.

[00:52:23]

OK, we're so over time, and I knew this would happen. I was 99 percent confident from the beginning that we would go over, because we were talking about intelligence and consciousness, so it was inevitable. But it's been an excellent discussion, so thank you. Nevertheless, I have to move along to the next segment, the rationally speaking pick.

[00:52:58]

Welcome back. Every episode, we pick a suggestion from our listeners that has tickled our rational fancy. This time we asked our guest, Gerard O'Brien, for his suggestion. Gerard?

[00:53:08]

What I'm going to suggest is a book that was actually published quite a few years ago now, in 1995, by Andrew Hodges. It's about Alan Turing; it's a biography of Alan Turing called The Enigma of Intelligence. And I suggest it because in our discussion we've talked about the computational theory of mind, and one thing we didn't get a chance to talk about was Alan Turing's role in developing that whole account of mind.

[00:53:40]

And what I think is fantastic about Andrew Hodges's biography is that, unlike some biographies of Turing that focus just on the mathematics and his particular contributions to computing theory, or other biographies that focus just on his life, this one puts the two together, intertwines them, and explains why he held certain views, relating them to certain very interesting aspects of his life. It allows you to get a very good sense of the man himself and what he was like.

[00:54:16]

He was an unusual person and had very difficult struggles at certain times of his life, but the book also doesn't shy away from explaining, in a sophisticated way, the actual theoretical background of all the work he was doing. So in that sense it's a very satisfying intellectual biography of one of the great figures of the 20th century. Excellent. Well, thank you for the recommendation, Gerard, and thank you so much for being a guest on the show.

[00:54:47]

Thank you. Thank you.

[00:54:49]

And thank you, both of you. I found that very enjoyable; lots of fun. So thank you very much. Well, this concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York.

[00:55:29]

Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.