
Rationally Speaking is a presentation of New York City Skeptics, dedicated to promoting critical thinking, skeptical inquiry and science education. For more information, please visit us at nycskeptics.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and with me is our guest today, Professor Eric Schwitzgebel. Eric is a professor of philosophy at UC Riverside, where he focuses on a bunch of different interesting topics, from philosophy of mind to moral psychology, epistemology and science fiction.


And he also blogs at The Splintered Mind, which is one of my favorite philosophy blogs. Eric was actually a guest on Rationally Speaking about a year ago, talking about the moral behavior of moral philosophers, or lack thereof. And he's returning now to discuss another topic entirely, called crazyism. Eric, welcome back. Thanks for having me. So what is crazyism? Well, let's define a position as bizarre just in case it's highly contrary to common sense, and a position as crazy, in my technical sense of the term, just in case it's highly contrary to common sense


and you're not epistemically compelled to believe it. Not epistemically compelled to believe it. Does that mean that there isn't good evidence or arguments supporting it?


There might be some good evidence, but not enough to compel belief, or to bring you all the way to a rationally justified high level of confidence in the position.


OK, so that's crazy, for a position. OK, so just to clarify the terms a little bit more: these are all technical terms that I invented. A position is bizarre if it's contrary to common sense, but some bizarre positions we're epistemically compelled to believe. So, for example, the twin paradox of relativity theory. There's excellent scientific evidence that if one twin travels at high velocity relative to another twin and then turns around and comes back, the traveling twin will have aged less than the twin who stayed home.
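For listeners who want the quantitative version of the twin paradox, the standard special-relativity time-dilation formula (not stated explicitly in the episode) is a one-liner:

```latex
% Elapsed proper time \tau for the traveling twin, moving at constant
% speed v (ignoring the brief turnaround), versus the stay-at-home
% twin's elapsed time t:
\[
  \tau = t \sqrt{1 - v^{2}/c^{2}}
\]
% For example, at v = 0.8c the factor is \sqrt{1 - 0.64} = 0.6,
% so the traveler ages 6 years while the stay-at-home twin ages 10.
```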


Now, to someone who hasn't been trained in the field, as a non-expert, this is highly unintuitive, but we now have very good scientific evidence for it. So that would be a bizarre position, but it's not crazy in my sense. Because it's not dubious, because the evidence for it is great, right? It's not dubious, right. Yes. So dubious is another term that I'm using technically, too: a position is dubious just in case we're not epistemically compelled to believe it.


So this is all just my vocabulary for talking about this.


Would you say that the concepts of bizarre and dubious sort of map onto the Bayesian concepts of having a low prior on something and having weak evidence for something, respectively? Like, before we get any evidence at all, we should put very low probability on something being true if it seems bizarre. And if we don't have very strong evidence for it, then it's dubious.


Yeah, I think that's probably a reasonable translation. I'm not sure I'd commit to that exactly. OK. But at least as a first approximation, or maybe even as a final translation, that would be fine. OK. Right. So a bizarre position would be one that, well, I don't define bizarreness in terms of priors, because I don't know what priors exactly are. They seem to work out differently in different Bayesian frameworks.




And I think it also gets a little complicated philosophically when we're talking about logical or philosophical arguments as opposed to empirical claims. Right. I think you're especially justified in being hesitant to commit to that translation here.
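As a toy illustration of the Bayesian translation Julia floats here (which Eric only tentatively endorses), here is a minimal sketch; all the numbers are invented for illustration:

```python
# Toy version of the proposed mapping: "bizarre" ~ low prior,
# "dubious" ~ posterior not high enough to compel belief.
# All numbers below are made up purely for illustration.

def posterior(prior, likelihood_ratio):
    """Posterior probability after evidence with likelihood ratio
    P(E|H) / P(E|not-H), computed via Bayes' rule on odds."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

bizarre_prior = 0.01        # highly contrary to common sense
some_evidence = 10.0        # decent but not overwhelming support
strong_evidence = 10_000.0  # e.g. relativity's experimental record

p_dubious = posterior(bizarre_prior, some_evidence)      # ~0.092
p_compelled = posterior(bizarre_prior, strong_evidence)  # ~0.990

# "Crazy" = bizarre and dubious: low prior plus only modest evidence
# leaves the posterior low; strong evidence can still compel belief.
print(f"bizarre + some evidence:   {p_dubious:.3f}")
print(f"bizarre + strong evidence: {p_compelled:.3f}")
```

On this toy picture, the twin paradox ends up bizarre but not dubious (strong evidence), while a crazy position stays in the low-posterior regime.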


So the way that I prefer to talk about bizarreness is that non-specialists would be highly confident that it's false. Perhaps implicitly; they might not have explicitly thought about the issue before. Right. So maybe implicitly or explicitly, they are confident that it is false.


So maybe someone would officially defer to experts and say, OK, well, you know, the physicists say this, and I'm not a physicist, OK? But it still seems to them intuitively impossible that it could be true, even though they're willing to defer to authority, at least explicitly.


Yes, that would be bizarreness. So right before Copernicus won the day, the idea that the earth moves around the sun would have been bizarre. And also, I think, crazy in my sense. When it was first proposed, the evidence that Copernicus appealed to was probably not sufficient to compel belief that the earth did, in fact, travel around the sun. Right. So when Copernicus first proposed the position, or when Darwin first proposed the theory of evolution by natural selection,


these positions were both contrary to common sense and dubious, and so crazy. But then, eventually, as scientific evidence came in, they first lost their dubiety and became merely bizarre. And then common sense, I think, can change over time. So it's now maybe no longer as strongly contrary to common sense, maybe not contrary to common sense at all, to think that the earth goes around the sun rather than the other way around.


Yeah, to some extent, I think common sense evolves, and to some extent I think it just gets stretched out. Like, my common sense has been stretched enough by things like quantum mechanics that there's sort of more room for other things that I would have considered crazy to slip in, for me to go, wow, OK, maybe.


I mean, if quantum mechanics is true, then, God, it's hard for me to reject out of hand many things I otherwise would have.


Quantum theory is a good example of a domain in which I think crazyism is pretty appealing. So there are various ways of interpreting what's going on with quantum mechanics. There are no-collapse views, in which the world is splitting into many worlds; that's a common interpretation these days. There are also collapse-type views, in which the observation of a process causes the wavefunction to collapse. And both of those views seem pretty strange by the standards of common sense.


So I think both of those interpretations are crazy in the sense that I've defined. Obviously, that doesn't mean only a clinically insane person would accept them. But, you know, there's a sense in which it's not too unfamiliar to say something like, well, it's crazy to think that the world is splitting into uncountably many universes. And I don't think we're epistemically compelled to accept that interpretation of quantum mechanics.


So we haven't actually defined crazyism yet; we've just defined crazy. Right. So crazyism would be the view that something crazy must be true about the domain in question. It would be defined relative to a domain, and then you would be committed to crazyism if you're... It's the idea that something crazy must be true, right? That whatever the truth turns out to be, some part of it must be crazy? Right.


So, for example, there might be four different plausible approaches, maybe four broad approaches, to quantum mechanics. Each of them is bizarre and dubious, but one of them must be true, you think. Or, alternatively, maybe something even more bizarre and dubious is the truth. Right. But whatever the truth is, it's going to be something bizarre and dubious, that is, crazy. Right. So crazyism about interpretations of quantum mechanics, then? Well, there are various options, but whatever the truth turns out to be, it's going to be something that's highly contrary to common sense and that we currently don't have a compelling epistemic reason to believe.


Yeah, when I read your description of crazyism, it reminded me of this quote from Niels Bohr. I don't remember who he was talking to, but he said: we all agree that your theory is crazy, but what we don't yet agree about is whether it's crazy enough to be true.


It sounds like he was a crazyist about whatever that topic was in physics, right?


Yes, I think crazyism is pretty plausible in certain cutting-edge areas of science. I mean, the way academia sometimes works is that, you know, kind of adventuresome people, intellectual adventurers, find themselves endorsing theories that are highly contrary to common sense and for which the evidence is less than compelling. And then they put the work into developing those theories. And eventually, if they're really successful, like Copernican theory was, or like Darwin's was, the scientific community comes around to them.


So kind of trying to chase down the crazy is an important academic task.


So before we get into the reasons why we should expect crazyism to be true in certain domains, maybe we could just discuss: what other domains outside of physics do you think it might be reasonable to be a crazyist about?


Well, the one thing I've thought about in most detail is the metaphysics of mind. That's broadly the issue of what sorts of beings in the universe have minds, have conscious experiences, and how does having a mind relate to existing in the physical or material world, if the physical, material world exists.


So that would be one domain where I think crazyism is pretty plausible. I've also been thinking about extending it to ethics. That's something I haven't worked on in as much detail, but I'd like to think about it also.


Yeah, I've thought more about moral philosophy and metaethics than I have about the metaphysics of mind. And I definitely keep bumping up against these situations where I'm sort of forced to choose between unpalatable options, or between bullets that I have to bite, essentially. In fact, one of my favorite works of philosophy is a relatively recent paper by someone named Gustaf Arrhenius. It's a very rigorous, precise paper, for philosophy.


And he basically lays out all of these seemingly common-sense principles that we would want a moral system to satisfy. I may slightly misquote or mis-paraphrase some of these, but they're things like: all else equal, adding more happy people to the world isn't bad; and, all else equal, making currently existing people happier isn't bad. And so there's a list of six or seven or so of these principles, each of which we want to accept almost unquestioningly.


Each just seems self-evidently true. And then he shows rigorously that they cannot all be true. You've got to give up at least one of them if you want an internally consistent moral system.
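Arrhenius's actual conditions are more technical than anything quoted in the episode, but a simpler result in the same spirit, Parfit's mere addition paradox, shows the shape of such impossibility arguments:

```latex
% Three individually plausible population-ethics judgments that jointly
% contradict each other (schematic; A, A+, B are populations, and
% \succ / \succeq mean "better than" / "at least as good as").
\begin{align*}
  A^{+} &\succeq A   && \text{adding extra lives worth living isn't bad,}\\
  B     &\succ A^{+} && \text{more equality and a higher average is better,}\\
  A     &\succ B     && \text{a smaller, much better-off population beats } B.
\end{align*}
% Transitivity yields A \succ B \succ A^{+} \succeq A, hence A \succ A,
% a contradiction: at least one premise must be given up.
```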


That's interesting. Yeah, I should check that out. Oh, absolutely, I'll send you the paper. That'll be awesome. In fact, maybe we should link to it on the site. And there's a bunch of sort of more concrete moral philosophy thought experiments that can arise out of this, where our intuitions produce these paradoxical results. But Arrhenius's paper is a nice formalization of why we get these paradoxes.


This is specifically in utilitarian philosophy, so if you're willing to abandon utilitarianism, maybe you don't have a problem. Right. But anyway, in the metaphysics of mind, are there specific questions where we're stuck between a rock and a hard place?


Right. So I think there are some questions, and I'll give those in a minute. But first, maybe a general connection to the issues that you raised about metaethics. One thing, just to bring out in my own way what you said pretty explicitly already: if common sense is incoherent in some domain, then it's not going to be possible to have a well-developed theory that respects every aspect of it. Any theory is going to have to conflict with common sense in some respect.


Yeah, right. So I guess that might be true in moral theory, although in moral theory I think it's sometimes a little hard to tell whether what you have in common sense are straight-up conflicts, versus different criteria that edge against each other, that can be weighed against each other.


Sort of like: I value people having autonomy, and I also value people being happy. Sometimes those things conflict with each other, but that's not necessarily a logical paradox, right?


Exactly. Whereas if you're committed to exceptionless principles that sort cases differently, they can straightforwardly conflict with each other and create robust violations of common sense. So that's one thing that I'm trying to think about in the moral theory case: to what extent is it merely competing considerations that can be weighed against each other, versus outright contradiction in the principles underneath?


Yeah, I think the reason it was possible for Arrhenius to do such a nice, clean job of showing inconsistency in utilitarianism is that utilitarianism has just this one thing that it's prioritizing, which is utility. It's a very poorly defined thing. But whatever the good thing is, happiness or flourishing or whatever you want to say, that's the one good that utilitarianism is trying to maximize, in some sense.


And it's in those words "in some sense" that you get into the tricky bit.


So I want to ask you a question about this before we get into the metaphysics side, which I've thought about more. One interesting case that I've kind of puzzled about a little bit on the utilitarian picture is the hedonium case from Nick Bostrom. Do you know this one?


Yeah, but why don't you explain it? All right.


So just postulate that hedonium is whatever substance or structure generates the most pleasure, let's say, with the fewest computational resources. Right. So on a simple version of consequentialism, say a pleasure-maximizing one, it seems like the best thing to do would be to convert all of the mass of the solar system into hedonium.


Now, even the mass of the beings who would want to use or enjoy the hedonium? Well, the hedonium would be whatever substance it is


that is doing the enjoying, right.


Oh, I see. So it's not, right, it's not like a drug. It's not a drug; it's a thing that itself experiences whatever good we care about.


That's right. So you might think of it as like an artificially intelligent being that's basically programmed to have happiness most efficiently. Right. So in a kind of Bostrom hedonium case, what you might want to do, basically, is convert the entire solar system into one giant kind of orgasmic blob.


You know, that doesn't seem very much in accord with normal common-sense values, and yet it's a pretty straightforward, or not totally straightforward, but it's one way of thinking about it, if you accept certain premises about computation and maximizing pleasure. Right. It's one thing you might think: well, from a certain kind of utilitarian perspective, the best possible thing to do would be to commit suicide of the entire solar system, to create the, you know, giant, solar-system-sized orgasmatron.




An orgasmo blob. Orgasmo blob, yeah, right. So I think that's an interesting kind of case for thinking about the boundaries of common sense. Right. So you might say, well, look, you know, I'm just going to take as a common-sense supposition, a starting point, that that's not what we want; that's not the moral ideal. And then, based on that, I'm going to make my consequentialism less simple, or less focused on simple hedonic pleasure, or something like that.


Right. Because I don't want that case to turn out that way.


Yeah, it's funny. In these cases, sometimes what one person intends as a reductio ad absurdum, like, well, X implies Y, Y is clearly absurd, and therefore that shows a problem with X, another person will just say, well, I guess Y, then, because X implies it. There's an expression: one man's modus ponens is another man's modus tollens. Right. But yeah, those are two different ways to react to "X implies Y," right.


So I think the thing that happens, once you think that common sense is no longer trustworthy as a basis for philosophical opinion, is that you lose a little bit of your hold on that game. Right. So you say, OK, well, look, this is highly contrary to common sense, contrary to our cultural presuppositions, but now I don't know how much weight to give to the fact that it violates common sense in that way.


Yeah, yeah. So, I mean, I used to be really quite fascinated by paradoxes in moral philosophy, cases in which my moral intuitions strongly suggest X and also strongly suggest Y, and my logical mind can see that X and Y are in conflict with each other. And I'm still sort of interested in those paradoxes, but I'm a little less interested, because, just thinking a priori about my moral intuitions and how they evolved: human moral intuitions were not programmed from the top down to be an internally consistent set of intuitions.


We have different intuitions that evolved in response to different pressures, and there was not a ton of intentional coordination between those different intuitions. And so, just thinking about that system from an outside view, you wouldn't expect it to produce consistent judgments. So I guess I've become a little less fascinated and intrigued by cases in which I see these conflicts between my intuitions, because I sort of expect them to occur.


And I'm also a little more pessimistic about resolving those inconsistencies. Sort of the best I think I can hope to do is reach some kind of reflective equilibrium, where I try to make whatever changes I need to my moral positions to produce rough consistency overall, while doing the least violence to what seems to me to be common sense. I allow that some violence to common sense will have to be done; I just want to minimize it, essentially.




So I'm not sure about the reflective equilibrium thing. But up until that point, the position you were expressing is very close to the kind of position that motivates me in thinking about crazyism. Right. So, tracking back a little bit to the metaphysics of mind: in thinking about minds, and in thinking about morals, our intuitions, our common sense, evolved and were culturally selected in a range of environments, for a range of purposes.


Stepping back, you might think, well, it's probably satisficing in whatever environment it emerged in, where satisficing means finding the solution that's good enough to work but doesn't have to be the best.




And if we look at how intuition has fared in fields where we've had a chance to test it against rigorous empirical evidence, it turns out that intuition, say, physical intuition, is great for picking berries and putting them in baskets and throwing stones and that sort of stuff. But when it comes to the highly energetic and the tiny and the huge and the fast, it's a mess. Yeah, right.


So, likewise, I think when we start stepping outside of the kinds of cases that we're really familiar with, and thinking about unfamiliar types of cases, like artificial intelligence cases, or alien mind cases, or the possibility of beings with minds very different from ours that we could design computationally, then the kinds of culturally given and evolutionarily selected processes that gave us our intuitions might not be expected to have anything very clear or high-quality to say about that stuff.


Yeah, yeah. I was thinking about this with respect to mathematical, and sometimes logical, paradoxes. In my experience, something like 95 percent of all the mathematical paradoxes out there involve either infinities or self-reference. And something like infinity is just not a thing that human brains would have had to deal with as they were evolving. And, you know, this is setting aside the question of whether infinity is even a coherent concept in itself, because if it's not, then that could explain why the paradoxes arise.


But regardless, it's also true that our brains did not evolve to be able to think well about infinity. And so, of course, those things are going to seem counterintuitive to us.


Right. And we evolved in an environment in which the only beings capable of linguistic thought of the kind of quality that we're used to as human beings were other human beings, with forms similar to ours and with certain kinds of maximum capacities. Right. We did not evolve in a context where there were highly intelligent group intelligences or artificial intelligences; we did not evolve in a context in which we might interact with a being who is capable of vastly more pleasure than we are, or who is hugely more intelligent than we are.


So we might not expect our moral intuitions, and our intuitions about the metaphysics of mind, to transfer very well to those unfamiliar types of cases.


Yeah. So this is kind of an a priori argument for crazyism: just knowing about our brains and how they evolved, we should expect there to be domains in which there isn't any way to avoid conflicting with our common-sense intuitions. And then I think you have other pieces of evidence pointing towards crazyism in some fields, right? Like the fact that areas of physics and cosmology have continually generated crazy answers that have turned out to be correct, where the dubiety has gone down over time.


And so there's precedent for crazy solutions turning out to be correct.


Right, right. Yeah. So I think what we've often seen in the history of science, especially when we're talking about the science of the very large and very small and very energetic, is things going from crazy to bizarre. Right. So basically, all the common-sense options got left behind centuries ago, and there are only bizarre options left. Right, right. So that's one good reason, an empirical reason, just looking at the history of science, to think that crazyism is likely true of the very large and the very small and the very energetic.


In the metaphysics of mind, I think the argument is similar, although a little different, because there we haven't gotten the kinds of consensus answers over time that we got in physics. Right. We gave up geocentrism; we basically agree about relativity theory.


Maybe how to reconcile it with quantum mechanics is still an issue, but we made progress on those things. It's not as clear that we've made that kind of progress in the metaphysics of mind. But there is a similar type of empirical argument, which is this: in the history of philosophy of mind, every single well-developed view of the metaphysics of mind has been bizarre and dubious. Right. So every single well-developed option that's been on the table is crazy, right?


So from an economic, kind of market-based point of view, you'd think that if it were possible to create a metaphysics of mind that accorded with common sense, someone would have done it.


And surely the rest of us would go, oh, thank God.


Finally! It may not be as fun as Leibniz or Nietzsche or whatever, right, but you'd think that some people would be attracted to it, and its author would be famous, right? Yeah. But in fact, my contention is, and I've argued for this in a paper, and I'm willing to take on challengers, though it's somewhat hard to defend a universal claim, but my contention, my challenge, is this: every single theory that's been put forward in the metaphysics of mind that's well developed enough to commit on specific details, like mental-physical causation and the scope of mentality in the universe,


what sorts of beings have minds and what sorts of things don't, every single such theory is bizarre. Right. And that would include even, say, Cartesian interactionist dualism, or Thomas Reid's so-called common sense philosophy; when you start looking at the details, they're pretty bizarre stuff.


So I do want to get into some of the examples of crazy theories in the metaphysics of mind. But first, I just want to go a little deeper into this inference we were kind of making, where we said: look, a lot of these crazy theories in science have turned out to be correct, and we were drawing an arrow from that to say: therefore, we should put higher probability on crazyism in areas of philosophy, like metaphysics.


And I guess I'm not quite confident that that arrow is justified, because it seems like the goals of science and philosophy are relevantly different. You could see the goal of philosophy as being to make sense of the world. And so if the answer that philosophy gives us is nonsensical to us, then it hasn't really succeeded at that goal. Whereas there's no such constraint on science, right? The universe doesn't owe us a reality that we can understand or that makes sense.


Here's a case where I think the metaphysics of mind and morality might come apart. Again, I'm still inclined toward crazyism about morality, but I think the case is easier for the metaphysics of mind here. I think that there are metaphysical facts about what types of beings have conscious experiences. Right. And just as with physics, those facts might not be accessible to us. The universe does not owe us, as you say, an explanation, or the ability to understand or make sense of which sorts of weird alien beings or group consciousnesses or whatever would be conscious or unconscious.


But there would still be facts about those. So I think there's license for some more crazyism in philosophy, because we don't have the scientific tools, I think, to detect phenomenology in quite the same way that we have scientific tools in cosmology, at least for some of the cosmological questions. But I think there still is this realm of facts, independent of us, that we wouldn't necessarily expect common sense to be well tuned to deal with.


Now, in morality, I think it might be slightly different. Right. And this is, again, why I'm a little hesitant about extending crazyism to morality. I'm inclined to think that I would, at the end of the day. But one reason for hesitation here is, you know, you might think of morality as something constructed by us. And in that sense, we kind of make it so by accepting something, in a way that we cannot make an alien conscious by accepting that it's conscious or unconscious.


Right. Yeah. So that creates at least a possible bridge there, for us to reconcile our morality with our common sense, perhaps.


It's funny, I was going to go the other way and say that I'm more inclined towards crazyism in morality and moral philosophy than in metaphysics, because moral questions are more like questions about our preferences than questions about how the world works, about what is true. And I think there's a stronger case that our preferences didn't evolve to be internally consistent than the case you can make that "how does the world work" questions inherently don't make sense.


Yeah, well, maybe so.


I could kind of see that going either way. On the metaphysics of mind case, I think there's the case from analogy to the sciences, and then there's the empirical, kind of market-based case that a common-sense metaphysics of mind hasn't been developed yet, and you'd think one would have been developed if one were available to be developed. Neither of those is completely decisive, I think. But combine those two considerations with these kinds of, as we were saying, a priori evolutionary considerations,


and I think there's good reason to have fairly high credence in crazyism.


Yeah, OK. Well, we keep alluding to all these crazy-sounding metaphysical theories. Let's finally give an example of one for our listeners. Right.


Well, one that I've been working on quite a bit, it's not the only one, but it definitely has some shock value for some people at least, or maybe most people, but not everyone, is the idea that the United States is literally phenomenally conscious. And what's the case for that?


So most contemporary philosophers of mind are either materialists or pretty close to materialists; David Chalmers, for example, has a kind of dualism that's got a lot of structural similarities to materialism for the issue in question. Mm hmm. Most philosophers of mind think that what's necessary for mentality is something like complex information processing, sophisticated responsiveness to the environment, maybe a kind of evolutionary embeddedness in a historical environment that gives your actions and reactions meaning and function, and stuff like that.


And if you look at the kinds of features that most philosophers of mind describe as characteristic of, and maybe sufficient for, the existence of consciousness in an entity, it looks like the United States, or really any country (I choose the United States because I think it's perhaps the best-case country for this), has those features. So what I want you to do is kind of imagine the United States the way a planet-sized alien might imagine the United States.


Right. So think of all the individual people in the United States as something like cells in your body. Right? They trade information. As an entity, it does things: you know, it invades Iraq; it sends this kind of army-like pseudopod out to invade another country. Right. And in doing so, it's responsive to sensory input. It doesn't hit the mountain; it goes around the mountain.


Right. It hunts down Saddam Hussein or whoever. Right. The United States as a collective entity imports goods, exports goods, develops its environment, monitors space for asteroids, speaks collectively as a group. The citizens of the United States trade huge amounts of information with each other. The United States represents itself in certain ways; it self-represents, right. It monitors its own states: it monitors how many people it has, monitors its unemployment rate, all that kind of stuff.


So I'm not saying the United States is in fact literally phenomenally conscious, although I think it's possible that it is.


The first point that I want to make here is that if you look at what most philosophers of mind say about what makes something a being with mentality and consciousness, and then you just apply those criteria straightforwardly to the case of the United States, it looks like the United States meets those criteria.


Yeah, and I imagine that you could make this thought experiment even more compelling, to people who don't yet find it compelling, by asking them to imagine a country, maybe the United States, that literally copies the processes a human brain goes through over the course of, say, an hour, but with humans playing the role of neurons and sending signals to each other the way neurons send signals. So it's the same processes happening, the same information being transferred in the same patterns.


But, you know, carried out by humans in physical space — in sort of larger geographic space — instead of by neurons. It's the same pattern, right?


Yeah. So Ned Block has an example something like this. Right. So you could imagine that scenario, and then I think people have different intuitions about it. I mean, the brain has like 80 billion neurons. Yeah. So you'd have to take more than any one nation. Right.


But it's not like logically impossible to imagine orders of magnitude more.


Right. And, you know, it would be a lot slower than the brain, probably, realistically. Now, what kinds of intuitions do we have about what would happen in that kind of case? Right away, when Block sets up this kind of case — he doesn't do it with neurons, exactly; he does it with functional states, but I think it's a similar idea — he kind of invites the reader to think, well, it's absurd to think that that entity, constituted of people trading information with each other, would have a higher-level conscious experience in addition to the conscious experience of all the individuals constituting it.


So if he's right — well, I think he's right that that's somewhat contrary to common sense. I think it's even more contrary to common sense, an even sharper violation of common sense, to say that the actual United States, as it exists right now without further messing around, has a stream of experience of its own, in addition to the experiences of its citizens. Right. Right.


Yeah. And there was one — I forget who said this — but there was one attempt to approach this question from a different angle that said: OK, imagine that we replace the neurons in your brain gradually, piece by piece, with these little robots that are programmed to do the same things that neurons do, to take in the same inputs and produce outputs according to the same rules. And so, gradually, your neurons get replaced by these robots.


Fine. So most people, I think, would still say, OK, I'd still be conscious, even though I have robots instead of neurons doing the processing. And then this person said, well, the robots themselves could not possibly be conscious, because if they were, then the whole system would stop being conscious. And that doesn't seem very intuitive to me. Like, as long as the robots are doing their job properly, why can't they be conscious without my own consciousness ceasing to exist?


But this is just the Chinese-nation — or your giant 80-billion-person nation — thought experiment on a much smaller scale. It's the same thing. Right. Yeah, so some people think, for some reason, that consciousnesses can't nest in each other — that you couldn't have consciousness at two levels of organization at once, at the lower level and at the higher level at the same time.


And that principle has been put forward by a few people. Giulio Tononi has defended it. François Kammerer recently has defended it. And I think part of what they want to do is: they see the possible implication of, say, standard theories of consciousness for group-level consciousness of entities like the United States, and they want to avoid that conclusion. And so they introduced this principle, I think, as a means to avoid that conclusion — if it really is justified in that way.


And I don't think it's totally clear how it's justified. Right. But if it's justified because you want to avoid that conclusion, then what you're doing is engaging in a philosophical method that takes as a fixed point that groups like the United States couldn't be conscious. And then I guess one of the questions to ask about that is: how do you know? It's contrary to common sense, right. But if what we've been saying earlier is correct, then common sense might not be a very good guide to these kinds of issues.


So why should we take that particular violation of common sense as an evidential fixed point?


Right, right. I mean, at the least, it seems pretty likely that we have to choose between counterintuitive conclusions — whether that's that consciousnesses can nest, or that a country couldn't be conscious. I mean, it's possible there's a logical loophole that I'm missing or something; in all these cases, it's possible. But this is, I think, a good example of where crazyism seems pretty well supported.




And, you know, if you look at people who have what I call anti-nesting principles — views on which consciousnesses can't nest — when you push on those principles, they tend to have their own counterintuitive consequences. Right. So, again, as Block has suggested, for example — this is really far-fetched, right, but it's a clean, simple example — if it were possible for there to be very tiny beings who acted out the role of one neuron —


Right — and you inhaled one and it became part of your brain, you would lose your consciousness as a result. And that seems unintuitive. I mean, maybe it's true, right? I'm not sure. But not all the intuitions are on the same side in this issue. Or another kind of intuitive group-consciousness case, I think, is this: you can imagine a science fiction case where we were visited by beings who look like woolly mammoths —


Right — and who behave in intelligent, linguistic ways. Maybe they're a little slower-paced than we are. Maybe they take ten times as long to say anything as it takes us. Right. But that doesn't seem like that big a deal. Right. And it turns out, perhaps, in the scenario, that their mentality is instantiated by a hundred million insects that they contain in their heads and their humps. Right. Each insect has a tiny little set of sensory organs and its own insect-like intelligence.


Right. In that case, it might be intuitive to think, well, the insects have insect-level consciousness, but also these beings do. Maybe you can imagine a science fiction story in which we've already established social relationships with them — maybe there's even been cross-species marriage. Right. Under certain circumstances, it would seem highly chauvinistic to say, no, those beings can't be conscious, because their mentality is instantiated by the interaction of insects.


That's just prejudice against them right there. Right. I mean, well, there is actually another way around — a way out of this rock-and-a-hard-place dilemma — that I didn't mention, which is you could just say, no, consciousness can only be instantiated in a brain; it can only have biological substrates and not other substrates. Which feels unintuitive to me, but not to some people, I think. Right.


So, yeah.


So, you know, people do say stuff like that. I guess I was kind of assuming the falsity of that in what I was saying.


But that does get to the point that what feels like a violation of common sense varies between people.


Yeah. You mentioned at the beginning that I've been interested in science fiction, and actually, I think one of the wonderful lessons of science fiction is that it's intuitive that consciousness and intelligence could be instantiated in a wide variety of beings. Once you think about the way science fiction authors have set up mentality, fairly plausibly, in a wide range of possible cases, then, you know, readers are drawn in to think of these beings as having mentality.


If they behave in sufficiently sophisticated ways, and they exist in societies, and they have kind of recognizable interactions and morality and cares and things like that. So I think anyone who would insist upon neurons specifically, or something like that, would be violating that aspect of common sense that's so nicely drawn out in the science fiction literature.


We have a few minutes left, and I think my top thread to close on would be what the unreliability of common sense as a guide to these questions means — what should we do about that fact? Does it mean common sense doesn't apply to philosophical reasoning? That seems too harsh. There are many, many cases in which I think we need to be able to say, you know, that seems absurd; clearly we must have gone wrong somewhere in our reasoning, right?


Because philosophy is just never going to be a purely logical, deductive enterprise where you can prove something the way you would in math. Right. So aren't we just going to have to use common sense a lot?


Yes, I do think we have to use common sense. I think we're stuck with basically three unreliable tools. One is common sense, or culturally given assumptions. Another is empirical methods. And the other is appeal to kind of abstract virtues like simplicity. And what I think is the case about the metaphysics of mind in particular is that none of these tools is going to give you very solid answers. So we kind of have to rely on all of them. Now, there are some theories that have basically no merit —


No scientific merit, no merit in terms of simplicity or elegance, no merit in terms of common sense. Right. And we can discard those. So, for example, here's a theory: on your 18th birthday, you get an immaterial soul for exactly 17 seconds. There's no scientific evidence for this, it violates common sense, and it's completely inelegant. Right. So it's not like all theories are going to be equal.


Right. I think we're in a tricky epistemic situation where we have various means of trying to figure things out, but none of these means is very powerful. But that doesn't mean that we're just left completely shrugging our shoulders. Some theories have more plausibility than others. But I think we're left in a position of doubt, right, where we can't resolve confidently upon any one theory — or, I think, even any broad class of theories, like materialism.


Well, that sounds like a very commonsense thing to say. I can get behind that. Yeah, crazyism is not itself crazy. Yeah, perhaps it's bizarre — I'm not even sure about that. Let me just conclude with one thought about this way of doing metaphysics, as opposed to some other ways of doing metaphysics. I think most metaphysicians are interested in kind of resolving upon what they see as the one metaphysical truth: materialism is right.


And here's my version of it, and here's why it's right. Or: here is Kantian transcendental idealism, and here's why this is the correct view. The way that I am approaching these issues, I think of as disjunctive — in the sense of a disjunction: this or that or that or that. I'm more interested in opening possibilities that you might not have thought of or taken seriously before — like that there could be a stream of consciousness in the United States — than I am in closing possibilities and resolving upon a single answer.


Right. I think once we no longer think of common sense as a decisive criterion — even though it has some value as a criterion — and we start thinking about all the different possibilities that are out there, a variety of bizarre and beautiful possibilities open up. And I find that kind of exciting. We lose our moorings a little bit, and the world seems to me kind of more wonderful and amazing and incomprehensible and beautiful once you see the weakness of the presuppositions that you might have had entering into doing philosophy. Well said.


All right, well, let's wrap up this section of the podcast and move on to the rationally speaking pick. Welcome back. Every episode, we invite our guest to introduce the rationally speaking pick of the episode. That's a book or article or website or something that has influenced his or her thinking in an interesting way. So, Eric, what's your pick for today's episode?


My pick is Borges's Labyrinths.


Oh, excellent. Tell us a little bit about that book. So, that was a favorite book of mine as a college student, and it still is a favorite. It's a collection of his most philosophically interesting short stories, gathered and translated into English. And it's full of ideas about infinitude and idealism — in the metaphysical sense of idealism, where mentality is fundamental to the universe — full of paradox and weirdness. And I guess, also, for me, kind of a little bit of schooling in how you can write philosophy science-fictionally or speculatively, or how you can do speculative fiction philosophically.


Hmm. I remember — I think it was in Labyrinths — I was reading a poetic passage about a different civilization that just had a totally different ontology. Like, they divided up the world in a totally different way that seemed very arbitrary, where there was a whole category of, you know, things that had five legs, or — yes, I don't know what's weirder than that. It's hard to be weird on the spot. Yes.


Borges's taxonomy. I'm not sure if that's in Labyrinths or not, but yes, he's got this wonderful taxonomy of animals, and it's like 14 different categories that make no sense in relationship to each other. One of them is things that, viewed from a distance, look like flies. One is animals that belong to the king. Yeah. It's so hard to remember, because the categories are so weird — so weird, seemingly arbitrary.


And it was, first of all, sort of whimsical and poetic and absurdist in a pleasing way. But it also made you reflect on the fact that the way we categorize the world — the categories of animals we come up with, or, you know, fruits or vegetables or people, etc. — could be seen as equally arbitrary by a totally different creature with different needs and different ways of interacting with the world.


There's a reason that we developed the taxonomy that we use, but it was just a very nice, poetic way to make that point.


Yes — Borges. I was talking at the end of the episode about how, you know, I think metaphysics can be kind of bizarre and beautiful once you let go of, you know, the insistence upon common sense.


Mm hmm. And, boy, this is an example of someone whose thinking is bizarre and beautiful. It's really, to me, an amazing book. It just bends your mind and makes you think about things in new ways. Yes.


Yeah. I love that book. Great. Well, we'll link to Borges as well as to your blog. And I guess "The Crazyist Metaphysics of Mind" would be a good thing to link to as well. Yeah, the paper that you wrote on crazyism and the metaphysics of mind. Right.


And maybe the USA consciousness paper too, since I talked about that. Great. Yeah, for sure. Thanks so much for coming back on the show.


It's always a pleasure having you. Yeah, thanks for having me again. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense. The Rationally Speaking podcast is presented by New York City Skeptics. For program notes, links, and to get involved in an online conversation about this and other episodes, please visit rationallyspeakingpodcast.org. This podcast is produced by Benny Pollak and recorded in the heart of Greenwich Village, New York.


Our theme, Truth by Todd Rundgren, is used by permission. Thank you for listening.