
Today's episode of Rationally Speaking is sponsored by GiveWell, a non-profit dedicated to finding outstanding charities and publishing their full analysis to help donors decide where to give. They do rigorous research to quantify how much good a given charity does, how many lives it saves or how much it reduces poverty per dollar donated. You can read all about their research, or just check out their short list of top recommended evidence-based charities to maximize the amount of good that your donations can do.


It's free and available to everyone online. Check them out at GiveWell.org. Welcome to Rationally Speaking, the podcast where we explore the borderlands between reason and nonsense. I'm your host, Julia Galef, and I'm here with today's guest, Eric Jonas.


Eric did his PhD at MIT in Brain and Cognitive Sciences, and he's now a postdoc at UC Berkeley's Center for Computational Imaging. Eric made a splash last year in the worlds of neuroscience and computer science by coauthoring a paper provocatively titled Could a Neuroscientist Understand a Microprocessor? So we're going to be talking about this paper today and what it implies about whether the tools used by neuroscientists are, in fact, as informative as we think they are. Eric, welcome to the show.


Thank you, Julia. I was saying on a recent episode of the podcast that I've been compiling this list of papers or studies that have a particularly clever experimental design or approach. And your paper with Konrad Kording definitely belongs on this list. It was very clever. So you guys basically took a number of the common tools that neuroscientists use to study the human brain and applied those tools to a computer chip, essentially studying the chip as if it were a brain, using the tools that neuroscientists use.


Why did you do that? Can you walk us through the rationale behind that study? Great.


So I was in industry for a little while, and one of the things that brought me back to academia and wanting to do science again was that the Obama administration had this BRAIN Initiative, and they said that what we're going to do is put a lot of research money into trying to record every neuron in the brain. And to someone with a neuroscience background, that's very exciting. Right? It's traditionally felt like a lot of the limitations in terms of understanding how these biological systems compute are dependent upon kind of the paucity of data that we have.


But I also have this machine learning background, and I did this machine learning startup. And I now spend most of my research days doing things that look more like kind of traditional machine learning and data analytics. And so I was very curious about, you know, to what degree are these kind of advanced statistical and scientific techniques that we're developing capable of giving us insights on these sorts of systems. And there was a famous paper by a cell biologist, Yuri Lazebnik, from back when I was an undergraduate a long, long time ago, called Can a Biologist Fix a Radio?


And he asked this question: well, biologists are doing all this reverse engineering of these biological systems. But then at the end of the day, they draw a couple of silly block diagrams and say this one influences that one. And were they to try and use a technique like this to understand the radio, they'd be hopeless. And I really liked this analogy because, of course, radios are engineered by people and it's very clear kind of how they work.


And we have all this engineering science behind them. I've been kicking around this idea of trying to do a kind of similar thought experiment for the past 10 or 15 years or so. And when people started getting really excited about kind of the insights this kind of high-throughput neural data was going to give us, it seemed like it was a really good time to actually try and make this work. Part of the impetus, you know... I feel like many of the things that I do kind of start as jokes.


So this was partly a joke. Gary Marcus and Adam Marblestone had organized a Kavli workshop at NYU on kind of ideas about cortical computation. These workshops are kind of these nice things where they assemble a bunch of famous scientists to talk about kind of the state of the art in the field, and then they bring along some young blood. And because I know Gary, he was like, come along to this thing and say something provocative. And I'm like, well, I think the most provocative thing I could say is that, you know, we're all kind of screwed.


This is hopeless. Like, no matter how much data we think we're going to magically acquire, our analytic techniques are still far from where they need to be. And so that was kind of the impetus of the project. A group actually based out of San Francisco, called the Visual 6502 team, had reverse engineered the microprocessor that we used, the MOS 6502, probably for kind of retro-computing history reasons. Right. It's kind of this classic chip.


And they wanted a very kind of accurate model of how it worked. And what was the use for that? So that chip was originally used in the Atari 2600, the Commodore 64, and an early version of the Apple I. So it had this kind of very exciting history. And, you know, everyone loves retro computing these days. But the nice thing was that the simulator that they built kind of simulated the processor at the right level.


So instead of just basically doing the things that the processor would do, kind of regardless of how it would do it, or kind of emulating the instructions, the way that many emulators do. If you play Super Mario World on your Wii, or you download some emulator on your computer, those things are not trying to actually act exactly the way that the original, you know, Nintendo acted. Right. They're just trying to make sure they get the same result.


The thing on the screen looks the same way, etc. The simulator that the 6502 team put together actually tries to be accurate down to kind of the transistor level. So every voltage level, every state bit is identical to what you would have gotten in this original piece of hardware that, you know, you would have purchased before I was born. And that's important because you can intervene in specific parts of the chip, so to speak, in a way that you couldn't using an actual chip.


Well, exactly. You're right on both sides, right. So on one side, it's much better to have kind of this perfect simulator than the actual physical hardware, because with the physical hardware, you know, we would have effectively destroyed a lot of them in the process. I think that's right. And you have far less access. And then it's better to have kind of the physically accurate simulator as opposed to this kind of abstract version, because that's really the level at which I think we're trying to understand how these sorts of systems compute.


Right. We're very interested in this question of how does computation go from kind of things happening at the single-wire, single-transistor, single-computational-unit level up through these more complicated dynamical systems and these complicated behaviors.


So just zooming out a bit more to the rationale behind the experiment: was your reasoning basically like, we know how this chip works, we built it, we understand how it works down to the transistor level, so we can sort of test the tools of neuroscience on it? The same way that, let's say we had a ruler, and we weren't sure if the inch markings on the ruler were accurate or not.


But we had a piece of wood and we knew exactly how long the wood was. We could measure the wood with the ruler and then thereby see, is this actually an accurate ruler? And that would give us some confidence before we use that ruler to measure other things that we don't know the length of.


I think that's fair. I mean, the thing for me, the appeal here is that, you know, if you ask someone who self-identifies as a neuroscientist, what level of understanding are they seeking with the brain? You'll get many different answers. To some neuroscientists, the answer of understanding is about how the chemicals and neurotransmitters interact at kind of the cellular level between synapses. On the other hand, you have people who do fMRI studies who are interested in kind of this whole-brain activity and how that gives rise to computation.


And there's literally hundreds of years of philosophical thought about what it means to understand how kind of thinking systems think. Right. And this is very nascent, right, where there are lots of interesting questions that we're even having to ask today with kind of more and more advanced computing systems. But the nice thing about computer science and about the chip is that we can kind of throw away all those philosophical questions. Right? We can stop asking this question.


What does it really mean to understand how this system works? Because we feel like we have total understanding, right. We understand how it works all the way from like the physics of the silicon level of the transistor up through to like the gameplay, and ultimately kind of the evolutionary purpose of the processor and the video game console. Right. Which was, you know, to get parents to buy it for their kids at Christmas time. But we don't really have that level of understanding for a lot of these biological systems.


Right. In fact, if you go to a computational neuroscience conference, there's a lot of argument about, you know, is Method X actually providing understanding, or is Method Y providing understanding? And there's no kind of ground truth there. So with the processor, we can say, well, look, Algorithm X on the processor gave us this answer. Is that answer correct, based upon what we know?


And then, does it really have the same kind of, I guess, qualia? Does it feel like the kind of understanding that one would have to have to say something interesting about how the processor really works? Not just, does the answer that it gives us correlate with something that we know to be true, but rather, do we feel that it's a combination of necessary and sufficient to describe its functionality to a level that we're happy with?


I see.


So you were interested not just in testing the accuracy of the tools being used by neuroscientists, like, are the inferences we're making from these results, from these experiments, correct? But also to help us interrogate what we mean by understanding a system?


I think so. I mean, for me, the question of whether or not Method X gives the correct, in some sense, answer on System Y is much less interesting than saying, well, is the answer that Method X gives us actually advancing the kind of understanding we want for these sorts of computational systems? And that's really hard to do in biology. Right? It's really hard to do when you have something as complicated as a mouse or even a worm.


Right. To say, you know, oh, well, now I know that this neuron is related to this behavior. And neuroscientists, during our PhDs, basically have an entire class on using weasel words: suggests, and indicates, and is putatively correlated with. And we become professionals at kind of talking this way. I just sprinkle the word "maybe" everywhere. Exactly. That's how you get your master's degree. No, this becomes a real challenge. And I think, again, with something as concrete as this processor, this kind of 40-year-old processor that, you know, every undergraduate in computer science could basically build, kind of starting with sand.


We can obviate a lot of that and say, well, yes, we know that this signal's correlated with that signal, but we also know that that doesn't really tell us anything about what's actually going on here. And so maybe, even if this method is in some sense accurate, the view that it's giving us is kind of functionally useless.


And let's talk a little more about the relationship between a chip and a brain. So clearly, a microprocessor and the brain are much more similar than many other things, but they're not identical. And so it's not sort of self-evident that we should expect the methods that we're using to try to investigate the brain to give us meaningful and accurate answers about a chip. Like, you know, there's an analogy between neurons and transistors. And the wiring of the chip is like the connectome of the brain.


But then there are some disanalogies too. Right. Like, you know, we have this sort of sharp distinction between software and hardware on a microprocessor, and we don't quite have the same thing in the brain. I don't know, maybe that's a controversial statement, you're making a face. But basically, instead of trying to explain the similarities and differences myself, I'll just ask you: how did you think about the question of whether a chip would be similar enough, and similar enough in the relevant ways, to a brain to make this experiment meaningful?


Great. So, I mean, the number one thing is, when I talk to my neuroscientist friends who haven't read the paper, I'm often like, come on, dude, it's my paper, why don't you read my paper?


But then, after we get past that awkward interaction, the first kind of visceral reaction is yes.


But brains obviously aren't chips. Right. And it's true. There are these tremendous differences. Right. And you highlighted some of them. There are kind of two, actually, that I think are most interesting. One is that kind of the structure of connectivity in the brain is just radically different than what we see on the processor. Right, in the brain, neurons have thousands of inputs, and then they have a single output, which then touches many different potential targets.


Whereas on chips, you know, every transistor has kind of three wires. Right. Two inputs and an output, or one input and two outputs, depending on how you structure it. It's a much simpler system with very different dynamics, but also very different kind of structural properties, like the connectome just looks radically different. But on top of that, you know, the brain appears to be very stochastic, right? Like, no single neuron appears to matter.


The patterns of activity kind of always look subtly different, in a way that we call a lot of that noise, even though it probably isn't. Whereas the chip, if you start the chip up in one state and let it run, it will do exactly the same thing over and over again. Deterministic, very deterministic. And so these are very, very different structures. On the other hand, as you said, I would argue that, especially for the purposes of these analyses, they're much more similar than we would think.


In that the chip, much like the brain, shows kind of temporal activity at multiple time scales. Right. One of the things that actually not very many systems in nature show is behavior that's stereotyped at kind of different time scales. So you can imagine that if you look at an animal, right, if you look at one of us, we breathe at, you know, kind of whatever, roughly 60 times per minute. Right. When we walk, we have stereotyped walks, but we also have stereotyped behaviors, like I move my hands in a certain way.


We have all this kind of fun language structure that we use. And then this even extends out to temporal behaviors like: we go to bed at night, we wake up in the morning. There are kind of all these different hierarchical levels of temporal behavior, and we see those also exhibited in the chip. And so we see that, you know, the chip does things where, on a very short timescale, it tries to, like, move the character one pixel, and then on a larger time scale, it tries to change the entire frame periodically.


Right. And then at an even larger time scale, there's this score that's continually increasing or decreasing depending on what happens in the game. Right. And so it's kind of this notion that the temporal structure we expect to see there, we believe, is similar to the kind of structure of activity that we would see in a biological system, right, in some sort of brain-like system. But on top of that, I mean, I think it's important to note that the techniques that we're using from neuroscience, we all call them neuroscience techniques, but we actually also stole them from electrical engineers over the past like 50 to 100 years.


Interesting, right? Everything like these dimensionality reduction methods, the spectral analysis methods that look at kind of the frequency content. These are all things that were really pushed forward by EE departments over the past kind of 30 to 50 years. So they're not really even neuroscience techniques, right? They're just kind of general-purpose techniques that neuroscientists happen to be using to try and reverse engineer these systems. In fact, I think that might be the most powerful argument, for me, for why we have challenges ahead: which is that, in fact, we are using a lot of techniques that are kind of cribbed from the understanding of much simpler systems.
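As a toy illustration of the kind of borrowed frequency-content analysis being described, here is a naive discrete Fourier transform applied to a synthetic oscillation. This is only a sketch: a real analysis would use an FFT library and actual recorded signals, not a hand-rolled DFT on a made-up sine wave.

```python
import math

def power_spectrum(signal):
    """Naive DFT power spectrum: the kind of frequency-content analysis
    borrowed from electrical engineering that the discussion refers to."""
    n = len(signal)
    spectrum = []
    for k in range(n // 2):
        re = sum(x * math.cos(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        im = -sum(x * math.sin(2 * math.pi * k * t / n) for t, x in enumerate(signal))
        spectrum.append((re * re + im * im) / n)
    return spectrum

# A toy "local field potential": a pure 5-cycle oscillation over 64 samples.
sig = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
spec = power_spectrum(sig)
peak = max(range(len(spec)), key=lambda k: spec[k])  # the dominant frequency bin
```

The analysis dutifully reports a peak at 5 cycles, which is exactly the kind of quantitatively correct but potentially shallow "oscillation" finding the conversation goes on to question.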


Right. When physicists and electrical engineers were developing these mathematical techniques, they were trying to use them to describe radios or small electronic circuits, or kind of these sorts of much simpler systems. And we use them in neuroscience. But then they spit back some answers and we're like, oh, yeah, that oscillation is important in the brain. But if these techniques were originally developed for use in electrical engineering, then shouldn't your a priori assumption have been that they will give us, like, meaningful and useful results when applied to a chip?


Why then is that useful to test? Well, right.


So, the interesting part about the chip is, remember that the chip is actually doing computation, right? So while these techniques were developed to understand kind of small, simple circuits, OK, right, and simple systems like radio transmission, they weren't really designed to understand how computation works. OK. And in fact, that's this whole new thing. We really don't know how to do that, even at the chip level. Right. There was a follow-on paper to ours where some electrical engineers tried to reverse engineer the microprocessor using some new ideas from, like, formal verification.


Basically, they were using new methods to try and reverse engineer a chip, because they're like, what an interesting question, I wonder if kind of a new technique could even just work on the chip. And I think the question for us as neuroscientists is, when we're applying these techniques and we're claiming that they give us answers, are we really just doing a kind of quantitative phenomenology? What do you mean by that? Have we really just gotten good at kind of describing a system that's very complicated, at quantifying its behavior, in a way that we can't really trace back to either causative mechanisms or understanding? Ernest Rutherford had this quote where he said, well, all science is either physics or stamp collecting.


Right. And to what degree are we still in the stamp collecting phase of neuroscience? And I think that this, to me, suggests that we're much more there than I would have thought we were originally.


OK, so let's take an example of a neuroscience technique.


Well, the thing that I would have called a neuroscience technique before you corrected me. How did you apply it to the microprocessor, and what did your results look like? One of the most classic things people have been doing in neuroscience for a long time has been lesion studies, right, where you have some biological system and you go in and you break some part of it and kind of see what happens. Right. And the most classic example of this, I think, is this patient, H.M. H.M. was a patient who suffered from severe epilepsy in the 50s.


And one of the ways that you treated epilepsy back then, and still even to this day, is you localize where the seizure starts and you remove that part of the brain. And so for him, what they did is they went and they removed kind of this medial temporal lobe structure, because they believed that was where his seizures were originating from. And when he came out of the surgery, he had lost the ability to form new memories.


For scientists, this was this incredibly exciting kind of discovery: both that memory could be split up into a system that was responsible for the acquisition of new memories and a system that was responsible for the retrieval of existing or stored memories, but also that this particular brain structure was important for that. Right. And these sorts of lesion studies have informed a tremendous amount of neuroscience over the years. Right. In fact, we now know that the part of the brain that was primarily removed, the hippocampus, is vital for the formation of new memories and their interaction with kind of things like dreaming and these sorts of dreamlike states.


It's actually, I think, a crucial part of the brain. But the problem there is that we do a lot of lesion studies in neuroscience and then say, oh, I removed brain region X and saw behavioral deficit Y, therefore X was responsible. Or, you know, since we're of course always very careful with our language professionally, we say, well, X is obviously involved somehow in something that gives rise to Y.


And of course, by the time it hits the, you know, the cover of Wired or whatever, it's like "X is the Y region." Exactly. Exactly. Yeah.


And of course, this is extra hard in neuroscience because, as you said, brains are not like computers. In fact, there are lots of parts of your brain that, if you remove them and give yourself some time, will completely restore function. Right. There are lots of cases of people with what would seem to be otherwise extremely traumatic brain damage recovering near-full functionality, because your brain is very plastic.


It has the ability to kind of adapt and change, both at a hardware as well as a software level, to continue that analogy. And of course, your computer is not like that. But so we thought, well, these lesion studies... We now have the ability to do what are, effectively, very, very precise lesion studies. We developed technology in the early 2000s called optogenetics. And what optogenetics lets us do is take certain types of neurons in the brain and make them sensitive to laser light.


And what that lets you do is you can say, hey, look, here's an animal running around, and I can, just for a brief period of time, turn off these neurons and see what happens. And this gets at some of the more traditional challenges with lesion study effects, where if I go in and I remove some part of the brain, maybe the trauma itself resulted in the animal not being good at this task. So optogenetics is this very powerful technique that people are using to kind of start teasing apart how these systems work.


But we still end up reading a lot of things in the press, or even in the literature, that boil down to: brain region X is responsible for behavior Y. And even if that's true, even if in some sense, you know, this particular cortical column in the medial prefrontal cortex is the thing that results in you clicking on Facebook ads, or whatever it is the research is trying to find, even if you could show that, I feel like that doesn't really give you the kind of understanding that we're hoping for.


And so part of what we did with the processor was we said, OK, well, we have three games that we focus on the processor playing: Space Invaders, Donkey Kong and Pitfall. And we can go through with the processor and we can run exactly these types of experiments for every single transistor. That's like thirty-five hundred transistors. I can just break each transistor individually and then see if the game can be played. Right. And so I have three games that I can run.


Thirty-five hundred transistors, three games: we can run ten thousand experiments, like, in an afternoon. It's every graduate student's dream. But we can then look on the other side and say, well, which transistors were kind of necessary for the playing of Donkey Kong? And when we do this, we go through and we find that about half the transistors are necessary for any game at all. Right, if you break one of those, then just no games play. And half the transistors, if you get rid of them, it doesn't appear to have any impact on the game at all.
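The sweep Eric describes can be sketched in a few lines. The simulator interface here (`simulator_factory`, `disable_transistor`, `run_game`) is hypothetical, standing in for the Visual 6502 simulation and the paper's actual pipeline:

```python
GAMES = ["Space Invaders", "Donkey Kong", "Pitfall"]
N_TRANSISTORS = 3510  # approximate transistor count of the MOS 6502

def lesion_sweep(simulator_factory):
    """For every transistor, disable it and record which games still run."""
    results = {}
    for t in range(N_TRANSISTORS):
        sim = simulator_factory()      # fresh chip for each experiment
        sim.disable_transistor(t)      # the "lesion"
        results[t] = {g: sim.run_game(g) for g in GAMES}  # True if playable
    return results

def classify(results):
    """Group transistors the way a neuroscientist might label brain regions."""
    essential, irrelevant, game_specific = [], [], []
    for t, outcome in results.items():
        playable = [g for g, ok in outcome.items() if ok]
        if not playable:
            essential.append(t)        # no game runs without this transistor
        elif len(playable) == len(GAMES):
            irrelevant.append(t)       # breaking it has no visible effect
        else:
            game_specific.append(t)    # the would-be "Donkey Kong transistors"
    return essential, irrelevant, game_specific
```

The point of the classification step is the trap it sets: `game_specific` is exactly the set a lesion study would celebrate, and, as discussed below, the label it invites is wrong.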


And there's just this very small set, less than three percent or so, that are kind of video-game-specific. So there's this group of transistors that, if you break them, you only lose the ability to play Donkey Kong. And if you were a neuroscientist, you'd say, yes, these are the Donkey Kong transistors. This is the one that results in Mario having this aggression-type impulse to fight with this ape.


And of course, that's not true at all. And the reason, the electrical engineering reason, is that there are parts of the chip that are kind of potentially only selectively engaged by this particular video game. So maybe a particular video game has some counter that only ever counts up to, let's say, 63. And so it only ever uses those bottom six bits. And so if you break that seventh bit, you don't really notice. It doesn't really matter.


That video game doesn't care, but the other one, maybe it has some counter that counts higher at some point in time. But that feels much more like the level of understanding we're going after. And so simply being able to point to kind of a chunk of stuff, be it circuit, be it transistors or be it neurons, and say, when this is damaged, you lose some sort of functionality: that's not really the level of understanding we're going for.
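The counter example can be made concrete with a toy model. The stuck-at-zero bit below is an illustrative stand-in for a lesioned transistor in a counter circuit, not the 6502's actual hardware:

```python
def counter_trace(limit, broken_bit=None, steps=200):
    """Count modulo `limit`, with one bit optionally stuck at zero to model
    a lesioned transistor in the counter circuit."""
    value, trace = 0, []
    for _ in range(steps):
        value = (value + 1) % limit
        observed = value
        if broken_bit is not None:
            observed &= ~(1 << broken_bit)  # that bit always reads as 0
        trace.append(observed)
    return trace

# A game whose counter only ever reaches 63 uses just the bottom six bits,
# so a fault in the seventh bit (index 6) is invisible to it:
assert counter_trace(64) == counter_trace(64, broken_bit=6)

# A game whose counter goes higher does notice the same fault:
assert counter_trace(201) != counter_trace(201, broken_bit=6)
```

Same lesion, different "behavioral deficit," purely because of which inputs each game happens to exercise: the broken transistor is not "for" either game.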


Right. So when I think about what we would need to understand to really understand the connection between the activity on the chip and the activity that we see on the screen when we're playing the game, my answer would be, well, we need the program, like the code that the chip is running. And as I was sort of alluding to earlier, it doesn't seem like there's an analogue for the brain. So I'm wondering, what kinds of tests, what do we need to do in order to understand how the brain works?


Great. So I'll split those into kind of two different questions, if that's OK. So, on the hardware versus software side.


Right. We certainly recognize that for a chip that's as simple as the 6502, there's not a lot of what in neuroscience we call functional specialization. Right. You know, in neuroscience we have pretty strong, we call them priors, but really they're the result of kind of 100 years of data. So we have a strong posterior belief that, like, the back of your brain is responsible for vision, and the optic nerves project there via the LGN.


And we know that V1 does this early-stage visual stuff. And so we believe that kind of that's the part of the brain that really handles that task. And similarly, there's auditory cortex, and there are different nuclei in your thalamus that relay different signals. We have all these different mappings: here's a task, and it's done by this particular unit. Right. Kind of like in your car, the radiator does one thing. And people generally, when they think of computers, don't think of them like that.


Right. You think that, you know, we have this kind of Turing machine mental model, where it's this general-purpose computing machine. But that's not entirely true, even in the processor that we were looking at. It has, you know, an arithmetic logic unit that has certain circuits that just add numbers together, or certain circuits that just detect whether or not you're in a loop, or certain circuits which just get data from external memory. So there is that level of functional localization.


That doesn't mean it's like this homogeneous, undifferentiated blob or whatever. Right. But moving forward, more and more contemporary processors have even greater degrees of functional specialization. So if you look at something like the processor in your phone, it actually has a dedicated part of it, a small area of silicon inside the processor itself, that just does video decoding, and it has a different one that just handles kind of processing the stuff that's coming from the camera.


Right. And doing the early-stage image processing. And at that point, you know, this kind of notion of hardware and software, this kind of clean dichotomy that we're used to, starts breaking down a little bit. Right. But still, I would argue that, you know, all of these systems that we build do have a much stronger distinction between kind of hardware and software than we're used to thinking about in the brain.


But the fact is, we don't necessarily know what the hardware/software distinction is in areas like the cortex. Cortex is really amazing because it's organized into these cortical columns. So one way of thinking of the surface of your brain: when you see a picture or a diagram of a human brain, it's all kind of folded up. Right. It has all these folds. And if you unfold it, you get kind of this large area.


That's this laminar structure. Right. So you can imagine kind of unfolding it onto a baking sheet. And when you look at that baking sheet, it's actually made like one of those seven-layer bars, although it has six layers. There are some areas... You're ruining seven-layer bars for me, now I'm going to dream about eating brain. This isn't the zombie podcast; it took a wrong turn someplace.


No. So across all of cortex, you have this kind of six-layer cortical structure, where there are different layers of cells that have different patterns of interconnectivity. Or, sorry: across all of cortex, within a layer, you tend to see similar patterns of connectivity. But then if you look down on the brain, those layers are actually organized into these little cortical columns. It kind of looks like a honeycomb-type structure.


And in there, there are these dense connections between all of the cells inside a column, and you see this similarity of those columns across different types of sensory cortex, across different types of motor cortex, or kind of all of this prefrontal cortex that does the "thinking" part of the thinking. I guess my air quotes don't translate terribly well. I think I heard them, yes.


But the interesting part there is that it may be the case that the physical structure of that hardware is very, very similar, and that it's the patterns of activity across those areas that give rise to different functionality. And this is especially striking when we start thinking about these higher-order cognitive functions, executive control, emotion, these sorts of things. We know that damage to one part of the brain can be very rapidly compensated for by other parts of the brain, again suggesting that this is not purely a hardware-type phenomenon.


Right. We can watch functionality be regained faster than one might naively think it would be if it were the result of the hardware reassembling itself. So it's true that there is this distinction, but I think it's dangerous to push the analogy too far. Especially for cortical computation, we still really don't know very much about the software, in some sense, the patterns of activity that we see running there. OK, maybe a somewhat different way to ask


my question would be: let's say we couldn't get access to the program that was running Donkey Kong on that chip. Would there be any way, with some technology, to reverse engineer it just by messing with the chip and trying different inputs and getting different outputs?


I think so. I mean, imagine — and this is very much a technique that neuroscientists use — imagine if I could just cut some part of the chip out, like that adder unit I was talking about earlier, the ALU. If I could remove that ALU and control its inputs and its outputs, I could probably go through and eventually say, look, when I put in this binary pattern, I get out that binary pattern. Maybe I have to try an exponentially large number of these binary patterns, but especially for the case of an adder, it's not actually that large a space.


I can probably figure out that this little block here is doing addition, or what I, as somebody who took way too many math classes, recognize as addition. And that is a thing that neuroscientists do. So one of the preps that a lot of neuroscientists use is what's called slice, where you basically take some slice of the brain out and you put it in artificial cerebrospinal fluid, and then you connect wires to different parts and you put in patterns, or you do things to it, and you watch the output.
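The probing strategy Eric describes — isolate a block, drive every input pattern, and read off the outputs — can be sketched in a few lines. This is a toy illustration, not the actual chip's ALU; `mystery_block` is a stand-in I've invented for the isolated unit.

```python
from itertools import product

# Hypothetical "mystery block": in a real experiment this would be the
# physically isolated unit we're probing; here a 4-bit adder stands in.
def mystery_block(a: int, b: int) -> int:
    return (a + b) & 0xF  # 4-bit output wraps around

def looks_like_addition(block, bits=4):
    """Enumerate every input pattern and compare the outputs to a + b."""
    n = 1 << bits
    for a, b in product(range(n), repeat=2):
        if block(a, b) != (a + b) % n:
            return False
    return True

print(looks_like_addition(mystery_block))  # True: behaves as a mod-16 adder
```

The exhaustive sweep is only feasible because the input space is tiny (256 patterns); that is exactly Eric's caveat about exponential blow-up for larger blocks.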


And this is, as you can imagine, a very challenging experimental technique; the graduate students who learn how to do it are the real heroes. With slice, of course, you actually do learn a lot. You can learn a tremendous number of things about how different neurons interact and these sorts of things. But the challenge on the neuroscience side is that the systems are so densely interconnected that it's almost always impossible to accurately physically isolate the relevant part, because it's probably getting inputs from far-away parts of the brain.


And so the challenge we have on the neuroscience side is that it's very, very difficult to cut out a chunk and properly control its inputs and outputs. That technology is dramatically beyond anything we can imagine having today. On the chip side, though, I do think that there's a role for reductionism in science, and these sorts of bits of functional modularity are important. Figuring out, well, I think this box does a thing, or this chunk of stuff does a thing —


I'm going to control its inputs and control its outputs and try to understand what's going on — that is a path forward. So I guess I'm just trying to understand what stage we're at in neuroscience. Do we know what kinds of studies or interventions or tests we would run in order to get the kind of understanding we want, if we only had the right technology? Or are we more in a stage equivalent to, you know, the ancients, before anyone had even come up with the idea of a randomized experiment, where the whole concept of how we would figure that out —


We just didn't even have the conceptual understanding of what questions to ask or what tests to run. Great.


So, I mean, I'm going to try and choose my words carefully here. That's a bad sign. Right, exactly — and not sufficiently alienate all of my colleagues such that I'm not hirable. But no, I think it's true that everyone in neuroscience has some questions that they would like to ask and feels technique-limited. I think it's also the case that there's a real gulf between the parts of the system some people are trying to ask questions about and what would really constitute understanding for the rest of the community.


So the interesting thing about neuroscience is that the system is so complicated. Just as in electrical engineering and computer science you have device physicists who build new transistors — people at Intel trying to make the transistors smaller — computer architects who wire them up into actual chips, then operating systems programmers, and people working at every layer of the abstraction, in neuroscience, similarly, we have cellular biophysicists who study receptor dynamics and how individual neurons work.


And we have systems neuroscientists who study small circuits, and so on, all the way up to cognitive neuroscientists who try to build computational cognitive models of how these systems work. But the interesting thing about neuroscience is, if you ask most neuroscientists why they picked the level that they want to understand, they often think that theirs is the interesting level and the people at the lower level are studying useless details.


The people above them are kind of full of it. And the question is, to what degree are the questions that people are trying to ask at a particular level dependent upon the answers from a lower level or a higher level? And in neuroscience, we often don't have good computational models. So, I come from — my advisor was Josh Tenenbaum, of the computational cognitive science lab at MIT, and there was this real tradition of trying to quantify human behavior in a way that lets us test specific models of the computation the human was performing.


So, for example, people tend to be very good at seeing three examples of an object and figuring out whether a fourth one is a member of that class or not. We have all these very quick abilities to do this sort of induction. And often it's very Bayesian: often it's clear that we have priors that we're bringing to bear. And you can model a lot of that computationally.
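The kind of Bayesian induction model Eric is gesturing at can be sketched in miniature, in the style of Tenenbaum's "number game." The hypothesis set, uniform prior, and size-principle likelihood below are illustrative assumptions of mine, not the lab's actual models.

```python
# Three example numbers are observed; which concept generated them?
# Smaller hypotheses consistent with the data win (the "size principle").
hypotheses = {
    "even":          {n for n in range(1, 101) if n % 2 == 0},
    "powers_of_two": {1, 2, 4, 8, 16, 32, 64},
    "all":           set(range(1, 101)),
}

def posterior(data):
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            # Likelihood of drawing each example uniformly from h.
            scores[name] = (1.0 / len(h)) ** len(data)
        else:
            scores[name] = 0.0
    z = sum(scores.values())
    return {name: s / z for name, s in scores.items()}

post = posterior([2, 8, 16])
# "powers_of_two" dominates: three samples from a 7-element set are far
# more likely than three from a 50- or 100-element set.
```

The same machinery — priors over hypotheses, likelihoods, posterior update — is what "modeling induction computationally" cashes out to.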


And that's Josh's research program. I think the larger goal of that kind of research program is to try to understand — I guess one way of looking at it is, the question they're trying to answer is, what are these systems even actually doing? What is human cognition actually trying to achieve, and what are the functions that it's trying to optimize? Because the argument is, before we really understand what it's doing, everything else is kind of suspect, right?


If we don't really know what this system is trying to do... David Marr was a scientist who tragically died young. He wrote this book on vision, and he argued that there are three levels at which you can imagine understanding how a system computes. One is this computational level: what is the task that the system is trying to solve? And you have a good understanding.


Can you write down, in some sense, the code — or an example of the code, or the goals for the system — to understand it? Then below that there's this algorithmic level: OK, let's say that that's the goal. Maybe the goal is running, right, getting from point A to point B. What is the actual algorithm that the system would use to do that? And there might be many different algorithms that would support this particular task.


And then below that there's this implementation or hardware level, where you say, well, OK, let's say I had a given algorithm — how would I implement it? Would I implement it with these types of neurons or those types of neurons, or maybe this chip, or maybe in some other sort of substrate? And it's a real attempt to start drawing these boundaries such that we can ask these sorts of questions. But I think a lot of neuroscience is still struggling with these top-level questions, right?
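Marr's three levels are easiest to see with a mundane stand-in task (my example, not Marr's): one computational-level specification — "produce a sorted sequence" — satisfied by two different algorithmic-level realizations.

```python
# Computational level: WHAT is being computed — a specification,
# independent of any particular algorithm or hardware.
def is_sorted(xs):
    return all(a <= b for a, b in zip(xs, xs[1:]))

# Algorithmic level: HOW — two distinct algorithms meet the same spec.
def insertion_sort(xs):
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

data = [3, 1, 2]
assert is_sorted(insertion_sort(data)) and is_sorted(merge_sort(data))
```

The implementation level would then ask how either algorithm is realized physically — neurons, silicon, or some other substrate — which is exactly the layer the transcript says is hardest to pin down in cortex.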


People who do certain types of vision science, for example, I think have a really good mental model of the exact questions that they're trying to ask about how a stimulus comes in through the retina, through the LGN, into visual cortex, to trigger this neuron or that neuron. And we have made a fair amount of progress in that part of neuroscience, haven't we?


We have. I mean, on one hand, vision science is incredibly mature. On the other hand, vision science still can't really tell us how to build a computer vision system. So that's sort of the ultimate test of understanding — could we actually build it? I really buy into this idea. Exactly — could we synthesize this in hardware and software? Could we build one? And in spite of all of the recent success with deep learning and artificial neural networks, they actually work very differently from the way these biological systems work and are still substantially underpowered.


When you see papers, or The New York Times talks about how computer vision is now better than humans, what computer vision is really better at is, in a scene with a single cat or a single dog, telling the difference better than humans. As soon as you get into real-world situations where you have multiple objects and so on, we're still very far from human or animal performance. But yeah, I think synthesis is really the goal there.


And I mean, synthesis is also a little bit of an unfair goal, because, OK, great — we still don't have a good synthesis-level model for the liver. Take a much simpler system in terms of computational behavior: it's still very hard to build an artificial liver. And not just because we couldn't get it to be small enough or something similar, but because the metabolic behaviors that the liver has are such that all those enzymes matter in different ways.


Right, as a function of different biological responses. But I would still say that we understand a lot about the liver in a way that we don't about the brain.


OK, I'm going to try a third way. Not that your answer — no, no, sorry, your answers have been super helpful and on point. It's just that this is complicated, so I need to attack it from a few different angles. So another way to ask this question is: let's say we just had an arbitrarily large amount of computing power, or an arbitrarily large amount of data, or maybe both. Could we just brute-force it, basically?


Could we just test all possible inputs in all possible combinations until we figure out — let's take an analogy. Let's say you're someone who has no intuitive understanding of social dynamics and customs, but you're very smart.


I'm sorry, I'm just saying, I'm familiar with that. Right. And so you want to be able to function socially, so you just practice a lot and study conversations as you're practicing, and you gradually learn: OK, if I smile this amount and with this frequency, and if I smile in response to these kinds of statements, that will cause the person to like me and want to see me again. And obviously that's way too simplistic.


It depends on a bunch of other things, like the kind of conversation it is and your relationship to the person. But if you had enough practice and tried enough different strategies in different combinations with enough people, you could basically become an expert conversationalist despite having no intuitive understanding of why people like it when you smile in certain situations. And so I'm just saying, maybe we could do something like that for the brain, with enough compute, where we just understand how it works because we threw all the compute in the world at it.


Well, but again, this gets to this question of understanding. I could imagine, if the technology existed such that I could understand how every neuron in your brain interacts with every other neuron, just cloning them, but in software. If I could do this kind of Hanson-esque thing — create a bunch of ems, right, these complete synthetic consciousnesses that we create simply by emulating everything that a human being does.


But in software, right? Like, that's great — now I have a virtual human being in software, but do I really understand what that system is doing? I can perfectly predict behavior. I can perfectly understand how it will respond to inputs and outputs. But I don't really feel like I, in some sense, have that level of understanding. Just as, you know, if someone gives me a good physics simulator, like Grand Theft Auto, right.


That doesn't necessarily mean that I understand physics, even though in the simulator I can push the ball off the ledge, or run over the person in the case of Grand Theft Auto, and get a reasonably physically accurate copy of what happens. For neuroscience, I think the challenges right now are that the techniques aren't there — the techniques are very far away from where they would need to be.


But also — and I think this is the thing we try to get at in our paper — even if you can purely observe the system (and so much of the research effort right now is on trying to record or observe a large number of neurons, or in our processor's case, observing a large number of transistors), if you don't really have the ability to perturb the system carefully, then all you really get out is these behavioral outputs.


Right — the animal's now not as good at remembering, or the animal turned left instead of right. And what we're trying to argue is that even if you have the best analytic techniques, it's hard to conceive of analytic techniques that would, from observing that data, give you the level of understanding that we seek. And so this passive model — the LHC, the Large Hadron Collider model, let's say, the big-science model of neuroscience, where we build these institutes and they acquire this high-throughput data.


And then, you know, math nerds like me, who maybe learned social interaction via simulation with the other humans, will go through and figure this out — that is unlikely to bear the kind of fruit that we hope without coupling it to experiments. I often describe it as: the hypothesis space for how this computation happens is so incredibly large that unless we can perturb the system, unless we can shift from a primarily observational paradigm to one that has interaction — You don't think that lesion studies count as perturbation?


They certainly do, but the level of granularity is extremely coarse, not very, very fine-grained. So there's a universe where you could start teasing apart some of these systems pretty well, and I do think we'll get there — probably over the next 30 to 50 years, let's say — where we could do single-cell-specific stimulation and measurement, and we could also do a reasonable job of removing pieces, either in very simple organisms or via something that looks like rewiring the system, to do this reverse engineering.


We could make some progress. But just as, if I didn't understand what addition was, and I tried to understand how the arithmetic unit in the processor works, and I put in all possible inputs and I get out all possible outputs — if I have no concept of addition, I'm going to look at that and say, well, you know, these are the ones that go in, and these go out. Great. I'll get my paper in Nature or whatever, and it's fine.


But it doesn't really — until we understand what that computation is, that's just going to be numbers. It's just going to be this kind of quantitative phenomenology. And that's where I think thinking about what understanding means, and thinking deeply about this computational level, becomes really important. Even if I have all of the dynamics of my underlying system, if I don't really know what it's doing, I'm basically just curve fitting.


Do you think that neuroscientists now are going in — so when you were doing these experiments on the chip, it was basically atheoretical, right? You were just putting in different inputs or doing different perturbations and observing the outputs. But do you think that neuroscientists really are approaching their experiments with a similar degree of atheoretical-ness? Or do they have at least what they think might be a sense of what —


You know, "addition" is, in this case. It really depends on the system.


But no, I think there's, in fact, a real push towards being more theory-driven, and people come in with specific theoretical models about how particular systems or subsystems work. And the closer you get to either sensory systems or motor systems — the closer you are to the inputs and the outputs, where the neural activity correlates much more strongly with things we can observe — the easier that gets. I mean, Konrad made his career on —


Your co-author? Yes, my co-author, Konrad Kording. He made his career on showing that a lot of motor control actions were actually Bayesian — that organisms were doing the correct job of incorporating prior information to make the next decision, and in fact there is this rigorous computational framework of Bayesian statistics that models how these systems are working. So you see a lot of that. On the other hand, the closer you get to things that look more like computation — cortical activity or decision making or anything like that — because we don't tend to have good models.


What will happen is, we'll record a lot of this data, we'll acquire a lot of the data, and then we throw it into some algorithm and it spits out some numbers and says, well, look, I think there's this low-dimensional structure in the data. And Surya Ganguli, a computational neuroscientist at Stanford, gave this nice talk at one of the computational neuroscience conferences a few years ago where he said, well, look, we record this high-dimensional activity, right?


We record like a thousand neurons or something, we put it into these algorithms, and then we say, OK, show me the true dimensions that the system is operating on. But we do that while the animal, as we're recording this data, is doing like a two-dimensional reach task. And in fact, the dimensionality of the activity that we pull out when we observe this is related to the intrinsic dimensionality of the behavior.


Right. Basically, especially for areas like motor cortex, the thing that the animal is doing is very strongly correlated with — has the same kind of structure as — the activity that we're seeing in the brain. Now, that's not surprising, but it suggests that naively recording a bunch of data and then throwing it into some algorithm is just going to tell us stuff we already know.


And so that was the other side of this: trying to say, hey, look, guys, the algorithms that we actually have right now for understanding this data are woefully incomplete. And if we just sit around waiting for this big data to get here in 10 years, we're all going to wake up and realize that, well, maybe we have the data now, but we have another 10 years of algorithm development ahead of us.


You're saying the areas where we've made more progress are the ones where we can sort of observe the outputs, like motor function or vision, and connect that on a pretty granular level to what's happening in the brain. Is there any way to get an equivalent of that for, you know, something like computation or decision making? Not that we know of.


I mean, in all of our work — right, so if you study something like decision making, let's say you try to reduce what the animal is doing to the simplest possible decision-making task. So maybe you run to the end of the maze and then decide to turn left or right. Or there are these, what are called, alternative forced choice tasks, where basically you make the animal pick one thing or another.


Right — and if it does it right, then it gets some sort of reward. And so you can try to do this. But the problem is, the dimensionality of the behavior is so incredibly low that you're going to get out a couple of bits of information, max.


Right, just like correct or not correct. Well, right. So, you know that the animal did the right thing or the wrong thing, and maybe you watched it get better at doing the right thing over time. But then when you try to map that back to neural activity, it's a far sparser and less informative signal than tracking, say, every muscle activation and every joint position in my arm, or being able to control every pixel that's coming into my eye.
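Eric's "couple of bits, max" is literal: a single correct/incorrect outcome carries at most one bit, and less once the animal performs above chance. A quick back-of-the-envelope check:

```python
from math import log2

def outcome_entropy(p_correct: float) -> float:
    """Shannon entropy (bits) of a single correct/incorrect outcome."""
    if p_correct in (0.0, 1.0):
        return 0.0  # a certain outcome carries no information
    q = 1.0 - p_correct
    return -(p_correct * log2(p_correct) + q * log2(q))

print(outcome_entropy(0.5))  # 1.0 bit: chance performance, maximally uncertain
print(outcome_entropy(0.9))  # ~0.47 bits: a well-trained animal tells you less per trial
```

Compare that to a continuous readout like joint angles or pixel-level stimuli, where each sample can carry many bits; that asymmetry is why the behavioral signal is so much sparser.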


Right. OK, so hopefully you can answer this without similarly worrying about annoying everyone in your field. But would you say that the data we're getting back from the experiments that neuroscientists are currently doing is necessary for an eventual understanding of how the brain works — just far from sufficient? Or would you say that it's not even necessary, that we're basically barking up the wrong tree and we should be running different kinds of tests?


Well, the reality is, if I knew the right kinds of experiments to run to actually advance our understanding of these systems, I'd just go and do those things and collect my Nobel Prizes. You could say, "I don't know the right ones, but I can tell you these aren't." Right, exactly. And so, I mean, one way of looking at it is, I have to be careful.


Careful, I mean, of making the same mistake that the stereotypical neuroscientist makes, where I think that everything at a higher level than the one I'm interested in is just handwaving, and everything at a lower level is useless detail. Because I've classically studied systems-level neuroscience, where we're interested in how circuits give rise to behavior — that's the level I'm most interested in. But, you know, I'm generally skeptical of a lot of the value of fMRI work.


The analogy I make there is that it's like trying to understand how your computer works with a thermal imager. Good luck with that. But, you know, everyone I know who's working on some neuroscience question generally has a fairly refined question that they're attempting to answer, with very precisely formulated hypotheses, where they're carefully trying to advance the field. I think, though, that when you sum up — when you integrate across all of this —


Right, we're still not going to be at the level, maybe, that we've been promising Congress in terms of neuroscience insight. And it might be the case that these insights come from weird corners. But I'm very partial to people who study much simpler systems. So, people study C. elegans, this tiny little worm that has three hundred and two neurons. Every single C. elegans has the same three hundred and two neurons, and we still don't really understand how its behavior works.


And you're like, guys, if we can't do this... There are people who do incredibly detailed, careful work on — this is going to sound weird — the cluster of neurons in the lobster stomach. Aha. So, digestion is a complicated process, and most animals, most vertebrates, have a cluster of neurons, a ganglion, that controls that entire process. And that process is very complicated. So some people study what's called the lobster's stomatogastric ganglion, which is this cluster of cells.


And it's responsible for all these incredibly complicated motor patterns that have to happen. People like Eve Marder's group at Brandeis have — this is a nice prep, because you can take this out of a lobster and basically do exactly the experiments I was talking about: you can fake inputs, you can fake outputs, and you can start understanding it. And they're at the point where they're really close to actually understanding how that system works, such that they could build a fake one.


Right. Right. And the thought is that, you know, that's a very small, simple system, but that kind of work, I think, points in the right direction at least. There's no place that's more interesting to be asking questions than in a lot of these higher-order cognitive systems — you know, where is the neuron that does X, or the part of the brain that does Y?


But it's hard, especially given current technology — even if I identify that this part of the brain is responsible for me clicking likes on Facebook, we don't really have the technology to then start opening up and experimenting on that box. And so that's one of the reasons why I think these lower-level systems — even though they might be less sexy; hopefully people won't kill me for saying that — even though it might be harder to convince Congress, let's say, to study these systems.


Right. Periodically, whenever some Republican senator or congressman — they seem to inevitably be Republican — decides that the NIH is wasting money, they go and find someone who's studying something about Drosophila, and they say, well, look, this person is — yeah, fruit flies, right? And they say, well, we're spending, you know, three hundred million dollars a year of the NIH's budget on fruit flies.


This is insane. But of course, no — fruit flies are this incredibly simple organism with a hundred thousand neurons, where we understand the genetics incredibly well, and we can start picking things apart using genetic techniques. You can breed a million fruit flies in your living room in like two days. I can imagine — my house probably sounds great right now. My poor wife. But you can do all these experiments with these very simple systems.


And so even though, you know, Senator X or Congressman Y might not necessarily be excited about that kind of research, this might be the only way we have to make real progress that isn't just painting pretty pictures — that isn't just imaging, like thermal imaging. I mean, these fMRI studies are so sexy because they do have pictures of people's brains.


You can see it light up, and everyone says, that's interesting, Malcolm Gladwell writes a book about it — it's great, but it's not really, especially from a computational side, telling you all that much right now. Of course, the people who actually do this work are very careful, and the challenge, even for us neuroscientists, is that a lot of this gets filtered through the popular press, even to us. So I'm no doubt being incredibly unfair to my former colleagues.


But the part that's interesting about these systems — or I guess one thing we haven't talked about, which I think is worth mentioning — is that nothing I've talked about today necessarily has clinical implications; that's one of my hunches.


And I think the current state of the art in neuropharmacology suggests that many of the cognitive or brain disease problems we see in people today — while there is a computational component to them, and they ultimately result in some sort of computational dysfunction — often seem to be the result of very low-level, molecular or genetic problems with the system. So it may be the case that we can actually treat a very large number of diseases without ever having this level of computational understanding.


Right. We even see that with — I mean, we all worship at the altar of Scott Alexander; we've all read his blog posts on depression and understand the state of the art of the research there. But we also understand that even though we don't have good mental models for what's actually happening inside people, some of the pharmacological interventions are actually quite astounding. And we don't necessarily even understand why they work.


Right.


I mean, my hypothetical person with no social intuition could still be extremely effective, despite not having any understanding of why the smiles work the way they do. Exactly. And many of our neuropharmacological interventions that work were actually discovered serendipitously. You give patients with Parkinson's a drug, and they happen to have less of a phenotype, so you say, oh, maybe that's causing X, Y, or Z. Or, you know, we know that diseases like Parkinson's or multiple sclerosis have a very firm cellular basis.


Right. These are not necessarily what we think of as diseases of computation. And from that perspective, on the funding side, it gives me hope that the American public will continue to fund us to understand these systems at all of these levels — because on the high end you have cool, pretty pictures, and on the low end you have, I think, a much closer, more direct path to clinical impact.


Well, Eric, before I let you go, I want to ask you the question I ask all my guests, which is for a book or article or blog that has influenced your thinking in some way.


What would your pick be? What would my pick be? So, I got this question ahead of time, and I've been racking my brain a little bit, because of course I have my favorite bloggers — Megan McArdle forever. But I think the article that actually impacted me the most — so, I grew up in Idaho, which is a great place to grow up, but we were a very small group of nerds in my high school.


And so every month, and I was this teenager in the late 90s, kind of the height of Internet One, we'd get copies of Wired, and we'd read Wired cover to cover. And of course, as a kid in Idaho, you're like, what is this Burning Man thing they're talking about? I don't understand. Who are these venture capitalists? But there was an article in the February 1997 edition of Wired about Julian Simon. And so I read this.


So the late Julian Simon is an economist who was most famous for being this anti-Malthusian. Right. He made these bets with Paul Ehrlich and the other kind of doomsayers, arguing that, in fact, no, things are getting better. And he was a strong popularizer of this kind of Solow model, where, you know, technological innovation and human ingenuity give rise to wealth, and that resulted in us getting past these resource catastrophes. And that had this incredible impact on me because, being someone who always liked building things and liked science and these sorts of things, it made me realize that maybe both:


we aren't all collectively screwed, but also, in fact, this kind of innovation was a force for good. Right.


In fact, that it wasn't just an intellectually satisfying pursuit, or a thing that's destroying the world. Right. If you read Silent Spring, you think, oh my God, there were all these chemists doing all this work, and in fact it was turned into Agent Orange and these horrible pesticides and we're all going to die. But in fact, it seems that as countries get richer, they take environmental controls more seriously.


As countries get richer, everyone starts doing better. And, you know, I feel like if it were held today, the Simon bet may have come out differently, because China's had this tremendous appetite for natural resources, and maybe some of those resources actually aren't cheaper today than they were, say, 10 or 20 years ago. But it really impacted me that this combination of technological innovation and capitalism could have this kind of really positive impact.


My high school commencement speech was called "The Antithesis of Malthus."


And I got up there and told everyone, we're not screwed, actually. And, you know, I opened quoting Malthus and everyone's like, whoa, he's telling us we're all going to die, this is horrible, why did we ask Eric to do this? But no, it's actually all getting better because we're putting in this kind of effort. And I know that now, right, it's very in vogue to take issue with a lot of these tech companies, and I think that's probably deserved for a whole host of societal reasons.


But I hope that it will not, along the way, turn into us being opposed to technology in general. Right. I mean, I think the most impressive human success story of the past, say, 50 years is the fact that China's lifted, you know, whatever, 280 million people out of rural poverty. Right. Yeah. People really underestimate the decline in the fraction of the world's population living in extreme poverty. It's amazing.


It's like one of the biggest stories of the last 20 years of my life. I mean, I don't understand why this is not something, and obviously there are problems and there are challenges and all of these sorts of things. But I guess the piece about Julian Simon in Wired, it was called "The Doomslayer," really made me realize that if you scale out your view beyond what's happening in your neighborhood or your city or your state, and in fact start looking at these things globally and looking at them historically, we're doing great.


And in fact, that's because we're really, really creative monkeys. And I think that, you know, it gave me a real sense of optimism for both understanding how all of these systems work and then trying to make them better.


Well, just like with your commencement speech, our trajectory was good, right? We started with, you know, things are tough and maybe not as great as they seemed, and ended on a positive note. Yeah, that was good. Good architecture there. I learned it from observing other humans: scare them, then take care of them.


Exactly. Eric, it's been a pleasure having you on the show. Thanks so much for joining us. Thank you. This concludes another episode of Rationally Speaking. Join us next time for more explorations on the borderlands between reason and nonsense.