Editor's Note: This transcript was automatically transcribed, so mistakes are inevitable. You can contribute by proofreading the transcript or highlighting the mistakes. Sign up to be amongst the first contributors.
The following is a conversation with Karl Friston, one of the greatest neuroscientists in history, cited over 245,000 times, known for many influential ideas in brain imaging, neuroscience, and theoretical neurobiology, including especially the fascinating idea of the free energy principle for action and perception. Karl's mix of humor, brilliance, and kindness to me was inspiring and captivating. This was a huge honor and a pleasure. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter,
at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This show is presented by Cash App, the number one finance app in the App Store. When you get it, use the code LEXPODCAST. Cash App lets you send money to friends, buy Bitcoin, and invest in the stock market with as little as one dollar.
Cash App allows you to send and receive money digitally. Let me mention a surprising fact related to physical money: of all the currency in the world, roughly eight percent of it is actual physical money. The other 92 percent of money only exists digitally. So again, if you get Cash App from the App Store or Google Play and use the code LEXPODCAST, you get ten dollars, and Cash App will also donate ten dollars to FIRST, an organization that is helping to advance robotics and STEM education for young people around the world.
And now, here's my conversation with Karl Friston. How much of the human brain do we understand, from the low level of neuronal communication, to the functional level, to the highest level — maybe to the psychiatric-disorder level?
Well, we're certainly in a better position than we were last century. How far we've got to go, I think, is almost an unanswerable question.
So you'd have to set the parameters, you know, what constitutes understanding? What level of understanding do you want? I think we've made enormous progress in terms of broad principles, whether that affords a detailed cartography of the functional anatomy of the brain and what it does and right down to the micro circuitry in the neurons. That's probably out of reach at the present time.
So the cartography, the mapping of the brain — do you think mapping of the brain, the detailed, perfect imaging of it, gets us closer to an understanding of the mind, of the brain? How far does it get us if we have that perfect cartography of the brain?
That's a really interesting question, and it would determine the sort of scientific career you'd pursue. If you believed that knowing every dendritic connection, every sort of microscopic synaptic structure, right down to the molecular level, was going to give you the right kind of information to understand the computational anatomy, then you'd choose to be a microscopist and you would study little cubic millimeters of brain for the rest of your life.
If, on the other hand, you were interested in holistic functions and a sort of functional anatomy of the sort that a neuropsychologist would understand, you'd study brain lesions and strokes, you know, just looking at the whole person. So, again, it comes back to: what level do you want understanding at? I think there are principled reasons not to go too far. If you commit to a view of the brain as a machine that's performing a form of inference and representing things, then that level of understanding is necessarily cast in terms of probability densities — ensemble densities, distributions. And what that tells you is that you don't really want to look at the atoms to understand the thermodynamics of probabilistic descriptions of how the brain works.
So I personally wouldn't look at the molecules, or indeed the single neurons, in the same way that if I wanted to understand the thermodynamics of some non-equilibrium steady state of a gas or an active material, I wouldn't spend my life looking at the individual molecules that constitute the ensemble. I would look at their collective behavior. On the other hand, if you go too coarse-grained, you're going to miss some basic canonical principles of connectivity and architecture. I'm thinking here — and this is a bit colloquial — of the current excitement about high-field magnetic resonance imaging, which attests to that.
Why? Well, it gives us for the first time the opportunity to look at the brain in action at the level of a few millimeters, which can distinguish between different layers of the cortex. That may be very important in terms of evincing generic principles of canonical microcircuitry that are replicated throughout the brain. That may tell us something fundamental about message passing in the brain and the density dynamics of neuronal populations that underwrite our brain function.
So, somewhere between a millimeter and a meter. Lingering for a bit on the big questions, if you allow me: what to you is the most beautiful or surprising characteristic of the human brain?
I think it's its hierarchical and recursive aspect — its recurrent aspect. Of the structure, or of the actual representational power of the brain?
Well, I think one speaks to the other. I was actually answering, in a dull-minded way, purely from the point of view of its anatomy and its structural aspects. I mean, there are many marvelous organs in the body. Take the liver, for example. Without it, you wouldn't be around for very long, and it does some beautiful, delicate biochemistry and homeostasis, and it has evolved with a finesse that would easily parallel the brain.
But it doesn't have a beautiful anatomy. It has a simple anatomy, which is attractive in a minimalist sense, but it doesn't have that crafted structure of sparse connectivity and that recurrence and that specialization that the brain has. You said a lot of interesting terms here.
So the recurrence, the sparsity. But you also started by saying hierarchical.
I've never thought of our brain as hierarchical. I always thought of it as just a giant interconnected mass, in which it's very difficult to figure anything out. But in what sense do you see the brain as hierarchical?
Well, I said it's not a magic soup. That, of course, is what I used to think before I studied medicine and the like. A lot of those terms imply each other. So hierarchies — if you just think about the nature of a hierarchy, how would you actually build one? What you would have to do is basically carefully remove the right connections that destroy the completely connected soup that you might have in mind.
So a hierarchy is, in and of itself, defined by a sparse and particular connectivity structure. I'm not committing to any particular form of hierarchy. But your sense is there is some. Oh, absolutely.
By virtue of the fact that there is a sparsity of connectivity. And that's not merely a qualitative claim — it's obviously a quantitative one. It is demonstrably so.
And the further apart two parts of the brain are, the less likely they are to be wired — to possess axonal processes, neuronal processes, that directly communicate messages from one part of the brain to the other. So we know there's a sparse connectivity. And furthermore, on the basis of anatomical connectivity and tracer studies, we know that that sparsity underwrites a hierarchical and very structured sort of connectivity that might be best understood a little bit like an onion. There is a concentric — sometimes referred to as centripetal, by people like Mesulam — hierarchical organization to the brain. So you can think of the brain, in a rough sense, like an onion: all the sensory information and all the efferent outgoing messages that supply commands to your muscles or to your secretory organs come from the surface. So there's a massive exchange interface with the world out there on the surface. And then underneath, there's a layer that sits and looks at the exchange on the surface.
And then underneath that, there's another layer, right the way down to the very center, the deepest part of the onion. That's what I mean by a hierarchical organization. There's a discernible structure, defined by the specific connections, that lends the architecture a hierarchical structure — and that tells you a lot about the kinds of representations and messages. To come back to your earlier question: is this about the representational capacity, or is it about the anatomy? Well, one underwrites the other. If one simply thinks of the brain as a message-passing machine, a process that is in the service of doing something, then the circuitry and the connectivity that shape that message passing also dictate its function.
So you've done a lot of amazing work in a lot of directions.
So let's look at one aspect of that, of looking into the brain and trying to study this onion structure.
What can we learn about the brain by imaging it, which is one way to look at the anatomy of it? Broadly speaking, what are the methods of imaging — but even bigger, what can we learn from it? Right. So most human neuroimaging that you might see in science journals, that speaks to the way the brain works, measures brain activity over time. That's the first thing to say: we're effectively looking at fluctuations in neuronal responses, usually in response to some sensory input or some instruction, some task — though there's also a lot of interest in just looking at the brain in terms of its resting-state, endogenous or intrinsic activity.
But crucially, at every point, we're looking at these fluctuations — either induced or intrinsic — in neural activity, and understanding them at two levels. Normally people have recourse to two principles of brain organization that are complementary. One is functional specialization or segregation. What does that mean? It simply means that there are certain parts of the brain that may be specialized for certain kinds of processing — for example, visual motion: our ability to recognize or to perceive movement in the visual world.
And furthermore, that specialised processing may be spatially or anatomically segregated, leading to functional segregation — which means that if I were to compare your brain activity while you were viewing a static image, and then compare that to the fluctuations in responses when you were exposed to a moving image, say a flying bird, I would expect to see restricted, segregated differences in activity. And those are basically the hot spots that you see in statistical parametric maps that test for the significance of responses that are circumscribed.
So now, basically, we're talking about what some people have, perhaps unkindly, called a neo-cartography — a phrenology augmented by modern-day neuroimaging: basically finding blobs or bumps on the brain that do this or do that, trying to understand the cartography of that functional specialization.
So how much — this is such a beautiful sort of idea to strive for. We humans, we scientists, would like to hope that there is a beautiful structure to this where, like you said, there are segregated regions that are responsible for different functions. How much hope is there to find such regions, in terms of the progress of studying the brain? Oh, I think enormous progress has been made in the past 20 or 30 years — this is beyond incremental.
You know, at the advent of brain imaging, the very notion of functional segregation was just a hypothesis based upon a century, if not more, of careful neuropsychology, looking at people who had lost via insult or traumatic brain injury, particular parts of the brain, and then say, well, they can't do this or they can't do that.
For example, losing the visual cortex and not being able to see, or losing particular parts of the visual cortex — regions known as V5, or the middle temporal region, MT — and noticing that they selectively could not see moving things. And so that created the hypothesis that perhaps visual movement processing was located in this functionally segregated area. And you could then go and put electrodes in animal models and say, yes, indeed, we can excite activity here.
We can form receptive fields that are sensitive to or defined in terms of visual motion. But at no point could you exclude the possibility that everywhere else in the brain was also very interested in visual motion.
By the way — I apologize for interrupting with a tiny little tangent — you said animal models. Just out of curiosity, from your perspective, how different is the human brain versus those of other animals, in terms of our ability to study the brain? Well, clearly, the further away you go from a human brain, the greater the differences — but not as remarkable as you might think. So people will choose that level of approximation to the human brain depending upon the kinds of questions that they want to answer.
So if you're talking about sort of canonical principles of microcircuitry, it might be perfectly okay to look at a mouse — indeed, you could even look at flies or worms. If, on the other hand, you wanted to look at the finer details of the organization of visual cortex — V1, V2, these designated patches of cortex that may or may not do different things — then you'd probably want to use a primate that looks a little bit more like a human.
Because there are lots of ethical issues in terms of, you know, the use of non-human primates to answer questions about human anatomy. But I think most people assume that most of the important principles are conserved, in a continuous way, from — Right, from — Well, yes: from worms right through to you and me.
So now, returning to the early ideas of studying the functional regions of the brain — where, if there's some damage to a part, you try to infer that that part of the brain might be somewhat responsible for this type of function — where does that lead us? What are the next steps beyond that? Right.
Well, let's actually reverse a bit and come back to your notion that the brain is a magic soup. That was actually a very prominent idea at one point — a notion such as Lashley's law of mass action, inherited from the observation that, for certain animals, if you just took out spoonfuls of the brain, it didn't matter where you took these spoonfuls: they always showed the same kinds of deficits. So, you know, it was very difficult to infer functional specialization purely on the basis of lesion-deficit studies.
But once we had the opportunity to look at the brain lighting up — literally, its sort of excitement, neuronal excitement — when looking at this versus that, one was able to say: yes, indeed, these functionally specialised responses are very restricted, and they're here, or they're over there. If I do this, then this part of the brain lights up. And that became doable in the early '90s.
In fact, shortly before, with the advent of positron emission tomography and then functional magnetic resonance imaging came along in the early 90s, and since that time, there has been an explosion of discovery, refinement, confirmation. You know, there are people who believe that it's all in the anatomy. If you understand the anatomy, then you understand the function at some level. And many, many hypotheses were predicated on a deep understanding of the anatomy and the connectivity.
But they were all confirmed, and taken much further, with neuroimaging. So that's what I meant by: we've made an enormous amount of progress.
Indeed, in this century, in relation to the previous century, by looking at these, frankly, selective responses. But that wasn't the whole story. So there's this sort of neo-phrenology, finding bumps and hot spots in the brain that did this or that. The bigger question was, of course, functional integration: how all of these regionally specific responses were orchestrated, how they were distributed, how they related to distributed processing and indeed representations in the brain.
So then you turn to the more challenging issue of the integration, the connectivity, and then we come back to this beautiful, sparse, recurrent hierarchical connectivity that seems characteristic of the brain and probably not many other organs.
But nevertheless, we come back to this challenge of trying to figure out how everything is integrated. What's your feeling — what's the general consensus? Have we moved away from the magic-soup view of the brain? Yes. So there is a deep structure to it. And then maybe a further question: you said some people believe that the structure is most of it — that you can really get at the core of the function by just deeply understanding the structure.
Where do you sit on that? Do I think it's got some mileage to it? Yes. So it's a worthy pursuit — studying it through imaging and all the different methods. Oh, absolutely. Yeah.
So I'm just noting: you were accusing me of using lots of long words, and then you introduced one yourself, which is "deep" — which is interesting, because "deep" is the sort of millennial equivalent of "hierarchical."
So if you put "deep" in front of anything you like, you're very, very trendy — but you're also implying a hierarchical architecture. So it is the depth which is, for me, the beautiful thing. That's right — the word "deep" kind of... Yeah, exactly, it implies hierarchy. I didn't even think about that — that indeed the implicit meaning of the word "deep" is a hierarchy.
Yeah. So deep inside the onion is the center of your soul.
But maybe briefly, could you paint a picture of the kinds of methods of neuroimaging — maybe the history, which you were a part of, you know, from statistical parametric mapping onward? What's out there that's interesting for people, maybe outside the field, to understand about the actual methodologies of looking inside the human brain?
Right. Well, you can answer that question from two perspectives. The first is basically the modality: what kind of signal are you measuring? And let's limit ourselves to imaging-based, non-invasive techniques.
So you've essentially got brain scanners, and brain scans can either measure the structural attributes — the amount of water, the amount of fat, or the amount of iron in different parts of the brain — and you can make lots of inferences about the structure of the organ, of the sort that you might produce from an X-ray; but, you know, a very nuanced X-ray that's looking at this kind of property or that kind of property.
So looking at the anatomy non-invasively would be the first sort of neuroimaging that people might want to employ.
Then you move on to the kinds of measurements that reflect dynamic function. The most prevalent of those fall into two camps.
You've got these metabolic, sometimes hemodynamic, blood-related signals. These metabolic and/or hemodynamic signals are basically proxies for elevated activity and message passing and, you know, neuronal dynamics in particular parts of the brain. Characteristically, though, the time constants of these hemodynamic or metabolic responses to neural activity are much longer than the neural activity itself. And this is — forgive me for the dumb question — but this would be referring to blood, like the flow of blood?
Absolutely. So it seems like there's a ton of blood vessels in the brain. But what's the interaction between the flow of blood and the function of the neurons — is there an interplay there? Yeah — and that interplay accounts for several careers of world-renowned scientists. This is known as neurovascular coupling, and it's exactly what you said: how does the neural activity — the neuronal infrastructure, the actual message passing that we think underlies our capacity to perceive and act —
How is that coupled to the vascular responses that supply the energy for that neural processing? There's a delicate web of large vessels, arteries and veins, that gets progressively finer and finer in detail until it perfuses, at a microscopic level, the machinery where the little neurons lie. Coming back to the onion perspective — we were talking before about using the onion as a metaphor for a deep hierarchical structure, but I think it's also anatomically quite a useful metaphor: all the action, all the heavy lifting in terms of neural computation, is done on the surface of the brain, and the interior of the brain is constituted by fatty wires — essentially axonal processes that are enshrouded by myelin sheaths. When you dissect them, they look fatty and white, and so it's called white matter, as opposed to the tissue that does the computation, constituted largely by neurons, which is known as grey matter. So the grey matter is a surface, or a skin, that sits on top of this big ball.
Now we are talking magic soup — but a big ball of connections, like spaghetti, very carefully structured, with sparse connectivity that preserves this deep hierarchical structure. All the action takes place on the surface, on the cortex of the onion. And that means that you have to supply the right amount of blood flow, the right amount of nutrients — which are rapidly absorbed and used by neural cells, which don't have the same capacity that your muscles have to basically spend their energy budget and then claim it back later.
So one peculiar thing about cerebral metabolism, brain metabolism is it really needs to be driven in the moment, which means you basically have to turn on the taps.
So if there's lots of neural activity in one part of the brain — a little patch of a few millimeters, possibly even less — you really do have to water that piece of the garden now, and quickly. And by quickly, I mean within a couple of seconds.
So that blood flow carries a lot of information — hence the imaging can tell you a story of what's happening.
Absolutely. But it is slightly compromised in terms of resolution. So for the deployment of these little microvessels that water the garden, to enable the neural activity to play out, the spatial resolution is on the order of a few millimeters, and, crucially, the temporal resolution is on the order of a few seconds. So you can't get right down and dirty into the actual spatial and temporal scale of neural activity in and of itself. To do that, you'd have to turn to the other big imaging modality, which is the recording of electromagnetic signals as they're generated in real time.
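The seconds-long smearing being described here — fast neural events read out through a slow blood-flow response — can be sketched numerically as a convolution. A minimal illustration; the two-gamma response function below is a common modelling convention, with illustrative parameters, not anything specified in the conversation:

```python
import math
import numpy as np

def gamma_pdf(t, shape, scale=1.0):
    """Gamma probability density, zero for t <= 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = (t[pos] ** (shape - 1) * np.exp(-t[pos] / scale)
                / (math.gamma(shape) * scale ** shape))
    return out

def canonical_hrf(t):
    # A common two-gamma haemodynamic response shape:
    # a peak around 5 s minus a small undershoot around 15 s.
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

dt = 0.1                          # 100 ms time grid
t = np.arange(0, 30, dt)
neural = np.zeros_like(t)
neural[10] = 1.0                  # a single brief neural burst at t = 1 s

# The measured BOLD-like signal is the burst convolved with the slow response.
bold = np.convolve(neural, canonical_hrf(t))[:len(t)] * dt
peak_time = t[np.argmax(bold)]
print(round(peak_time, 1))        # → 6.0 : the peak lags the 1 s burst by ~5 s
```

The point of the sketch is exactly Friston's: a ~100 ms neural event produces a measurable response that peaks seconds later, which is why hemodynamic imaging cannot resolve the temporal scale of the activity itself.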
So here the temporal bandwidth, if you like — the lower limit on the temporal resolution — is incredibly small. We're talking about milliseconds, and then you can get into the very fast responses that are, in and of themselves, the neural activity, and start to see the succession or cascade of hierarchical recurrent message passing evoked by a particular stimulus. But the problem is you're looking at electromagnetic signals that have passed through an enormous amount of magic soup — spaghetti of connectivity — and through the scalp and the skull, and they've become spatially very diffuse.
So it's very difficult to know where you are. You've got this sort of catch-22: you can either use an imaging modality that tells you, to within millimeters, which part of the brain is activated — but you don't know when; or you've got these electromagnetic EEG/MEG setups that tell you, to within a few milliseconds, when something has responded — but not where. So you've got these two complementary measures: either indirect, via the blood flow, or direct, via the electromagnetic signals caused by neural activity. These are the two big imaging devices.
And then the second level of responding to your question: from the outside, what are the big ways of using this technology?
So once you've chosen the kind of neuroimaging you want to use to answer your questions — and sometimes it would have to be both — then you've got a whole raft of analyses, usually time-series analyses, that you can bring to bear in order to answer your questions or address your hypotheses about those data.
And interestingly, they both fall into the same two camps we were talking about before — this dialectic between specialization and integration, differentiation and integration. So there's the cartography, the blobology analyses — and I probably shouldn't have introduced that word.
Wait — I just heard a new word there. Not biology, but blobology?
It's a neologism, which means the study of blobs.
Are you being witty and humorous, or is there an actual — does the word "blobology" ever appear in a textbook somewhere?
It would appear in a popular book; it would not appear in a specialist journal. But it is a fond term for the study of, literally, little blobs on brain maps showing activation — the kind of thing that you'd see in the newspapers, on ABC or the BBC, reporting the latest findings from brain imaging. Interestingly, though, the maths involved in that stream of analysis does actually call upon the mathematics of blobs — seriously. They're called Euler characteristics, and they have a lot of fancy names in mathematics.
We'll talk about your ideas on the free energy principle — there are echoes of blobs there, when you consider sort of entities, mathematically speaking. Yes, absolutely: well-circumscribed, well-defined entities. Well, from the free energy point of view, entities of anything. But from the point of view of the analysis — the cartography of the brain — these are the entities that constitute the evidence for this functional segregation: you have segregated this function into this blob, and it is not outside of the blob.
And that's basically — if you were a map-maker of America and you did not know its structure, the first thing you would do in constituting or creating a map would be to identify the cities, for example, or the mountains, or the rivers. All of these uniquely, spatially localized features — possibly topological features — have to be placed somewhere. And of course, that requires a mathematics for identifying: what does a city look like on a satellite image? What does a river look like?
Or what does a mountain look like? What data features would evidence that particular thing that you wanted to put on the map? They're normally characterized in terms of, literally, these blobs. Another way of looking at it is that a certain statistical measure of the degree of activation crosses a threshold, and in crossing that threshold in a spatially restricted part of the brain, it creates a blob. And that's basically what statistical parametric mapping does: it's basically mathematically formalised blobology.
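The threshold-crossing idea — a statistic exceeding a threshold over a spatially restricted patch creates a blob — can be sketched in a few lines. A toy two-dimensional illustration; the map values and threshold here are invented, and real statistical parametric mapping works on 3-D maps with multiple-comparison corrections that this sketch ignores:

```python
import numpy as np

def blobs(stat_map, threshold):
    """Label 4-connected clusters of voxels whose statistic exceeds the
    threshold — the 'blobs' of a thresholded statistical map (2-D sketch)."""
    above = stat_map > threshold
    labels = np.zeros(stat_map.shape, dtype=int)
    n_blobs = 0
    for seed in zip(*np.nonzero(above)):
        if labels[seed]:
            continue                       # already part of a labeled blob
        n_blobs += 1
        stack = [seed]
        while stack:                       # flood-fill one connected cluster
            i, j = stack.pop()
            if labels[i, j] or not above[i, j]:
                continue
            labels[i, j] = n_blobs
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < above.shape[0] and 0 <= nj < above.shape[1]:
                    stack.append((ni, nj))
    return labels, n_blobs

# Toy "statistical map": two hot spots on a quiet background.
m = np.zeros((8, 8))
m[1:3, 1:3] = 4.0                          # first hot spot
m[5:7, 4:7] = 5.0                          # second hot spot
labels, n = blobs(m, threshold=3.1)
print(n)                                   # → 2 blobs survive the threshold
```

Each surviving blob is a spatially circumscribed region of supra-threshold statistics — the map-maker's "city" in the cartography analogy above.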
OK, so you've described these two methodologies — one is temporally noisy, one is spatially noisy — and you kind of have to play with them and figure out what can be useful.
It'd be great if you could comment: I got a chance recently to spend a day at a company called Neuralink that works on brain-computer interfaces, and their dream is — well, there's a bunch of dreams, but one of them is to understand the brain by, you know, getting in there, past the skull, and being able to listen and communicate in both directions. What are your thoughts about the future of this kind of technology of brain-computer interfaces — being able to have a window or direct contact with the brain, to measure some of the signals, to send signals, to understand some of the functionality of the brain?
My sense is ambivalent — it's a mixture of good and bad, and I acknowledge that freely. So, the good bits: just look at the legacy of that kind of reciprocal but invasive brain stimulation. I didn't paint a complete picture when I was talking about the ways we understood the brain prior to neuroimaging — it wasn't just lesion-deficit studies. Some of the early work — in fact, literally a hundred years ago, at this institution of neurology where we're sitting — was done by stimulating the brain of, say, dogs, and looking at how they responded, either with their muscles or with their salivation, and imputing what that part of the brain must be doing.
If I stimulate here and you elicit this kind of response, then that tells me quite a lot about the functional specialization. So there's a long history of brain stimulation, which continues to enjoy a lot of attention nowadays. Positive attention? Oh, yes, absolutely. Deep brain stimulation for Parkinson's disease is now a standard treatment, and also a wonderful vehicle to try to understand the neuronal dynamics that underlie movement disorders like Parkinson's disease. There's even interest in transcranial magnetic stimulation — stimulating with magnetic fields.
Will it work in people who are depressed, for example? It's quite a crude level of understanding what you're doing, but there is historical evidence that these kinds of brute-force interventions do change things. It's a little bit like banging the TV when the valves aren't working properly — but still, it works. So, you know, there is a long history here. Brain-computer interfaces, or BCI, I think are a beautiful example of that.
It's sort of carved out its own niche, its own aspirations, and there have been enormous advances — within limits — in terms of our ability to understand how the brain, the embodied brain, engages with the world. I'm thinking here of sensory substitution: augmenting our sensory capacities by giving ourselves extra ways of sensing and sampling the world, ranging from trying to replace lost visual signals through to giving people completely new signals. One of the most engaging examples of this is equipping people with a sense of magnetic fields: you can give them magnetic sensors that enable them to feel, through, say, tactile pressure around their tummy, where they are in relation to the magnetic field of the earth.
Incredible. And after a few weeks, they take it for granted. They integrate it — they assimilate this new sensory information into the way that they literally feel their world, now equipped with this sense of magnetic direction. So that tells you something about the brain's plastic potential to remodel itself, its plastic capacity to suddenly try to explain the sensory data at hand by augmenting the sensory sphere and the kinds of things that we can measure.
Clearly, that's purely for entertainment and for understanding the nature and the power of our brains. I would imagine that most BCI is pitched at solving clinical and human problems, such as locked-in syndrome, such as paraplegia, or replacing lost sensory capacities like blindness and deafness.
So then we come to the negative part of my ambivalence — the other side of it.
And, you know, I don't want to be deflationary, because much of my deflationary commentary is probably born largely of ignorance more than anything else. But generally speaking, the bandwidth and the bit rates that you get from brain-computer interfaces as we currently know them — we're talking about bits per second. So that would be like me only being able to communicate with my world, or with you, using very, very slow Morse code. And it is not even within an order of magnitude of what we actually need for an enactive realisation of what people aspire to when they think about curing people with paraplegia or replacing sight.
Despite heroic efforts. So one has to ask, is there a lower bound on the kinds of recurrent information exchange between a brain and some augmented or artificial interface? And then we come back, interestingly, to what I was talking about before, which is, you know, if you're talking about function in terms of inference, and I presume we'll get to that later on in terms of the free energy principle, then there may be fundamental reasons to assume that is the case.
We're talking about ensemble activity. We're talking about, basically... For example, let's think of the challenge facing brain-computer interfacing in terms of controlling another system that is highly and deeply structured, very relevant to our lives, very nonlinear, and that rests upon the kind of non-equilibrium steady-state dynamics that the brain does: the weather. Right.
So in that example, imagine you had some very aggressive satellites that could produce signals that could perturb some little parts of the weather system.
And then what you're asking is: can I meaningfully get into the weather and change it, make the weather respond in a way that I want it to? You're talking about chaos control on a scale which is almost unimaginable.
So there may be fundamental reasons why BCI as you might read about it in a science fiction novel, aspirational BCI, may never actually work, in the sense that to really be integrated and be part of the system requires you to have evolved with that system. You know, you have to be part of a very delicately structured, deeply structured, dynamic ensemble activity; it is not like rewiring a broken computer or plugging in a peripheral interface adapter.
It is much more like getting into the weather patterns, or, to come back to your magic soup, getting into the active matter and meaningfully relating that to the outside world. So I think there are enormous challenges there.
So I think the weather example is a brilliant one, and I think you paint a really interesting picture, and it wasn't as negative as I thought: you're essentially saying that it might be incredibly challenging, including the low bandwidth and so on. So, just for full disclosure, I come from the machine learning world, so my natural thought is that the hardest part is the engineering challenge of controlling the weather, of getting those satellites up and running, and so on.
And once they are, then the rest is fundamentally the same approaches that allow you to win the game of Go; they will allow you to potentially play in the soup, in this chaos. So I have a hope that machine learning methods will help us play in the soup.
But perhaps you're right that biology and the brain are just an incredible, incredible system that may be almost impossible to get into. But for me, what seems impossible is the incredible mass of blood vessels, and everything else we value in the brain: you can't make any mistakes, you can't damage things. To me, that engineering challenge seems nearly impossible.
One of the things I was really impressed by at Neuralink was just talking to the brilliant neurosurgeons and roboticists. That made me realize that even though it seems impossible, if anyone can do it, it's some of these world-class engineers that are trying to take it on.
So I think the conclusion of this part of our discussion is basically that the problem is really hard, but hopefully not impossible.
Absolutely. So if it's OK, let's start with the basics.
So you've formulated the fascinating free energy principle. Can we maybe start at the basics? What is the free energy principle?

Well, in fact, the free energy principle inherits a lot from the building of these data-analytic approaches to the very high-dimensional time series you get from the brain.
So I think it's interesting to acknowledge that, and in particular the analysis tools that try to address the other side, which is the functional integration, the connectivity analyses, on the one hand. But I should also acknowledge it inherits an awful lot from machine learning as well. So the free energy principle is just a formal statement that the existential imperatives for any system that manages to survive in a changing world can be cast as an inference problem, in the sense that you can interpret the probability of existing as the evidence that you exist.
And if you can write down that problem of existence as a statistical problem, then you can use all the maths that has been developed for inference to understand and characterize the ensemble dynamics that must be in play in the service of that inference.
So technically, what that means is you can always interpret anything that exists.
In virtue of being separate from the environment in which it exists, as trying to minimize variational free energy. And if you're from the machine learning community, you will know that as a negative evidence lower bound, or a negative ELBO, which is the same as saying it will look as if all your dynamics are trying to maximize the complement of that, which is the marginal likelihood, or the evidence for your own existence.
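For readers from the machine learning side, the quantity being discussed can be made concrete in a few lines. This is a toy sketch (the model and numbers are invented for illustration): a conjugate Gaussian model where the exact log evidence is known in closed form, so you can see the ELBO bounding it from below and becoming tight at the exact posterior.

```python
import math

def elbo(x, mu, s2):
    """ELBO for prior z ~ N(0,1), likelihood x|z ~ N(z,1), approximate q(z) = N(mu, s2)."""
    exp_loglik = -0.5 * math.log(2 * math.pi) - 0.5 * ((x - mu) ** 2 + s2)
    exp_logprior = -0.5 * math.log(2 * math.pi) - 0.5 * (mu ** 2 + s2)
    entropy_q = 0.5 * math.log(2 * math.pi * math.e * s2)
    return exp_loglik + exp_logprior + entropy_q

x = 1.7
# The marginal likelihood is N(x; 0, 2), so the log evidence is known exactly.
log_evidence = -0.5 * math.log(2 * math.pi * 2) - x ** 2 / 4

assert elbo(x, 0.0, 1.0) <= log_evidence               # any q gives a lower bound
assert abs(elbo(x, x / 2, 0.5) - log_evidence) < 1e-9  # the exact posterior makes it tight
```

Minimizing variational free energy is just maximizing this bound, which is maximizing (approximate) evidence for the model.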
So that's basically the free energy principle.

To take a small step backwards: you said "the existential imperative."
There are a lot of beautiful, poetic words here, but to put it crudely, it's a fascinating idea of basically just trying to describe, if you're looking at a blob, how do you know this thing is alive?
What does it mean to be alive? What does it mean to exist?
And so you can look at the brain, you can look at parts of the brain, or, this is just a general principle that applies to almost any system. It's just a fascinating question, philosophically, at every level, and a methodology to try to answer that question: what does it mean to be alive? Yes.
So that's a huge endeavor, and it's nice that there's, at least from some perspective, a clean answer. So maybe can you talk about the optimization view of it? What's trying to be minimized or maximized? A system that's alive, what is it trying to minimize? Right.
You've made a big move there. Two big moves. You've assumed that the thing exists in a state that could be living or non-living. So I might ask you, what licenses you to say that something exists? That's why I use the word existential. It's beyond living; it's just existence. So if you drill down onto the definition of things that exist, then they have certain properties, if you borrow the maths from non-equilibrium steady-state physics, that enable you to interpret their existence in terms of this optimization procedure.

So that's where you introduce the word optimization?
So what the free energy principle, in its most ambitious but also most deflationary and simplest form, says is that if something exists, then it must, by the mathematics of non-equilibrium steady states, exhibit properties that make it look as if it is optimizing a particular quantity. And it turns out that particular quantity happens to be exactly the same as the evidence lower bound in machine learning, or Bayesian model evidence in Bayesian statistics, and I can list a whole host of other ways of understanding this key quantity, which is a bound on surprisal, or self-information.
If you know your information theory, there are a number of different perspectives on this. It's just the negative log probability of being in a particular state. I'm telling this story as an honest attempt to answer your question, and I'm answering it as if I were pretending to be a physicist who was trying to understand the fundamentals of non-equilibrium steady states.
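The "negative log probability" reading is simple enough to write down directly. A toy sketch (the two-state distribution is invented) relating surprisal to entropy:

```python
import math

def surprisal(p):
    """Self-information: the negative log probability of an outcome (in nats)."""
    return -math.log(p)

assert surprisal(1.0) == 0.0              # a certain state carries no surprise
assert surprisal(0.01) > surprisal(0.5)   # rarer states are more surprising

# Averaged over states, surprisal is just the entropy of the distribution.
dist = {"intact": 0.9, "dissolved": 0.1}
entropy = sum(p * surprisal(p) for p in dist.values())
assert abs(entropy - 0.325) < 0.001
```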
And I shouldn't really be doing that because the last time I was taught physics, I was in my twenties.
What kind of systems when you think about the free energy principle, what kind of systems are you imagining as a sort of more specific kind of case study?
Yeah, I'm imagining a range of systems, but, you know, at its simplest, a single-celled organism that can be distinguished from its environment.
So at its simplest, that's basically what I always imagine in my head. And you may ask, well, how on earth can you even elaborate questions about the existence of a single drop of oil, for example?
Yeah, well, but there are deep questions there. Why doesn't the interface between the drop of oil, which contains an interior, and the thing that is not the drop of oil, which is the solvent in which it is immersed, break down? How does that interface persist over time? Why doesn't the oil just dissolve into the solvent?
So what special properties govern the exchange between the surface of the oil drop and the external states in which it's immersed? If you're a physicist, that would be the heat bath. You've got a physical system, an ensemble, again we're talking about stochastic ensemble dynamics, an ensemble of atoms and molecules immersed in a heat bath. But the question is, why is it not dissolving; how is it maintaining itself?
Exactly. What actions is it taking?
I mean, it's such a fascinating idea, a drop of oil, and I guess it wouldn't dissolve in water. Precisely. So why not? Why not? And how do you mathematically describe it? I mean, it's such a beautiful idea. And also the idea of, where does the drop of oil end, and where does it begin? Right.
So, I mean, you're asking deep questions here. But this is the deflationary part of it. Can I just qualify my answer by saying that normally, when I'm asked this question, I answer from the point of view of a psychologist: we talk about predictive processing and predictive coding, and, you know, the brain as an inference machine. But you haven't asked me from that perspective; I'm answering from the point of view of a physicist.
So, you know, the question is not so much why, but: if it exists, what properties must it display? So that's the deflationary part. The free energy principle does not supply an answer as to why; it's saying, if something exists, then it must display these properties. That's the sort of thing that's on offer. And it so happens that the properties it must display are actually intriguing and have this inferential gloss.
This is self-evidencing, and that inherits from the fact that the very preservation of the boundary between the oil drop and the not-oil-drop requires an optimization of a particular function, or a functional, that defines the presence, the existence, of this oil drop. Which is why I started with existential imperatives: it is a necessary condition for existence that this must occur, because the boundary basically defines the thing that's existing. So it is that self-assembly aspect.
It's that which you were hinting at in biology, sometimes known as autopoiesis, and in computational chemistry as self-assembly. What does it look like? How would you describe things that configure themselves out of nothing, that clearly demarcate themselves from the states, or the soup, in which they are immersed?
So from the point of view of computational chemistry, for example, you would just understand that as the configuration of a macromolecule to minimize its free energy, its thermodynamic free energy. It's exactly the same principle that we've been talking about: that thermodynamic free energy is just the negative ELBO. It's the same mathematical construct. So the very emergence of existence, of structure, of form that can be distinguished from the environment, or the thing that is not the thing, necessitates the existence...
Of an objective function that it looks as if it is minimizing: its variational free energy. And so, just to clarify, I'm trying to wrap my head around this. The free energy principle says that if something exists, these are the properties it should display. Yes. So what that means is, we can't just go into a soup; the free energy principle doesn't give us a mechanism to find the things that exist.
Is what's being implied that you can use it to reason about, to study, a particular system and ask: does this exhibit these qualities?
That's an excellent question. But to answer that, I'd have to return to your previous question about what's the difference between living and non-living things.
Yes, exactly. Yeah, maybe we can go there. You kind of drew a line, and forgive me for the stupid questions, but you kind of drew a line between living and existing. Yeah. Is there an interesting distinction?
Yeah, I think there is. So, you know, things do exist: grains of sand, rocks on the moon, trees, you. So all of these things can be separated from the environment in which they are immersed, and therefore they must, at some level, be optimizing their free energy. Taking this sort of model-evidence interpretation of this quantity, that basically means that they are self-evidencing. Another nice little twist of phrase here is that you are your own existence proof.
Statistically speaking, which... I don't think I said that; somebody else did, but I love that phrase.
You are your own existence proof. Yeah. There's an existentialism in that; I need to think about that for a few days. Yeah, that's a beautiful line. So the way to answer your question about what it's good for would go along the following lines. First of all, you have to define what it means to exist, which, as you rightly pointed out, means you have to define what probabilistic properties the states of something must possess so that it knows where it finishes.
And then you write that down in terms of statistical independencies. Again, sparsity. Again, it's not what's connected, or what's correlated, or what depends upon what; it's what's not correlated and what doesn't depend upon something. Again, it comes down to the deep structures, not in this instance hierarchical, but certainly the structures that emerge from removing connectivity and dependency. And in this instance, it's basically being able to identify the surface of the oil drop from the water in which it is immersed.
And when you do that, you start to realize, well, there are actually three kinds of states in any given universe that contains anything: the things that are internal to the surface, the things that are external to the surface, and the surface in and of itself. Which is why I use the metaphor of a little single-celled organism that has an interior and an exterior and then the surface of the cell. And that, mathematically, is a Markov blanket.
Just to pause on this concept: that there's the stuff outside the surface, the stuff inside the surface, and the surface itself, the Markov blanket.
It's just the most beautiful kind of notion, trying to explore what it means to exist, mathematically. Apologies, I got carried away there; maybe it's just because I'm in California. I take it all back. So anyway, you were just talking about the surface, about the Markov blanket. Yeah.
So the surface, these blanket states, are now defined in relation to these independencies: which states, internal or blanket or external, can influence each other, and which cannot. You can now apply standard results that you would find in non-equilibrium physics, steady-state solutions, thermodynamics or hydrodynamics, usually out-of-equilibrium solutions, and apply them to this partition. And what it looks like is as if all the normal gradient flows that you would associate with any non-equilibrium system apply in such a way...
That part of the Markov blanket and the internal states seem to be hill-climbing, or doing a gradient descent, on the same quantity. And that means that you can now describe the very existence of this oil drop; you can write it down in terms of flows, dynamics, equations of motion, where the blanket states, or part of them (we call them active states), and the internal states now seem to be, and must be, trying to look as if they're minimizing the same function, which is the log probability of occupying these states.
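A minimal sketch of an internal state doing a gradient flow on a variational free energy, for a toy Gaussian model (prior z ~ N(0,1), likelihood x|z ~ N(z,1); the model, numbers, and step size are all invented for illustration):

```python
# Internal state mu descends the variational free energy F(mu) = -ELBO for
# prior z ~ N(0,1), likelihood x|z ~ N(z,1), with q(z) a Gaussian of fixed variance.
# Up to additive constants, F(mu) = 0.5*(x - mu)**2 + 0.5*mu**2.
x, mu, lr = 1.7, 0.0, 0.05
for _ in range(500):
    dF_dmu = (mu - x) + mu   # gradient: likelihood term plus prior term
    mu -= lr * dF_dmu        # discretized gradient flow

# The flow settles at the exact Bayesian posterior mean, x / 2.
assert abs(mu - x / 2) < 1e-9
```

The point of the sketch is only that "looking as if it minimizes free energy" is an ordinary gradient flow on an ordinary objective, here one whose fixed point coincides with exact inference.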
The interesting thing is: what would these be called, if you were trying to describe them? So what we're talking about are internal states, external states, and blanket states. Now let's carve the blanket states into two: sensory states and active states. Operationally, it has to be the case that, in order for this carving up into different sets of states to exist, the active states of the Markov blanket cannot be influenced by the external states. And we already know that the internal states can't be influenced by the external states, because the blanket separates them.
So what does that mean? Well, it means the active states and the internal states are now jointly not influenced by external states; they only have autonomous dynamics. So now you've got a picture of an oil drop that has autonomy. It has autonomous states, in the sense that there must be some parts of the surface of the oil drop that are not influenced by the external states, plus all the interior. And together, those states endow even a little oil drop with autonomous states that look as if they are optimizing their variational free energy, or their negative ELBO, their model evidence.
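In graphical-model terms, the Markov blanket of a node is its parents, its children, and its children's other parents. A toy sketch of the partition just described (the node names are invented to mirror the cell metaphor):

```python
def markov_blanket(node, parents):
    """Markov blanket in a directed model: parents, children, and co-parents.

    `parents` maps each node to the set of its parent nodes.
    """
    children = {n for n, ps in parents.items() if node in ps}
    coparents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | coparents

# Toy loop: world -> sensory surface -> internal state -> active surface -> world.
g = {
    "external": {"active"},    # action changes the world
    "sensory":  {"external"},  # the world impresses on the sensory states
    "internal": {"sensory"},   # internal states are driven only via the blanket
    "active":   {"internal"},  # internal states drive action
}

# The internal state's blanket is exactly the surface: sensory plus active states.
assert markov_blanket("internal", g) == {"sensory", "active"}
```

Conditioned on its blanket, the internal node is statistically independent of the external node, which is the independence structure being described here.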
And that would be an interesting intellectual exercise.
And you could say, you could even go into the realms of panpsychism, that everything that exists is implicitly making inferences, is self-evidencing. Now we make the next move: what about living things? I mean, let me ask you, what's the difference between an oil drop and a little tadpole, or a little larva, or a plankton? The picture you just painted of an oil drop immediately, in a matter of minutes, took me into the world of panpsychism, where you just convinced me, made me feel like an oil drop is a living, certainly an autonomous, system.
But almost a living system: it has sensing capabilities and acting capabilities, and it maintains something. So what is the difference between that and something that we traditionally think of as a living system?
That it could die, or that it can't?
I mean, mortality... I'm not exactly sure. I'm not sure what the right answer there is, because it can move; movement seems like an essential element to being able to act in the environment. But the oil drop is doing that, so I don't know. Is it? The oil drop can be moved, but does it, in and of itself, move autonomously? Well, the surface is performing actions that maintain its structure. You're being too clever.
I had in mind a passive little oil drop sitting there at the bottom of a glass of water.
I guess what I'm trying to say is, you're absolutely right. You nailed it: it's movement. Yeah. So where does that movement come from? If it comes from the inside, then you've got, I think, something that's living. What do you mean, from the inside? What I mean is that the internal states that can influence the active states, where the active states can influence but are not influenced by the external states, can cause movement. So there are two types of oil drops, if you like.
There are oil drops where the internal states are so...
Random that they average themselves away, and the thing cannot, on balance, on average, move. So a nice example of that would be the sun. The sun certainly has internal states; there's lots of intrinsic autonomous activity going on. But because it's not coordinated, because it doesn't have the deep, hierarchical structure that the brain does...
There is no overall mode or pattern or organization that expresses itself on the surface that would allow it to actually swim. It can certainly have a very active surface, but en masse, at the scale of the actual surface of the sun, the average position of that surface cannot in itself move, because the internal dynamics are more like a hot gas; they are literally a hot gas. Whereas your internal dynamics are much more structured and deeply structured. And now you can express, on your Markov blanket, on your active states, with your muscles and your secretory organs, your autonomic nervous system.
And it's the fact that you can actually move. Mm hmm. And that's all you can do. And that's something which, if you haven't thought of it like this before, I think it's nice to realize: there is no other way that you can change the universe other than simply moving. Whether that movement is articulating with my voice box, or walking around, or squeezing juices out of my secretory organs, there's only one way you can change the universe.
It's moving. And the fact that you do so non-randomly makes you alive. Yeah.
So it's that non-randomness that would be manifest, would be realized, in terms of essentially swimming, essentially moving, changing one's shape, a morphogenesis that is dynamic and possibly adaptive. So that's what I was trying to get at with the difference between the oil drop and the little tadpole: the tadpole is moving around, its active states are actually changing the external states, and there's now a cycle, an action-perception cycle if you like, a recurrent dynamic, that's going on, that depends upon this deeply structured autonomous behavior that rests upon...
Internal dynamics that are not only modeling the data impressed upon their surface, or their blanket states, but are actively resampling those data by moving, moving towards, say, chemical gradients in chemotaxis. So they've gone beyond just being good little models of the kind of world they live in. For example, an oil droplet could, in a panpsychic sense, be construed as a little being that has now perfectly inferred it's a passive, non-living oil drop living in a bowl of water. No problem.
But now equip that oil drop with the ability to go out and test that hypothesis about different states of being, so it can actually push its surface over there and over there and test for chemical gradients, and then you start to move to a much more lifelike form. This is fun; it's theoretically interesting. But it's actually quite important in terms of reflecting what I have seen since the turn of the millennium, which is this move towards an enactive, embodied understanding of intelligence.
And you say you're from machine learning. Yes. So this central importance of movement, I think, is yet to really hit machine learning. It certainly has now diffused itself throughout robotics, and perhaps you could say certain problems in active vision, where you actually have to move the camera to sample this and that. But machine learning of the data-mining, deep-learning sort simply hasn't contended with this issue. What it's done, instead of dealing with the movement problem and the active sampling of data, is to say: we don't need to worry about it.
We can see all the data, because we've got big data, so we can ignore movement. So that, for me, is an important omission in current machine learning.
Machine learning is much more like the oil drop.
Yes, but an oil drop that enjoys exposure to nearly all the data it could be exposed to, as opposed to the tadpole swimming out to find the right data. For example, it likes food. That's a good hypothesis; it's testable. Let's go and move and ingest food, for example, and see whether there's evidence that I'm the kind of thing that likes this kind of food.
So the next natural question, and forgive this question, but if we think of artificial intelligence systems... this has just been a beautiful picture of existence and life. So do you ascribe, or do you find within this framework, the possibility of defining consciousness, or exploring the idea of consciousness? You know, self-awareness, expanded to consciousness?
Yeah. How can we start to think about consciousness within this framework? Is it possible?
Well, yeah, I think it's possible to think about it; whether you get anywhere is a different question.
And again, I'm not sure that I'm licensed to answer that question; I think you'd have to speak to a qualified philosopher to get a definitive answer there. But certainly there's a lot of interest in using not just these ideas, but related ideas from information theory, to try to tie down the maths and the calculus and the geometry of consciousness, either in terms of a minimal consciousness, or even less than that, a minimal selfhood. And what I'm talking about is the ability, effectively, to plan, to have agency.
So you could argue that a virus does have a form of agency, in virtue of the way that it selectively finds hosts and cells to live in and moves around, but you wouldn't endow it with the capacity to think about planning and moving in a purposeful way, where it countenances the future. Whereas you might with an ant. You might think an ant is not quite as unconscious as a virus; it certainly seems to have a purpose. It talks to its friends en route during its foraging.
It has a different kind of autonomy, something beyond a virus.
So there's something there, some line, that has to do with the complexity of planning, yes, that may contain an answer. I mean, it would be beautiful if we could find a line beyond which we can say a being is conscious.
Yes, it would be, alongside the wonderful lines we've drawn with existence and life; a line for consciousness would be very nice.
There's one little wrinkle there, and this is something I've only learned in the past few months, which is the philosophical notion of vagueness. So you're saying it would be wonderful to draw a line. I had always assumed that that line could at some point be drawn, until about four months ago, when a philosopher taught me about vagueness.
I don't know if you've come across this, but it's a technical concept, and I think it's most revealingly illustrated by asking: at what point does a collection of grains of sand become a pile? Is it one grain, two grains, three grains, or four? So where would you draw the line between a pile of sand and a collection of grains of sand? In the same way, is it right to ask where I would draw the line between conscious and unconscious?
And it might be a vague concept. Having said that, I agree with you entirely.
Systems that have the ability to plan.
So, just technically, what that means is: your inferential self-evidencing (by which I simply mean the dynamics, literally the thermodynamics and gradient flows, that underlie the preservation of your oil-droplet-like form) can be described as an optimization of log Bayesian model evidence, your ELBO. That self-evidencing must be evidencing a model of what's causing the sensory impressions on the sensory part of your surface, your Markov blanket. If that model is capable of planning, it must include a model of the future consequences of your active states, of your actions. That's just planning.
So now planning is in the game: planning as inference. Now, notice what we've made, though. We've made quite a big move away from big data and machine learning, because, again, it's the consequences of moving; it's the consequence of selecting those data or these data, or looking over there. And that tells you immediately that even to be a contender for a conscious artifact, or, as it were, strong AI or artificial general intelligence, you've got to have movement in the game.
And furthermore, you've got to have a generative model, of the sort you might find in, say, a variational autoencoder, that is thinking about the future, conditioned upon different courses of action.
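A toy sketch of that idea, planning as inference: score each candidate action by the evidence its predicted outcomes would provide for the agent's preferred (expected) states. The action names, outcome probabilities, and preferences are all invented, and this expected-log-evidence score is only a crude stand-in for the full expected free energy used in active inference:

```python
import math

# Preferences encoded as a prior over outcomes: the creature "expects" to find food.
log_preference = {"food": math.log(0.9), "nothing": math.log(0.1)}

# Hypothetical generative model: predicted outcomes under each course of action.
prediction = {
    "swim_left":  {"food": 0.7, "nothing": 0.3},
    "swim_right": {"food": 0.2, "nothing": 0.8},
    "stay":       {"food": 0.1, "nothing": 0.9},
}

def expected_log_evidence(action):
    """Average log preference of the outcomes this action is predicted to cause."""
    return sum(p * log_preference[o] for o, p in prediction[action].items())

# Selecting the most self-evidencing course of action.
best = max(prediction, key=expected_log_evidence)
assert best == "swim_left"
```

The point is that action selection becomes an inference: the "best" policy is the one whose predicted consequences best match what the agent, by its model, expects itself to experience.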
Now that brings a number of things to the table, which we can now start thinking about. We've got all the right ingredients to talk about consciousness. I've now got to select among a number of different courses of action into the future, as part of planning.
Have I now got free will? The act of selecting this course of action, or that policy, or that action, suddenly makes me into an inference machine, a self-evidencing artifact that now looks as if it's selecting amongst different alternative ways forward, as I actively swim here or swim there, or look over here, look over there. So I think you've now got to a situation...
If there is planning in the mix, you're now getting much closer to that line, if it were ever to exist. I don't think it gets you quite as far as self-awareness, though. I think then you have to grapple with the question of how you would formally write down a calculus, or a maths, of self-awareness. I don't think it's impossible to do, but there would be pressure on you to actually commit to a formal definition of what you mean by self-awareness.
I think most people that I know would probably say that a goldfish, a pet fish, was not self-aware. They would probably argue about their favorite cat, but would be quite happy to say that their mom was self-aware.
I mean, that might very well connect to some level of complexity of planning. It seems like self-awareness is essential for complex planning. Yeah, do you want to take that further? You're absolutely right. Again, the line is unclear, but it seems like integrating yourself into the world, into your planning, is essential for constructing complex plans.
Yes, yes. So mathematically describing that in the same elegant way as you have with the free energy principle may be difficult.
Well, yes and no. I don't think... well, perhaps we should just go back, because that was a very important answer you gave, and I think if I just unpacked it, you'd see the truism that you've just exposed.
But let me say, I'm mindful that I didn't answer your question before: what's the free energy principle good for? Is it just a pretty theoretical exercise to explain non-equilibrium steady states? Yes, it is. It does nothing more for you than that. That can be regarded as very arrogant, but, you know, it is of the same sort as the theory of natural selection, or a hypothesis of natural selection.
Beautiful, undeniably true, but it tells you absolutely nothing about why you have legs and eyes. It tells you nothing about the actual phenotype, and it wouldn't allow you to build something.
So the free energy principle, by itself, is as vacuous as most tautological theories. And by tautological, of course, I'm talking about the theory of natural selection, the survival of the fittest: what survives? The fittest. Why are they the fittest? Because they survive. It goes in circles. And in a sense, the free energy principle has that same, you know, deflationary tautology under the hood.
It's a characteristic of things that exist. Why do they exist? Because they minimize their free energy. Why do they minimize their free energy? Because they exist. And you keep on going round and round and round.
But the practical thing, which you don't get from natural selection, but which you could say has now manifested in things like differential evolution or genetic algorithms, or MCMC, for example, in machine learning, the practical thing you can get is this: things that exist look as if they have density dynamics.
They're optimizing a variational free energy, and a variational free energy has to be a functional of a generative model.
A probabilistic description of causes and consequences: causes out there, consequences in the sensorium, on the sensory parts of the Markov blanket. Then it should in theory be possible to write down the model, work out the gradients, and then cause it to autonomously self-evidence. So you should be able to write down oil droplets. You should be able to create artefacts where you have supplied the objective function that supplies the gradients, that supplies the self-organising dynamics to non-equilibrium steady state.
So there is actually a practical application of the free energy principle, when you can write down your required evidence in terms of, well, when you can write down the generative model. That is the thing that has the evidence: the probability of these sensory data given that model.
The evidence given that model is effectively the thing that the ELBO, or the variational free energy bound, approximates. That means that you can actually write down the model, the kind of thing that you want to engineer, the kind of AGI or artificial general intelligence that you want to manifest, probabilistically, and then you engineer (a lot of hard work, but you will engineer) a robot and a computer to perform a gradient descent on that objective function. So it does have a practical implication.
Now, why am I going on about that? It's relevant. Yes. So what kinds of...
So the answer to "would it be easy or hard?" is: well, mathematically, it's easy. I just told you, all you need to do is write down your perfect artifact probabilistically, in the form of a probabilistic generative model, a probability distribution over the causes and consequences of the world in which this thing is immersed. And then you just engineer a computer and a robot to perform a gradient descent on that objective function. No problem.
But of course, the big problem is writing down the generative model. So that's where the heavy lifting comes in. Yeah.
So it's the form and the structure of that generative model which basically defines the artifact that you will create, or indeed the kind of artifact that has self-awareness. So that's where all the hard work comes in. Very much like natural selection, which doesn't tell you in the slightest why you have eyes; you have to drill down on the actual phenotype, the actual generative model.
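The recipe described above, writing down a probabilistic generative model and then performing gradient descent on the variational free energy, can be sketched with a toy example. All specifics below (a single hidden cause with a Gaussian prior, a quadratic sensory mapping `g`, the noise variances and step size) are assumptions for illustration only, not anything from the conversation:

```python
# Toy sketch: perception as gradient descent on variational free energy.
# Generative model (all numbers assumed for illustration):
#   hidden cause v ~ N(v_prior, sigma_prior)
#   sensory datum u | v ~ N(g(v), sigma_u), with g(v) = v**2
v_prior, sigma_prior = 3.0, 1.0   # prior belief about the hidden cause
sigma_u = 1.0                     # sensory noise variance

def g(v):
    return v ** 2                 # mapping from cause to predicted sensation

def free_energy(phi, u):
    """Variational free energy (up to additive constants) for point estimate phi."""
    return (u - g(phi)) ** 2 / (2 * sigma_u) + (phi - v_prior) ** 2 / (2 * sigma_prior)

def infer(u, phi=v_prior, lr=0.01, steps=2000):
    """Let the internal state phi descend the free-energy gradient."""
    for _ in range(steps):
        dF = -(u - g(phi)) * 2 * phi / sigma_u + (phi - v_prior) / sigma_prior
        phi -= lr * dF
    return phi

u = 10.0                          # an observed sensory datum
phi = infer(u)                    # settles near the most plausible cause
print(phi, free_energy(phi, u))
```

Swapping in a richer model (hierarchies, dynamics, action on the world) is exactly the "heavy lifting" the conversation points to; the descent on the objective function stays the same.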
So with that in mind, what did you tell me? That tells me immediately the kinds of generative models I would have to write down in order to have self-awareness. What you said to me was:
I have to have a model that is effectively fit for purpose for this kind of world in which I operate.
And if I now make the observation that this kind of world is effectively largely populated by other things like me, i.e. you, then it makes enormous sense that if I can develop a hypothesis that we are similar kinds of creatures, in fact the same kind of creature, but I am me and you are you, then it becomes mandated to have a sense of self.
So if I live in a world that is constituted by things like me, basically a social world, a community, then it becomes necessary now for me to infer that it's me talking and not you talking.
I wouldn't need that if I was on Mars by myself or if I was in the jungle as a feral child.
If there were nothing like me around, there would be no need to have the inference, the hypothesis: yes, it is me that is experiencing or causing these sounds, and it is not you. It's only when there's ambiguity in play, induced by the fact that there are others in that world. So I think that the special thing about self-aware artifacts is that they have learned, or they have acquired, or at least are equipped with, possibly by evolution, generative models that allow for the fact that there are lots of copies of things like them around, and therefore they have to work out that it's you and not me.
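The self-versus-other inference described here can be caricatured as Bayesian comparison of two hypotheses about who caused a sensed sound. All the prior and likelihood numbers below are invented purely for illustration:

```python
# Toy sketch: "was that me or you?" as Bayesian hypothesis comparison.
# All probabilities are assumed for illustration only.
priors = {"me": 0.5, "you": 0.5}       # a priori, either of us could be speaking

# Likelihood of the evidence (a heard voice arriving with no accompanying
# motor or proprioceptive feedback from my own vocal tract) under each hypothesis:
likelihoods = {"me": 0.05, "you": 0.8}

# Bayes' rule: posterior is proportional to prior times likelihood.
evidence = sum(priors[h] * likelihoods[h] for h in priors)
posterior = {h: priors[h] * likelihoods[h] / evidence for h in priors}

print(posterior)                       # the "you" hypothesis dominates
```

Without another agent in the world, the "me" hypothesis would never face a rival, which is the sense in which modeling a self only becomes mandatory in a social world.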
That's brilliant. I've never thought of that.
I never thought of that: that the purpose, the real usefulness, of self-awareness, in the context of planning and existing in the world, is so you can operate with other things like you. And it doesn't necessarily have to be human; it can be other kinds of similar creatures. Absolutely.
Well, we imbue a lot of our attributes into our pets, don't we? Or we try to make our robots humanoid. And I think there's a deep reason for that. It's just much easier to read the world if you can make the simplifying assumption that basically you're me, and it's just your turn to talk.
And I mean, when we talk about planning, when you talk specifically about planning, the highest, if you like, manifestation or realization of that planning is what we're doing now. I mean, the human condition doesn't get any higher than this: talking about the philosophy of existence in conversation. But in that conversation there is, you know, a beautiful art of turn-taking and mutual inference, theory of mind. I have to know when you want to listen.
I have to know when you want to interrupt. I have to make sure that you're online. I have to have a model in my head of your model in your head. That's the highest, the most sophisticated form of generative model, where the generative model actually has a generative model of somebody else's generative model.
And I think that what we are doing now evinces the kinds of generative models that would support self-awareness, because without that we'd both be talking over each other, or we'd be singing together in a choir, you know, which is probably not a brilliant idea, is what I'm trying to say.
But, you know, we wouldn't have this discourse, the dance of it.
Yeah, that's right. The dance of it, as I interrupt. I mean, that's beautifully put. I'll listen to this conversation many times; there's so much poetry in this, and mathematics. Let me ask perhaps the biggest question, as a last kind of question. We've talked about living and existence and the objective function under which these objects would operate. What do you think is the objective function of our existence? What's the meaning of life? What do you think is, for you perhaps, the purpose, the source of fulfillment, the source of meaning for your existence, as one blob in this soup?
I'm tempted to answer that again as the physicist: to minimize the free energy expected consequent upon my behavior. So technically, we could get into a really interesting conversation about what that comprises, in terms of searching for information, resolving uncertainty about the kind of thing that I am.
But I suspect that you want a slightly more personal and fun answer, which can nevertheless be consistent with that. And I think it's reassuringly simple and harks back to what you were taught as a child: that you have certain beliefs about the kind of creature and the kind of person you are. And all that self-evidencing means, all that minimizing variational free energy in an enactive and embodied way means, is fulfilling the beliefs about what kind of thing you are.
And of course, we're all given those scripts, those narratives at a very early age in the form of bedtime stories or fairy stories.
I'm a princess and I'm going to meet a beast who's going to transform into a prince.
And the narratives are all around you; from your parents to your friends, the society feeds you these stories.
And then your objective function is to fulfill exactly the narrative that has been instilled by your immediate family, but, you know, as you say, also by the sort of culture in which you grew up, and you create it for yourself. I mean, again, because of this active inference, this enactive aspect of self-evidencing: not only am I modeling my environment, my niche, my external states out there, but I'm actively changing them all the time, and the external states are doing the same back.
We're doing it together. So there's a synchrony that means that I'm creating my own culture over different timescales. So the question now, for me, being very selfish, is: what scripts was I given? It was basically a mixture between Einstein and Sherlock Holmes.
So I smoke as heavily as possible. Try to avoid too much interpersonal contact.
Yeah, yeah. Enjoy the fantasy that, you know, you're a scientist who's going to make a difference, in a slightly quirky way. So that's what I grew up with. My father was an engineer and loved science, and loved those sorts of things, like Sir Arthur Eddington's Space, Time and Gravitation, which was, you know, the first understandable version of general relativity.
And so all the fairy stories I was told as I was growing up were all about these characters, things like The Hobbit, because it's quite a journey of exploration of sorts.
So, yeah, I've just grown up to be what I imagine a mild-mannered Sherlock Holmes slash Albert Einstein would do in my shoes.
And you did it elegantly and beautifully. Karl, it was a huge honor; today was fun. Thank you so much for your time. Thank you. Thank you for listening to this conversation with Karl Friston, and thank you to our presenting sponsor, Cash App. Please consider supporting the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman. And now, let me leave you with some words from Karl Friston:
Your arm moves because you predict it will, and your motor system seeks to minimize prediction error. Thank you for listening, and hope to see you next time.