[00:00:00]

The following is a conversation with Ben Goertzel, one of the most interesting minds in the artificial intelligence community. He's the founder of SingularityNET, designer of the OpenCog framework, formerly a director of research at the Machine Intelligence Research Institute, and chief scientist of Hanson Robotics, the company that created the Sophia robot. He has been a central figure in the AGI community for many years, including in his organizing and contributing to the Conference on Artificial General Intelligence, the 2020 version of which is actually happening this week, Wednesday, Thursday and Friday.

[00:00:36]

It's virtual and free. I encourage you to check out the talks, including by Joscha Bach from episode 101 of this podcast. Quick summary of the ads: two sponsors, The Jordan Harbinger Show and Masterclass. Please consider supporting this podcast by going to jordanharbinger.com/lex and signing up at masterclass.com/lex. Click the links, buy all the stuff. That's the best way to support this podcast and the journey I'm on in my research and startup.

[00:01:08]

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter.

[00:01:18]

It's Lex Fridman, spelled without the E, just F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. This episode is supported by The Jordan Harbinger Show. Go to jordanharbinger.com/lex. It's how he knows I sent you. On that page, there are links to subscribe to it on Apple Podcasts, Spotify, and everywhere else. I've been binging on his podcast. Jordan is great.

[00:01:47]

He gets the best out of his guests, dives deep, calls them out when it's needed, and makes the whole thing fun to listen to. He's interviewed Kobe Bryant, Mark Cuban, Neil deGrasse Tyson, Garry Kasparov, and many more. His conversation with Kobe is a reminder of how much focus and hard work is required for greatness in sport, business, and life. I highly recommend the episode if you want to be inspired. Again, go to jordanharbinger.com/lex. It's how Jordan knows I sent you.

[00:02:20]

This show is also sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For 180 bucks a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of the greatest city-building game ever, SimCity, and The Sims, on game design.

[00:02:52]

Carlos Santana on guitar. Garry Kasparov, the greatest chess player ever, on chess. Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast. And now, here's my conversation with Ben Goertzel.

[00:03:38]

What books, authors, ideas had a lot of impact on you in your life in the early days?

You know, what got me into AI and science fiction and such in the first place wasn't a book, but the original Star Trek TV show, which my dad watched with me, like, in its first run. It would have been 1968, '69 or something. And that was incredible, because every show visited a different alien civilization, with a different culture and weird mechanisms.

[00:04:08]

But that got me into science fiction, and there wasn't that much science fiction to watch on TV at that stage, so that got me into reading the whole literature of science fiction, you know, from the beginning of the previous century until that time. And I mean, there were so many science fiction writers who were inspirational to me. I'd say if I had to pick two, it would have been Stanisław Lem, the Polish writer. Yeah, Solaris.

[00:04:38]

And then he had a bunch of more obscure writings on superhuman AIs that were engineered. Solaris was sort of a superhuman, naturally occurring intelligence. Then Philip K. Dick, who, you know, ultimately, my fandom for Philip K. Dick is one of the things that brought me together with David Hanson, my collaborator on robotics projects. So, you know, Stanisław Lem was very much an intellectual, right? So he had a very broad view of intelligence going beyond the human, and into what I would call, you know, open-ended superintelligence. The Solaris superintelligent ocean was intelligent, in some ways more generally intelligent than people, but in a complex and confusing way, so that human beings could never quite connect to it.

[00:05:27]

But it was still probably very, very smart. And then the Golem XIV supercomputer in one of Lem's books, this was engineered by people, but eventually it became very intelligent in a different direction than humans and decided that humans were kind of trivial, not that interesting. So it put some impenetrable shield around itself, sealed itself off from humanity, and then issued some philosophical screed about the pathetic and hopeless nature of humanity and all human thought, and then disappeared.

[00:06:05]

Now, Philip K. Dick, he was a bit different. He was human-focused. His main thing was, you know, human compassion, and the human heart and soul are going to be the constant that will keep us going through whatever aliens we discover, or telepathy machines, or super AIs, or whatever it might be. So he didn't believe in reality. Like, the reality that we see may be a simulation or a dream or something else we can't even comprehend.

[00:06:35]

But he believed in love and compassion as something persistent through the various simulated realities. So those two science fiction writers had a huge impact on me. Then, a little older than that, I got into Dostoevsky and Friedrich Nietzsche and Rimbaud and a bunch of more literary-type things.

[00:06:54]

We'll talk about some of those things. On the Solaris side, Stanisław Lem,

[00:07:00]

this kind of idea of there being intelligences out there that are different than our own. Do you think there are intelligences maybe all around us that we're not able to even detect? This kind of idea, maybe you can comment also on Stephen Wolfram's thinking that there are computations all around us, and we're just not smart enough to kind of detect their intelligence, or appreciate their intelligence?

Yeah. So my friend Hugo de Garis, who I've been talking to about these things for many decades, since the early '90s, he had an idea he called SIPI, the Search for Intra-Particulate Intelligence.

[00:07:42]

So the concept there was, as AIs get smarter and smarter and smarter, you know, assuming the laws of physics as we know them now are still what these superintelligences perceive to hold and are bound by, as they get smarter and smarter, they're going to shrink themselves littler and littler, because special relativity limits how fast they can communicate between two spatially distant points. So they're going to get smaller and smaller. But then ultimately, what does that mean?

[00:08:10]

The minds of the super, super, super intelligences are going to be packed into the interaction of elementary particles, or quarks, or the partons inside quarks, or whatever it is. So what we perceive as random fluctuations on the quantum or sub-quantum level may actually be the thoughts of these micro-micro-micro-miniaturized superintelligences, because there's no way we can tell random from structured but with an algorithmic information content more complex than our brains. Right?
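A toy illustration of this point (my own sketch, not something from the conversation): to an observer with limited pattern-finding power, a fully deterministic pseudo-random generator is indistinguishable from true noise, while an obviously simple pattern is detected at once. Here a compressor stands in, very crudely, for our ability to detect structure.

```python
# Sketch: "random" vs. "structured but too complex for the observer".
# zlib is a crude stand-in for our limited pattern-detection ability.
import os
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Compressed size / original size; near 1.0 means 'looks random' to zlib."""
    return len(zlib.compress(data, 9)) / len(data)

rng = random.Random(42)                           # fully deterministic process
deterministic = bytes(rng.randrange(256) for _ in range(100_000))
structured = b"abcd" * 25_000                     # simple repeating pattern
true_noise = os.urandom(100_000)                  # OS entropy

print(compressed_ratio(deterministic))            # ~1.0: no structure found
print(compressed_ratio(structured))               # ~0.0: structure found at once
print(compressed_ratio(true_noise))               # ~1.0: same as deterministic case
```

To zlib, the deterministic stream and genuine noise look identical; only a detector with more algorithmic power than the generator (here, one that knew the seed) would see the first stream as pure structure.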

[00:08:40]

We can't tell the difference. So what we think is random could be the thought processes of some really tiny super minds. And if so, there is not a darn thing we can do about it, except, you know, try to upgrade our intelligences and expand our minds so that we can perceive more of what's around us.

But if those random fluctuations, like, even if we go to, like, quantum mechanics, if that's actually superintelligent systems, aren't we then part of the soup of superintelligence? Like, a finger of the entirety of the body of the superintelligent system?

[00:09:18]

It could be. I mean, a finger is a strange metaphor. The finger is useful and is controlled, with intent, by the brain, whereas we may be much less than that, right? I mean, we may be just some random epiphenomenon they don't care about too much. Like, think about the shape of the crowd emanating from a sports stadium or something, right?

[00:09:46]

There's some emergent shape to the crowd. It's there. You could take a picture of it. It's kind of cool. It's irrelevant to the main point of the sports event, or where the people are going, or what's on the minds of the people making that shape in the crowd. So we may just be some semi-arbitrary higher-level pattern popping out of a lower-level hyperintelligent self-organization. And, I mean, so be it, right? I mean, that's one thing that's fun, right?

[00:10:15]

Yeah. I mean, the older I've gotten, the more respect I've achieved for our fundamental ignorance. I mean, mine and everybody else's. I mean, I look at my two dogs, two beautiful little toy poodles, and, you know, they watch me sitting at the computer typing. They just think I'm sitting there wiggling my fingers to exercise, maybe, or guarding the monitor on the desk. They have no idea that I'm communicating with other people halfway around the world, let alone, you know, creating complex algorithms running in real time on some computer server in St.

[00:10:48]

Petersburg or something like that, although that's right there, right there in the room with me. So what things are there right around us that we're just too stupid or close-minded to comprehend? Probably, probably quite a lot.

[00:11:01]

Your very poodle could also be communicating across multiple dimensions with other beings, and you're too unintelligent to understand the kind of communication mechanism they're going through.

[00:11:13]

There have been various TV shows and science fiction novels positing that cats, dolphins, mice, and whatnot are actually superintelligences here to observe us. I would guess, as one or another of the quantum physics founders said, those theories are not crazy enough to be true.

[00:11:33]

Reality is probably crazier than that.

On the human side, with Philip K. Dick, and in general, where do you fall on this idea that love and just the basic spirit of human nature persists throughout these multiple realities?

[00:11:52]

Are you on the side, like, the thing that inspires you about artificial intelligence, is it the human side, of somehow persisting through all of the different systems we engineer? Or does AI inspire you to create something that's greater than human, that's beyond human, that's almost non-human?

I would say my motivation to create AGI comes from both of those directions, actually. So when I first became passionate about AGI, when I was, it would have been two or three years old, after watching robots on Star Trek,

[00:12:32]

I mean, then it was really a combination of intellectual curiosity, like, can a machine really think, how would you do that? And, yeah, just ambition to create something much better than all the clearly limited and fundamentally defective humans I saw around me. Then, as I got older and got more enmeshed in the human world and, you know, got married, had children, saw my parents begin to age, I started to realize, well, not only will AGI let you go far beyond the limitations of the human, but it could also, like, stop us from dying and suffering and feeling pain and tormenting ourselves mentally.

[00:13:12]

So you can see AGI has amazing capability to do good for humans, as humans, alongside its capability to go far, far beyond the human level. So, I mean, both of those aspects are there, which makes it even more exciting and important.

So, you mentioned Dostoevsky and Nietzsche. What did you pick up from those guys?

I mean, that would probably go beyond the scope of a brief interview, but both of those are amazing thinkers who one will necessarily have a complex relationship with.

[00:13:49]

Right. So, I mean, Dostoevsky, on the minus side, he's kind of a religious fanatic, and he sort of helped squash the Russian nihilist movement, which is very interesting. Because what nihilism meant originally, in that period of the mid-to-late 1800s in Russia, was not taking anything fully, 100 percent, for granted. It was really more like what we'd call Bayesianism now, where you don't want to adopt anything as a dogmatic certitude and always leave your mind open.

[00:14:18]

And how Dostoevsky parodied nihilism was a bit different, right? He portrays people who believe absolutely nothing, so they must assign an equal probability weight to every proposition, which doesn't really work. So, on the one hand, I didn't really agree with Dostoevsky on his sort of religious point of view. On the other hand, if you look at his understanding of human nature and sort of the human mind and heart and soul, it's really unparalleled.

[00:14:54]

And he had an amazing view of how human beings, you know, construct a world for themselves based on their own understanding and their own mental predispositions. And I think if you look at The Brothers Karamazov in particular, the Russian literary theorist Mikhail Bakhtin wrote about this as a polyphonic mode of fiction, which means it's not third person, but it's not first person from any one person, really. There are many different characters in the novel, and each of them is sort of telling part of the story from their own point of view.

[00:15:29]

So the reality of the whole story is an intersection, like, synergistically, of the many different characters' world views. And really, it's a beautiful metaphor, and even a reflection, I think, of how all of us socially create our reality. Like, each of us sees the world in a certain way. Each of us, in a sense, is making the world as we see it, based on our own minds and understanding. But it's polyphony, like in music, where multiple instruments are coming together to create the sound.

[00:16:02]

The ultimate reality that's created comes out of each of our subjective understandings, you know, intersecting with each other. And that was one of the many beautiful things in Dostoevsky.

So, maybe a small tangent to mention: you have a connection to Russia and Soviet culture. I mean, I'm not sure exactly what the nature of the connection is, but there's at least the spirit of it in your thinking.

[00:16:24]

Oh, yeah. Well, my ancestry is three-quarters Eastern European Jewish. So, I mean, three of my great-grandparents emigrated to New York from Lithuania and sort of border regions of Poland, which were in and out of Poland, around the time of World War One. And they were socialists and communists, as well as Jews, mostly Mensheviks, not Bolsheviks. And they sort of fled at just the right time to the US, for their own personal reasons.

[00:16:58]

And then almost all, or maybe all, of my extended family that remained in Eastern Europe was killed, either by Hitler's or Stalin's minions, at some point. So the branch of the family that emigrated to the US was pretty much the only one that survived.

So how much of the spirit of the people is in your blood still? When you look in the mirror, do you see, what do you see?

[00:17:23]

I see a bag of meat that I want to transcend by uploading into some sort of superior reality. But, I mean, yeah, very clearly,

[00:17:35]

I mean, I'm not religious in the traditional sense, but clearly the Eastern European Jewish tradition was what I was raised in. I mean, there was my grandfather, Leo Zwell, who was a physical chemist who worked with Linus Pauling and a bunch of the other early greats in quantum mechanics. I mean, he was into X-ray diffraction. He was on the material science side, an experimentalist rather than a theorist. His sister was also a physicist.

[00:18:05]

My father's father, Victor Goertzel, was a PhD in psychology who had the unenviable job of giving psychotherapy to the Japanese in internment camps in the US in World War Two, to counsel them on why they shouldn't kill themselves, even though they'd had all their stuff taken away and been imprisoned for no good reason. So, I mean, yeah, there is a lot of Eastern European Jewish tradition in my background. One of my great uncles was a guest conductor of the San Francisco Orchestra.

[00:18:39]

So there's a lot of musicality, and a bunch of music in there also. And clearly this culture was all about learning and understanding the world, and also not quite taking yourself too seriously while you do it, right? There's a lot of Yiddish humor in there. So I do appreciate that culture, although the whole idea that, like, the Jews are the chosen people of God never resonated with me too much.

The graph of the Goertzel family, I mean, just the people I've encountered, just doing some research and just knowing your work through the decades, it's kind of fascinating.

[00:19:20]

And just the number of PhDs.

[00:19:23]

Yeah, yeah. I mean, fascinating.

[00:19:25]

My dad is a sociology professor who recently retired from Rutgers University. But clearly, that gave me a head start in life. I mean, my grandfather gave me quantum mechanics books when I was, like, seven or eight years old. And I remember going through them, and it was all the old quantum mechanics, like the Rutherford atom and stuff. Then I got to the part about wave functions, which I didn't understand, although I was a very bright kid, and I realized he didn't quite understand it either.

[00:19:56]

But at least, like, he pointed me to some professor he knew at UPenn nearby who understood these things. So that's an unusual opportunity for a kid to have. My dad, he was programming Fortran when I was 10 or 11 years old, on the mainframes at Rutgers University. So I got to do linear regression in Fortran on punch cards when I was in middle school, because he was doing, I guess, analysis of demographic and sociology data.
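For flavor, the kind of fit he describes running on punch cards is only a few lines in any modern language; here is a minimal closed-form ordinary-least-squares regression in Python (illustrative only, nothing from the conversation):

```python
# Minimal ordinary least squares for y = intercept + slope * x,
# the same computation once run in Fortran on punch cards.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Closed-form OLS: slope = cov(x, y) / var(x)
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

intercept, slope = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(intercept, slope)  # 1.0 2.0 -- recovers y = 1 + 2x exactly
```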

[00:20:26]

So, yes, certainly that gave me a head start and a push toward science, beyond what would have been the case in many, many different situations.

When did you first fall in love with AI? Is it the programming side of Fortran? Is it maybe the sociology, psychology that you picked up from your dad?

I fell in love with AI when I was probably three years old, when I saw a robot on Star Trek. It was turning around in a circle, going around and around, because Spock and Kirk had tricked it into a mechanical breakdown by presenting it with a logical paradox.

[00:21:00]

And I was just like, well, this makes no sense. This AI is very, very smart. It's been traveling all around the universe. But these people could trick it with a simple logical paradox. Like, if, you know, the human brain can get beyond that paradox, why can't this AI? So I felt the screenwriters of Star Trek had misunderstood the nature of intelligence. And I complained to my dad about it, and he wasn't going to say anything one way or the other.

[00:21:29]

But, you know, before I was born, when my dad was at Antioch College in the middle of the US, he led a protest movement called SLAM, Student League Against Mortality. They were protesting against death, wandering around the campus. So he was into some futuristic things even back then. But whether AI could confront logical paradoxes or not, he didn't know. But, you know, 10 years after that or something, I discovered Douglas Hofstadter's book, Gödel, Escher, Bach.

[00:22:05]

And that was sort of to the same point of AI and paradox and logic, right? Because he was over and over with Gödel's Incompleteness Theorem, and can an AI really fully model itself reflexively, or does that lead you into some paradox? Can the human mind truly model itself reflexively, or does that lead you into some paradox?

[00:22:25]

So I think that book, Gödel, Escher, Bach, which I think I read when it first came out, I would have been 12 years old or something. I remember, it was like a 16-hour day, I read it cover to cover, and then I reread it after that, because there were a lot of weird things with little formal systems in there that were hard for me at the time.

[00:22:43]

But that was the first book I read that gave me a feeling for AI as, like, a practical academic or engineering discipline that people were working in. Because before I read Gödel, Escher, Bach, I was into AI from the point of view of a science fiction fan, and I had the idea, well, it may be a long time before we can achieve immortality and superhuman AGI, so I should figure out how to build a spacecraft traveling close to the speed of light, go far away, then come back to the Earth in a million years when technology is more advanced and we can build these things.

[00:23:19]

Reading Gödel, Escher, Bach, well, not all of it rang true to me, a lot of it did, but I could see, like, there are smart people right now at various universities around me who are actually trying to work on building what I would now call AGI, although Hofstadter didn't call it that.

[00:23:36]

So really, it was when I read that book, which would have been probably middle school, that I started to think, well, this is something that I could practically work on, as opposed to flying away and waiting it out.

[00:23:49]

Yeah, you can actually be one of the people that actually...

[00:23:51]

Yeah. And if you think about it, I was interested in what we now call nanotechnology, and in human immortality and time travel, all the same things as every other science-fiction-loving kid. But AI seemed like, if Hofstadter was right, you just have to figure out the right program, sit down and type it. Like, you don't need to spin stars into weird configurations, or get government approval to cut people up and fiddle with their DNA or something.

[00:24:21]

Right. It's just programming. And then, of course, that can achieve anything else. There was another book from back then, which was by Gerald Feinberg, who was a physicist at Princeton, and that was The Prometheus Project. And this book was written in the late 1960s, so I encountered it in the mid-'70s. But what this book said is, in the next few decades, humanity is going to create superhuman thinking machines, molecular nanotechnology, and human immortality.

[00:24:54]

And then the challenge we'll have is what to do with it. Do we use it to expand human consciousness in a positive direction, or do we use it just to further vapid consumerism? And what he proposed was that the UN should do a survey on this, and the UN should send people out to every little village in remotest Africa, South America, and explain to everyone what technology was going to bring in the next few decades, and the choice that we had about how to use it, and let everyone on the whole planet vote about whether we should develop, you know, super AI, nanotechnology, and immortality for expanded consciousness, or for rampant consumerism.

[00:25:35]

And needless to say, that didn't quite happen. And I think this guy died in middle age, so he didn't even see his ideas start to become more mainstream. But it's interesting, many of the themes I'm engaged with now, from AGI and immortality, even to trying to democratize technology, as I've been pushing forward with SingularityNET in my work in the blockchain world, many of these themes were there in Feinberg's book in the late '60s even.

[00:26:05]

And of course, Valentin Turchin, a Russian writer and a great Russian physicist who I got to know when we both lived in New York in the late '90s and early aughts. I mean, he had a book in the late '60s in Russia, which was The Phenomenon of Science, which laid out all these same things as well. And Val died some years back, I don't remember exactly when, of Parkinson's.

[00:26:32]

So, yeah, it's easy for people to lose track now of the fact that the futurist and singularity and advanced-technology ideas that are now almost mainstream, and are on TV all the time, I mean, these are not that new, right? They're sort of new in the history of the human species. But I mean, these were all around in fairly mature form in the middle of the last century, and were written about quite articulately by fairly mainstream people who were professors at top universities.

[00:27:07]

It's just that until the enabling technologies got to a certain point, you couldn't make it real. And even in the '70s, I was sort of seeing that and living through it, right? From Star Trek to Douglas Hofstadter, things were getting very, very practical from the late '60s to the late '70s. And, you know, the first computer I bought, you could only program with hexadecimal machine code, and you had to solder it together. And then, like a few years later, there were punch cards, and a few years later, you could get, like, an Atari 400 and a Commodore VIC-20, and you could type on the keyboard and program in higher-level languages alongside the assembly language.

[00:27:52]

So these ideas have been building up a while, and I guess my generation got to feel them build up, which is different than for people coming into the field now, for whom these things have just been part of the ambience of culture for their whole career, or even their whole life.

It's fascinating to think about,

[00:28:12]

you know, there being all of these ideas kind of swimming, you know, almost in the noise, all around the world, in all the different generations, and some kind of nonlinear thing happens where they percolate up and capture the imagination of the mainstream. And that seems to be what's happening right now.

[00:28:32]

I mean, you mentioned Nietzsche and the idea of the Superman, right?

But he didn't understand enough about technology to think you could physically engineer a Superman by piecing together molecules in a certain way. He was a bit vague about how the Superman would appear, but he was quite deep in thinking about what the state of consciousness and the mode of cognition of a Superman would be. He was a very astute analyst of, you know, how the human mind constructs the illusion of itself, how it constructs the illusion of free will, how it constructs values like good and evil out of its own, you know, desire to maintain and advance its own organism.

[00:29:18]

He understood a lot about how human minds work, and then he understood a lot about how post-human minds would work. I mean, the Superman was supposed to be a mind that would basically have complete access to its own brain and consciousness, and be able to architect its own value system, and inspect and fine-tune all of its own biases. So there's a lot of powerful thinking there, which then fed in and sort of seeded all of postmodern continental philosophy, and all sorts of things that have been very valuable in the development of culture, and indirectly even of technology.

[00:29:57]

But of course, without the technology there, it was all quite abstract thinking. So now we're at a time in history when a lot of these ideas can be made real, which is amazing, amazing and scary, right?

It's kind of interesting to think: what do you think Nietzsche would do if he was born a century later, or transported through time? What do you think he would say about AI?

I mean, those are quite different.

[00:30:21]

If he's born a century later versus transported through time? Well, if he's born a century later, he'd be on, like, TikTok and Instagram, and he would never write the great works he's written. So maybe Also Sprach Zarathustra would be a music video, right? I mean, who knows? Yeah.

[00:30:37]

But if he was transported through time, do you think... It'd be interesting, actually, to go back. You just made me realize that it's possible to go back and read Nietzsche with an eye of, is there some thinking about artificial beings? I'm sure he had inklings. I mean, with Frankenstein before him, I'm sure he had inklings of artificial beings somewhere in the text.

[00:31:01]

It'd be interesting to try to read his work, to see if the Superman was actually an AGI system, like, if he had inklings of that kind of thinking. He didn't?

I would say no. I mean, he had a lot of inklings of modern cognitive science, which are very interesting. If you look in, like, the third part of the collection that's been titled The Will to Power, if you look through there, there's very deep analyses of thinking processes.

[00:31:38]

But he wasn't so much of a physical tinkerer type of guy. It was very abstract.

[00:31:46]

And, speaking of the will to power, what do you think drives humans? Is it...

Oh, an unholy mix of things. I don't think there's one pure, simple, and elegant objective function driving humans, by any means.

What do you think, if we look at, I know it's hard to look at humans in the aggregate, but do you think overall humans are good? Or do we have both good and evil within us that, depending on the circumstances, depending on whatever, can percolate to the top?

[00:32:28]

Good and evil are very ambiguous, complicated, in some ways silly concepts. But we could dig into your question from a couple of directions. So I think if you look at evolution, humanity is shaped both by individual selection and what biologists would call group selection, like tribe-level selection, right? So individual selection has driven us in a selfish-DNA sort of way, so that each of us does, to a certain approximation, what will help us propagate our DNA to future generations.

[00:33:04]

I mean, that's why I've got four kids so far, and probably that's not the last one.

I like the ambition.

Tribal, like, group selection means humans, in a way, will do what will advocate for the persistence of the DNA of their whole tribe or their social group. And in biology, you have both of these. Like, if you see an ant colony or a beehive, there's a lot of group selection in the evolution of those social animals.

[00:33:36]

On the other hand, a big cat or some very solitary animal, it's a lot more biased toward individual selection. Humans are an interesting balance, and I think this reflects itself in what we would view as selfishness versus altruism, to some extent. So we just have both of those objective functions contributing to the makeup of our brains. And then, as Nietzsche analyzed in his own way, and others have analyzed in different ways, I mean, we abstract this as, well,

[00:34:08]

we have both good and evil within us, right? Because a lot of what we view as evil is really just selfishness, and a lot of what we view as good is altruism, which means doing what's good for the tribe. And on that level, we have both of those just baked into us, and that's how it is. Of course, there are psychopaths and sociopaths, and people who, you know, get gratified by the suffering of others,

[00:34:38]

And that's a different thing.

[00:34:42]

Yeah, those are exceptions. But I think at core we're not purely selfish, we're not purely altruistic. We are a mix, and that's the nature of it. And we also have a complex constellation of values that are just very specific to our evolutionary history. Like, you know, we love waterways and mountains, and the ideal place to put a house is on a mountain overlooking the water, right? And, you know, we care a lot about our kids, and we care a little less about our cousins, and even less about our fifth cousins.

[00:35:21]

I mean, there are many particularities to human values which, whether they're good or evil, depends on your perspective. Or, let's say, I spent a lot of time in Ethiopia, in Addis Ababa, where we have one of our AI development offices for my SingularityNET project. And when I walk through the streets there, you know, there are people lying by the side of the road, like, just living there by the side of the road, dying probably of curable diseases, without enough food and medicine.

[00:35:55]

And when I walk by them, you know, I feel terrible and I give them money. When I come back home to the developed world, they're not on my mind that much. I do donate some, but I mean, I also spend some of the limited money I have enjoying myself in frivolous ways rather than donating it to those people who are right now, like, starving, dying and suffering on the roadside. So does that make me evil? I mean, it makes me somewhat selfish and somewhat altruistic.

[00:36:24]

And we each balance that in our own way, right? So that's... Whether that will be true of all possible AGIs is a subtler question. So, yes, that's how humans are. So you have a sense, you kind of mentioned, that there's a selfish... I'm not going to bring up the whole Ayn Rand idea of selfishness being the core virtue. That's a whole interesting kind of tangent that I think would just distract us.

[00:36:53]

I have to make one amusing comment, or a comment that has amused me anyway. So, I have extraordinarily negative respect for Ayn Rand, negative, with emphasis on the negative. But I once worked with a company called Genescient, which was evolving flies to have extraordinarily long lives, in Southern California. So we had flies that were evolved by artificial selection to five times the lifespan of normal fruit flies. But the population of super-long-lived flies was physically sitting in a spare room at the Ayn Rand Elementary School in Southern California.

[00:37:35]

So that was just like, well, if I saw this in a movie, I wouldn't believe it. Well, yeah, the universe has a sense of humor in that kind of way. The humor fits in somehow into this whole absurd existence. But you mention the balance between selfishness and altruism as kind of being innate. Do you think it's possible that's kind of an emergent phenomenon, those peculiarities of our value system? How much of it is innate?

[00:38:04]

How much of it is something we collectively, kind of like a Dostoevsky novel, bring to life together as a civilization? I mean, the answer to nature versus nurture is usually both. And of course it's nature versus nurture versus self-organization, as you mentioned. So clearly there are evolutionary roots to individual and group selection, leading to a mix of selfishness and altruism. On the other hand, different cultures manifest that in different ways. We all have basically the same biology.

[00:38:39]

And if you look at sort of pre-civilized cultures, you have tribes like the Yanomamo in Venezuela, whose culture is focused on killing other tribes, and you have other Stone Age tribes that are mostly peaceful and have big taboos against violence. So you can certainly have a big difference in how culture manifests these innate biological characteristics. But still, you know, there are probably limits that are given by our biology.

[00:39:14]

I used to argue this with my great-grandparents, who were Marxists, actually, because they believed in the withering away of the state. Like, they believed that, you know, as you move from capitalism to socialism to communism, people would just become more social-minded, so that a state would be unnecessary and everyone would just give everyone else what they needed. Setting aside that that's not what the various Marxist experiments on the planet seemed to be heading toward in practice...

[00:39:47]

Just as a theoretical point, I was very dubious that human nature could go there. At that time, when my great-grandparents were alive, I was just, you know, a cynical teenager. I thought humans are just jerks; the state is not going to wither away. If you don't have some structure keeping people from screwing each other over, they're going to do it. But now I actually don't quite see things that way. I mean, my feeling now, subjectively, is that the culture aspect is more significant than I thought it was when I was a teenager.

[00:40:22]

And I think you could have a human society that was dialed dramatically further toward, you know, self-awareness, other-awareness, compassion and sharing than our current society. And of course, greater material abundance helps, but to some extent material abundance is a subjective perception also, because many Stone Age cultures perceived themselves as living in great material abundance. They had the food and water they wanted, lived in a beautiful place, had their sex lives and their children. I mean, they had abundance without any factories.

[00:41:00]

So I think humanity probably would be capable of a fundamentally more positive and joy-filled mode of social existence than what we have now. Clearly, Marx didn't quite have the right idea about how to get there. I mean, he missed a number of key aspects of human society and its evolution. And if we look at where we are in society now, how to get there is quite a different question, because there are very powerful forces pushing people in different directions than that positive, joyous, compassionate existence.

[00:41:43]

Right.

[00:41:43]

So if we were trying to, you know... Elon Musk dreams of colonizing Mars at the moment, so maybe we have a chance to start a new civilization with a new governmental system. And certainly there's quite a bit of chaos. We're sitting now, I don't know what the date is, but this is June, and there's quite a bit of chaos, in all different forms, going on in the United States and all over the world. So there's a hunger for new types of governments, new types of leadership, new types of systems.

[00:42:17]

And so what are the forces at play and how do we move forward?

[00:42:21]

Yeah, I mean, colonizing Mars, first of all, it's a super cool thing to do. We should be doing it. So you love the idea? Yeah. I mean, it's more important than making chocolatier chocolates and sexier lingerie and many of the things that we spend a lot more resources on as a species, right? So we certainly should do it. I think the possible futures in which a Mars colony makes a critical difference for humanity are very few.

[00:42:55]

I mean, assuming we make a Mars colony and people go live there in a couple of decades, their supplies are going to come from Earth, the money to make the colony came from Earth, and whatever powers are supplying the goods there from Earth are going to, in effect, be in control of that Mars colony. Of course, there are outlier situations where, you know, Earth gets nuked into oblivion and somehow Mars has been made self-sustaining by that point.

[00:43:28]

And then Mars is what allows humanity to persist. But I think those are very, very, very unlikely.

[00:43:37]

Do you not think it could be a first step on a long journey? It's a first step on a long journey, which is awesome. I'm guessing the colonization of the rest of the physical universe will probably be done by AGIs that are better designed to live in space than the meat machines that we are. But I mean, who knows, we may cryopreserve ourselves in some superior way to what we know now and, like, shoot ourselves out to Alpha Centauri and beyond.

[00:44:08]

I mean, that's all cool. It's very interesting, and it's much more valuable than most things that humanity is spending its resources on. On the other hand, with AGI we can get to a singularity before the Mars colony becomes self-sustaining, for sure, possibly before it's even operational. And so your intuition is that, if we really invest resources, we can get to AGI faster than a legitimate, full, self-sustaining colonization of Mars.

[00:44:37]

Yeah, and it's very clear to me that we will, because there's so much economic value in getting from narrow AI toward AGI, whereas with a Mars colony there's less economic value until you get quite far out into the future. So I think that's very interesting; I just think it's somewhat off to the side. I mean, just as I think, say, you know, art and music are very interesting, and I want to see resources go into amazing art and music being created.

[00:45:14]

I'd rather see that than a lot of the garbage that society spends its money on. On the other hand, I don't think Mars colonization or inventing amazing new genres of music is one of the things that is most likely to make a critical difference in the evolution of human or non-human life in this part of the universe over the next decade. Do you think AGI... So AGI is by far the most important thing that's on the horizon? Yeah, and then technologies that have a direct ability to enable AGI or to accelerate AGI are also very important.

[00:45:54]

For example, say, quantum computing. I don't think that's critical to achieve AGI, but certainly you could see how the right quantum computing architecture could massively accelerate AGI, similar to other types of nanotechnology. Right now, the quest to cure aging and end disease, while not in the big picture as important as AGI, of course, it's important to all of us as individual humans.

[00:46:24]

And if someone made a super-longevity pill and distributed it tomorrow, I mean, that would be huge, and a much larger impact than a Mars colony is going to have for quite some time. But perhaps not as much as an AGI system?

[00:46:41]

But if you can make a benevolent AGI, then all the other problems are solved. I mean, once it's as generally intelligent as humans, it can rapidly become massively more generally intelligent than humans, and then that AGI should be able to solve science and engineering problems much, much better than human beings, as long as it is, in fact, motivated to do so. That's why I said a benevolent AGI.

[00:47:10]

There could be other kinds. Maybe it's good to step back a little bit. I mean, we've been using the term AGI.

[00:47:16]

People often cite you as the creator, or at least the popularizer, of the term AGI, artificial general intelligence. Can you tell the origin story of the term? So, yeah, I would say I launched the term AGI upon the world, for what it's worth, without ever fully being in love with the term. What happened is I was editing a book, and this process started around 2001 or 2002. I think the book came out in 2005, finally.

[00:47:47]

I was editing a book which I provisionally was titling "Real AI". And I mean, the goal was to gather together fairly serious academic papers on the topic of making thinking machines that could really think in the sense that people can, or even more broadly than people can. So then I was reaching out to other folks I had encountered here and there who were interested in that, which included some other folks I knew from the transhumanist and singularitarian world, like Peter Voss, who has a company, AGI Inc.,

[00:48:25]

still in California, and included Shane Legg, who had worked for me at my company Webmind in New York in the late 90s, and who by now has become rich and famous; he was one of the co-founders of Google DeepMind. But at that time, Shane was... I think he may have just started doing his PhD with Marcus Hutter, who at that time hadn't yet published his book Universal AI, which sort of gives a mathematical foundation for artificial general intelligence.

[00:49:00]

So I reached out to Shane and Marcus and Peter Voss and Pei Wang, who was another former employee of mine who had been Douglas Hofstadter's student and had his own approach to AGI, and a bunch of Russian folks. I reached out to these guys and they contributed papers for the book. But that was my provisional title, and I never loved it, because in the end, you know, I was doing some of what we would now call narrow AI as well, like applying machine learning to genomics data or chat data for sentiment analysis.

[00:49:34]

And I mean, that work is real AI in a sense; it's just a different kind of AI. Ray Kurzweil wrote about narrow AI versus strong AI, but that seemed weird to me because, first of all, narrow and strong are not antonyms. And secondly, strong AI was already used in the cognitive science literature to mean the hypothesis that digital computer AIs could have true consciousness like human beings.

[00:50:07]

So there was already a meaning to strong AI, which was complexly different but related. So we were tossing around on an email list what the title should be. And we talked about narrow AI, broad AI, wide AI, general AI. And I think it was either Shane Legg or Peter Voss, on the private email discussion we had, who said, why don't we go with AGI, artificial general intelligence? And Pei Wang wanted to do GAI, general artificial intelligence, because in Chinese it goes in that order.

[00:50:45]

But we figured GAI wouldn't work in US culture at that time. So we went with AGI and used it for the title of that book. And part of Peter and Shane's reasoning was that you have the g factor in psychology, which is IQ, general intelligence. So you have a meaning of GI, general intelligence, in psychology, so then you're looking at artificial GI.

[00:51:12]

So that makes a lot of sense. Yeah, we used that for the title of the book. And I think maybe both Shane and Peter think they invented the term, but then later, after the book was published, this guy Mark Gubrud came up to me and he's like, well, I published an essay with the term AGI in, like, 1997 or something. And so I'm just waiting for some Russian to come out and say they published it in 1953.

[00:51:40]

I mean, for sure, that term is not dramatically innovative or anything. It's one of these obvious-in-hindsight things, which is also annoying in a way, because, you know, Joscha Bach, who you interviewed, is a close friend of mine, and he likes the term synthetic intelligence, which I like much better, but it hasn't actually caught on. Because, I mean, "artificial" is a bit off to me, because artifice is like a tool or something.

[00:52:12]

But not all AGIs are going to be tools. I mean, they may be now, but we're aiming toward making them agents rather than tools. And in a way, I don't like the distinction between artificial and natural, because, I mean, we're part of nature also and machines are part of nature. I mean, you can look at evolved versus engineered, but that's a different distinction; then it should be engineered general intelligence. And then "general"...

[00:52:38]

Well, if you look at Marcus Hutter's book Universal AI, he argues there is, you know, within the domain of computation theory, which is limited but interesting... So if you assume computable environments and computable reward functions, then he articulates what would be a truly general intelligence, a system called AIXI, which is quite beautiful. AIXI. AIXI, and that's the middle name of my latest child, actually. What's the first name?

[00:53:07]

The first name is Qorxi, Q-O-R-X-I, which my wife came up with. But that's an acronym for Quantum Organized Rational Expanding Intelligence. And his middle name is Xiphenes, actually, which means the formal principle underlying AIXI. But if you're giving Elon Musk a run for his money with weird baby names... Well, I did it first.

[00:53:31]

He copied me with his new freakiness. But now, if I have another baby, I'm going to have to outdo him. It's becoming an arms race of geeky baby names. We'll see what the babies think about it. Yeah, but I mean, my oldest son, Zarathustra, loves his name, and my daughter Scheherazade loves her name. So, so far, basically, if you give your kids weird names, they live up to it. Well, you're obliged to make the kids weird enough that they like the names. It directs their upbringing in a certain way.

[00:54:01]

But yeah, anyway, what Marcus showed in that book is that a truly general intelligence theoretically is possible, but would take infinite computing power. So then "artificial" is a little off, and "general" is not really achievable within physics as far as we know. And, I mean, physics as we know it may be limited, but that's what we have to work with now. So, intelligence... infinitely general...
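For reference, the incomputability point can be made concrete. As a rough sketch (notation simplified from Hutter's Universal AI), the AIXI agent chooses each action by an expectimax over all programs q for a universal Turing machine U that are consistent with the interaction history so far, weighted by program length:

```latex
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
\bigl[\, r_t + \cdots + r_m \,\bigr]
\sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

Here the a's, o's and r's are actions, observations and rewards, and ℓ(q) is the length of program q. Because the innermost sum ranges over every program consistent with the history, the expression is incomputable; that is the precise sense in which this "truly general" intelligence would take infinite computing power.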

[00:54:24]

You mean, like, from an information processing perspective? Yeah. Yeah. Intelligence is not very well defined either. I mean, what does it mean? In AI, now, it's fashionable to look at it as maximizing an expected reward over the future, but that sort of definition is pathological in various ways. My friend David Weinbaum, a.k.a. Weaver, wrote a beautiful PhD thesis on open-ended intelligence, trying to conceive of intelligence without a reward. Yeah, he's just looking at it differently.

[00:54:57]

He's looking at complex self-organizing systems, and looking at an intelligent system as being one that, you know, revises and grows and improves itself in conjunction with its environment, without necessarily there being one objective function it's trying to maximize, although over certain intervals of time it may act as if it's optimizing a certain objective function. Very much like Stanislaw Lem's novels, right? So the other point is, "artificial", "general" and "intelligence" don't work; they're all bad.

[00:55:26]

On the other hand, everyone knows what AI is, and AGI seems immediately comprehensible to people with a technical background. So I think the term has served its sociological function. Now it's out there everywhere, which still baffles me. It's like KFC. I mean, that's it. Yeah, we're stuck with AGI, probably for a very long time, until AGI systems take over and rename themselves.

[00:55:51]

And I mean, then we'll be stuck with GPUs too, which have nothing to do with graphics anymore.

[00:55:57]

So I wonder what the AGI systems will call us humans.

[00:56:00]

Maybe GPs: grandpa processing units. Biological grandpa processing units. OK.

[00:56:12]

So, maybe also just a comment on AGI representing, before even the term existed, a kind of community. You've talked about this in the past, sort of AI coming in waves, but there's always been this community of people who dream about creating general, human-level, superintelligent systems.

[00:56:35]

Can you maybe give your sense of the history of this community, as it exists today and as it existed before this deep learning revolution, all throughout the winters and the summers of AI?

[00:56:46]

Yeah, sure. First I would say, as a side point, the winters and summers of AI are greatly exaggerated by Americans, in that if you look at the publication record of the artificial intelligence community since the 1950s, you would find a pretty steady growth and advance of ideas and papers. And what's thought of as an AI winter or summer was sort of how much money the US military was pumping in, which was meaningful. On the other hand, there was real AI going on in Germany, the UK, Japan, Russia, all over the place, while the US military got more and less enthused about AI. Just for people who don't know, the US military happened to be the main source of funding for AI research.

[00:57:41]

So another way to phrase that is that it's the up and down of funding for AI research.

[00:57:48]

And I would say the correlation between funding and intellectual advance was not 100 percent, right? Because, I mean, in Russia, as an example, or in Germany, there was less funding than in the US, but many foundational ideas were laid out; it was just more theory than implementation. And the US really excelled at breaking through from theoretical papers to working implementations, which did go up and down somewhat with the US military funding. But still, I mean, you can look at the 1980s: Ernst Dickmanns in Germany had self-driving cars on the autobahn.

[00:58:28]

And I mean, it was a little early with regard to the car industry, so it didn't catch on the way it has now. But that whole advancement of self-driving car technology in Germany was pretty much independent of AI summers and winters in the US. So there's been more going on in AI globally than not only most people on the planet realize, but than most new AI PhDs realize, because they've come up within a certain subfield of AI and haven't had to look much beyond that.

[00:59:05]

But I would say, when I got my PhD in 1989 in mathematics, I was interested in AI already. In Philadelphia... Yeah, I started at NYU and then transferred to Temple University in good old North Philly. North Philly, the pearl of the US. You never stopped at a red light then, because you were afraid that if you stopped at a red light, someone would carjack you. So you drove through every red light.

[00:59:33]

Yeah, every day driving or bicycling to Temple from my house was like a new adventure. But the reason I didn't do a PhD in AI was that what people were doing in the academic AI field then was just astoundingly boring and seemed wrong-headed to me. It was really rule-based expert systems and production systems. And I actually loved mathematical logic; I had nothing against logic as the cognitive engine for an AI. But the idea that you could type in the knowledge that an AI would need to think seemed just completely stupid and wrong-headed to me.

[01:00:12]

I mean, you can use logic if you want, but somehow the system has got to be automatically learning, right? It should be learning from experience. And the AI field then was not interested in learning from experience. I mean, some researchers certainly were. I remember in the mid-80s I discovered a book by John Andreae, which was about a reinforcement learning system called PURR-PUSS, which was an acronym that I can't even remember what it stood for. PURR-PUSS, anyway.

[01:00:47]

But I mean, that was a system that was supposed to be an AGI, and basically, by some sort of fancy, like, Markov decision process learning, it was supposed to learn everything just from the bits coming into it, and learn to maximize its reward and become intelligent, right? So that was there in academia back then, but it was like isolated, scattered, weird people. But all these isolated, scattered, weird people in that period, they laid the intellectual grounds for what happened later.

[01:01:19]

So you look at John Andreae at the University of Canterbury with his PURR-PUSS reinforcement learning Markov system: he was the thesis supervisor for John Cleary in New Zealand. Now, John Cleary worked with me when I was at Waikato University in 1993 in New Zealand, and he worked with Ian Witten there, and they launched Weka, which was the first open-source machine learning toolkit, launched in, I guess, '93 or '94, when I was at Waikato University.

[01:01:51]

Written in Java, unfortunately. Written in Java, which was a cool language back then.

[01:01:57]

I guess it's still... well, it's not cool anymore. Yeah, like most programmers now, I find Java unnecessarily bloated. But back then it was Java or C++, basically, and Java was easier for students. Amusingly, a lot of the work on Weka when we were in New Zealand was funded by a, sorry, a New Zealand government grant to use machine learning to predict the menstrual cycles of cows. So in the US, all the grant funding for AI was about how to kill people or spy on people.

[01:02:31]

In New Zealand, it's all about cows or kiwi fruits. Yeah. So, anyway, John Andreae had his probability-theory-based reinforcement learning, and his protégé John Cleary was trying to do much more ambitious probabilistic AGI systems. Now, John Cleary helped create Weka, which was the first open-source machine learning toolkit, the predecessor, in a way, of TensorFlow and Torch and all these things. Also, Shane Legg was at Waikato working with John Cleary and Ian Witten and this whole group, and then working with my own company, Webmind, an AI company

[01:03:15]

I had in the late 90s, with a team there at Waikato University, which is how Shane got his head full of AGI, which led him to go on and, with Demis Hassabis, establish DeepMind. So what you can see through that lineage is, you know, in the 70s and 80s, John Andreae was trying to build probabilistic reinforcement learning systems. The technology, the computers, just weren't there to support it, but his ideas were very similar to what people are doing now.

[01:03:41]

But, you know, although he's long since passed away and didn't become that famous outside of Canterbury, the lineage of ideas passed on from him to his students to their students; you can trace it directly from there to me and to DeepMind. So there was a lot going on in AI that did...

[01:04:01]

Ultimately lay the groundwork for what we have today, but there wasn't a community. And so when I started trying to pull together a community, it was in, I guess, the early aughts, when I was living in Washington, D.C. and making a living doing consulting for various US government agencies. And I organized the first AGI workshop in 2006. And I mean, it wasn't like it was literally in my basement or something.

[01:04:34]

I mean, it was in the conference room at the Marriott in Bethesda. It's not that edgy or underground, unfortunately, but still... How many people attended? About 60 or something.

[01:04:45]

No, I mean, D.C. has a lot of AI going on, probably until the last five or ten years much more than Silicon Valley, although it's just quiet because of the nature of what happens in D.C.

[01:04:56]

What happens in D.C.: the business isn't driven by PR. Mostly, when something starts to work really well, it's taken black and becomes even more quiet. But yeah, that workshop really had the feeling of a group of starry-eyed mavericks huddled in a basement, like, plotting how to overthrow the narrow AI establishment. And, you know, for the first time in some cases, coming together with others who shared their passion for AGI and the technical seriousness about working on it.

[01:05:30]

Right. And that's very, very different from what we have today. I mean, now it's a little bit different. We have the AGI conference every year, and there's several hundred people rather than 50. Now it's more like this is the main gathering of people who want to achieve AGI and think that large-scale nonlinear regression is not the golden path to AGI. So, I mean, AKA neural networks? Yeah, yeah, yeah.

[01:06:02]

Well, certain architectures for learning using neural networks. So yeah, the AGI conferences are sort of now the main concentration of people not obsessed with deep neural nets and deep reinforcement learning, but still interested in AGI. Not the only ones, I mean; there are other little conferences and groupings interested in human-level AI and cognitive architectures and so forth. But there's been a big shift. Like, back then, it would have been very edgy to give a university department seminar that mentioned AGI or human-level AI.

[01:06:45]

It was more like you had to talk about something more short-term and immediately practical; then, you know, in the bar after the seminar you could bullshit about AGI in the same breath as time travel or the simulation hypothesis or something, right? Whereas now AGI is not only in the academic seminar room. Like, Vladimir Putin knows what AGI is, and he's like, Russia needs to become the leader in AI, right? So national leaders and CEOs of large corporations... I mean, the CTO of Intel, Justin Rattner, this was years ago, at a Singularity Summit conference, 2008 or something.

[01:07:25]

He's like, we believe Ray Kurzweil: the singularity will happen in 2045, and it will have Intel inside. So, I mean, it's gone from being the pursuit of, like, crazy mavericks, crackpots and science fiction fanatics to being, you know, a marketing term for large corporations and national leaders, which is an astounding transition. But yeah, in the course of this transition, I think a bunch of subcommunities have formed, and the community around the AGI conference series is certainly one of them.

[01:08:05]

It hasn't grown as big as I might have liked it to. On the other hand, you know, sometimes a modest-sized community can be better for making intellectual progress. Also, if you go to a Society for Neuroscience conference, you have 35 or 40 thousand neuroscientists. On the one hand, it's amazing; on the other hand, you're not going to talk to the leaders of the field if you're an outsider.

[01:08:31]

And in some sense, AAAI, the main kind of generic artificial intelligence conference, is too big. It's too amorphous. Like, it doesn't quite work; it's become companies advertising and recruiting. So, I mean, to comment on the role of AGI in the research community: if you look at NeurIPS, if you look at CVPR, if you look at ICLR, AGI is still seen as the outcast.

[01:09:09]

I would say, in these main machine learning conferences, in these main artificial intelligence conferences, amongst the researchers, I don't know if it's an accepted term yet.

[01:09:21]

What I've seen, bravely, you mentioned Shane: DeepMind and then OpenAI are the two places that are, I would say, unapologetically, so far (I think it's actually changing, unfortunately), but so far they've been pushing the idea that the goal is to create an AGI, with billions of dollars behind them.

[01:09:41]

So, I mean, in the public mind, that certainly bears some fruit. I mean, they have really strong researchers, right? They're great teams. I mean, DeepMind in particular. And they have, I mean, DeepMind has Marcus Hutter walking around. I mean, there's all these folks who basically have their full-time positions involved with...

[01:10:03]

I mean, dreaming about creating AGI.

[01:10:05]

I mean, Google Brain has a lot of amazing AGI-oriented people. Awesome. And, I mean, so I'd say from a...

[01:10:15]

public marketing view, DeepMind and OpenAI are the two large, well-funded organizations that have put the term and concept AGI out there sort of as part of their public image. But they're certainly not the only ones; there are other groups doing research that seems just as AGI-ish to me, including a bunch of groups in Google's main Mountain View office. So, yeah, it's true AGI is somewhat away from the mainstream now, but if you compare it to where it was, you know, 15 years ago, there's been an amazing mainstreaming.

[01:10:59]

You could say the same thing about super-longevity research, which is one of my application areas that I'm excited about. I mean, I've been talking about this since the 90s, and working on it since 2001. And back then, really, to say you're trying to create therapies to allow people to live hundreds of thousands of years, you were way, way, way out of the industry and academic mainstream. But now, you know, Google has Calico, Craig Venter had Human Longevity, Inc.,

[01:11:31]

and once the suits come marching in, right, I mean, once there's big money in it, then people are forced to take it seriously, because that's the way modern society works. So it's still not as mainstream as cancer research, just as AGI is not as mainstream as automated driving or something. But the degree of mainstreaming that's happened in the last, you know, 10 to 15 years is astounding to those of us who've been at it for a while.

[01:11:59]

Yeah, but there's a marketing aspect to the term.

[01:12:02]

But in terms of the actual full-force research that's going on under the heading of AGI, it's currently, I would say, dominated, maybe you can disagree, dominated by neural networks research, that nonlinear regression, as you mentioned. What's your sense, with OpenCog, with your work, and in general? Logic-based systems and expert systems, for me, always seemed to capture a deep element of intelligence that needs to be there. Like you said, it needs to learn, it needs to be automated somehow.

[01:12:42]

But that seems to be missing from a lot of the research currently. So what's your sense? I guess one way to ask this question is: what kind of things would an AGI system need to have? Yeah, that's a very interesting topic that I've thought about for a long time. And I think there are many, many different approaches that can work for getting to human-level AI. So I don't think there's, like, one golden algorithm, one golden design that can work.

[01:13:23]

And, I mean, flying machines is a good analogy here. Like, you have airplanes, you have helicopters, you have balloons, you have stealth bombers that don't look like regular airplanes. You've got blimps. Birds, too, and bugs, right? Yeah.

[01:13:42]

And, I mean, there are certainly many other kinds of flying machines, and there are catapult-launched gliders and bicycle-powered flying machines, right? Yeah. So now, these are all analyzable by the basic theory of aerodynamics. So one issue with AGI is we don't yet have the analog of the theory of aerodynamics, and that's what Marcus Hutter was trying to make with AIXI and his general theory of general intelligence. But that theory, in its most clearly articulated parts, really only works for either infinitely powerful machines or insanely, impractically powerful machines.

[01:14:29]

So, I mean, if you were going to take a theory-based approach to AGI, what you would do is say, well, let's take what's called AIXItl, which is Hutter's AIXI machine that can work with merely insanely much processing power rather than infinitely much. What does the tl stand for? Time and length. OK, so you're basically constraining the resources. Yeah, yeah. So how AIXI works, basically, is: for each action that it wants to take, before taking that action,

[01:15:02]

it looks at all its history. Yeah. And then it looks at all possible programs it could use to make a decision, and it decides which decision program would have let it make the best decisions, according to its reward function, over its history, and it uses that decision program to make the next decision. It's not afraid of infinite resources; it's searching through the space of all possible computer programs in between each action and its next action. AIXItl searches through all possible computer programs that have runtime less than t and length less than l.
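The decision loop just described can be caricatured in a few lines of Python. This is only a toy sketch, not Hutter's actual formalism: the hand-picked `programs` list, the `reward_fn`, and the history format are all invented for illustration, whereas real AIXItl exhaustively enumerates every program under its time bound t and length bound l.

```python
def aixitl_step(history, programs, reward_fn):
    """Pick the next action the way AIXItl does, conceptually:
    score every candidate decision program against the past history,
    then let the best-scoring program choose the next action."""
    best_program, best_score = None, float("-inf")
    for program in programs:
        # How much reward would this program have earned had it
        # been making our decisions all along?
        score = sum(reward_fn(obs, program(obs)) for obs in history)
        if score > best_score:
            best_program, best_score = program, score
    # The winning program makes the actual next decision.
    return best_program(history[-1]) if history else best_program(None)
```

For example, with candidate programs that always answer 0, always answer 1, or compute the parity of the last observation, the parity program wins on a history of mostly odd numbers and is used for the next move.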

[01:15:35]

Which is still an impracticably humongous space, right? So what you would like to do to make an AGI, and what will probably be done 50 years from now, is to say: OK, we have some constraints. We have these processing-power constraints, and we have space and time constraints on the program, we have energy utilization constraints, and we have this particular class of environments that we care about, which maybe, say, involves manipulating physical objects on the surface of the earth, communicating in human language,

[01:16:14]

I mean, whatever, not annihilating humanity, or whatever our particular requirements are going to be. If you formalize those requirements in some formal specification language, you should then be able to run an automated program specializer on AIXItl, specialize it to the computing resource constraints and the particular environment and goals, and then it will spit out the specialized version of AIXItl for your resource restrictions and your environment, which will be your AGI. Right. And that, I think, is how our super-AGIs will create new AGI systems.

[01:16:55]

Right. But that seems really inefficient.

[01:16:59]

It's a very rational approach. And I like that the whole field of program specialization came out of Russia. Can you backtrack?

[01:17:05]

So what is program specialization? So that's basically, take sorting, for example. You can have a generic program for sorting lists, but what if all the lists you care about are of length 10,000 or less? You can run an automated program specializer on your sorting algorithm, and it will come up with the algorithm that's optimal for sorting lists of length 10,000 or less. Right. Is that kind of what the process of evolution is?
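The sorting example can be sketched crudely. This is a hypothetical toy, not a real partial evaluator (a real specializer transforms the program itself rather than picking between prewritten routines); the length threshold and algorithm choice are assumptions for illustration.

```python
def specialize_sort(max_len):
    """Toy program specializer: given a known bound on list length,
    return a sort routine tuned to that bound."""
    if max_len <= 16:
        # For tiny lists, simple insertion sort tends to beat
        # general-purpose sorts, so "specialize" to it.
        def small_sort(xs):
            xs = list(xs)
            for i in range(1, len(xs)):
                key, j = xs[i], i - 1
                while j >= 0 and xs[j] > key:
                    xs[j + 1] = xs[j]
                    j -= 1
                xs[j + 1] = key
            return xs
        return small_sort
    # Otherwise fall back to the generic algorithm.
    return sorted
```

So `specialize_sort(10)` hands back a routine tailored to short lists, while `specialize_sort(10_000)` keeps the generic one; both sort correctly, but each is suited to its declared environment.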

[01:17:31]

A program specializer, specializing to the environment, the environment involving human beings? Yes, exactly. Is your Russian heritage showing? So, with Alexander Vityaev and Pyotr Anokhin and so on, yeah, there's a long history of thinking about evolution that way also. Well, my point is that, for what we're thinking of as human-level general intelligence, you know, if you start from narrow AIs, like the ones being used in the commercial AI field now, then you're thinking, OK, how do we make it more and more general?

[01:18:10]

On the other hand, if you start from AIXI or Schmidhuber's Gödel machine, or these infinitely powerful but practically infeasible AIs, then getting to a human-level AGI is a matter of specialization. It's like, how do you take these maximally general learning processes and specialize them so that they can operate within the resource constraints that you have, but will achieve the particular things that you care about? Because we humans are not maximally general intelligences.

[01:18:45]

If I ask you to run a maze in 750 dimensions, you'll probably be very slow, whereas in two dimensions you're probably way better, right? So, I mean, we're special because our hippocampus has a two-dimensional map in it, right, and it does not have a 750-dimensional map in it. So, I mean, we're this peculiar mix of generality and specialization. We'll probably start quite general at birth. Not fully general, but, like, more general than we are at age 20 and 30 and 40, 50, 60.

[01:19:24]

I don't think, I think it's more complex than that, because, I mean, in some sense a young child is less biased, and the brain has yet to sort of crystallize into appropriate structures for processing aspects of the physical and social world. On the other hand, the young child is very tied to their sensorium, whereas we can deal with abstract mathematics, like 750 dimensions, and the young child cannot, because they haven't developed the formal capabilities yet.

[01:20:01]

They haven't learned to abstract yet, right. And the ability to abstract gives you a different kind of generality than what the baby has. So there is both more specialization and more generalization that comes with the development process.

[01:20:16]

Actually, I mean, I guess the trajectories of the specialization are most controllable at a young age. I guess that's one way to put it. Do you have kids? No. They're not controllable, you see. So you think, it's interesting. I think, honestly, a human adult is much more generally intelligent than a human baby. Babies are very stupid. I mean, they're cute. They're cute.

[01:20:45]

Yeah, which is why we put up with their repetitiveness and stupidity. And they have what the Zen guys would call a beginner's mind, which is a beautiful thing, but that doesn't necessarily correlate with a higher level of intelligence. On the plot of cuteness and stupidity,

[01:21:02]

there's a process that allows us to put up with their stupidity.

[01:21:07]

Yeah, and when you're an ugly old man like me, you've got to get really, really smart to keep people in the conversation. But, yeah, going back to your original question.

[01:21:16]

So the way I look at human-level AGI is: how do you specialize unrealistically inefficient, superhuman, brute-force learning processes to the specific goals that humans need to achieve and the specific resources that we have? And both of these, the goals and the resources and the environments, I mean, all of this is important. And on the resources side, it's important that the hardware resources we bring to bear are very different than the human brain. So the way

[01:21:57]

The way I would want to implement AGI on a bunch of neurons in a vat that I could rewire arbitrarily is quite different than the way I would want to create AGI on, say, a modern server farm of CPUs and GPUs, which in turn may be quite different than the way I would want to implement AGI on, you know, whatever quantum computer we'll have in 10 years, supposing someone makes a robust quantum Turing machine or something. So I think, you know, there's been a coevolution of the patterns of organization in the human brain and the physiological particulars of the human brain over time.

[01:22:40]

And when you look at neural networks, that is one powerful class of learning algorithms, but it's also a class of learning algorithms that evolved to exploit the particulars of the human brain as a computational substrate. If you're looking at the computational substrate of a modern server farm, you won't necessarily want the same algorithms that you want on the human brain. And, you know, from the right level of abstraction, you could look at maybe the best algorithms in the brain and the best algorithms on a modern computer network as implementing the same abstract learning and representation processes,

[01:23:16]

but, you know, finding that level of abstraction is its own major research project. So that's the hardware side, and the software side follows from that. Then, regarding the requirements: I wrote a paper years ago on what I called the embodied communication prior, which was quite similar in intent to Yoshua Bengio's recent paper on the consciousness prior, except I didn't want to wrap consciousness up in it, because to me the qualia problem and subjective experience is a very interesting issue also, which we can chat about.

[01:23:55]

But I would rather keep that philosophical debate distinct from the debate about what kind of biases you want to put into a general intelligence to give it human-like general intelligence. And I'm not sure Yoshua Bengio is really addressing that kind of consciousness. He's just using the term. I love Yoshua Bengio to pieces; he's by far my favorite of the lions of deep learning. Such a good-hearted guy. He's a great guy, yeah, for sure. But I am not sure he has plunged to the depths of the philosophy of consciousness.

[01:24:30]

No, he's using it as a sexy term. Yeah, yeah, yeah. So what I called it was the embodied communication prior.

[01:24:38]

Can you maybe explain it a little bit? Yeah, yeah. What I meant was, you know, what are we humans evolved for? You can say being human, but that's very abstract, right? I mean, our minds control individual bodies, which are autonomous agents moving around in a world that's composed largely of solid objects, right? And we've also evolved to communicate via language with other, you know, solid-object agents that are going around doing things collectively with us in a world of solid objects.

[01:25:11]

And these things are very obvious, but if you compare them to the scope of all possible intelligences, or even all possible intelligences that are physically realizable, they actually constrain things a lot. So if you start to look at how you would realize some specialized or constrained version of universal general intelligence in a system that has, you know, limited memory and limited speed of processing, but whose general intelligence will be biased toward controlling a solid-object agent, which is mobile in a solid-object world, for manipulating solid objects and communicating via language with other similar agents in that same world,

[01:25:57]

Right. Then, starting from that, you're starting to get a requirements analysis for human-level general intelligence. And that leads you into cognitive science, and you can look at, say, what are the different types of memory that the human mind and brain have? And this has matured over the last decades, and I got into this a lot. After doing my PhD in math, I was an academic for eight years. I was in departments of mathematics, computer science and psychology. When I was in the psychology department at the University of Western Australia,

[01:26:31]

I was focused on the cognitive science of memory and perception. Actually, I was teaching neural nets and deep neural nets, well, multilayer perceptrons, in psychology and cognitive science. It was transdisciplinary among engineering, maths, psychology, philosophy, linguistics, computer science. But, yeah, we were teaching psychology students to try to model the data from human cognition using multilayer perceptrons, which were the early version of a deep neural network. Recurrent backprop was very, very slow to run back then.
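For concreteness, the kind of model being fit there looks roughly like this: a one-hidden-layer perceptron trained by plain backprop, in pure Python (so deliberately slow, much like the 90s). This is a hypothetical minimal sketch, not anything from the conversation; the layer size, learning rate, and toy Boolean data are made up, where a real cognitive-modeling exercise would fit human performance data.

```python
import math
import random

def mlp_train(data, hidden=4, lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer sigmoid perceptron with online backprop.
    data is a list of (input_list, target) pairs; returns a predict function."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    # Weight matrices; the +1 slots are bias weights.
    W1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    W2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))

    def forward(x):
        xb = x + [1.0]
        h = [sig(sum(w * xi for w, xi in zip(row, xb))) for row in W1]
        y = sig(sum(w * hi for w, hi in zip(W2, h + [1.0])))
        return h, y

    for _ in range(epochs):
        for x, t in data:
            h, y = forward(x)
            dy = (y - t) * y * (1.0 - y)                # output-layer delta
            dh = [dy * W2[j] * h[j] * (1.0 - h[j])      # hidden-layer deltas
                  for j in range(hidden)]
            xb = x + [1.0]
            for j in range(hidden):
                W2[j] -= lr * dy * h[j]
                for i in range(n_in + 1):
                    W1[j][i] -= lr * dh[j] * xb[i]
            W2[hidden] -= lr * dy                       # output bias
    return lambda x: forward(x)[1]
```

Trained on a simple Boolean function such as OR, a couple of thousand epochs of this drives the network's outputs toward the targets.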

[01:27:08]

So this is the study of these constrained systems. So if you look at cognitive psychology, you can see there are multiple types of memory, which are to some extent represented by different subsystems in the human brain. So we have episodic memory, which takes into account our life history and everything that's happened to us. We have declarative or semantic memory, which is like facts and beliefs abstracted from the particular situations in which they occurred. There's sensory memory, which to some extent is sense-modality-specific and to some extent is unified across modalities.

[01:27:50]

There's procedural memory, memory of how to do stuff, like how to swing the tennis racket, right? It's motor memory, but it's also a little more abstract than motor memory; it involves cerebellum and cortex working together. Then there's the memory linkage with emotion, the linkages between cortex and the limbic system. There are specifics of spatial and temporal modeling connected with memory, which have to do with your hippocampus and thalamus connecting to cortex. And the basal ganglia, which influence goals.

[01:28:25]

So we have specific memory of what goals, subgoals and sub-subgoals we wanted to perceive in which context in the past. The human brain has substantially different subsystems for these different types of memory, and substantially differently tuned learning, like differently tuned modes of long-term potentiation, to do with the types of neurons and neurotransmitters in the different parts of the brain corresponding to these different types of knowledge. And these different types of memory and learning in the human brain, I mean, you can map these all onto embodied communication for controlling agents in worlds of solid objects.

[01:29:02]

So if you look at building an AGI system, one way to do it, which starts more from cognitive science than neuroscience, is to say, OK, what are the types of memory that are necessary for this kind of world?

[01:29:14]

Yeah, necessary for this sort of intelligence. What types of learning work well with these different types of memory, and then how do you connect all these things together, right? And of course, the human brain did it incrementally through evolution, because each of the subnetworks of the brain, and it's not really the lobes of the brain, it's the subnetworks, each of which is widely distributed, each of the subnetworks of the brain coevolved with the other subnetworks of the brain, both in terms of its patterns of organization and the particulars of the neurophysiology.

[01:29:49]

So they all grew up communicating and adapting to each other. It's not like they were separate black boxes that were then glommed together, right? Whereas as engineers, we would tend to say, let's make the declarative memory box here and the procedural memory box here and the perception box here, and wire them together. And when you can do that, it's interesting. I mean, that's how a car is built, right? But on the other hand, that's clearly not how biological systems are made.

[01:30:18]

The parts coevolved so as to adapt and work together. That's, by the way, how every human-engineered system that flies, that we were using as an analogy before, is built.

[01:30:30]

So do you find this at all appealing? There's been a lot of really exciting work, which I find strange that it's ignored, on cognitive architectures, for example, throughout the last few decades. Do you find that interesting?

[01:30:41]

Yeah, I mean, I had a lot to do with that community. And, you know, Paul Rosenbloom and John Laird, who built the Soar architecture, are friends of mine. And I learned Soar quite well, and ACT-R and these different cognitive architectures. And how I was looking at the AI world, about

[01:31:00]

Ten years ago, before this whole commercial AI explosion: on the one hand, you had these cognitive architecture guys, who were working closely with psychologists and cognitive scientists and had thought a lot about how the different parts of a human-like mind should work together. On the other hand, you had these learning theory guys, who didn't care at all about the architecture, but were just thinking about, like, how do you recognize patterns in large amounts of data? And in some sense, what you needed to do was to take the learning that the learning theory guys were doing and put it together with the architecture that the cognitive architecture guys were doing,

[01:31:41]

And then you would have what you needed. Except, unfortunately, when you look at the details, you can't just do that without totally rebuilding what is happening on both the cognitive architecture side and the learning side. I mean, they tried to do that in Soar, but what they ultimately did is, like, take a deep neural net or something for perception, and it becomes one of the black boxes; the learning mechanism becomes one of the boxes, as opposed to being fundamental. And that doesn't quite work.

[01:32:14]

Now, you could look at some of the stuff DeepMind has done, like the differentiable neural computer, which sort of has a neural net for deep learning perception, and another neural net, which is like a memory matrix that stores, say, the map of the London subway or something. So probably Demis Hassabis was thinking about this like part of cortex and part of hippocampus, because the hippocampus has a spatial map. And when he was a neuroscientist, he was doing a bunch of work on cortex-hippocampus interconnection.

[01:32:42]

So there, the DNC would be an example of folks from the deep neural net world trying to take a step in the cognitive architecture direction, by having two neural modules that correspond roughly to two different parts of the human brain that deal with different kinds of memory and learning. But on the other hand, it's super, super, super crude from the cognitive architecture perspective, right? Just as what John Laird and Soar did with neural nets was super, super crude from a learning point of view, because the learning was, like, off to the side, not affecting the core representations.

[01:33:12]

Right. You weren't learning the representation; you were learning the data that feeds into the representation. You were learning abstractions of perceptual data to feed into a representation that was not learned. Right. So, yeah, this was clear to me a while ago, and one of my hopes with the AGI community was to sort of bring people from those two directions together. That didn't happen much, in terms of, what I was going to say is, it didn't happen in terms of bringing, like, the lions of cognitive architecture together with the lions of deep learning.

[01:33:47]

It did work in the sense that a bunch of younger researchers have had their heads filled with both of those ideas. This comes back to a saying my dad, who was a university professor, often quoted to me, which was: science advances one funeral at a time. Which I'm trying to avoid. Like, I'm fifty-three years old and I'm trying to invent amazing, weird-ass new things that nobody ever thought about, which we'll talk about in a few minutes.

[01:34:16]

But there is that aspect. Like, the people who have been in AI a long time and have made their careers developing one aspect, like a cognitive architecture or a deep learning approach: once you're old and have made your career doing one thing, it can be hard to mentally shift gears. I mean, I try quite hard to remain flexible. Have you been successful, somewhat, in changing? Maybe, have you changed your mind on some aspects of what it takes to build an AGI, like technical things?

[01:34:50]

The hard part is that the world doesn't want you to. The world, or your own brain? Well, that's one part: your own brain doesn't want to. The other part is that the world doesn't want you to. Like, the people who have followed your ideas get mad if you change your mind. And, you know, the media wants to pigeonhole you as an avatar of a certain idea. But, yeah, I've changed my mind on a bunch of things.

[01:35:18]

I mean, when I started my career, I really thought quantum computing would be necessary for AGI. And I doubt it's necessary now, although I think it will be a super major enhancement. But I'm also, I'm now in the middle of embarking on a complete rethink and rewrite from scratch of our OpenCog AGI system, together with Alexey Potapov and his team in St. Petersburg, who are working with me in SingularityNET. So now we're trying to, like, go back to basics: take everything we learned from working with the current OpenCog system, take everything everybody else has learned from working with their proto-AGI systems, and design the best framework for the next stage.

[01:36:07]

And I do think there's a lot to be learned from the recent successes with deep neural nets and deep reinforcement learning systems. I mean, people made these essentially trivial systems work much better than I thought they would, and there's a lot to be learned from that. And I want to incorporate that knowledge appropriately in our OpenCog 2.0 system. On the other hand, I also think current deep neural net architectures as such will never get you anywhere near AGI. So I think you want to avoid the pathology of

[01:36:44]

throwing the baby out with the bathwater, of, like, saying all these things are garbage because foolish journalists overblow them as being the path to AGI, and a few researchers overblow them as well. Yeah, there's a lot of interesting stuff to be learned there, even though those are not the golden path.

[01:37:05]

So maybe this is a good chance to step back. You said OpenCog 2.0, but can we go back to OpenCog 1.0, which exists? Yeah, yeah. Maybe talk through the history of OpenCog and your thinking about these ideas. I would say OpenCog 2.0 is a term we're throwing around sort of tongue in cheek, because the existing OpenCog system that we're working on now is not remotely close to what we'd consider a 1.0, right?

[01:37:37]

I mean, it's been around, what, 13 years or something, but it's still an early-stage research system. And actually, we are going back to the beginning, in terms of theory and implementation, because we feel like that's the right thing to do. But I'm sure what we end up with is going to have a huge amount in common with the current system. I mean, we all still like the general approach.

[01:38:08]

So first of all, what is OpenCog? Sure. OpenCog is an open source software project that I launched together with several others in 2008. Probably the first code toward it was written in 2001 or 2002 or something, and it was developed as a proprietary codebase within my AI company Novamente LLC. Then we decided to open source it, and in 2008 we cleaned up the code, threw out some things, added some new things. And what language is it written in? C++.

[01:38:46]

Primarily. There's a bunch of Scheme as well, but most of it is C++. And it's separate from something we'll also talk about, SingularityNET.

[01:38:54]

So, yeah, it was born as a non-networked thing? Correct, correct. Well, there are many levels of networks involved here. No connectivity to the Internet at birth, I meant. Yeah. I mean, SingularityNET is a separate project and a separate body of code, and you can use SingularityNET as part of the infrastructure for a distributed OpenCog system. But there are different layers. So OpenCog, on the one hand, as a software framework, could be used to implement a variety of different AI architectures and algorithms. But in practice, there's been a group of developers, which I've been leading together with Linas Vepstas, Nil Geisweiller and a few others, which has been using the OpenCog platform and infrastructure to implement certain ideas about how to make an AGI.

[01:39:58]

So there's been a little bit of ambiguity about OpenCog the software platform versus OpenCog the AGI design, because in theory you could use that software to do, you could use it to make a neural net, you could use it to make a lot of different things.

[01:40:13]

What kind of stuff does the software platform provide, in terms of utilities, tools? Like what?

[01:40:18]

Yeah, let me first tell you about OpenCog as a software platform, and then I'll tell you about the specific AGI R&D we've been building on top of it.

[01:40:29]

So the core component of OpenCog as a software platform is what we call the AtomSpace, which is a weighted, labeled hypergraph. A-T-O-M, AtomSpace? AtomSpace, yeah. Not atom like Adam and Eve, although that would be cool too.

[01:40:45]

Yeah. So you have a hypergraph, which is like, so a graph, in this sense, is a bunch of nodes with links between them. A hypergraph is like a graph, but links can go between more than two nodes: you can have a link between three nodes. And, in fact, the AtomSpace would properly be called a metagraph, because you can have links pointing to links, or you could have links pointing to a whole subgraph, right? So it's an extended hypergraph, or a metagraph.

[01:41:17]

Is metagraph a technical term? It is a technical term now. Interesting. But I don't think it was a technical term when we started calling this a generalized hypergraph. In any case, it's a weighted, labeled generalized hypergraph, a weighted, labeled metagraph. The weights and labels mean that the nodes and links can have numbers and symbols attached to them. So they can have types on them, and they can have numbers on them that represent, say, a truth value or an importance value for a certain purpose.

[01:41:49]

And of course, like with all things, you can reduce a metagraph to a hypergraph, and a hypergraph to a graph, and you can reduce a graph to an adjacency matrix. So, I mean, there are always multiple representations, but some layer of representation is more natural here, right? Right, right. And so similarly, you could have a link to a whole subgraph, because a whole subgraph could represent, say, a body of information, and I could say, I reject this body of information.

[01:42:16]

One way to do that is to make that link go to the whole subgraph representing the body of information, right? I mean, there are many alternate representations. But anyway, that's what we have in OpenCog: we have an AtomSpace, which is this weighted, labeled, generalized hypergraph knowledge store. It lives in RAM. There's also a way to back it up to disk, and there are ways to spread it among multiple different machines.
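The structure just described, typed nodes and links with attached values, where links can point at other links, can be sketched in a few lines. This is a toy illustration, not the real OpenCog C++ API; the class names are made up, though the atom-type strings are modeled loosely on OpenCog's naming conventions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Atom:
    """A node or link. Links carry targets, which may themselves be
    links; that is what makes this a metagraph rather than a plain
    hypergraph."""
    atom_type: str
    name: str = ""
    targets: tuple = ()

class AtomSpace:
    def __init__(self):
        self.atoms = []
        self.values = {}                 # atom -> attached numbers, e.g. a truth value

    def add(self, atom_type, name="", targets=(), tv=None):
        atom = Atom(atom_type, name, tuple(targets))
        self.atoms.append(atom)
        if tv is not None:
            self.values[atom] = tv       # e.g. (strength, confidence)
        return atom

space = AtomSpace()
cat = space.add("ConceptNode", "cat")
animal = space.add("ConceptNode", "animal")
# A weighted link between nodes, with a truth value attached.
inh = space.add("InheritanceLink", targets=(cat, animal), tv=(0.9, 0.8))
# A link pointing at another link: rejecting that body of information.
rejection = space.add("NotLink", targets=(inh,))
```

The last line is the point of the metagraph generalization: `rejection` refers to the inheritance link itself, not to either node.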

[01:42:41]

Then there are various utilities for dealing with that. So there's a pattern matcher, which lets you specify a sort of abstract pattern and then search through the whole AtomSpace, the labeled hypergraph, to see which subhypergraphs may match that pattern, for example. And then there's something called the CogServer in OpenCog, which lets you run a bunch of different agents or processes in a scheduler, and each of these agents basically reads stuff from the AtomSpace and writes stuff to the AtomSpace.

[01:43:19]

So this is sort of the basic operational model.
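That operational model, a scheduler cycling through agents that cooperate only by reading and writing a shared store, can be sketched like this. It's a hypothetical toy, using a plain dict where OpenCog uses the AtomSpace, and the two agents are invented for the example.

```python
class DoublerAgent:
    """Reads a number from the shared store, writes back its double."""
    def step(self, store):
        store["x"] = store.get("x", 1) * 2

class LoggerAgent:
    """Reads the current number and appends it to a history list."""
    def step(self, store):
        store.setdefault("log", []).append(store["x"])

def run_scheduler(agents, store, cycles):
    # Round-robin scheduler, like a toy CogServer loop: each cycle,
    # every agent gets one turn, and the agents never call each other
    # directly; they communicate only through the shared store.
    for _ in range(cycles):
        for agent in agents:
            agent.step(store)
    return store

result = run_scheduler([DoublerAgent(), LoggerAgent()], {}, 3)
```

After three cycles the doubler has written 2, 4, 8 and the logger has observed each value, all coordination happening through the shared store.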

[01:43:23]

That's the software framework. And, of course, there's a lot there just from a scalable software engineering standpoint. So you can use this, I don't know if you have, have you looked into Stephen Wolfram's physics project recently, with the hypergraphs and stuff? Could you theoretically use, like, the software framework to

[01:43:40]

You certainly could, although Wolfram would rather die than use anything but Mathematica for his work.

[01:43:46]

Well, yeah, but there's a big community of people who, you know, would love an integration.

[01:43:53]

And, like you said, the young minds love the idea of integrating, of connecting, and so on. And I would add, on that note, the idea of using hypergraph-type models in physics is not very new. Like, if you look at, the Russians did it first?

[01:44:07]

Well, I'm sure they did. And a guy named Ben Dribus, who's a mathematician, a professor in Louisiana or somewhere, had a beautiful book on quantum sets and hypergraphs and algebraic topology for discrete models of physics, and carried it much farther than Wolfram has. But he's not rich and famous, so it didn't get in the headlines. But, yeah, Wolfram aside, yes, certainly. That's a good way to put it, the whole OpenCog framework.

[01:44:36]

You could use it to model biological networks and simulate biological processes. You could use it to model physics on discrete graph models of physics. You could use it to do, say, biologically realistic neural networks, for example. So that's the framework. What do the agents and processes do?

[01:45:01]

Do they grow the graph? What kind of computations, just to get a sense? So, in theory, they could do anything they want to do. They're just C++ processes. On the other hand, the computation framework is sort of designed for agents where most of their processing time is taken up with reads and writes to the AtomSpace. And that's a very different processing model than, say, the matrix multiplication based model that underlies most deep learning systems.

[01:45:32]

Right. So you could, I mean, you could create an agent that just factored numbers for a billion years. It would run within the OpenCog platform, but it would be pointless, right? I mean, the point of doing OpenCog is that you want to make agents that are cooperating via reading and writing into this weighted, labeled hypergraph. And that has cognitive-architecture importance, because then this hypergraph is being used as a sort of shared memory among different cognitive processes.
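The read/write agent model described here can be sketched in a few lines. This is a hypothetical illustration, not the actual OpenCog API: a shared store of labeled, weighted links plays the role of the Atomspace, and an agent cooperates with others purely by reading and writing that shared graph.

```python
# Hypothetical sketch of agents cooperating via a shared labeled hypergraph.
# Class and method names are illustrative, not OpenCog's real API.

class Atomspace:
    """Shared store of nodes and labeled, weighted links between them."""
    def __init__(self):
        self.nodes = set()
        self.links = {}          # (label, source, target) -> weight

    def add_link(self, label, src, dst, weight=1.0):
        self.nodes.update([src, dst])
        self.links[(label, src, dst)] = weight

    def links_with_label(self, label):
        return [(s, d, w) for (l, s, d), w in self.links.items() if l == label]

class DeductionAgent:
    """One cognitive process: reads 'implies' links, writes derived ones."""
    def step(self, space):
        # Snapshot the current links, then derive A->C from A->B and B->C.
        for (a, b, _) in list(space.links_with_label("implies")):
            for (b2, c, _) in list(space.links_with_label("implies")):
                if b == b2 and ("implies", a, c) not in space.links:
                    space.add_link("implies", a, c)

space = Atomspace()
space.add_link("implies", "cat", "mammal")
space.add_link("implies", "mammal", "animal")
DeductionAgent().step(space)
print(("implies", "cat", "animal") in space.links)  # True
```

Other agents (attention allocation, pattern mining) would run the same way, each reading and writing the one shared store rather than passing tensors around.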

[01:46:05]

But it also has, you know, software and hardware implementation implications, because current GPU architectures are not so useful for OpenCog, whereas a graph chip would be incredibly useful. And I think Graphcore has those now, but they're not ideally suited for this. But I think in the next, let's say, three to five years, we're going to see new chips where, like, a graph is put on the chip, and, you know, the back and forth between multiple processes acting SIMD and MIMD on that graph is going to be fast.

[01:46:40]

And then that may do for OpenCog-type architectures what GPUs did for deep neural architectures.

[01:46:47]

On a small tangent, can you comment on neuromorphic computing? So, like, hardware implementations of all these different kinds of... I mean, are you interested, are you excited by that?

[01:46:58]

I mean, I'm excited about graph processors, because I think they can massively speed up OpenCog, which is in a class of architectures that I'm working on. I think, if you know, in principle, neuromorphic computing should be amazing. I haven't yet been fully sold on any of the systems that are out there. Like, memristors should be amazing too, right? A lot of these things have obvious potential, but I haven't yet put my hands on a system that seemed to have that magic.

[01:47:32]

But the current systems are not there. I mean, look, for example, if you wanted to make a biologically realistic hardware neural network, like making a circuit in hardware that emulated the Hodgkin-Huxley equations or the Izhikevich equation, like, differential equations for a biologically realistic neuron, and putting that in hardware on the chip, that would seem to make it more feasible to make a large-scale, truly biologically realistic neural network. Now, what's been done so far is not like that.

[01:48:11]

So I guess personally, as a researcher, I mean, I've done a bunch of work in computational neuroscience, where I did some work with IARPA in D.C., the Intelligence Advanced Research Projects Activity. We were looking at: how do you make a biologically realistic simulation of seven different parts of the brain cooperating with each other, using, like, realistic nonlinear dynamical models of neurons? And how do you get that to simulate what's going on in the mind of an intelligence analyst while they're trying to find terrorists on a map?

[01:48:44]

So if you want to do something like that, having neuromorphic hardware that really let you simulate a realistic model of the neuron would be amazing. But that's sort of with my computational-neuroscience hat on, right? With an AGI hat on, I'm just more interested in these hypergraph knowledge-representation-based architectures, which would benefit more from various types of graph processors, because the main processing bottleneck is reading and writing to RAM.

[01:49:19]

It's reading and writing to the graph. The main processing bottleneck for this kind of architecture is not multiplying matrices. And for that reason, GPUs, which are really good at multiplying matrices, don't apply as well. There are frameworks, like Gunrock and others, that try to boil down graph processing to matrix operations, and they're cool, but you're still putting a square peg into a round hole in a certain way.
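The "square peg in a round hole" point can be made concrete: graph traversal can indeed be boiled down to matrix operations, but for a sparse graph the dense adjacency matrix is mostly zeros, which is exactly the work matrix hardware ends up wasting. A toy sketch in plain Python, with illustrative data:

```python
# Two-hop reachability computed as a boolean matrix product A @ A.
# For a sparse graph, most entries of A are zero, so matrix hardware
# spends most of its effort multiplying zeros.

edges = {("a", "b"), ("b", "c"), ("c", "d")}
nodes = sorted({n for e in edges for n in e})
idx = {n: i for i, n in enumerate(nodes)}
n = len(nodes)

# Dense adjacency matrix: n*n entries, mostly zero here.
A = [[0] * n for _ in range(n)]
for s, d in edges:
    A[idx[s]][idx[d]] = 1

# Boolean matrix multiply: A2[i][j] = 1 iff j is reachable from i in 2 hops.
A2 = [[int(any(A[i][k] and A[k][j] for k in range(n)))
       for j in range(n)] for i in range(n)]

print(A2[idx["a"]][idx["c"]], A2[idx["a"]][idx["d"]])  # 1 0
```

A graph-native representation (edge lists, pointer chasing) touches only the three real edges; the matrix form touches all sixteen cells.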

[01:49:45]

And the same is true of quantum machine learning, which is very cool. It's also all about how to do matrix and vector operations in quantum mechanics, and I see why that's natural to do. I mean, quantum mechanics is all unitary matrices and vectors, right? On the other hand, you could also try to make graph-centric quantum computers, which I think is where things will go. And then we can, like, take the OpenCog implementation layer and implement it in an entangled state inside the quantum computer.

[01:50:21]

But that may be the singularity squared, right? I'm not sure we need that to get to human-level AGI. That's already beyond the first singularity. But can we just go back to OpenCog and the hypergraph? So that's the software framework.

[01:50:38]

Right. So the next thing is our cognitive architecture tells us particular algorithms to put there. Got it.

[01:50:46]

Can we backtrack a bit? Is this graph designed, in general, to be sparse? And do the operations constantly grow and change the graph? The graph is sparse. But is it constantly adding links? So is it a self-modifying hypergraph?

[01:51:04]

So the write and read operations you're referring to, this isn't just a fixed graph in which you change the weights. It's a growing graph. Yeah, that's true. So it is a different model than, say, current deep neural nets, which have a fixed neural architecture and you're updating the weights. Although there have been, like, cascade-correlation neural architectures that grow new nodes and links, the most common neural architectures now have a fixed neural architecture.

[01:51:35]

You're updating the weights. And then, in OpenCog, you can update the weights, and that certainly happens a lot, but adding new nodes, adding new links, removing nodes and links is an equally critical part of the system's operations. So now, when you start to add these cognitive algorithms on top of this OpenCog architecture, what does that look like?

[01:51:58]

So, yeah, within this framework, creating a cognitive architecture is basically two things. It's choosing what type system you want to put on the nodes and links in the hypergraph, what types of nodes and links you want. And then it's choosing what collection of agents, what collection of AI algorithms or processes, are going to run to operate on this typed graph. And of course, those two decisions are closely connected to each other.

[01:52:31]

So, in terms of the type system, there are some links that are more neural-net-like. They just, like, have weights that get updated by Hebbian learning, and activation spreads among them. There are other things that are more logic-like, and nodes that are more logic-like. So you could have a variable node, and you can have a node representing a universal or existential quantifier, as in predicate logic or term logic. So you can have logic-like nodes and links, or you can have neural-like nodes and links.

[01:53:01]

You can also have procedure-like nodes and links, as in, say, combinatory logic or lambda calculus, representing programs. So you can have nodes and links representing many different types of semantics, which means you could make a horrible, ugly mess, or you can make a system where these different types of knowledge interpenetrate and synergize with each other beautifully, right?
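A minimal sketch of what such a typed hypergraph might look like. The type names below echo OpenCog conventions (ConceptNode, InheritanceLink, HebbianLink), but the code is an illustration of the idea, a type-checked store where logic-like and neural-like atoms coexist, not the real system's API:

```python
# Illustrative typed hypergraph: every atom (node or link) carries a type,
# and links are tagged with the kind of semantics they carry.

NODE_TYPES = {"ConceptNode", "VariableNode", "PredicateNode"}
LINK_TYPES = {
    "InheritanceLink": "logic",      # symbolic: A inherits from B
    "HebbianLink":     "neural",     # attentional weight between atoms
    "ExecutionLink":   "procedural", # program-like structure
}

class TypedAtom:
    def __init__(self, atom_type, name=None, outgoing=(), weight=1.0):
        # Enforce basic well-formedness per atom type.
        if atom_type in NODE_TYPES:
            assert not outgoing, "nodes have no outgoing set"
        elif atom_type in LINK_TYPES:
            assert len(outgoing) >= 2, "links connect at least two atoms"
        else:
            raise ValueError(f"unknown atom type: {atom_type}")
        self.atom_type, self.name = atom_type, name
        self.outgoing, self.weight = tuple(outgoing), weight

cat = TypedAtom("ConceptNode", "cat")
animal = TypedAtom("ConceptNode", "animal")
# The same pair of nodes can carry both a logical and a neural link,
# with separate weights and separate meanings.
inh = TypedAtom("InheritanceLink", outgoing=(cat, animal), weight=0.95)
heb = TypedAtom("HebbianLink", outgoing=(cat, animal), weight=0.4)
print(LINK_TYPES[inh.atom_type], LINK_TYPES[heb.atom_type])  # logic neural
```

The point is that the store itself stays uniform; only the type tags tell a logic engine, a neural process, or a program executor which atoms concern it.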

[01:53:26]

So the hypergraph can contain programs. It can contain programs, although in the current version it is a rather inefficient way to guide the execution of programs, which is one thing that we are aiming to resolve with our rewrite of the system now.

[01:53:46]

So what do you think are the most beautiful aspects of OpenCog, just to you personally? Some aspect that captivates your imagination, from beauty or power?

[01:54:00]

What fascinates me is finding a common representation that underlies abstract declarative knowledge and sensory knowledge and movement knowledge and procedural knowledge and episodic knowledge. Finding the right level of representation where all these types of knowledge are stored in a sort of universal and interconvertible, yet practically manipulable, way, right? So to me, that's the core. Because once you've done that, then the different learning algorithms can help each other out. Like, what you want is, if you have a logic engine that helps with declarative knowledge, and you have a deep neural net that gathers perceptual knowledge, and you have, say, an evolutionary learning system that learns procedures, you want these to not only interact on the level of sharing results and passing inputs and outputs to each other.

[01:55:00]

You want the logic engine, when it gets stuck, to be able to share its intermediate state with the neural net and with the evolutionary learning algorithm, so that they can help each other out of bottlenecks and help each other solve combinatorial explosions by intervening inside each other's cognitive processes. But that can only be done if the intermediate states of a logic engine, an evolutionary learning engine, and a deep neural net are represented in the same form. And that's what we figured out how to do by putting the right type system on top of this weighted, labeled hypergraph.

[01:55:34]

So can you maybe elaborate on the different characteristics of a type system that can coexist amongst all these different kinds of knowledge that need to be represented?

[01:55:47]

And, I mean, like, is it hierarchical? Just any kind of insights you can give on that kind of type system? Yeah, yeah. So this gets very nitty-gritty and mathematical, of course. But one key part is switching from predicate logic to term logic. What is predicate logic, what is term logic? So term logic was invented by Aristotle, or at least that's the oldest recollection we have of it.

[01:56:18]

But term logic breaks down basic logic into simple links between nodes, like an inheritance link between a node A and a node B. So in term logic...

[01:56:32]

The basic deduction operation is: A implies B, B implies C, therefore A implies C. Whereas in predicate logic, the basic operation is modus ponens: A, A implies B, therefore B. So it's a slightly different way of breaking down logic. But by breaking down logic into term logic, you get a nice way of breaking logic down into nodes and links. So your concepts can become nodes, and the logical relations become links. And so then inference is, like, if this link is A implies B, and this link is B implies C, then deduction builds a link between A and C, and your probabilistic algorithm can assign a certain weight there.
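That deduction step with probabilistic weights can be sketched as follows. The combination formula here is a simplified independence-based heuristic in the spirit of OpenCog's Probabilistic Logic Networks, not the exact PLN rule, and the strengths are made-up examples:

```python
# Term-logic deduction with probabilistic strengths (simplified sketch).
# From A->B with strength s_ab and B->C with strength s_bc, derive a
# strength for A->C, using node probabilities s_b, s_c for the
# "A but not B" case.

def deduce(s_ab, s_bc, s_b, s_c):
    """Strength of A->C, assuming independence:

    P(C|A) = P(B|A) * P(C|B)  +  P(not-B|A) * P(C|not-B),

    where P(C|not-B) is estimated from the node probabilities as
    (P(C) - P(B) * P(C|B)) / (1 - P(B)), clamped to [0, 1].
    """
    if s_b >= 1.0:
        return s_bc
    p_c_given_not_b = max(0.0, min(1.0, (s_c - s_b * s_bc) / (1.0 - s_b)))
    return s_ab * s_bc + (1.0 - s_ab) * p_c_given_not_b

# A = "cat", B = "mammal", C = "animal", with illustrative strengths.
s = deduce(s_ab=0.98, s_bc=0.99, s_b=0.2, s_c=0.25)
print(s)
```

The derived link A implies C would then be written back into the hypergraph with this strength, where other agents can use or revise it.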

[01:57:14]

Now, you may also have, like, a Hebbian link from A to B, which is the degree to which thinking about A should make B the focus of attention, right? So you could have the neural link, and you can have a symbolic, logical inheritance link, in your term logic, and they have separate meanings, but they could be used to guide each other as well. Like, if there is a large amount of neural weight on the link between A and B, that may direct your logic engine to think about, well, what is the relation? Are they similar?

[01:57:49]

Is there an inheritance relation? Are they similar in some context? On the other hand, if there's a logical relation between A and B, that may direct your neural component to think, well, when I'm thinking about A, should I be directing some attention to B also, because there's a logical relation? So in terms of logic, there's a lot of thought that went into how you break down logic relations, including the basic sort of propositional logic relations that Aristotelian term logic deals with, and then quantifier logic relations also. How do you break those down elegantly into a graph?

[01:58:28]

Because, I mean, you can boil a logic expression into a graph in many different ways. Many of them are very ugly, right? We tried to find elegant ways of sort of hierarchically breaking down complex logic expressions into nodes and links, so that if you have, say, different nodes representing, you know, Ben, Lex, interview, or whatever, the logic relations between those things are compact in the node-and-link representation, so that when you have a neural net acting on the same nodes and links, the neural net and the logic engine can sort of interoperate with each other. And also interpretable by humans?

[01:59:07]

Is that an important requirement? In simple cases, it's interpretable by humans. But honestly, you know, I would say logic systems give more potential for transparency and comprehensibility than neural net systems. But you still have to work at it, because, I mean, if I show you a predicate logic proposition with, like, 500 nested universal and existential quantifiers and 217 variables, that's no more comprehensible than the matrix of a neural network.

[01:59:43]

So I'd say the logic expressions an AI learns from its experience are mostly totally opaque to human beings, and maybe even harder to understand, because, I mean, when you have multiple nested quantifier bindings, it's a very high level of abstraction. There is a difference, though, in that within logic, it's a little more straightforward to pose the problem of, like, normalizing this and boiling it down to a certain normal form. I mean, you can do that with neural nets too. Like, you can distill a neural net to a simpler form, but that's more often done to make a neural net that will run on an embedded device or something.

[02:00:16]

It's harder to distill a net to a comprehensible form than it is to simplify a logic expression to a comprehensible form. But it doesn't come for free. Like, what's in the AI's mind is incomprehensible to a human unless you do some special work to make it comprehensible. So on the procedural side, there's some different and sort of interesting voodoo there. I mean, if you're familiar with computer science, there's something called the Curry-Howard correspondence, which is a one-to-one mapping between proofs and programs.

[02:00:48]

So every program can be mapped into a proof, and every proof can be mapped into a program. You can model this using category theory and a bunch of nice math. But we want to make that practical, right? So that if you have an executable program that, like, moves a robot's arm or figures out what to say in a dialogue, that's a procedure represented in OpenCog's hypergraph. Well, if you want to reason about how to improve that procedure, you need to map that procedure into logic using the Curry-Howard isomorphism, so that the logic engine can reason about how to improve that procedure, and then map that back into the procedural representation that is efficient for execution.

[02:01:33]

So again, that comes down to not just whether you can make your procedure into a bunch of nodes and links, because, I mean, that can be done trivially; a C++ compiler has nodes and links in it. Can you boil down your procedure into a bunch of nodes and links in a way that's hierarchically decomposed and simple enough that the system can reason about it, given the resource constraints at hand, and you can map it back and forth to your term logic fast enough, and without having a bloated logic expression?

[02:02:02]

So there's just a lot of nitty-gritty particulars there. But by the same token, if you ask a chip designer, like, how do you make the Intel i7 chip so good, there's a long list of technical answers which will take a while to go through, right? And this has been decades of work. I mean, the first AI system of this nature I tried to build was called Webmind, in the mid-1990s.

[02:02:30]

And we had a big graph operating in RAM, implemented in Java 1.1, which was a terrible, terrible implementation idea. And then each node had its own processing, so, like, the core loop looped through all nodes in the network and let each node enact whatever its little thing was doing. And we had logic and neural nets in there, and evolutionary learning, but we hadn't done enough of the math to get them to operate together very cleanly.

[02:03:00]

So it was really quite a horrible mess. So as well as shifting to an implementation where the graph is its own object and the agents are separately scheduled, we've also done a lot of work on how you represent programs, how you represent procedures, you know, how you represent genotypes for evolution, in a way that the interoperability between the different types of learning associated with these different types of knowledge actually works. And that's been quite difficult. It's taken decades, and it's totally off to the side of what the commercial mainstream of the AI field is doing, which isn't thinking about representation at all, really.

[02:03:45]

Although you could see, like, in the DNC they had to think a little bit about how you make the representation of a map in this memory matrix work together with the representation needed for visual pattern recognition in a hierarchical neural network. But I would say we have taken that direction of taking the types of knowledge you need for different types of learning, like declarative, procedural, attentional, and asking: how do you make these types of knowledge represented in a way that allows cross-learning across these different types of memory?

[02:04:17]

We've been prototyping and experimenting with this within OpenCog, and before that Webmind, since the mid-1990s. Now, disappointingly to all of us, this has not yet been cashed out in an AGI system. I mean, we've used this system within our consulting business, so we've built natural language processing and robot control and financial analysis. We've built a bunch of sort of vertical-market-specific proprietary AI projects that use OpenCog on the back end.

[02:04:54]

But we haven't... that's not the AGI goal, right? It's interesting, but it's not the AGI goal. So now what we're looking at with our rebuild of the system, OpenCog 2.0... Yeah, we're also calling it True AGI, so we're not quite sure what the name is yet. We made a website for True AGI, but we haven't put anything on there yet, so we may come up with an even better name.

[02:05:19]

But it's kind of like the real AGI, a starting point for your AGI. But I like "true" better, because true has, like... you can be truehearted, you can be true to your girlfriend. "True" has a number of meanings, and it also has logic in it, right? Because truth logic is a key part. But I like it. Yeah.

[02:05:35]

So, yeah. With the True AGI system, we're sticking with the same basic architecture, but we're trying to build on what we've learned. And one thing we've learned is that, you know, we need type checking among dependent types to be much faster, and probabilistic dependent types to be much faster. So as it is now, you can have complex types on the nodes and links. But if you want types to be first-class citizens, so that types can be variables, and then you do type checking among complex higher-order types...

[02:06:15]

You can do that in the system now, but it's very slow. This is stuff like what's done in cutting-edge programming languages, like Agda or something, these obscure research languages. On the other hand, we've been doing a lot of tying together deep neural nets with symbolic learning. So we did a project for Cisco, for example, which was street-scene analysis. They had deep neural models for a bunch of cameras watching street scenes.

[02:06:38]

But they trained a different model for each camera, because they couldn't get the transfer learning to work between camera A and camera B. So we took what came out of all the models for the different cameras, we fed it into an OpenCog symbolic representation, then we did some pattern mining and some reasoning on what came out of all the different cameras within the symbolic graph. And that worked well for that application. I mean, Hugo Latapie from Cisco gave a talk...

[02:07:02]

touching on that at last year's AGI conference; it was in Shenzhen. On the other hand, we learned from there that it was kind of clunky to get the neural models to work well with the symbolic system, because we were using Torch, and Torch keeps a sort of computation graph, but you needed, like, real-time access to that computation graph within our hypergraph. And we certainly did it. Alexey Potapov, who leads our St. Petersburg team, wrote a great paper on cognitive modules in OpenCog, explaining sort of how to deal with the Torch compute graph inside OpenCog.

[02:07:37]

But in the end, we realized that just hadn't been one of our design thoughts when we designed OpenCog. So between wanting really fast dependent type checking, and wanting much more efficient interoperation between the computation graphs of deep-neural-net frameworks and OpenCog's hypergraph, and, adding on top of that, wanting to more effectively run an OpenCog hypergraph distributed across RAM in 10,000 machines... which is, we're doing dozens of machines now, but it's just... we didn't architect it with that sort of modern scalability in mind.

[02:08:10]

So these performance requirements are what have driven us to want to re-architect the base. But the core AGI paradigm doesn't really change. Like, the mathematics is the same. It's just we can't scale to the level that we want, in terms of distributed processing or the speed of various kinds of processing, with the current infrastructure that was, you know, built in the period 2001 to 2008, which is hardly shocking. Well, I mean, the three things you mentioned are really interesting.

[02:08:46]

So what do you think, in terms of interoperability, about communicating with the computational graph of neural networks? What do you think about the representations that neural networks form?

[02:08:58]

They're bad, but there are many ways that you could deal with that. So I've been wrestling with this a lot in some work on unsupervised grammar induction. I have a simple paper on that, which I'll give at the next AGI conference, the online portion of which is next week, actually. So what is grammar induction? So this isn't AGI either, but it's sort of on the verge between narrow AI and AGI or something. Unsupervised grammar induction is the problem: throw your AI system a huge body of text and have it learn the grammar of the language that produced that text.

[02:09:37]

So you're not giving it labeled examples. You're not giving it, like, a thousand sentences where the parses were marked up by graduate students. It's just got to infer the grammar from the text. It's like the Rosetta Stone, but worse, right? Because you only have the one language. Yeah. And you have to figure out what is the grammar.

[02:09:54]

So that's not really AGI, because, I mean, the way a human learns language is not that, right? I mean, we learn from language that's used in context. So it's a socially embodied thing. We see how a given sentence is grounded in observation. There's an interactive element, I guess. Yeah, yeah, yeah. And so I'm more interested in that. I'm more interested in making a system learn language from its social and embodied experience.

[02:10:22]

On the other hand, that's also more of a pain to do, and that would lead us into the Hanson Robotics and robotics work we have so much to talk about in a few minutes. But just as an intellectual exercise, as a learning exercise, trying to learn grammar from a corpus is very, very interesting, right? And that's been a field in AI for a long time; no one can do it very well. So we've been looking at transformer neural networks and tree transformers, which are amazing.

[02:10:53]

These came out of Google Brain, actually. And on that team was Lukasz Kaiser, who used to work for me in the period 2005 through '08 or something. So that's been fun to see my former sort of AGI employees disperse and do all these amazing things. Way too many of them ended up at Google, actually. But anyway, we'll talk about that. Lukasz Kaiser and a bunch of these guys, they created transformer networks, that classic paper, like, "Attention Is All You Need," and all these things following on from that.

[02:11:25]

So we're looking at transformer networks, and, like... I mean, this is what underlies GPT-2 and GPT-3 and so on, which are very, very cool and have absolutely no understanding of the texts they're looking at. Like, they're very intelligent idiots. Sorry to take a small tangent, but let me bring us back.

[02:11:45]

Do you think GPT-3 understands?

[02:11:49]

No, no. It understands nothing. It's a complete idiot.

[02:11:52]

But I can tell you don't think GPT-20 will understand either. No, no, no, no.

[02:11:59]

Size is not going to buy you understanding, any more than a faster car is going to get you to Mars. It's a completely different kind of thing. I mean, these networks are very cool. And as an entrepreneur, I can see many highly valuable uses for them, and as an artist, I love them, right? So, I mean, we're using our own neural model, which is along those lines, to control the Philip K. Dick robot now. And it's amazing to, like, train a neural model on the robot Philip K.

[02:12:30]

Dick and see it come up with, like, crazy stoned-philosopher pronouncements, very much like what Philip K. Dick might have said. Like, these models are super cool. And I'm working with Hanson Robotics now on using a similar but more sophisticated one for Sophia, which we haven't launched yet. So I think it's cool, but it's not understanding. These are recognizing a large number of shallow patterns.

[02:12:59]

They're not forming an abstract representation. And that's the point I was coming to. When we're looking at grammar induction, we tried to mine patterns out of the structure of the transformer network, and you can get the patterns out if you want, but they're nasty. So, I mean, if you do supervised learning, if you look at sentences where you know the correct parse of a sentence, you can learn a matrix that maps between the internal representation of the transformer and the parse of the sentence.

[02:13:31]

And so then you can actually train something that will output the sentence parse from the transformer network's internal state. And we did this; I think Christopher Manning and some others have now done this also. But, I mean, what you get is that the representation is horribly ugly and scattered all over the network. It doesn't look like the rules of grammar that you know are the right rules of grammar, right? It's kind of ugly. So what we're actually doing is we're using a symbolic grammar-learning algorithm, but we're using the transformer neural network as a sentence-probability oracle.

[02:14:06]

So, like, if you have a rule of grammar and you aren't sure if it's a correct rule of grammar or not, you can generate a bunch of sentences using that rule of grammar and a bunch of sentences violating that rule of grammar. And you can ask the transformer model: does it think the sentences obeying the rule of grammar are more probable than the sentences disobeying the rule of grammar?

[02:14:27]

So in that way, you can use the neural model as a probability oracle to guide a symbolic grammar-learning process. And that's interesting. That seems to work better than trying to milk the grammar out of the neural network that doesn't have it in there. So I think the thing is, these neural nets are not getting a semantically meaningful representation internally, by and large. So one line of research is to try to get them to do that, and InfoGAN was trying to do that.

[02:14:57]

So, like, if you look back, like, two years ago, there were all these papers on, like, Edward, this probabilistic programming neural net framework that Google had, which came out of InfoGAN. And so the idea there was, you could train an InfoGAN-like neural net model, which is a generative adversarial network, to recognize and generate faces, and the model would automatically learn a variable for how long the nose is, and automatically learn a variable for how wide the eyes are, or how big the lips are, or something.

[02:15:24]

Right. So it automatically learned these variables, which have a semantic meaning. So that was a rare case where a neural net trained with a fairly standard GAN method was able to actually learn a semantic representation. So for many years, many of us tried to take that next step and get a GAN-type neural network that would have not just a list of semantic latent variables, but, say, a Bayes net of latent variables with dependencies between them. The whole programming framework Edward was made for that.

[02:15:56]

I mean, no one got it to work, right? And is it possible? Yeah, I don't know.

[02:16:02]

It might be that back-propagation just won't work for it, because the gradients are too screwed up. Yeah, maybe you could get it to work using a different system, like a floating-point evolutionary algorithm. We tried; we didn't get it to work. Eventually, we just paused that rather than gave it up. We paused that and said, well, OK, let's try more innovative ways to learn what the representations implicit in that network are, without trying to make them grow inside that network.

[02:16:32]

And I described how we're doing that in language. You can do similar things in vision, right? So use it as an oracle? Yeah, yeah, yeah. So that's one way: you use a structure-learning algorithm, which is symbolic, and then you use the deep neural net as an oracle to guide the structure learning. The other way to do it is, like InfoGAN was trying to do, to try to tweak the neural network to have the symbolic representation inside it.

[02:17:01]

I tend to think what the brain is doing is more like using the neural-net-type thing as an oracle. Like, I think the visual cortex or the cerebellum are probably learning a non-semantically-meaningful, opaque, tangled representation, and then when they interface with the more cognitive parts of the cortex, the cortex is sort of using those as an oracle and learning the abstract representation. So if you do sports, take, for example, serving in tennis.

[02:17:31]

Right? I mean, my tennis serve is OK, not great, but I learned it by trial and error, right? And, I mean, I learned music by ear too; I just sit down and play. But then, if you're an athlete, which I'm not a good athlete, you'll watch videos of yourself serving, and your coach will help you think about what you're doing, and you'll then form a declarative representation. But your cerebellum maybe didn't have that declarative representation. The same way with music.

[02:17:57]

Like, I will hear something in my head. I'll sit down and play the thing like I heard it. And then then I will try to study what my fingers did to see, like, what did you just play like? How did you do that? Right. Yeah, because if you're composing, you may want to see how you did it. And then declaratively morphed that in some way that your fingers wouldn't think of it. But the the physiological movement may come out of some opaque, like cerebellar Renfree reinforcement learning thing.

[02:18:31]

Right. And so that's, I think, why trying to milk the structure of a neural net by treating it as an oracle may be more like how your declarative mind post-processes what your visual or motor cortex is doing. I mean, in vision it's the same way. Like, you can recognize beautiful art much better than you can say why you think that piece of art is beautiful. But if you're trained as an art critic, you do learn to say why.

[02:18:59]

And some of it's bullshit, but some of it isn't, right? Some of it is learning to map sensory knowledge into declarative and linguistic knowledge.

[02:19:08]

Yet without necessarily making the sensory system itself use a transparent and easily communicable representation.

[02:19:17]

Yeah, that's fascinating, to think of neural networks as, like, dumb question-answerers that you can just milk to build up a knowledge base. And there could be multiple networks, I suppose, from different...

[02:19:31]

Yeah, yeah. So I think if a group like DeepMind or OpenAI were to build AGI, and I think DeepMind is like a thousand times more likely from what I can tell.

[02:19:42]

Because they've hired a lot of people with broad minds and many different approaches and angles on AGI, whereas OpenAI is also awesome, but I see them as more of, like, a pure deep reinforcement learning shop.

[02:19:58]

There's a lot of... you're right, I mean, there's so much interdisciplinary work at DeepMind.

[02:20:07]

And you put that together with Google Brain, which, granted, they're not working that closely together now. But, you know, my oldest son Zarathustra is doing his Ph.D. in machine learning applied to automated theorem proving in Prague under Josef Urban. So the first paper, DeepMath, which applied deep neural nets to guide theorem proving, came out of Google Brain. I mean, by now the automated theorem proving community has gone way, way, way beyond anything Google was doing.

[02:20:35]

But still, yeah. Anyway, if that community was going to make an AGI, probably one way they would do it is, you know, take 25 different neural modules architected in different ways, maybe resembling different parts of the brain: a basal ganglia model, a cerebellum model, a thalamus model, maybe a few hippocampus models, a number of different models representing parts of the cortex. So take all of these and then...

[02:21:04]

Wire them together, and cotrain and learn them together. That would be an approach to creating an AGI. One could implement something like that efficiently on top of our TrueAGI, like, OpenCog 2.0 system once it exists, although obviously Google has their own highly efficient implementation architecture. So I think that's a decent way to build AGI. I was very interested in that in the mid 90s, but the lack of knowledge about how the brain works sort of pissed me off. Like, it wasn't known whether, you know, in the hippocampus you have these concept neurons, like the so-called grandmother neuron, which everyone laughed at.
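As a loose illustration of wiring several modules together and training them jointly, here is a toy Python sketch. The "modules" are trivial functions rather than neural nets, and the joint training is a simple trust-weight update invented for the example; a real system would cotrain neural modules with gradients.

```python
# Toy sketch: several simple "modules" vote on an answer, and a shared
# error signal adjusts how much each module is trusted. Stand-in for
# jointly training many brain-inspired neural modules.

class Module:
    def __init__(self, fn):
        self.fn = fn          # the module's (opaque) computation
        self.weight = 1.0     # how much the ensemble trusts it

    def predict(self, x):
        return self.fn(x)

def ensemble_predict(modules, x):
    total = sum(m.weight for m in modules)
    return sum(m.weight * m.predict(x) for m in modules) / total

def joint_update(modules, x, target, lr=0.5):
    # Shared training signal: each module's trust shrinks with its error.
    for m in modules:
        err = abs(m.predict(x) - target)
        m.weight = max(0.01, m.weight - lr * err)

modules = [Module(lambda x: x * 2), Module(lambda x: x + 10)]
for _ in range(20):
    joint_update(modules, 3.0, 6.0)
print(round(ensemble_predict(modules, 3.0), 2))  # close to the target 6.0
```

The accurate module keeps its full weight while the inaccurate one is downweighted, so the wired-together whole behaves better than its worst part.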

[02:21:44]

And it's actually there. Like, I have some Lex Fridman neurons that fire differentially when I see you and not when I see any other person, right? Yeah. So how do these Lex Fridman neurons coordinate with the distributed representation of Lex Fridman I have in my cortex? Right? There's some back and forth between cortex and hippocampus that lets these discrete, symbolic representations in hippocampus correlate and cooperate with the distributed representations in cortex. This probably has to do with how the brain does its version of abstraction and quantifier logic.

[02:22:17]

Like, you can have a single neuron in the hippocampus that activates a whole distributed activation pattern in cortex. Well, this may be how the brain does, like, symbolization and abstraction, as in functional programming or something. But we can't measure it. Like, we don't have enough electrodes stuck between the cortex and the hippocampus in any known experiment to measure it. So I got frustrated with that direction, not because it's impossible to understand.

[02:22:46]

Yet we don't. And of course, it's a valid research direction and you can try to understand more and more, and we are measuring more and more about what happens in the brain now than ever before, so it's quite interesting. On the other hand, I sort of got more of an engineering mindset about AGI. I'm like, OK, we don't know how the brain works that well. We don't know how birds fly that well either. Like, we have no idea how a hummingbird flies in terms of the aerodynamics of it.

[02:23:13]

On the other hand, we know the basic principles of, like, flapping and pushing the air down, and we know the basic principles of how the different parts of the brain work. So let's take those basic principles and engineer something that embodies those basic principles but is well designed for the hardware that we have on hand right now.

[02:23:35]

Do you think we can create AGI before we understand how the brain works? Yeah, I think that's probably what will happen. And maybe the AGI will help us do better brain imaging that will then let us build artificial humans, which is very interesting to us because we are humans, right? I mean, building artificial humans is super worthwhile. I just think it's probably not the shortest path to AGI.

[02:24:00]

So it's a fascinating idea that we would build AGI to help us understand ourselves.

[02:24:06]

You know, a lot of people ask me about, you know, young people interested in doing artificial intelligence. They look at sort of, you know, doing graduate-level, even undergrad...

[02:24:19]

But graduate-level research, and they see where the world of the artificial intelligence community stands now: it's not really AGI-type research for the most part. Yeah. So the natural question to ask is, what advice would you give? I mean, maybe I could ask: if people were interested in working on OpenCog, or in some kind of direct or indirect connection to OpenCog or AGI research, what would you recommend?

[02:24:45]

OpenCog, first of all, is an open source project. There's a Google Group discussion list, there's a GitHub repository. So if anyone's interested in lending a hand with that aspect of AGI, introduce yourself on the OpenCog email list, and there's a Slack as well. I mean, we're certainly interested to have, you know, input into our redesign process for a new version of OpenCog, but also we're doing a lot of very interesting research.

[02:25:18]

I mean, we're working on data analysis for COVID clinical trials, we're working with Hanson Robotics, we're doing a lot of cool things with the current version of OpenCog now. So there's certainly the opportunity to jump into OpenCog or various other open source AGI-oriented projects. So would you say there are master's and Ph.D. theses in there? Plenty, yeah, plenty, of course. I mean, the challenge is to find a supervisor who wants to foster that sort of research, but it's way easier than it was when I got my Ph.D.

[02:25:50]

OK, great. We've talked about OpenCog, which is kind of, one, the software framework, but also the actual attempt to build an AGI system. And then there's this exciting idea of SingularityNET. So maybe, can you say first, what is SingularityNET? Sure, sure. SingularityNET is a platform for realizing a decentralized network of artificial intelligences. So Marvin Minsky, the AI pioneer who I knew a little bit, he had the idea of a society of minds, like you should achieve an AI not by writing one algorithm or one program, but you should put a bunch of different AIs out there, and the different AIs will interact with each other, each playing their own role, and then the totality of the society of AIs would be the thing that displayed the human-level intelligence.

[02:26:53]

And when he was alive, I had many debates with Marvin about this idea. And I think he really thought the mind was more like a society than I do. Like, I think you could have a mind that was as disorganized as a human society, but I think a human-like mind has a bit more central control than that, actually. I mean, we have this thalamus and the medulla and limbic system. We have a sort of top-down control system that guides much of what we do, more so than a society does.

[02:27:30]

So I think he stretched that metaphor a little too far, but I also think there's something interesting there. And so in the 90s, when I started my first sort of non-academic AI project, Webmind, which was an AI startup in New York in the Silicon Alley area in the late 90s, what I was aiming to do there was make a distributed society of AIs, the different parts of which would live on different computers all around the world, and each one would do its own thinking about the data local to it, but they would all share information with each other and outsource work to each other and cooperate.

[02:28:08]

And the intelligence would be in the whole collective. And I organized a conference together with Francis Heylighen at the Free University of Brussels in 2001, which was the Global Brain Zero conference. And we're planning the next version, the Global Brain One conference, at the Free University of Brussels for next year, 2021, so 20 years after. And then maybe we can have the next one 10 years after that, like, exponentially faster until the singularity comes. The timing is right.

[02:28:37]

Yeah, yeah, yeah.

[02:28:39]

So, yeah, the idea with the global brain was, you know, maybe the AI won't just be in one program on one guy's computer, but the AI will be, you know, in the Internet as a whole, with the cooperation of different modules living in different places. So one of the issues you face when architecting a system like that is, you know, how is the whole thing controlled? Do you have, like, a centralized control unit that pulls the puppet strings of all the different modules there?

[02:29:08]

Or do you have a fundamentally decentralized network where the society of AIs is controlled in some democratic and self-organized way by all the AIs in that society? Right. And Francis and I had different views on many things, but we both wanted to make, like, a global society of AI minds with a decentralized organizational mode. Now, the main difference was he wanted the individual AIs to all be incredibly simple and all the intelligence to be on the collective level, whereas...

[02:29:49]

I thought that was cool, but I thought a more practical way to do it might be if some of the agents in the society of minds were fairly generally intelligent on their own. So, like, you could have a bunch of OpenCogs out there and a bunch of simpler learning systems, and these are all cooperating and coordinating together. Sort of like in the brain: OK, the brain as a whole is the general intelligence, but some parts of the cortex, you could say, have a fair bit of general intelligence on their own, whereas, say, parts of the cerebellum or limbic system have very little intelligence on their own.

[02:30:21]

And they're contributing to general intelligence, you know, by way of their connectivity to other modules. Do you see instantiations of the same kind of, you know, maybe different versions of OpenCog, but also just many instantiations of the same version of OpenCog as part of this?

[02:30:38]

That's what David Hanson and I want to do with many Sophias and other robots, right? Each one has its own individual mind living on the server, but there's also a collective intelligence infusing them, and a part of them living on the edge in each robot, right? Yeah. So the thing is, at that time, as well as Webmind being implemented in Java 1.1 as, like, a massive distributed system, yeah, the, you know, the blockchain wasn't there yet.

[02:31:05]

So how do you do this decentralized control? You know, we sort of knew about it. We knew about distributed systems, we knew about encryption. So, I mean, we had the key principles of what underlies blockchain now, but we didn't put it together in the way that's been done. So when Vitalik Buterin and colleagues came out with Ethereum and the blockchain, you know, many, many years later, like 2013 or something, then I was like, well, this is interesting.

[02:31:31]

Like, this Solidity scripting language, it's kind of dorky in a way, and I don't see why you need a Turing-complete language for this purpose. But on the other hand, this is, like, the first time I could sit down and start to script infrastructure for decentralized control of AIs in a society of minds in a tractable way. Like, you can hack the Bitcoin codebase, but it's really annoying, whereas Solidity, the Ethereum scripting language, is just nicer and easier to use.

[02:32:01]

I'm very annoyed by it at this point, but, like Java, I mean, these languages are amazing when they first come out. So then I came up with the idea that turned into SingularityNET: OK, let's make a decentralized agent system where a bunch of different AIs, you know, wrapped up in, say, different Docker containers or LXC containers, can each have their own identity on the blockchain, and the coordination of this community of AIs has no central controller, no dictator.

[02:32:31]

Right. And there's no central repository of information. The coordination of the society of minds is done entirely by the decentralized network in a decentralized way, by the algorithms, right? Because, you know, the motto of Bitcoin is "in math we trust," right? And that's what you need: you need the society of minds to trust only in math, not to trust in one centralized server.

[02:32:55]

So the AI systems themselves are outside of the blockchain? Yeah.

[02:32:58]

But then the communication... Yeah, yeah. I would have loved to put the AIs' operations on-chain in some sense, but in Ethereum it's just too slow. You can't do that.

[02:33:11]

It's the basic communication between AI systems that's on the blockchain? Yeah, yeah. So basically, an AI in SingularityNET is just some software process living in a container, and there's a proxy that lives in that container along with the AI that handles the interaction with the rest of SingularityNET. And then when one AI wants to contract with another one in the network, they set up a number of channels, and the setup of those channels uses the Ethereum blockchain.

[02:33:40]

Once the channels are set up, then data flows along those channels without having to be on the blockchain. All that goes on the blockchain is the fact that some data went along that channel.
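As a loose illustration of that channel pattern, here is a toy Python sketch. All names are invented and the mock "chain" is just a list standing in for Ethereum: only the open and close events touch the chain, while the actual request traffic between two agents flows off-chain.

```python
# Toy sketch of an off-chain channel: the mock chain records only that
# a channel was opened and later closed; the per-request messages never
# touch it. Stand-in for the real Ethereum-based channel setup.

chain = []  # on-chain log: open/close events only

def open_channel(a, b, deposit):
    chain.append(("open", a, b, deposit))
    return {"from": a, "to": b, "deposit": deposit, "spent": 0, "messages": []}

def send_offchain(channel, payload, cost):
    # happens entirely off-chain: nothing is appended to `chain`
    if channel["spent"] + cost > channel["deposit"]:
        raise ValueError("deposit exhausted")
    channel["spent"] += cost
    channel["messages"].append(payload)

def close_channel(channel):
    chain.append(("close", channel["from"], channel["to"], channel["spent"]))

ch = open_channel("alice-ai", "bob-ai", deposit=10)
for i in range(5):
    send_offchain(ch, f"request-{i}", cost=1)
close_channel(ch)
print(len(chain), ch["spent"])  # 2 on-chain events, 5 units spent off-chain
```

Five requests produce only two chain entries, which is why this scales even though the underlying blockchain is slow.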

[02:33:51]

So there's not a shared knowledge base? Well, the identity of each agent is on the blockchain, on the Ethereum blockchain. If one agent rates the reputation of another agent, that goes on the blockchain. And agents can publish what APIs they will fulfill on the blockchain. But the actual data the AI operates on is not on the blockchain.

[02:34:16]

Do you think it could be? Do you think it should be? Um, in some cases it should be; in some cases maybe it shouldn't be. But I mean, I think... so, for example, using Ethereum, you can't do it. Now, there are more modern and faster blockchains where you could start to do that in some cases. Two years ago, that was less so. It's a very rapidly evolving ecosystem.

[02:34:43]

So, like, one example maybe you can comment on, something I worked on a lot, is autonomous vehicles. You can see each individual vehicle as an AI system, and you can see the vehicles from Tesla, for example, and then Ford and GM and all these, as also, like, the larger... I mean, they're all running the same kind of system in each set of vehicles. So individual AI systems in individual vehicles, but all different instantiations of the same system within the same company.

[02:35:15]

So, you know, you can envision a situation where all of those AI systems are put on SingularityNET, right? Yeah. And how do you see that happening, and what would be the benefit?

[02:35:29]

And could they share data? I guess one of the biggest things is the power that's in centralized control, but the benefit would be, it's really nice if they could somehow share the knowledge in an open way, if they choose to.

[02:35:44]

Yeah, yeah. Those are quite good points. So I think the benefit from being on the decentralized network, as we envision it, is that we want the AIs in the network to be outsourcing work to each other and making API calls to each other frequently. So the real benefit would be if an AI wanted to outsource some cognitive processing or data processing or data pre-processing, whatever, to some other AIs in the network which specialize in something different.

[02:36:19]

And this really requires a different way of thinking about software development, right? So just like object-oriented programming was different than imperative programming, and now, you know, programmers all use these frameworks to do things rather than just libraries even. You know, shifting to agent-based programming, where an agent is asking other, like, live, real-time, evolving agents for feedback on what they're doing, that's a different way of thinking. I mean, it's not a new one; there were loads of papers on agent-based programming in the 80s and onward.

[02:36:55]

But if you're willing to shift to an agent-based model of development, then you can put less and less in your AI and rely more and more on interactive calls to other AIs running in the network. And of course, that's not fully manifested yet, because although we've rolled out a nice working version of the SingularityNET platform, there are only 50 to 100 AIs running in there now. There are not tens of thousands of AIs, so we don't have the critical mass for the whole society of minds to be doing what we want.
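The agent-based style described here can be sketched in a few lines of Python. Everything below is illustrative (the registry, the agent names, the capabilities are all invented); the real platform does this discovery over a peer-to-peer network rather than an in-process dictionary.

```python
# Toy sketch of agent-based programming: instead of linking a library,
# an agent looks up another live agent that advertises a capability and
# sends it a request. The registry stands in for p2p discovery.

registry = {}  # capability -> (agent, handler)

class Agent:
    def __init__(self, name, capabilities):
        self.name = name
        for cap, fn in capabilities.items():
            registry[cap] = (self, fn)  # advertise what this agent can do

    def ask(self, capability, payload):
        # outsource work to whichever agent advertises the capability
        provider, fn = registry[capability]
        return fn(payload)

# Two service agents advertising one capability each (invented behaviors):
Agent("summarizer", {"summarize": lambda text: text[:10]})
Agent("shouter", {"shout": lambda text: text.upper()})

# A thin client agent that keeps almost no logic of its own:
client = Agent("client", {})
result = client.ask("shout", client.ask("summarize", "hello world, this is long"))
print(result)
```

The client does nothing itself; it composes two calls to other agents, which is the "put less in your AI, rely on the network" point.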

[02:37:29]

The magic really happens when there's a huge number of agents. Yeah, yeah. In terms of data, we're partnering closely with another project called Ocean Protocol. Ocean Protocol is the project of Trent McConaghy, who developed BigchainDB, which is a blockchain-based database. So Ocean Protocol is basically blockchain-based big data, aimed at making it efficient for different AI processes or statistical processes or whatever to share large data sets, or for one process to send a clone of itself to work on the other guy's data set and send results back, and so forth.

[02:38:07]

So, you know, you have the data ocean there, and by getting Ocean and SingularityNET to interoperate, we're aiming to take into account the big data aspect also. But it's quite challenging, because to build this whole decentralized blockchain-based infrastructure, I mean, your competitors are, like, Google, Microsoft, Alibaba and Amazon, which have so much money to put behind their centralized infrastructures. Plus, they're solving simpler algorithmic problems, because making it centralized in some ways is easier.

[02:38:44]

Right. So there are very major computer science challenges here. And I think what you saw with the whole ICO boom in the cryptocurrency world is a lot of young hackers who were hacking Bitcoin or Ethereum, and they'd say, well, why don't we make this or that decentralized on the blockchain? Then, after they raised the money through an ICO, they realized how hard it is. Like, actually, we're wrestling with incredibly hard computer science and software engineering and distributed systems problems, which can be solved, but they're just very difficult to solve.

[02:39:20]

And in some cases, the individuals who started those projects were not well equipped to actually solve the problems.

[02:39:30]

So would you say that's the main bottleneck? If you look at the future of currency, you know, the question is, will currency...

[02:39:39]

The main bottleneck is politics. It's governments, the bands of armed thugs that will shoot you if you bypass their currency restrictions. That's right. So your sense is that that's the bottleneck, versus the technical challenges? Because you kind of just suggested the technical challenges are quite high as well in making a distributed currency.

[02:39:56]

You can do that on Algorand right now. I mean, so while Ethereum is too slow, there's Algorand and there are a few other more modern, more scalable blockchains that would work fine for a decentralized global currency. So I think there were technical bottlenecks to that two years ago, and maybe Ethereum 2.0 will be as fast; I don't know, that's not fully rolled out yet, right? So I think the obstacle to currency being put on the blockchain is that currency will be on the blockchain, it will just be on the blockchain in a way that enforces centralized control and government hegemony rather than otherwise. Like, the RMB will probably be the first major national currency on the blockchain, maybe next year already.

[02:40:41]

Yeah, yeah.

[02:40:42]

I mean, a digital currency, you know, makes total sense, but they would rather do it in a way where Putin and Xi Jinping have access to the master keys for everything, right? So, and then, the analogy to that in terms of SingularityNET... I mean, there are echoes. I think you've mentioned before that Linux gives you hope. And AI is not as heavily regulated as money, right?

[02:41:07]

Yes. Right, not yet. And AI is a lot...

[02:41:10]

Slipperier than money, too. I mean, money is easier to regulate because it's kind of easier to define what it is, whereas AI is almost everywhere, inside everything. Where is the boundary between AI and software? I mean, if you're going to regulate AI, there's no IQ test for every hardware device that has a learning algorithm. You're going to be putting, like, hegemonic regulation on all software.

[02:41:34]

And I don't rule out that they could try to regulate any adaptive software. Yeah, but how do you tell if software is adaptive? And eventually every software is going to be adaptive.

[02:41:42]

I mean, or maybe, you know, maybe we are living in the golden age of open source, and it will not always be open. Maybe it'll become centralized control of software by governments.

[02:41:54]

It is entirely possible. And part of what I think we're doing with things like the SingularityNET protocol is creating a toolset that can be used to counteract that sort of thing. I'd say a similar thing about mesh networking, right? It plays a minor role now, the ability to access the Internet directly phone-to-phone. On the other hand, if your government starts trying to control your use of the Internet, suddenly having mesh networking there can be very convenient.

[02:42:27]

Right. And so right now, something like a decentralized blockchain-based AGI framework, or narrow AI framework, is cool, it's nice to have. On the other hand, if governments start trying to clamp down on my AI interoperating with someone's AI in Russia or somewhere, then suddenly having a decentralized protocol that nobody owns or controls becomes an extremely valuable part of the toolset. And, you know, we've put that out there now. It's not perfect.

[02:43:02]

But it operates, and, you know, it's pretty blockchain-agnostic. So we're talking to Algorand about making part of SingularityNET run on Algorand. And my good friend Toufi Saliba has a cool blockchain project called TODA, which is a blockchain without a distributed ledger; it's like a whole other architecture. So there are a lot of more advanced things you can do in the blockchain world. SingularityNET could be ported to a whole bunch of different blockchains.

[02:43:34]

And there's a lot of potential and a lot of importance to putting this kind of toolset out there. If you compare it to OpenCog, what you could see is: OpenCog allows tight integration of a few AI algorithms that share the same knowledge store in real time, in RAM. SingularityNET allows loose integration of multiple different AIs. They can share knowledge, but they're mostly not going to be sharing knowledge in RAM on the same machine. And I think what we're going to have is a network of networks of networks, like, I mean, you have the knowledge graph inside the OpenCog system, and then you have a network of machines inside a distributed OpenCog.

[02:44:23]

But then that OpenCog will interface with other AIs doing deep neural nets or custom biology data analysis or whatever they're doing, in SingularityNET, which is a looser integration of different AIs, some of which may be their own networks, right? And as a very loose analogy, you can see that in the human body. Like, the brain has regions such as cortex or hippocampus, which tightly interconnect, like cortical columns within the cortex, for example. Then there's a looser connection between the different lobes of the brain, and then the brain interconnects with the endocrine system and different parts of the body even more loosely.

[02:45:05]

Then your body interacts even more loosely with the other people that you talk to. So you often have networks within networks within networks, with progressively looser coupling as you get higher up in that hierarchy. I mean, you have that in biology, you have that in the Internet as just a networking medium, and I think that's what we're going to have in the network of software processes leading to AGI.

[02:45:33]

That's a beautiful way to see the world.

[02:45:35]

Again, a similar question as with OpenCog: if somebody wanted to build an AI system and plug it into SingularityNET, what would you recommend?

[02:45:46]

So that's much easier. I mean, OpenCog is still a research system, so it takes some expertise. Sometimes we have tutorials, but it's somewhat cognitively labor-intensive to get up to speed on OpenCog. And, I mean, one of the things we hope to change with the TrueAGI, OpenCog 2.0 version is to make the learning curve more similar to TensorFlow or PyTorch or something. Right now, OpenCog is amazingly powerful, but not simple to deal with.

[02:46:17]

On the other hand, SingularityNET, you know, as an open platform, was developed a little more with usability in mind, although the blockchain is still kind of a pain. I mean, if you're a command-line guy, there's a command-line interface. It's quite easy to take any AI that has an API and lives in a Docker container and put it online anywhere, and then it joins the global SingularityNET. And for anyone who puts a request for services out into SingularityNET, the peer-to-peer discovery mechanism will find your AI.

[02:46:51]

And if it does what was asked, they can then start a conversation with your AI about whether they want to ask your AI to do something for them, how much it would cost, and so on. So that's fairly simple. If you wrote an AI and want it listed on, like, the official SingularityNET marketplace, which is on our website, then we have a publisher portal, and there's a KYC process to go through, because then we have some legal liability for what goes on on that website.

[02:47:22]

So, in a way, that's been an education to us: there are sort of two layers. There's the open, decentralized protocol, and there's the marketplace.

[02:47:30]

Anyone can use the decentralized protocol. So, say, some developers from Iran, and there are brilliant AI guys at the University of Isfahan and in Tehran, they can put their stuff on the SingularityNET protocol, just like they can put something on the Internet. I don't control it. But if we're going to list something on the SingularityNET marketplace and put a little picture and a link to it, then if I put some Iranian genius's code on there, Donald Trump can send a bunch of jackbooted thugs to my house to arrest me for doing business with Iran.

[02:48:03]

So, I mean, we already see in some ways the value of having a decentralized protocol, because what I hope is that someone in Iran will put online an Iranian SingularityNET marketplace, right, which you can pay in the cryptographic token, which is not owned by any country. And then if you're in, like, Congo or somewhere that doesn't have any problem with Iran, you can subcontract AI services that you find on that marketplace, right, even though US citizens can't, by US law.

[02:48:33]

So right now, that's kind of a minor point. As you alluded, if regulations go in the wrong direction, it could become more of a major point. But I think it's also the case that having these workarounds to regulations in place is a defense mechanism against those regulations being put into place. And you can see that in the music industry, right? I mean, Napster just happened, BitTorrent just happened. And now most people in my kids' generation are baffled by the idea of paying for music.

[02:49:06]

I mean, my dad pays for music. Yeah, but that's because these decentralized mechanisms happened and then the regulations followed, right? And the regulations would be very different if they had been put into place before there was Napster and BitTorrent and so forth. In the same way, we've got to put AI out there in a decentralized vein, and big data out there in a decentralized vein, now, so that the most advanced AI in the world is fundamentally decentralized.

[02:49:35]

And if that's the case, that's just the reality the regulators have to deal with. And then, as in the music case, they're going to come up with regulations that sort of work with the decentralized reality. Beautiful. You are the chief scientist of Hanson Robotics. You're still involved with Hanson Robotics, doing a lot of really interesting stuff there. This is, for people who don't know, the company that created Sophia the robot. Can you tell me who Sophia is?

[02:50:08]

I'd rather start by telling you who David Hanson is. David is the brilliant mind behind the Sophia robot, and so far he remains more interesting than his creation, although she may be improving faster than he is, actually. I mean... yeah. That's a good point.

[02:50:28]

I met David maybe 2007 or something, at some futurist conference we were both speaking at, and I could see we had a great deal in common. I mean, we were both kind of crazy, but we also both had a passion for AGI and the singularity, and we were both huge fans of the work of the science fiction writer Philip K. Dick. And I wanted to create benevolent AGI that would, you know, create massively better life for all humans and all sentient beings, including animals, plants and superhuman beings.

[02:51:07]

And David, he wanted exactly the same thing, but he had a different idea of how to do it. He wanted to get to computational compassion, like, he wanted to get machines that would love people and empathize with people. And he thought the way to do that was to make a machine that could, you know, look people in the eye, face-to-face, look at people and make people love the machine, and the machine loves the people back.

[02:51:34]

So I thought that was a very different way of looking at it, because I'm very math-oriented, and I'm just thinking, like, what is the abstract cognitive algorithm that will let the system internalize the complex patterns of human values, blah, blah, blah, whereas he's like, look you in the face, in the eye, and love you. And so we hit it off quite well, and we talked to each other off and on. Then I moved to Hong Kong in 2011.

[02:52:06]

So I've been, I mean, I've been living all over the place. I was in Australia and New Zealand for my academic career, then in Las Vegas for a while, was in New York in the late 90s starting my entrepreneurial career, was in DC for nine years doing a bunch of US government consulting stuff, then moved to Hong Kong in 2011, mostly because I met a Chinese girl who I fell in love with, and we got married. She's actually not from Hong Kong.

[02:52:34]

She's from mainland China, but we converged together in Hong Kong. So we're married now, have a two-year-old baby. So I went to Hong Kong to see about a girl, I guess.

[02:52:44]

Yeah, pretty much, yeah. And on the other hand, I started doing some cool research there with Hong Kong Polytechnic University. I got involved with a project called Aidyia using machine learning for stock and futures prediction, which was quite interesting. And I also got to know something about the consumer electronics and hardware manufacturing ecosystem in Shenzhen, across the border, which is like the only place in the world where it makes sense to make complex consumer electronics at large scale and low cost.

[02:53:15]

It's just astounding, the hardware ecosystem that you have in South China. Like, people here in the US cannot imagine what it's like.

[02:53:24]

So David was starting to explore that also. I invited him to Hong Kong to give a talk at Hong Kong PolyU, and I introduced him in Hong Kong to some investors who were interested in his robots. He didn't have Sophia then; he had a robot of Philip K. Dick, our favorite science fiction writer, he had a robot Einstein, he had some little toy robots that looked like his son, Zeno. So through the investors I connected him to, he managed to get some funding to basically move Hanson Robotics to Hong Kong.

[02:53:57]

And when he first moved to Hong Kong, I was working on AGI research and also on this machine learning trading project, so I didn't get that totally involved with Hanson Robotics. But as I hung out with David more and more, as we were both there in the same place, I started to think about what you could do to make his robots smarter than they were. And so we started working together, and for a few years I was chief scientist and head of software at Hanson Robotics.

[02:54:33]

Then, when I got deeply into the blockchain side of things, I stepped back from that and co-founded SingularityNET. David Hanson was also one of the co-founders of SingularityNET. So part of our goal there had been to make the blockchain-based, like, cloud mind platform for Sophia and the other Hanson robots, where Sophia would be just one of the robots in this integration.

[02:54:59]

Yeah, yeah, exactly. So many copies of the Sophia robot would be, you know, among the user interfaces to the globally distributed SingularityNET cloud mind. And, I mean, David and I talked about that for quite a while before co-founding SingularityNET. By the way, in his vision.

[02:55:20]

And your vision, was Sophia tightly coupled to a particular AI system, or was the idea that you could just keep plugging in different systems?

[02:55:31]

I think David's view was always that Sophia would be a platform, much like the Pepper robot is a platform from SoftBank: a platform with a set of nicely designed APIs that anyone can use to experiment with their different AI algorithms on that platform. And SingularityNET, of course, fits right into that, right? Because SingularityNET is an AI marketplace, so anyone can put their AI on there. OpenCog is a little bit different. I mean, David likes it, but I'd say it's my thing, it's not his.

[02:56:10]

David has a little more passion for biologically based approaches to AI than I do, which makes sense. I mean, he's really into human physiology and biology; he's a character sculptor, right? So, yeah, he's interested in that, but he also works a lot with rule-based and logic-based systems. So, yeah, he's interested in not just Sophia, but all Hanson robots as a powerful social and emotional robotics platform. And you know what I saw in Sophia?

[02:56:43]

It was a way to, you know, get AI algorithms out there in front of a whole lot of different people in an emotionally compelling way. And part of my thought was really kind of abstract, connected to AGI ethics. And, you know, many people are concerned AGI is going to enslave everybody, or turn everybody into computronium to make extra hard drives for their cognitive engine, or whatever. And, you know, emotionally, I'm not driven to that sort of paranoia.

[02:57:19]

I'm really just an optimist by nature, but intellectually, I have to assign a non-zero probability to those sorts of nasty outcomes, because if you're making something ten times as smart as you, how can you know what it's going to do? There's an irreducible uncertainty there, just as my dog can't predict what I'm going to do tomorrow. So it seemed to me that, based on our current state of knowledge, the best way to bias the AGIs we create toward benevolence would be to infuse them with love and compassion, the way that we do our own children.

[02:57:59]

So you want to interact with AIs in the context of doing compassionate, loving and beneficial things, and in that way, just as your children learn by doing compassionate, beneficial, loving things alongside you, the AI will learn in practice what it means to be compassionate, beneficial and loving. It will get a sort of ingrained, intuitive sense of this, which it can then abstract in its own way as it gets more and more intelligent. Now, David saw this the same way.

[02:58:30]

That's why he came up with the name Sophia, which means wisdom. So it seemed to me making these, like, beautiful, loving robots to be rolled out for beneficial applications would be the perfect way to roll out early-stage AGI systems, so they can learn from people, and in fact learn human values and ethics from people, while being, you know, their home service robots, their education assistants, their nursing robots.

[02:59:02]

That was the grand vision. Now, if you've ever worked with robots, the reality is quite different, right? Like, the first principle is the robot is always broken. I worked with robots in the 90s a bunch, when you had to solder them together yourself, and I put neural nets doing reinforcement learning on, like, overturned salad bowl type robots in the 90s when I was a professor. Things, of course, have advanced a lot, but the principle that the robot is always broken still holds.

[02:59:32]

So, yeah, faced with the reality of making the robot do stuff, many of my robot AGI aspirations were temporarily cast aside. And, I mean, there's just a practical problem of making this robot interact in a meaningful way, because, you know, you put nice computer vision on there, but there's always glare. Or if you have a dialogue system, well, at the time I was there, like, no speech-to-text algorithm could deal with Hong Kong people's English accents.

[03:00:07]

So the speech-to-text was always bad, so the robot always sounded stupid, because it wasn't getting the right text, right? So I started to view that really as, well, what in software engineering you call a walking skeleton, which is maybe the wrong metaphor to use for Sophia, or maybe the right one. I mean, a walking skeleton in software development is, if you're building a complex system, how do you get started? Well, one way is to first build part one,

[03:00:33]

well, then build part two, well, then build part three, and so on. And the other way is you make, like, a simple version of the whole system, and put something in the place of every part the whole system will need, so that you have a whole system that does something, and then you work on improving each part in the context of that whole integrated system. So that's what we did on the software level in Sophia: we made, like, a walking skeleton software system, where there's something that sees, there's something that hears, there's something that moves,

[03:01:03]

there's something that remembers, there's something that learns. You put a simple version of each thing in there, and you connect them all together so that the system will do its thing. So there's a lot of AI in there; there's not any AGI in there. I mean, there is computer vision to recognize people's faces, recognize when someone comes in the room and leaves, to recognize whether two people are together or not. And the dialogue system, it's a mix of, like, hand-coded rules with deep neural nets that come up with their own responses.

[03:01:39]

And there's some attempt to have a narrative structure, to sort of try to pull the conversation into something with a beginning, middle and end, this sort of story arc.
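The walking-skeleton pattern described here can be sketched in a few lines of code. To be clear, this is a hypothetical illustration of the pattern itself, not Sophia's actual software; all the component and method names below are invented.

```python
# A minimal "walking skeleton" sketch (hypothetical component names):
# every part of the system exists in a trivial stub form, wired into
# one end-to-end pipeline, so each piece can be upgraded independently
# while the whole system keeps running.

class Perception:
    def see(self, frame):
        # stub: pretend we detected a face in every frame
        return {"face_detected": True, "frame": frame}

class Memory:
    def __init__(self):
        self.events = []

    def remember(self, event):
        self.events.append(event)

class Dialogue:
    def respond(self, percept):
        # stub: a canned response stands in for a real dialogue system
        return "Hello!" if percept["face_detected"] else "..."

class Robot:
    """The whole system does *something* end to end from day one."""
    def __init__(self):
        self.perception = Perception()
        self.memory = Memory()
        self.dialogue = Dialogue()

    def step(self, frame):
        percept = self.perception.see(frame)
        self.memory.remember(percept)
        return self.dialogue.respond(percept)

robot = Robot()
print(robot.step("frame-001"))   # Hello!
print(len(robot.memory.events))  # 1
```

The point of the pattern is that improving, say, `Perception` to a real vision model later requires no change to the overall loop.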

[03:01:49]

So, I mean, like, if you look at the Loebner Prize and the systems that come out on top in those Turing test competitions, currently they're heavily rule-based, because, like you said, the narrative structure needed to create compelling conversations is something neural networks currently cannot do well. Even with Google Meena, when you actually look at full-scale conversations, it's just not there. So this is a thing.

[03:02:11]

So we've been, I've actually been running an experiment the last couple of weeks, taking Sophia's chatbot and then Facebook's transformer chatbot, for which they released the model. We've had them chatting to each other for a number of weeks on the server, and that way we're generating training data of what Sophia says in a wide variety of conversations. But we can see, compared to Sophia's current chatbot, the Facebook chatbot comes up with a wider variety of fluent-sounding sentences; on the other hand, it rambles like mad. The Sophia chatbot is a little more repetitive in the sentence structures it uses.

[03:02:54]

On the other hand, it's able to keep, like, a conversation arc over a much longer period. Now, you can probably surmount that using Reformer and, like, using various other deep neural architectures to improve the way these transformer models are trained. But in the end, neither one of them really understands what's going on, and, I mean, that's the challenge with Sophia. If I were doing a robotics project aimed at AGI, I would want to make, like, a robot toddler that was just learning about what it was seeing, because then the language is grounded in the experience of the robot.
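The experiment described above, two chatbots conversing with each other to generate training data, can be sketched roughly as follows. The two bot functions here are invented stubs standing in for the real Sophia and Facebook models, just to show the logging structure.

```python
import random

def sophia_bot(msg):
    # stub standing in for Sophia's chatbot: repetitive
    # sentence structure, but stays on the conversation arc
    return f"That's interesting. Tell me more about {msg.split()[-1]}"

def transformer_bot(msg):
    # stub standing in for the transformer chatbot:
    # fluent-sounding but rambling; seeded for determinism
    rng = random.Random(len(msg))
    return rng.choice([
        "I love talking about robots and also cheese.",
        "The weather reminds me of my grandmother's garden.",
    ])

def generate_training_data(turns=4, opener="Hello Sophia"):
    """Let the two bots chat; log (context, response) pairs."""
    pairs, msg = [], opener
    bots = [sophia_bot, transformer_bot]
    for i in range(turns):
        reply = bots[i % 2](msg)  # alternate speakers each turn
        pairs.append((msg, reply))
        msg = reply
    return pairs

data = generate_training_data()
print(len(data))  # 4
```

Each logged `(context, response)` pair could then be used as supervised training data for a new dialogue model, which is the idea Goertzel describes.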

[03:03:32]

But what Sophia needs to do to be Sophia is to talk about sports, or the weather, or robotics, or the conference she's speaking at. Yeah, she needs to be fluent talking about any damn thing in the world, and she doesn't have grounding for all those things. And, I mean, Google Meena and the Facebook stuff don't have grounding for what they're talking about either. So in a way, the need to speak fluently about things where there is no non-linguistic grounding pushes what you can do for Sophia, in the short term, a bit away from AGI, toward an IBM Watson situation, where you basically have to do heuristic and hard-coded stuff,

[03:04:21]

And Rule-based, I have to ask you about this. OK, so.

[03:04:26]

Because, you know, in part, Sophia is an art creation, because it's beautiful. She's beautiful because she inspires, through our human nature of anthropomorphizing things; we immediately see an intelligent being there, because David is a great sculptor. That's right. So, in fact, if Sophia just had nothing inside her head, said nothing, if she just sat there, we'd already ascribe some intelligence to her; there'd be a long line in front of Sophia all the time.

[03:05:06]

That's right. So it captivated the imagination of many people. I was going to say the world, but, yeah, I mean, a lot of people, billions of people, which is amazing.

[03:05:17]

It's amazing.

[03:05:18]

Right. Now, of course, many people have ascribed much greater, essentially AGI-type, capabilities to Sophia when they see her.

[03:05:29]

And, of course, from the AI community, folks like Yann LeCun immediately see through that, and people from the community get really frustrated, understandably so.

[03:05:45]

And then they criticize people like you, who sit back and don't say anything about it, who basically allow the imagination of the world, allow the world, to continue being captivated.

[03:06:01]

So what's your sense of that kind of annoyance that the AI community has?

[03:06:08]

I think there's several parts to my reaction there. First of all, if I weren't involved with Hanson Robotics and didn't know David Hanson personally, I probably would have been very annoyed initially at Sophia as well. I mean, I can understand the reaction. I would have been like, wait, all these stupid people out there think this is an AGI, but it's not an AGI, and they're tricking people that this very cool robot is an AGI. And now those of us, you know, trying to raise funding to build AGI, well, people will think it's already there and already works.

[03:06:48]

Right. So I.

[03:06:49]

Yeah. On the other hand, I think, even if I weren't directly involved with it, once I dug a little deeper into David and the robot and the intentions behind it, I think I would have stopped being pissed off, whereas folks like Yann LeCun have remained pissed off after their initial reaction. That's something that struck me as somewhat ironic, because Yann LeCun is working for Facebook, which is using machine learning to program the brains of the people in the world toward vapid consumerism and political extremism.

[03:07:32]

So if your ethics allows you to use machine learning in such a blatantly destructive way, why would your ethics not allow you to use machine learning to make a lovable theatrical robot that draws some foolish people into its theatrical illusion? Like, if the pushback had come from Yoshua Bengio, I would have felt much more humbled by it, because he's not using AI for blatant evil. On the other hand, he also is a super nice guy and doesn't bother to go out there trashing other people's work for no good reason.

[03:08:11]

Some shots fired. But, I mean, if you're going to ask, I'm going to answer, for sure. I think we'll go back and forth. I'll talk to Yann again.

[03:08:21]

I would add on this, though.

[03:08:23]

I mean, David Hanson is an artist, and he often speaks off the cuff. And I have not agreed with everything that David has said or done regarding Sophia, and David also does not agree with everything David has said or done at different points.

[03:08:43]

I mean, David is an artistic wild man, and that's part of his charm, that's part of his genius. So certainly there have been conversations within Hanson Robotics, and between me and David, where I was like, let's be more open about how this thing is working. And I did have some influence in nudging Hanson Robotics to be more open about how Sophia was working, and David wasn't especially opposed to this. And, you know, he was actually quite right about it.

[03:09:19]

What he said was, you can tell people exactly how it's working.

[03:09:24]

And they won't care; they want to be drawn into the illusion. And he was 100 percent correct. I'll tell you, this wasn't Sophia, this was Philip K. Dick, but we did some interactions between humans and the Philip K. Dick robot

[03:09:39]

in Austin, Texas, a few years back. And in this case, the robot was just operated by another human in the other room. So during the conversations, we didn't tell people the robot was teleoperated. We just said, here, have a conversation with Phil Dick, we're going to film you, right? And they had a great conversation with Philip K. Dick, operated by my friend Stephan. After the conversation, we brought the people into the back room to see Stephan, who was controlling the Philip K.

[03:10:08]

Dick robot, but they didn't believe it. These people were like, well, yeah, but I know I was talking to Phil. Like, maybe Stephan was typing, but the spirit of Phil was animating his mind while he was typing. Yeah.

[03:10:22]

So even though they knew there was a human in the loop, even seeing the guy there, they still believed that was Phil they were talking to. A small part of me believes that they were right, actually, because, I mean, we don't understand the universe.

[03:10:37]

There is a cosmic mind field that we're all embedded in that yields really strange synchronicities in the world, which is a topic we don't have time to go into too much here. I mean, there's something to this, where our imagination about Sophia, and people like Yann LeCun being frustrated about it, is all part of this beautiful dance of creating artificial intelligence that's almost essential.

[03:11:06]

You see it with Boston Dynamics, which I'm a huge fan of as well. You know, I mean, these robots are very far from intelligent, but I played with their last one, actually, the Spot Mini.

[03:11:20]

Yeah, very cool. It reacts in quite a fluid and flexible way, but we immediately ascribe intelligence to it. Yeah, yeah.

[03:11:30]

If you kick it and it falls down and cries out, you feel bad; you can't help it.

[03:11:34]

And, I mean, that's going to be part of our journey in creating intelligent systems, more and more, like, as Sophia starts out with the walking skeleton and you add more and more intelligence. I mean, we're going to have to deal with this kind of idea. About Sophia, I would say, I mean, first of all, I have nothing against Yann LeCun.

[03:11:58]

He's a nice guy. If he wants to play the media game, I'm happy to play him. He's a good researcher and a good human being, and I'd happily work with the guy. But, yeah, the other thing I was going to say is, I have been explicit about how Sophia works,

[03:12:17]

and I posted, in H+ Magazine, an online webzine, I mean, I posted a moderately detailed article explaining, like, there are three software systems we've used inside Sophia. There's a timeline editor, which is like a rule-based authoring system, where she's really just being an outlet for the human scripting. There's a chatbot, which has some learning and some rule-based aspects. And then sometimes we use OpenCog behind Sophia, where there's more learning and reasoning.

[03:12:49]

And, you know, the funny thing is, I can't always tell which system is operating, right? I mean, whether she's really learning, or thinking, or just appears to be: over a half hour I could tell, but over, like, three or four minutes of interaction, I couldn't always. Even having three systems, it's already sufficiently complex that you can't really tell right away. Yeah.

[03:13:10]

The thing is, even if you get up on stage and tell people how Sophia is working, and then they talk to her, they still attribute more agency and consciousness to her than is really there. So I think there's a couple levels of ethical issue there. One issue is, should you be transparent about how Sophia is working? And I think you should, and I think we have been. I mean, there's articles online, there's some TV special that goes through me explaining the three subsystems behind Sophia.

[03:13:52]

So the way Sophia works is out there much more clearly than how Facebook works or something. I mean, we've been fairly explicit about it. The other issue is, given that telling people how it works doesn't cause them to not attribute too much intelligence and agency to it anyway, then should you keep fooling them when they want to be fooled? And, I mean, you know, the whole media industry is based on fooling people the way they want to be fooled.

[03:14:24]

And we are fooling people 100 percent toward a good end. I mean, we are playing on people's sense of empathy and compassion so that we can give them a good user experience with helpful robots, and so that we can fill the AI's mind with love and compassion. So, I mean, I've been talking a lot with Hanson Robotics lately about collaborations in the area of medical robotics. And we haven't quite pulled the trigger on a project in that domain yet, but we may well do so quite soon.

[03:15:02]

So we've been talking a lot about, you know, robots that can help with elder care, robots interacting with kids. David's done a lot of things with autism therapy and robots before. In the covid era, having a robot that can be a nursing assistant in various senses can be quite valuable. The robots don't spread infection, and they can also deliver more attention than human nurses can give, right? So if you have a robot that's helping a patient with covid, if that patient attributes more understanding and compassion and agency to that robot than it really has, because it looks like a human,

[03:15:37]

I mean, is that really bad? I mean, we can tell them it doesn't fully understand you, and they don't care, because they're lying there with a fever and they're sick. But they'll react better to that robot, with its loving, warm facial expression, than they would to a Pepper robot or a metallic-looking robot. So it's really about how you use it, right? If you made a human-looking, like, door-to-door sales robot that used its human-looking appearance to scam people out of their money,

[03:16:07]

yeah, then you're using that connection in a bad way. But you can also use it in a good way. But then, that's the same problem with every technology, right? Beautifully put.

[03:16:20]

So, like you said, we're living in the era of covid. This is 2020, one of the craziest years in recent history.

[03:16:32]

So if we zoom out and look at this pandemic, the coronavirus pandemic.

[03:16:41]

Maybe let me ask you about this kind of thing, intelligence in viruses in general. When you look at viruses, do you see them as a kind of intelligent system?

[03:16:53]

I think the concept of intelligence is not that natural of a concept, in the end. I think human minds and bodies are a kind of complex self-organizing adaptive system, and viruses certainly are that, right? They're very complex self-organizing adaptive systems. If you want to look at intelligence as Marcus Hutter defines it, as sort of optimizing computable reward functions over computable environments, for sure viruses are doing that, right? And, I mean, in doing so, they're causing some harm to us.

[03:17:30]

And so, there, you know, the human immune system is a very complex self-organizing adaptive system, which has a lot of intelligence to it, and viruses are also adapting, and dividing into new mutant strains and so forth. And ultimately, the solution is going to be nanotechnology, right? I mean, the solution is going to be making little nanobots that fight the viruses, or...

[03:17:55]

Well, people will use them to make nastier viruses, but hopefully we can also use them to just detect, combat, and kill the viruses. But for now, we're stuck with biological mechanisms to combat these viruses. And, you know, AGI is not yet mature enough to use against covid, but we've been using machine learning, and also some machine reasoning in OpenCog, to help some doctors to do personalized medicine against covid. So the problem there is: given the person's genomics, and given their clinical medical indicators, how do you figure out which combination of antivirals is going to be most effective against covid for that person?

[03:18:41]

And so that's something where machine learning is interesting, but also we're finding the abstraction we get in OpenCog with machine reasoning is interesting, because it can help with transfer learning, when you have not that many different cases to study, and qualitative differences between different strains of a virus, or people of different ages who may have covid.

[03:19:04]

So there's a lot of different disparate data to work with, and small datasets, and somehow integrating them.

[03:19:10]

You know, this is one of the shameful things: it's very hard to get that data. So, I mean, we're working with a couple of groups doing clinical trials, and they're sharing data with us under non-disclosure.

[03:19:24]

But what should be the case is, like, every covid clinical trial should be putting data online somewhere, suitably encrypted to protect patient privacy, so that anyone with AI algorithms should be able to help analyze it, and any biologist should be able to analyze it by hand, to understand what they can, right? Instead, that data is siloed inside whatever hospital is running the clinical trial, which is completely asinine and ridiculous. Why does the world work that way? I mean, we can all analyze why, but it's insane that it does.

[03:19:58]

Look at this hydroxychloroquine business, right? All these clinical trials were done, reported by Surgisphere, some little company no one ever heard of, and everyone paid attention to this. So they were doing more clinical trials based on that, and then they stopped doing clinical trials based on that, then they started again. And why isn't that data just out there, so everyone can analyze it and see what's going on? I hope that data will be out there eventually, for future pandemics.

[03:20:29]

I mean, do you have hope that our society will move in that direction? Not in the immediate future, because the US and China frictions are getting very high, so it's hard to see the US and China moving in the direction of openly sharing data with each other, right? There's some sharing of data, but different groups are keeping their data private until they've milked the best results from it, and then they share it. So, yeah, we're working with some data that we've managed to get our hands on, and doing something to do good for the world.

[03:21:00]

And it's a very cool playground for, like, putting deep neural nets and OpenCog together. So we have, like, a bio-AtomSpace full of all sorts of knowledge from many different biology experiments about human longevity, and from biology knowledge bases online, and we can do, like, graph-to-vector type embeddings, where we take nodes from the hypergraph, embed them into vectors, which can then feed into neural nets for different types of analysis. And we were doing this in the context of a project called Rejuve, which we spun off from SingularityNET to do longevity analytics.
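The graph-to-vector idea mentioned here can be illustrated with a toy sketch: take random walks over a small graph and use co-occurrence counts as crude node vectors that a downstream neural net could consume. Real pipelines would learn embeddings (node2vec-style skip-gram training); the graph, node labels, and functions below are invented for illustration.

```python
import random

# Toy knowledge graph with invented biology-flavored node labels.
graph = {
    "geneA": ["longevity", "pathway1"],
    "geneB": ["pathway1"],
    "longevity": ["geneA"],
    "pathway1": ["geneA", "geneB"],
}

def random_walks(graph, length=5, walks_per_node=20, seed=0):
    """Collect short random walks starting from every node."""
    rng = random.Random(seed)  # seeded for reproducibility
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(length - 1):
                node = rng.choice(graph[node])
                walk.append(node)
            walks.append(walk)
    return walks

def embed(walks, nodes):
    """Co-occurrence counts as a crude stand-in for learned embeddings."""
    index = {n: i for i, n in enumerate(nodes)}
    vecs = {n: [0.0] * len(nodes) for n in nodes}
    for walk in walks:
        for a in walk:
            for b in walk:
                if a != b:
                    vecs[a][index[b]] += 1.0
    return vecs

vecs = embed(random_walks(graph), list(graph))
# Each node now has a fixed-length numeric vector that a neural net
# (or a gradient-boosted tree model) could take as input features.
print(len(vecs["geneA"]))  # 4
```

The payoff is the one Goertzel describes: symbolic graph knowledge becomes dense vectors that connect cleanly to neural analysis.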

[03:21:36]

To understand why some people live to a hundred and five years or over, and other people don't. And then we have the spinoff Singularity Studio, where we're working with some health care companies on data analytics. But this bio-AtomSpace that we built for these more commercial and longevity data analysis purposes, we're repurposing, putting covid data into the same AtomSpace, and playing around with, like, graph embeddings from that graph into neural nets for bioinformatics. So it's been a cool testing ground for some of our learning and reasoning.

[03:22:14]

And it seems we're able to discover things that people weren't seeing otherwise, because the thing in this case is, for each combination of antivirals, you may have only a few patients who've tried that combination, and those few patients may have their particular characteristics: like, this combination of three was tried only on people aged 80 or over; this other combination of three, which has an overlap with the first combination, was tried more on young people. So how do you combine those different pieces of data?

[03:22:42]

It's a very dodgy transfer learning problem, which is the kind of thing that the probabilistic reasoning algorithms we have inside OpenCog are better at than deep neural networks. On the other hand, you have gene expression data, where we have twenty-five thousand genes and the expression level of each gene in the peripheral blood of each person. That sort of data, either deep neural nets or tools like XGBoost or CatBoost, these decision forests, are better at dealing with than OpenCog, because it's just these huge, massive floating-point vectors that are annoying for a logic engine to deal with, but are perfect for a decision forest or a neural net.
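The division of labor described here can be sketched as a simple dispatcher: qualitative, small-sample clinical records go to a symbolic reasoning component, while dense floating-point vectors go to a numeric learner. Both components below are invented single-rule stubs, not OpenCog or XGBoost; the record fields and thresholds are made up for illustration.

```python
def reasoning_engine(record):
    # stub: one hand-written qualitative rule, standing in for
    # probabilistic logic reasoning over sparse clinical records
    if record.get("age_group") == "80+" and "antiviralX" in record["drugs"]:
        return "caution"
    return "candidate"

def numeric_learner(vector):
    # stub: threshold on mean expression level, standing in for
    # a decision forest or neural net over dense feature vectors
    return "high" if sum(vector) / len(vector) > 0.5 else "low"

def analyze(sample):
    """Route each sample to the tool suited to its data type."""
    if isinstance(sample, dict):        # sparse, symbolic clinical record
        return reasoning_engine(sample)
    return numeric_learner(sample)      # dense gene-expression vector

print(analyze({"age_group": "80+", "drugs": ["antiviralX"]}))  # caution
print(analyze([0.9, 0.8, 0.7]))                                # high
```

In the hybrid setup Goertzel describes, the two components would additionally exchange conclusions as agents on SingularityNET rather than sit behind a single dispatcher.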

[03:23:19]

So it's a great playground for, like, hybrid AI methodology, and in SingularityNET we can have OpenCog in one agent and a deep neural net in a different agent, and they talk to each other. But at the same time, it's highly practical, because we're working with, for example, some physicians on this project, the physicians in a group called Nth Opinion, based out of Vancouver and Seattle. These guys are working every day, like, in the hospital with patients dying of covid.

[03:23:53]

So it's quite cool to see, like, neuro-symbolic AI, like, where the rubber hits the road, trying to save people's lives.

[03:24:02]

Like, I've been doing bio-AI since 2001, but mostly human longevity research and fly longevity research, trying to understand why some organisms really live a long time. This is the first time it's been like a race against the clock, trying to use the AI to figure out stuff where, if we take two months longer to solve the problem, some more people will die, because we don't know what combination of antivirals to give.

[03:24:30]

Yeah. At the societal level, the biological level, at any level, are you hopeful about us as a human species getting out of this pandemic?

[03:24:42]

What are your thoughts, in general? I think the pandemic will be gone in a year or two, once there's a vaccine for it.

[03:24:47]

So, I mean, a lot of pain and suffering can happen in that time. So, I mean, that could be irreversible.

[03:24:55]

I think if you spend much time in sub-Saharan Africa, you can see there's a lot of pain and suffering happening all the time. Like, you walk through the streets of any large city in sub-Saharan Africa, and there are loads, I mean tens of thousands, probably hundreds of thousands, of people lying by the side of the road, dying, mainly of curable diseases, without food or water, either ostracized by their families, or they left the family house because they didn't want to infect their family.

[03:25:28]

I mean, there's tremendous human suffering on the planet all the time, which most folks in the developed world pay no attention to, and covid is not remotely the worst of it. How many people are dying of malaria all the time? I mean, so covid is bad, but it is by no means the worst thing happening. And setting aside diseases, I mean, there are many places in the world where you're at risk of having your teenage son kidnapped by armed militias and forced to get killed in someone else's war, fighting tribe against tribe.

[03:26:04]

I mean, so humanity has a lot of problems which we don't need to have, given the state of advancement of our technology right now. And I think covid is one of the easier problems to solve, in the sense that there are many brilliant people working on vaccines; we have the technology to create vaccines, and we're going to create new vaccines. We should be more worried that we haven't managed to defeat malaria after so long, and after the Gates Foundation and others putting so much money into it.

[03:26:35]

I mean, I think clearly the whole global medical system, the global health system, and the global political and socioeconomic system are incredibly unethical and unequal and badly designed. And, I mean, I don't know how to solve that directly. I think what we can do indirectly to solve it is to make systems that operate in parallel, and off to the side of, the governments that are nominally controlling the world with their armies and militias. And to the extent that you can make compassionate, peer-to-peer, decentralized frameworks for doing things, these are things that can start out unregulated,

[03:27:25]

And then if they get traction before the regulators come in, then they've influenced the way the world works. Right.

[03:27:31]

SingularityNET aims to do this with AI. Rejuve, which is a spinoff from SingularityNET, that you can see at rejuve.io, aims to do the same thing for medicine.

[03:27:47]

So it's like peer-to-peer sharing of medical data. You can share that into a secure data wallet. You can get advice about your health and longevity through apps that Rejuve will launch within the next couple of months, and then SingularityNET AI can analyze all this data, but the benefits from that analysis are spread among all the members of the network. But I mean, of course, I'm going to hawk my particular projects. Whether or not SingularityNET and Rejuve are the answer, though, I think it's key to create decentralized mechanisms for everything.

[03:28:26]

I mean, for AI, for human health, for politics, for jobs and employment, for sharing social information. And to the extent decentralized, peer-to-peer methods designed with universal compassion at the core can gain traction, then this will just decrease the role that government has. And I think that's much more likely to do good than trying to, like, explicitly reform the global governance system. I mean, I'm happy other people are trying to explicitly reform the global governance system.

[03:29:01]

On the other hand, you look at how much good the Internet or Google or mobile phones did. I mean, you're making something that's decentralized and throwing it out everywhere, and it takes hold; then government has to adapt. And I mean, that's what we need to do with AI and with health. And in that light, I mean, the centralization of health care and of AI is certainly not ideal, right? Like, most AI PhDs are being sucked in by, you know, a half dozen to a dozen big companies.

[03:29:34]

Most AI processing power is being bought by a few big companies for their own proprietary good. And most medical research is within a few pharmaceutical companies, and clinical trials run by pharmaceutical companies will stay siloed within those pharmaceutical companies. These large centralized entities are intelligences in themselves, these corporations, but they're mostly malevolent, psychopathic and sociopathic intelligences. I'm not saying the people involved are, but the corporations as self-organizing entities on their own, which are concerned with maximizing shareholder value as a sole objective function.

[03:30:14]

I mean, AI and medicine are being sucked into these pathological corporate organizations with government cooperation. Google cooperating with the British and US governments on this is one among many, many different examples. 23andMe provides you the nice service of sequencing your genome and then licenses the genome data to GlaxoSmithKline on an exclusive basis, right? Now, you can take your own DNA and do whatever you want with it, but the pooled collection of 23andMe-sequenced DNA goes just to GlaxoSmithKline.

[03:30:48]

Someone else could reach out to everyone who had worked with 23andMe to sequence their DNA and say, give us your DNA for our open and decentralized repository that we'll make available to everyone. But nobody is doing that, because it's a pain to get organized, and the customer list is proprietary to 23andMe. So, yeah. I mean, this, I think, is a greater risk to humanity from AI than rogue AGIs turning the universe into paper clips or computronium, because what you have here is mostly good-hearted and nice people who are sucked into a mode of organization of large corporations, which has evolved through no individual's fault, just because that's the way society has evolved.

[03:31:34]

It's not altruistic; it's self-interested and has become psychopathic.

[03:31:37]

Like you said, the corporation is psychopathic, even if the people are not. And that's exactly it; that's really the disturbing thing about it, because the corporations can do things that are quite bad for society even if nobody has a bad intention, right? No individual member of that corporation has bad intentions.

[03:31:56]

Some probably do, but it's not necessary that they do for the corporation to be psychopathic. Like, I mean, Google, I know a lot of people in Google, and they are, with very few exceptions, very nice people who genuinely want what's good for the world. And Facebook, I know fewer people, but it's probably mostly true. It's probably like fun young geeks who want to build cool technology.

[03:32:21]

I actually tend to believe that even the leaders, even Mark Zuckerberg, one of the most disliked people in tech, also want to do good for the world.

[03:32:29]

What do you think about Jamie Dimon? Who's Jamie Dimon? The heads of the big banks. They may have a different psychology. Oh boy. Yeah, well, I tend to be naive about these things and see the best in people. I tend to agree with you that the individuals want to do good by the world, but the mechanism of the company can sometimes be its own intelligence.

[03:32:52]

I mean, my cousin Mario Goertzel has worked for Microsoft since 1985 or something, and I can see, for him, as well as just working on cool projects, you're coding stuff that gets used by billions and billions of people, and you think, if I improve this feature, that's making billions of people's lives easier, right? So of course that's cool. And, you know, the engineers are not in charge of running the company anyway.

[03:33:24]

And of course, even if you're Mark Zuckerberg or Larry Page, I mean, you still have a fiduciary responsibility. You're responsible to the shareholders, and to your employees, who you want to keep paying, and so forth. So, yeah, you're enmeshed in the system. And, you know, when I worked in D.C., I worked with INSCOM, U.S. Army intelligence, and I was heavily politically opposed to what the U.S. Army was doing in Iraq at that time, like torturing people in Abu Ghraib.

[03:33:53]

But everyone I knew in the U.S. Army and INSCOM, when I hung out with them, was a very nice person. They were friendly to me. They were nice to my kids and my dogs, right? And they really believed that the U.S. was fighting the forces of evil. And if you asked them about Abu Ghraib, they're like, well, but these Arabs would chop us into pieces, so how can you say we're wrong to waterboard them? Right? Like, that's much less than what they would do to us.

[03:34:17]

It's just that in their worldview, what they were doing was really genuinely for the good of humanity. None of them woke up in the morning and said, like, I want to do harm to good people because I'm just a nasty guy, right? So, yeah, most people on the planet, setting aside a few genuine psychopaths and sociopaths, I mean, most people on the planet have a heavy dose of benevolence and wanting to do good, and also a capability to convince themselves that whatever they feel like doing, or whatever is best for them, is for the good of humankind.

[03:34:54]

So the more we can decentralize control, the better. Decentralization, you know... democracy is horrible, but, as Winston Churchill said, it's the worst possible system of government except for all the others. I mean, I think the whole mass of humanity has many, many very bad aspects to it. But so far, the track record of elite groups who know what's better for all of humanity is much worse than the track record of the whole teeming, democratic, participatory mass of humanity.

[03:35:26]

I mean, neither of them is perfect by any means. The issue with a small elite group that knows what's best is, even if it starts out as truly benevolent and doing good things in accordance with its initial good intentions, you find that you need more resources, you need a bigger organization, you pull in more people, internal politics arises, differences of opinion arise, and bribery happens, like some opponent organization takes your second-in-command and makes them the first-in-command of some other organization.

[03:36:00]

And I mean, there's a lot of history of what happens with elite groups thinking they know what's best for the human race. So, yes, if I have to choose, I'm going to reluctantly put my faith in the vast, democratic, decentralized mass. And I think corporations have a track record of being ethically worse than their constituent human parts. And, you know, democratic governments have a more mixed track record, but at least that's the best we've got.

[03:36:33]

Yeah, I mean, there's Iceland. Very nice country, right? Very democratic for 800-plus years, very benevolent, effective government. And think of the track records of democratic modes of organization. Linux, for example: some of the people in charge of Linux are overtly complete assholes, trying to reform themselves in many cases and in other cases not. But the organization as a whole, I think, has done a good job overall.

[03:37:07]

It's been very welcoming in the Third World, for example, and it's allowed advanced technology to roll out on all sorts of different devices and platforms in places where people couldn't afford to pay for proprietary software. So I'd say the Internet, Linux and many democratic nations are examples of how such an open, decentralized, democratic methodology can be ethically better than the sum of its parts, rather than worse, as with corporations, where that has happened only for a brief period, and then...

[03:37:39]

Then it goes sour. I mean, I'd say a similar thing about universities. A university is a horrible way to organize research and get things done, yet it's better than anything else we've come up with. Whereas a company can be much better, but only for a brief period of time, and then it stops being so good. And so I think, if you believe that AGI is going to emerge sort of incrementally out of AIs doing practical stuff in the world, like controlling humanoid robots or driving cars or diagnosing diseases or operating killer drones or spying on people for the government, then what kind of organization creates more and more advanced

[03:38:27]

AI verging toward AGI may be quite important, because it will guide, like, what's in the mind of the early-stage AGI as it first gains the ability to rewrite its own code base and project itself toward superintelligence. And if you believe that AI may move toward AGI out of this sort of synergetic activity of many agents cooperating together, rather than just one person's project, then who owns and controls that platform for cooperation becomes also very, very important.

[03:39:04]

Right. And is that platform AWS? Is it Google Cloud? Is it Alibaba? Or is it something more like the Internet or SingularityNET, which is open and decentralized? So if all of my weird machinations come to pass, right, I mean, we have the Hanson robots being a beautiful user interface, you know, gathering information on human values and being loving and compassionate to people in medical, home service and office robot applications. You have SingularityNET in the back end, networking together many different AIs toward cooperative intelligence, fueling the robots, among many other things.

[03:39:41]

You have OpenCog 2.0 and TrueAGI as one of the sources of AI inside this decentralized network, powering the robots, and the medical AIs helping us live a long time and cure diseases, among other things. And this whole thing is operating in a democratic and decentralized way. I think if anyone can pull something like this off, you know, whether using the specific technologies I've mentioned or something else, then I think we have higher odds of moving toward a beneficial technological singularity, rather than one in which the first super-AGI is indifferent to humans and just considers us an inefficient use of molecules.

[03:40:29]

That was a beautifully articulated vision for the world, so thank you for that. Well, let's talk a little bit about life and death.

[03:40:39]

I'm pro-life and anti-death. Well, for most people. There's a few exceptions.

[03:40:45]

That I won't mention. I'm glad that, just like your dad, you've taken a stand against death.

[03:40:53]

You have, by the way, a bunch of awesome music online, where you play piano. There's one song that I believe you've written whose lyrics go... by the way, I like the way it sounds; people should listen to it, it's awesome. I considered, and probably will, cover it. It's a good song.

[03:41:12]

"Tell me, why do you think it is a good thing that we all get old and die?" It's one of the songs. I love the way it sounds. But let me ask you about death first. Do you think there's an element to death that's essential to give our life meaning, like the fact that this thing ends?

[03:41:32]

Let me say, I'm pleased and a little embarrassed you've been listening to that music I put online. It's awesome. One of my regrets in life recently is that I would love to get time to really produce music well. Like, I haven't touched my sequencer software in like five years. I would love to rehearse and produce and edit, but with a two-year-old baby and trying to create the singularity, there's no time.

[03:41:59]

So I just made the decision that when I'm playing random shit in an off moment, I'll just record it and put it out there, like, whatever it may be. If I'm unfortunate enough to die, maybe that can be input to the AGI when it tries to make an accurate mind upload of me, right? But death is bad. I mean, that's very simple. It's baffling we should have to say that. I mean, of course, people can make meaning out of death.

[03:42:26]

And if someone is tortured, maybe they can make beautiful meaning out of that torture and write a beautiful poem about what it was like to be tortured. I mean, we're very creative. We can make beauty and positivity out of even the most horrible and shitty things. But just because, if I were tortured, I could write a good song about what it was like to be tortured, doesn't make torture good. And just because people are able to derive meaning and value from death doesn't mean they wouldn't derive even better meaning and value from ongoing life without death.

[03:43:01]

Yeah.

[03:43:01]

So if you could live forever, would you live forever? My goal with longevity research is to abolish the plague of involuntary death. I don't think people should die unless they choose to die. If I had to choose forced immortality versus dying, I would choose forced immortality. On the other hand, if I had the choice of immortality with the option of suicide whenever I felt like it, of course I would take that instead.

[03:43:34]

And that's the more realistic choice. I mean, there's no reason you should have forced immortality. You should be able to live until you get sick of living, right? And that will seem insanely obvious to everyone 50 years from now. The people who thought death gives meaning to life, so that we should all die, will be looked at 50 years from now the way we now look at the Anabaptists in the year 1000, who gave away all their possessions and sat atop a mountain waiting for Jesus to come and bring them up in the ascension.

[03:44:07]

I mean, it's ridiculous for people to think death is good because you gain more wisdom as you approach dying. Of course it's true. I mean, I'm 53, and, you know, the fact that I might have only a few more decades left does make me reflect on things differently. It does give me a deeper understanding of many things. But so what? You could get a deep understanding in a lot of different ways.

[03:44:38]

Pain is the same way. We're going to abolish pain, and that's even more amazing than abolishing death, right? I mean, once we get a little better at neuroscience, we'll be able to go in and adjust the brain so the pain doesn't hurt anymore. And, you know, people will say that's bad, because there's so much beauty in overcoming pain and suffering. Oh, sure. And there's beauty in overcoming torture, too. And some people like to cut themselves, but not many.

[03:45:05]

I mean, that's an interesting choice.

[03:45:07]

But to push back, the Russian side of me does romanticize suffering. It's not obvious. I mean, the way you put it seems very logical; it's almost absurd to romanticize suffering or pain or death. But to me, a world without suffering, without pain, without death... it's not obvious.

[03:45:28]

Well, then you can stay in the people zoo, with people torturing each other. No, what I'm trying to say is: I don't know, if I was presented with that choice, what I would choose.

[03:45:41]

Because to me, it's a subtler matter, and I've posed it in this conversation in an unnecessarily extreme way. So I think the way you should think about it is: what if there were a little dial on the side of your head, and you could turn how much pain hurts? Turn it down to zero, turn it up to 11, like in Spinal Tap, if you want, maybe through an actual spinal tap. So, I mean.

[03:46:13]

Would you opt to have that dial there or not? That's the question. The question isn't whether you would turn the pain down to zero all the time. Would you opt to have the dial or not? My guess is that in some dark moment of your life, you would choose to have the dial implanted, and then it would be there. Just to confess a small thing:

[03:46:33]

Don't ask me why, but I'm doing this physical challenge currently where I'm doing six hundred and eighty push-ups and pull-ups a day, and my shoulder, as we sit here, is currently in a lot of pain. And, I don't know, I would certainly, right now, if you gave me a dial, turn that sucker to zero as quickly as possible.

[03:46:57]

But I don't... I think the whole point of this journey is, I don't know.

[03:47:04]

Well, because you're a twisted human being. So the question is, am I somehow twisted?

[03:47:11]

Because I've created some kind of narrative for myself so that I can deal with the injustice and the suffering in the world? Or is this actually going to be a source of happiness?

[03:47:23]

Well, this is, to an extent, a research question that humanity will undertake. So, human beings do have a particular biological makeup, which sort of implies a certain probability distribution over motivational systems, right? I mean, that is there. Now, the question is: how flexibly can that vary as society and technology change? So if we're given that dial, and we're given a society in which, say, we don't have to work for a living, and in which there's an ambient, decentralized, benevolent AI network that will warn us when we're about to hurt ourselves; if we're in a different context, can we, consistently with being genuinely and fully human, get into a state of consciousness where we just want to keep the pain dial turned all the way down, and yet we're leading very rewarding and fulfilling lives?

[03:48:32]

I suspect the answer is yes, we can do that. But I don't know that for certain; as a researcher, I don't know that. Now, I'm more confident that we could create a non-human AGI system which just didn't need an analogue of feeling pain, and I think that AGI system would be fundamentally healthier and more benevolent than human beings. So it might or might not be true that humans need a certain element of suffering to be satisfied humans, consistent with the human physiology.

[03:49:06]

If it is true, that's one of the things that makes us fucked, and disqualified to be the super AGI, right? I mean, the nature of the human motivational system is that we seem to gravitate towards situations where the best thing on the large scale is not the best thing on the small scale, according to our subjective value system. So we gravitate toward situations where, to gratify ourselves in the large,

[03:49:40]

we have to gratify ourselves in the small, and we do that. And you see that in music. There's a theory of music which says the key to musical aesthetics is the surprising fulfilment of expectations. Like, you want something that will fulfill the expectations set up in the prior part of the music, but in a way with a bit of a twist that surprises you. And I mean, that's true not only in music like my own, or that of Zappa or Steve Vai or Buckethead or Krzysztof Penderecki or something.

[03:50:12]

It's even there in Mozart or something. It's not there in elevator music too much, and that's why it's boring, right? But wrapped up in there is, you know, we want to hurt a little bit so that we can feel the pain go away. Like, we want to be a little confused by what's coming next, so that when the thing that comes next actually makes sense, it's so satisfying.

[03:50:36]

Right. And that's the surprising fulfillment of expectations that you said. Yeah, beautifully put. We've been skirting around it a little bit, but if I were to ask you the most ridiculous question, of what is the meaning of life, what would your answer be? Three values: joy, growth and choice. I think you need joy; I mean, that's the basis of everything, if you want the number one value. On the other hand, I'm unsatisfied with a static joy that doesn't progress, perhaps because of some element of human perversity.

[03:51:17]

But the idea of something that grows and becomes more and more and better and better in some sense appeals to me. But I also sort of like the idea of individuality: that, as a distinct system, I have some agency, so there's some nexus of causality within this system, rather than the causality being wholly evenly distributed over the joyous, growing mass. So I start with joy, growth and choice as three basic values. And those three things can continue indefinitely?

[03:51:49]

That's not... that's something.

[03:51:51]

Yeah, that can last forever.

[03:51:52]

Is there some aspect of what you've called super longevity that you find exciting? Research-wise, are there ideas in that space?

[03:52:07]

Yeah. In terms of the meaning of life, this really ties into that, because, for us as humans, probably the way to get the most joy, growth and choice is transhumanism: to go beyond the human form that we have right now.

[03:52:25]

I mean, I think the human body is great, and by no means do any of us maximize the potential for joy, growth and choice in our human bodies. On the other hand, it's clear that other configurations of matter could manifest even greater amounts of joy, growth and choice than humans do, maybe even finding ways to go beyond the realm of matter as we understand it right now. So I think, in a practical sense, much of the meaning I see in human life is to create something better than humans and go beyond human life.

[03:53:02]

But certainly that's not all of it for me. In a practical sense, I have four kids and a granddaughter and many friends and parents and family, and I just enjoy everyday human social existence. Which we can do even better. Yeah. And I mean, I've always, when I could live near nature, spent a bunch of time out in nature, in the forest and on the water every day, and so forth.

[03:53:28]

So, I mean, enjoying the present moment is part of it. But, you know, the growth and choice aspects are severely limited by our human biology. In particular, dying seems to inhibit your potential for personal growth considerably, as far as we know. I mean, there's some element of life after death, perhaps, but even if there is one, why not also continue going in this biological realm? As for super longevity,

[03:53:58]

I mean, you know, we haven't yet cured aging; we haven't yet cured death. Certainly, there's very interesting progress all around. I mean, CRISPR and gene editing can be an incredible tool. And right now, stem cells could potentially prolong life a lot, like if you got injections of stem cells for every tissue of your body, injected into every tissue.

[03:54:28]

And you could just have replacement of your own cells with new cells produced by those stem cells. I mean, that could be highly impactful and prolong life. Now, we just need slightly better technology for having them grow, so using machine learning to guide procedures for stem cell differentiation and transdifferentiation. It's kind of nitty-gritty, but it's quite interesting. So I think there are a lot of different things being done to help with the prolongation of human life.

[03:55:02]

But we could do a lot better. So, for example, the extracellular matrix, which is the bunch of proteins in between the cells in your body, gets stiffer and stiffer as you get older. And the extracellular matrix transmits information electrically, mechanically, and to some extent biophotonically. So there's all this transmission through the parts of the body, but the stiffer the extracellular matrix gets, the less that transmission happens, which makes your body coordinate worse between its different organs as you get older.

[03:55:34]

So my friend Christian Schafmeister, at my alma mater, the great Temple University, has a potential solution to this. He has these novel molecules called spiroligomers, which are like polymers that are not organic; specially designed polymers, so that you can algorithmically predict exactly how they'll fold, very simply. So he designed molecular scissors out of spiroligomers that you could eat, and that would then cut through all the glucosepane and other cross-linked proteins in your extracellular matrix.

[03:56:09]

Right. But to make that technology really work and be mature is several years of work. As far as I know, no one's funding it at the moment. So there are so many different ways that technology could be used to prolong longevity. What we really need is an integrated database of all biological knowledge about human beings and model organisms, ideally a massively distributed, open bio-data space, though it can exist in other forms too.

[03:56:35]

We need that data to be opened up in a suitably privacy-protecting way. We need massive funding into machine learning, AGI, proto-AGI, and statistical research aimed at solving biology, both molecular biology and human biology, based on this massive data set. And then we need regulators not to stop people from trying radical therapies on themselves, if they so wish, as well as better cloud-based platforms for, like, automated experimentation on microorganisms, flies, mice and so forth.

[03:57:11]

And we could do all this. Look, after the last financial crisis, Obama, who I generally liked pretty well, gave four trillion dollars to large banks and insurance companies. And now, in this COVID crisis, trillions are being spent to help everyday people and small businesses; in the end, we'll probably find many more trillions being given to large banks and insurance companies anyway. Could the world put 10 trillion dollars into making a massive, holistic bio-AI, bio-simulation and experimental biology infrastructure?

[03:57:45]

We could potentially put trillions into that without even screwing up the world economy too badly, just as, in the end, COVID and the last financial crisis won't screw up the world economy so badly. But we're not putting 10 trillion dollars into that. Instead, AI is siloed inside a few big companies and government agencies, and most of the data that comes from our individual bodies, personally, that could feed this AI to solve aging and death, most of that data is sitting in some hospital's database doing nothing.

[03:58:21]

I've got two more quick questions for you. One: I know a lot of people are going to ask me about this. You were on the Joe Rogan podcast wearing that same amazing hat. Do you have an origin story for the hat? Does the hat have its own story that you're able to share? That story has not been told yet, so we're going to have to come back, and you can interview the hat. We'll leave that for the hat.

[03:58:46]

All right. It's too much to pack in here. Is there a book? Is the hat going to write a book? OK. Well, it may transmit the information through direct neurotransmission.

[03:58:57]

OK, so there might actually be some Neuralink competition there. Beautiful. We'll leave it as a mystery. Maybe one last question: if you build an AGI system, if you're successful at building the AGI system that could lead us to the singularity, and you get to talk to her and ask her one question, what would that question be? And you're not allowed to ask, "What is the question I should be asking?" Yeah, that would be cheating, but I guess that's a good question.

[03:59:31]

I'm thinking... I wrote a story with Stephan Bugaj once where these AI developers created a super-smart AI aimed at answering all the philosophical questions that had been worrying them: What is the meaning of life? Is there free will? What is consciousness? And so forth. So they got the super AGI built, and it turned on, and it said: those are really stupid questions. And then it took off on a spaceship and left the Earth. So you'd be afraid of scaring it off.

[04:00:12]

That's... I mean, honestly, there's no one question that rises above all the others, really. I mean, what interests me more is upgrading my own intelligence so that I can absorb the whole worldview of the super AGI. But, of course, if the answer could be, like, "What is the chemical formula for the immortality pill?", then I would ask that. Or: "Emit a bit string which will be the code for a super AGI on the Intel i7 processor."

[04:00:58]

So those would be good questions.

[04:01:00]

So if your mind was expanded to become superintelligent, like you're describing... I mean, there's, you know, a kind of notion that intelligence is a burden, that it's possible that with greater and greater intelligence, that other metric of joy that you mentioned becomes more and more difficult to attain.

[04:01:22]

That's a pretty stupid idea. So you think if you're superintelligent, you can also be super joyful? I think getting root access to your own brain will enable new forms of joy that we don't have now. And I think, as I've said before, what I aim at is really to make multiple versions of myself. So I would make one version which is basically human, like I am now, but, you know, with the dial to turn pain up and down, and with death gotten rid of.

[04:01:57]

And make another version which fuses its mind with a superhuman AGI and then becomes massively transhuman; and whether it will send some messages back to the human me or not, it will be interesting to find out. The thing is, once you're a super AGI, like, one subjective second to a human being could be a million subjective years to that super AGI. So it would be on a whole different basis. And, I mean, at the very least, those two copies will be good to have.

[04:02:28]

But it could be interesting to put your mind into a dolphin or a space amoeba or all sorts of other things. You can imagine one version that doubled its intelligence every year and another version that just became a super AGI as fast as possible. So, I mean, now we're sort of constrained to think one mind, one self, one body, but I think we actually don't need to be that constrained in thinking about future intelligence, after we've mastered AGI and nanotechnology and longevity biology.

[04:03:05]

I mean, then each of our minds is a certain pattern of organization. Right. And I know we haven't talked about consciousness, but I'm a panpsychist, so I sort of view the universe as conscious. And so, you know, a light bulb or a quark or an ant or a worm or a monkey have their own manifestations of consciousness, and the human manifestation of consciousness is partly tied to the particular meat that we're manifested by, but it's largely tied to the pattern of organization in the brain.

[04:03:39]

Right. So if you upload yourself into a computer or a robot or whatever else it is, some element of human consciousness may not be there, because it's just tied to the biological embodiment. But I think most of it will be there, and these will be incarnations of your consciousness in a slightly different flavor. And, you know, creating these different versions will be amazing, and each of them will discover meanings of life that have some overlap, but probably not total overlap, with the human being's meaning of life.

[04:04:16]

The thing is, to get to that future where we can explore different varieties of joy, different variations of human experience and values and transhuman experiences and values, to get to that future, we need to navigate through a whole lot of human bullshit of companies and governments and killer drones and making and losing money and so on and so forth. And that's the challenge we're facing now. If we do things right, we can get to a benevolent singularity, which is levels of joy, growth and choice that are literally unimaginable to human beings. If we do things wrong,

[04:04:59]

We can either annihilate all life on the planet, or we could lead to a scenario where, say, all humans are annihilated and there's some super AGI that goes on and does its own thing, unrelated to us except via our role in originating it. And we may well be at the bifurcation point now, where what we do now has significant causal impact on what comes about. And yet most people on the planet aren't thinking that way whatsoever. They're thinking only about their own narrow aims and goals right now.

[04:05:35]

Of course, I'm thinking about my own narrow aims and goals to some extent also. But I'm trying to use as much of my energy and mind as I can to push toward this more benevolent alternative, which will be better for me, but also for everybody else. And it's weird that so few people understand what's going on. I know you interviewed Elon Musk, and he understands a lot of what's going on, but he's much more paranoid than I am, because Elon gets that AGI is going to be way, way smarter than people, and he gets that an AGI does not necessarily have to give a shit about people, because we're a very elementary mode of organization of matter compared to many AGIs.

[04:06:22]

But I don't think he has a clear vision of how infusing early-stage AGIs with compassion and human warmth can lead to an AGI that loves and helps people, rather than viewing us as, you know, a historical artifact and a waste of mass-energy. But on the other hand, while I have some disagreements with him, like, he understands way more of the story than almost anyone else in such a large-scale corporate leadership position. It's terrible how little understanding of these fundamental issues exists out there.

[04:07:02]

Now, that may be different five or ten years from now, because I can see understanding of AGI and longevity and other such issues is certainly much stronger and more prevalent now than ten or fifteen years ago. So, I mean, humanity as a whole can be slow learners relative to what I would like. But in the historical sense, on the other hand, you could say the progress is astoundingly fast.

[04:07:28]

But Elon also said, I think on the Joe Rogan podcast, that love is the answer. So maybe in that way you and him are both on the same page of how we should proceed with AGI.

[04:07:41]

I think there's no better place to end it. I hope we get to talk again about the hat and about consciousness and about a million topics we didn't cover. Ben, it's a huge honor to talk to you. Thank you for making it out. Thank you for talking today.

[04:07:56]

Thanks for having me. This was really, really good fun, and we dug deep into some very important things. So thanks for doing this. Thanks so much.

[04:08:07]

Awesome. Thanks for listening to this conversation with Ben Goertzel, and thank you to our sponsors, the Jordan Harbinger Show and Masterclass. Please consider supporting the podcast by going to jordanharbinger.com/lex and signing up at masterclass.com/lex. Click the links, buy the stuff. It's the best way to support this podcast and the journey I'm on in my research and startup. If you enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or connect with me on Twitter.

[04:08:44]

Lex Fridman, spelled without the E, just F-R-I-D.

[04:08:48]

M-A-N. I'm sure eventually you will figure it out. And now let me leave you with some words from Ben Goertzel: our language for describing emotions is very crude. That's what music is for. Thank you for listening, and hope to see you next time.