Transcribe your podcast
[00:00:00]

Today's episode is sponsored in part by Yahoo Finance, Porkbun, Indeed, Airbnb, and Shopify. Yahoo Finance is the number one financial destination. For financial news and analysis, visit the brand behind every great investor, yahoofinance.com. Build your digital brand and manage all your links from one spot with Porkbun. Get your .bio domain and link-in-bio bundle for just $5 at porkbun.com/profiting. Attract, interview, and hire all in one place with Indeed. Get a $75 sponsored job credit at indeed.com/profiting. Generate extra income by hosting your home on Airbnb. Your home might be worth more than you think. Find out how much at airbnb.com/host. Shopify is the global commerce platform that helps you grow your business. Sign up for a $1-per-month trial period at shopify.com/profiting. As always, you can find all of our incredible deals in the show notes.

[00:01:04]

More and more systems in the world will get automated. AI is another step in the automation of things. This has been a story of technology throughout history. Stephen Wolfram is the founder and CEO of the software company Wolfram Research. He's a mathematician, computer scientist, physicist, and businessman. When ChatGPT came out in late 2022, the people who had been working on it didn't know it was going to work. They didn't think anything exciting would have happened, and by golly, it had worked. I think the thing that we learn from the advance of AI is, well, actually, there's not as much distance between the amazing stuff of our minds and things that are just able to be constructed computationally. This is the coming paradigm of the 21st century, and if you understand it well, it gives you a huge advantage. Unfortunately, whenever there's powerful technology, you can do ridiculous things with it. Having said that, when you say things like, well, let's make sure that AIs never do the wrong thing, well, the problem with that is... Young and Profiters, welcome to the show.

[00:02:19]

We are going to be talking a lot more about AI in 2024 because it's such an important topic. It's changing the world. Last year, I had a couple of conversations. We had William talking about AI. We had Mo Gawdat, and I love that episode. I highly recommend you guys check out the Mo Gawdat episode. But nonetheless, I'm going to be interviewing a lot more AI folks. And first up on the docket is Dr. Stephen Wolfram. He's been focused on AI and computational thinking for the past decade. Dr. Stephen Wolfram is a world-renowned computer scientist, mathematician, theoretical physicist, and the founder of Wolfram Research, as well as the inventor of the Wolfram computational language. A young prodigy, he published his first paper at 15. He obtained his PhD in physics at 20, and he was also the youngest recipient of the MacArthur Genius Grant. In addition to this, Dr. Wolfram is the author of several books, including a recent one on AI entitled What Is ChatGPT Doing?, which we'll discuss today. We've got a lot of ground to cover with Stephen. We're going to talk about what AI is, what computational thinking is, how AI is similar to nature, what is going on in the background of ChatGPT, how it actually works, and what he thinks the future of AI looks like for jobs and for humanity overall.

[00:03:32]

We've got so much to cover. I think he's going to blow your mind. So, Stephen, welcome to Young and Profiting podcast.

[00:03:38]

Hello, though.

[00:03:39]

I am so excited for today's interview. We love the topic of AI, and I wanted to talk a little bit about your childhood before we get to the meat and potatoes of today's interview. So from my understanding, you started as a particle physicist at a very young age. You even started publishing scholarly papers as young as 15 years old. So talk to us about how you first got interested in science and what you were like as a kid.

[00:04:03]

Well, let's see. I grew up in England in the 1960s, when space was the thing of the future, which it is again now, but wasn't for 50 years. I was interested in those kinds of things, and that got me interested in how things like spacecraft work, and that got me interested in physics. And so I started learning about physics. And it so happened that the early 1970s were a time when lots of things were happening in particle physics, lots of new particles getting discovered, lots of fast progress and so on. And so I got involved in that. It's always cool to be involved in fields that are in the golden age of expansion, which particle physics was at the time. So that was how I got into these things. It's funny, you mentioned AI, and I realized that when I was a kid, machines that think were right around the corner, just as colonization of Mars was right around the corner, too. But it's an interesting thing to see what actually happens over a 50-year span and what doesn't.

[00:05:02]

It's so crazy to think how much has changed over the last 50 years.

[00:05:06]

And how much has not. In science, for example, I have just been finishing some projects that I started basically 50 years ago. And it's cool to finish something like that: a big science question that I started asking when I was 12 years old, about how a thing that people have now studied for 150 years, the second law of thermodynamics, works. I was interested in that when I was 12 years old, and I finally, I think, figured it out. I published a book about it last year, and it's nice to see that one can tie up these things. But it's also a little bit shocking how slowly big ideas move. For example, the neural nets that everybody's so excited about now in AI were invented in 1943, and the original conception of them is not that different from what people use today, except that now we have computers that run billions of times faster than things that were imagined back in the 1950s and so on. It's interesting. Occasionally, things happen very quickly. Oftentimes, it's shocking how slowly things happen and how long it takes for the world to absorb ideas. Sometimes there'll be an idea, and finally, some technology will make it possible to execute that idea in a way that wasn't there before.

[00:06:29]

Sometimes there's an idea and it's been hanging out for a long time, and people just ignored it for one reason or another. I think some of the things that are happening with AI today probably could have happened a bit earlier. Some things have depended on the building of a big technology stack, but it's always interesting to see that, to me, at least.

[00:06:48]

It's so fascinating. This actually dovetails perfectly into my next question about your first experiences with AI. Now everybody knows what AI is, but really, most of us only started to understand it and use this term maybe five years ago, max. But you've been studying this for decades, even before people probably called it AI. Can you talk to us about the beginnings of how it all started?

[00:07:13]

AI predates me. That term was invented in 1956. It's funny, because computers were invented basically in the late 1940s, and they started to become things that people had seen by the beginning of the 1960s. I first saw a computer when I was 10 years old, which was 1969-ish. And at the time, a computer was a very big thing, tended by people in white coats and so on. I first got my hands on a computer in 1972, and that was a computer that was the size of a large desk, programmed with paper tape and so on, and rather primitive by today's standards. But the elements were all there by that time. It's true, most people had not seen a computer until probably the beginning of the 1980s or something, which was when PCs and things like that started to come out. But from the very first moments when electronic computers came on the scene, people assumed that computers would automate thought, just as bulldozers and things like forklift trucks had automated mechanical work. "Giant electronic brains" was a typical characterization of computers in the 1950s.

[00:08:35]

So this idea that one would automate thought was a very early idea. Now, the question was, how hard was it going to be to do that? And people in the 1950s and the beginning of the 1960s were like, this is going to be easy. Now we have these computers; it's going to be easy to replicate what brains do. In fact, a good example: back in the beginning of the 1960s, a famous incident was during the Cold War, when people were worried about US–Soviet communication and so on. They said, well, maybe the people are in a room, there's some interpreter, and the interpreter is going to not translate things correctly. So let's not use a human interpreter; let's teach a machine to do that translation. That was the beginning of the 1960s. And of course, machine translation, which is now, finally, in the 2020s, pretty good, took an extra 60 years to actually happen. People just didn't have the intuition about what was going to be hard and what wasn't. So the term AI was in the air already very much by the 1960s. When I was a kid, I'm sure I read books about the future in which AI was a thing, and it was certainly in movies and things like that. And then there's this question of, okay, how would we get computers to do thinking-like things?

[00:09:54]

When I was a kid, I was interested in taking the knowledge of the world and somehow cataloging it and so on. I don't know why I got interested in that, but that's something I've been interested in for a long time. And so I started thinking, how would we take the knowledge of the world and make it automatic to be able to answer questions based on the knowledge that our civilization has accumulated? So I started building things along those lines, and I started building a whole technology stack that I started in the late 1970s, and it's turned into a big thing that lots of people use. But the first idea there was: let's be able to compute things like math and so on, and let's take what has been something that humans have to do and make it automatic to have computers do it. People had said for a while, when computers can do calculus, then we'll know that they're intelligent. The things I built solved that problem. By the mid-1980s, that problem was pretty well solved. And then people said, well, it's just engineering; it's not really a computer being intelligent. I would agree with that.

[00:11:01]

But then at the very beginning of the 1980s, when I was working on automating things like mathematical computation, I got curious about the more general problem of doing the kinds of things that we humans do, like matching patterns. We see an image that's got a bunch of pixels in it, and we say, that's a picture of a cat, or that's a picture of a dog. And this question of how we do that pattern matching, I got interested in and started trying to figure out how to make it work. I knew about neural nets. I started trying, this must have been 1980, '81, something like that, to get neural nets to do things like that, but they didn't work at all at the time. Hopeless. As it turns out, you say things happen quickly, and I say things sometimes happen very slowly. I was just working on something that is a potential new direction for how neural nets and things like that might work. And I realized, I worked on this once before, and I pulled out this paper that I wrote in 1985 that has the same basic idea that I was just very proud of myself for having figured out just last week.

[00:12:09]

And it's like, well, I started on it in 1985. Well, now I understand a bunch more and we have much more powerful computers. Maybe I can make this idea work. So there were things that people thought would be hard for computers, like doing calculus and so on. We crushed that, so to speak, a long time ago. Then there were things that are super easy for people, like telling that's a cat, that's a dog, which weren't solved. And I wasn't involved in the solving of that. That's something that people worked on for a long time, and nobody thought it was going to work. And suddenly in 2011, through a mistake, some people who had been working on this for a long time left a computer training to tell things like cats from dogs for a month without paying attention to it. They came back. They didn't think anything exciting would have happened. And by golly, it had worked. And that's what started the current enthusiasm about neural nets and deep learning and so on. And when ChatGPT came out in late 2022, again, the people who had been working on it didn't know it was going to work.

[00:13:15]

We had worked on previous kinds of language models, things that try to do things like predict what the next word will be in a sentence, those sorts of things. And they were really pretty crummy. And suddenly, for reasons that we still don't understand, we got above this threshold where it's like, yes, this is pretty humanlike. And it's not clear what caused that threshold. In our human languages, for example, we might have, I don't know, 40,000 words that are common in a language, English as an example, like most languages. And that number of words is probably somehow related to how big an artificial brain you need to be able to deal with language in a reasonable way. And if our brains were bigger, maybe we would routinely have languages with 200,000 words in them. We don't know. And maybe it's this match between what we can do with an artificial neural network and what our human biological neural nets manage to do. We reached enough of a match that people say, by golly, the thing seems to be doing the things that we humans do. But what's ended up happening is that there are things we humans can quickly do, like tell a cat from a dog or figure out what the next word in a sentence is likely to be.

[00:14:37]

Then there are things that we humans have actually found really hard to do, like solve this math problem, or figure out this thing in science, or do this simulation of what happens in the natural world. Those are things that the unaided brain doesn't manage to do very well on. But the big thing that's happened over the last 300 years or so is that we built a bunch of formalizations of the world, first with things like logic, back in antiquity, then with math, and most recently with computation, where we're setting up things so that we can talk about them in a more structured way than just the way we think about them off the top of our heads, so to speak.

[00:15:20]

That's so interesting. I know that you work on something called computational thinking. I think what you're saying now really relates to that. Help us understand the Wolfram Project and computational thinking and how it's related to the fact that humans, we need to formalize and organize things like mathematics and logic. What's the history behind that? Why do we need to do that as humans? And then how does it relate to computational thinking in the future?

[00:15:45]

There are things one can immediately figure out, that one just intuitively knows: oh, that's a cat, that's a dog, whatever. Then there are things where you have to go through a process of working out what's true or working out how to construct this or that thing. When you're going through that process, you've got to have solid bricks to start building that tower. So what are those bricks going to be made of? Well, you have to have something which has definite structure. And that's something where, for example, back in antiquity, when logic got invented, it was like, well, you can think vaguely, oh, that sentence sounds right, or you can say, wait a minute, if one of those things is true, then that or that has to be true, et cetera, et cetera. You've got some structured way to think about things. And then in the 1600s, math became a popular way to think about the world. And then you could say, okay, we're looking at a planet that goes around the sun in roughly an ellipse, but let's put math into that, and then we have this way to actually compute what's going to happen.

[00:16:52]

So for about 300 years, this idea that math is going to explain how the world works at some level was a dominant theme, and that worked pretty well in physics. It worked pretty terribly in things like biology, in social sciences, and so on. People imagined there might be a social physics of how society works; that never really panned out. So there were places where math had worked, and it gave us a lot of modern engineering and so on, and there were cases where it hadn't really worked. I got pretty interested in this at the beginning of the 1980s, in figuring out how you formalize thinking about the world in a way that goes beyond what math provides, things like calculus and so on. What I realized is that you can just think of it as: there are definite rules that describe how things work, and those rules are stated more in terms of, oh, you have this arrangement of black and white cells, and then this happens, and so on. They're not things that you necessarily can write in mathematical terms, in terms of multiplications and integrals and things like this.

[00:17:59]

And so, as a matter of science, I got interested in what these simple programs, these systems described by rules, typically do. What one might have assumed is, if you have a program that's simple enough, it's going to just do simple things. This turns out not to be true. Big surprise, to me at least, and I think to everybody else as well. It took people a few decades to absorb this point. It took me a solid bunch of years to absorb this point. But you just do these computer experiments, and you find out: yes, you use a simple rule, and no, it does a complicated thing. That turns out to be pretty interesting if you want to understand how nature works, because it seems like that's the secret that nature uses to make a lot of the complicated stuff that we see, the same phenomenon of simple rules, complicated behavior. So that turns into a whole big direction and new understanding about how science works. I wrote this big book back in 2002 called A New Kind of Science. Well, its title says what it is.
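The "simple rules, complicated behavior" experiments Wolfram describes can be sketched as an elementary cellular automaton; rule 30 is the classic example from his work of a very simple rule producing complex output. This is a toy illustrative sketch in Python, not his actual software:

```python
# Elementary cellular automaton: each cell updates from its three-cell
# neighborhood according to an 8-entry rule table (the bits of `rule`).
def step(cells, rule):
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(rule, width=31, steps=15):
    cells = [0] * width
    cells[width // 2] = 1  # start from a single black cell in the middle
    rows = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        rows.append(cells)
    return rows

# Rule 30: the rule is trivial to state, yet the pattern looks random.
for row in run(30):
    print("".join("#" if c else "." for c in row))
```

Printing the rows as `#` and `.` shows the irregular triangular pattern that made this rule famous.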

[00:19:03]

So that's one branch: understanding the world in terms of computational rules. Another thing has to do with taking the things that we normally think about, whether that's how far it is from one city to another, or how we remove this thing from this image, or something like this, things that we would normally think about and talk about, and figuring out how we take those kinds of things and think about them in a structured computational way. So that has turned into a big enterprise of my life, which is building our computational language, this thing now called the Wolfram Language, that powers a lot of research and development kinds of things, and also lots of actual practical systems in the world, although when you are interacting with those systems, you don't see what's inside them, so to speak. But the idea there is to make a language for describing things in the world. A city, for example: both the concept of a city and the actuality of the couple of hundred thousand cities that exist in the world, where they are, what their populations are, lots of other data about them, so that you can compute things about things in the world.

[00:20:20]

And so that's been a big effort, to build up that computational language. And the thing that's exciting, that we're on the cusp of, I suppose, is this: for the last 300 years, people who study things like science have said, okay, to make this science really work, you have to make it somehow mathematical. Well, now the case is that the new way to make science is to make it computational. And so you see all these different fields, call them X, and you start seeing the computational X field start to come into existence. And I suppose one of my big life missions has been to provide this language and notation for making computational X for all X possible. It's a similar mission to what people did maybe 500 years ago when they invented mathematical notation. I mean, there was a time when, if you wanted to talk about math, it was all in terms of just regular words, at the time in Latin. And then people invented things like plus signs and equal signs and so on. And that streamlined the way of talking about math. And that's what led to, for example, algebra, and then calculus, and then all the modern mathematical science that we have.

[00:21:32]

And so similarly, what I've been trying to do for the last 40 years or so is build a computational language, a notation for computation, a way of talking about things computationally that lets one build computational X for all X. One of the great things that happens when you make things computational is that not only do you have a clearer way to describe what you're talking about, but your computer can also help you figure it out. And so you get this superpower. As soon as you can express yourself computationally, you tap into the superpower of actually being able to compute things. And that's amazingly powerful. When I was a kid, as I say, in the 1970s, physics was hopping at the time because various new methods had been invented, not related to computers. At this time, all the computational X fields are just starting to really hop, and it's starting to be possible to do really interesting things, and that's going to be an area of tremendous growth in the next however many years.

[00:22:35]

I have a few follow-up questions to that. So you say that computational thinking is another layer in human evolution. So I want to understand why you feel it's going to help humans evolve. Also curious to understand the practical ways that you're using the Wolfram language and how it relates to AI, if it does at all.

[00:22:54]

Let's take the second thing first. The Wolfram Language is about representing the world computationally, in a precise computational way. It also happens to make use of a bunch of AI, but let's put that aside. What something like an LLM, like ChatGPT, does is it makes up pieces of language. If we have a sentence like "the cat sat on the blank," what it will have done is it's read a billion web pages. Chances are the most common next word is going to be "mat." And it has set itself up so that it knows that the most common next word is "mat," so let's write down "mat." The big surprise is that it doesn't just do simple things like that; having built this structure from reading all those web pages, it can write plausible sentences. Those sentences sound like they make sense. They're typical of what you might read. They might or might not actually have anything to do with reality in the world, so to speak. That's working the way humans immediately think about things. Then there's this separate whole idea of formalized knowledge, which is the thing that led to modern science and so on.
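The "most common next word" idea can be sketched as a toy n-gram model: count which word follows each three-word context in some text, then predict the most frequent one. Real LLMs learn far richer statistics than raw counts, and the corpus here is made up for illustration:

```python
from collections import Counter, defaultdict

# Build a table: three-word context -> counts of the word that follows it.
def build_model(text, n=4):
    words = text.lower().split()
    model = defaultdict(Counter)
    for i in range(len(words) - n + 1):
        context = tuple(words[i : i + n - 1])
        model[context][words[i + n - 1]] += 1
    return model

def most_likely_next(model, context):
    counts = model.get(tuple(w.lower() for w in context))
    return counts.most_common(1)[0][0] if counts else None

corpus = (
    "the cat sat on the mat . "
    "the cat sat on the mat again . "
    "the dog sat on the floor ."
)
model = build_model(corpus)
print(most_likely_next(model, ["sat", "on", "the"]))  # "mat": seen twice vs. "floor" once
```

Scaled up from this tiny corpus to a billion web pages, the same counting idea is the starting intuition for what a language model's training captures.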

[00:24:14]

It's a different branch from the things humans can just quickly and naturally do. So in a sense, the Wolfram Language's big contribution right now to the world of the emerging AI language models and all of this is that we have this computational view of the world, which allows one to do precise computations and build up these whole towers of consequences. So the typical setup, and you'll see more and more coming out along these lines: we built something with OpenAI back, oh, gosh, a year ago now. An early version of this is, you've got the language model and it's trying to make up words, and then it gets to use our computational language as a tool. If it can formulate what it's talking about, well, we have ways to take the natural language that it produces. We've had the Wolfram|Alpha system, which came out in 2009, a system that has natural language understanding. We had solved the problem of, one sentence at a time, what does this mean? Can we translate this natural language, in English, for example, into computational language, then compute an answer using potentially many steps of computation? Then that's something that is a solid answer, computed from knowledge that we've curated, et cetera, et cetera.

[00:25:35]

So the typical mode of interaction is this: a linguistic interface provided by things like LLMs, which then use our computational language as a tool to actually figure out, hey, this is the thing that's actually true, so to speak. Just as humans don't necessarily immediately know everything, but with tools, they can get a long way. I suppose it's been the story of my life, at least. I discovered computers as a tool back in 1972, and I've been using them ever since and managed to figure out a number of interesting things in science and technology and so on by using this external-to-me superpower tool of computation. The LLMs and the AIs get to do the same thing. So that's the core part of how the technology I've been building for a long time most immediately fits into the current expansion of excitement about AI and language models and so on. I think there are other pieces to this, which have to do with how, for example, science that I've done relates to understanding more about how you can build other kinds of AI-like things, but that's a separate branch.
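The LLM-plus-tool division of labor described above can be sketched as a toy loop in Python. `fake_llm` and `calc_tool` are hypothetical stand-ins, not OpenAI or Wolfram APIs, but they show the pattern: the language side produces text and delegates anything needing a precise answer to a computational tool:

```python
# Toy sketch of the LLM-plus-tool pattern. The "language model" here is a
# hard-coded fake; a real system would call an actual model and a real
# computational engine such as Wolfram|Alpha.

def calc_tool(expression):
    # The computational side: exact, reproducible arithmetic.
    # eval with empty builtins is for this demo only, not production use.
    return eval(expression, {"__builtins__": {}})

def fake_llm(prompt):
    # Pretend model: emits a tool call when it spots arithmetic it can't do.
    if "127 * 49" in prompt:
        return {"tool_call": "127 * 49"}
    return {"text": prompt}

def answer(question):
    reply = fake_llm(question)
    if "tool_call" in reply:
        result = calc_tool(reply["tool_call"])
        # Weave the tool's solid answer back into fluent language.
        return f"The answer is {result}."
    return reply["text"]

print(answer("What is 127 * 49?"))
```

The point of the pattern is that the fluent-language layer never has to "know" arithmetic; it only has to recognize when to hand a formalized question to the tool.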

[00:26:48]

Let's hold that thought and take a quick break with our sponsors. Young and Profiters, I don't know about you, but I love to make my home someplace that I'm proud of. That's why I spent a lot of time on my apartment, trying to make it my perfect pink palace, all set with the velvet couch, an in-home studio, and skyline views of the city. And while I love my apartment, I can get really sick of it. I can get really uninspired. And if you work from home, you know exactly what I'm talking about. But the good news is, like many of you guys, I'm an entrepreneur, and that means that I can work from anywhere. And finally, I decided to make good use of my work flexibility for the first time. This holiday break, the sun was calling my name, so I packed my bags and my boyfriend, and we headed to Venice Beach, California. We got a super cute bungalow, and we worked from home for an entire month. The fresh air and slower pace helped to inspire some really cool new ideas for my business. And now I'm hitting the ground running in Q1.

[00:27:47]

Airbnb was the one that helped me make these California dreams come true. And in fact, Airbnb comes in clutch for me time and time again, whether it's finding the perfect Airbnb home for our annual executive team outing or booking a vacation home where my extended family can all fit in one place. Airbnb always makes it a great experience. And you know me, I'm always thinking of my latest business venture. So when I found out that a lot of my successful friends and clients host on Airbnb, I got curious. And I want to follow suit because it seems like such a great way to generate passive income. So now we have a plan to spend more time in Miami, and then we'll host our place on Airbnb to earn some extra money whenever we're back on the East Coast. So I can't wait for that. And a lot of people don't realize they've got an Airbnb right under their own noses. You can Airbnb your place or a spare room if you're out of town for even just a few days or weeks. You could do what I did and work remotely and then Airbnb your place to fund your trip.

[00:28:44]

Your home might be worth more than you think. Find out how much at airbnb.com/host. That's airbnb.com/host to find out how much your home is worth. Hey, Yap fam. Starting my LinkedIn Secrets Masterclass was one of the best things I've ever done for my business. I didn't have to waste time figuring out all the nuts and bolts of setting up a website that had everything I needed, like a way to buy my course, subscription offerings, chat functionality, and so on, because it was super easy with Shopify. Shopify is the global commerce platform that helps you sell at every stage of your business, whether you're selling your first product, finally taking your side hustle full-time, or making half a million dollars from your masterclass like me. And it doesn't matter if you're selling digital products or vegan cosmetics. Shopify helps you sell everywhere, from their all-in-one e-commerce platform to their in-person POS system. Shopify has got you covered as you scale. Stop those online window shoppers in their tracks and turn them into loyal customers with the internet's best-converting checkout. I'm talking 36% better on average compared to other options out there.

[00:29:54]

Shopify powers 10% of all e-commerce in the US, from huge shoe brands like Allbirds to vegan cosmetic brands like Thrive Causemetics. Actually, back on episode 253, I interviewed the CEO and founder of Thrive Causemetics, Karissa Bodnar, and she told me about how she set up her store with Shopify, and it was so plug-and-play, her store exploded right away. Even for a makeup-artist type girl with no coding skills, it was easy for her to open up a shop and start her dream job as an entrepreneur. That was nearly a decade ago, and now it's even easier to sell more with less, thanks to AI tools like Shopify Magic. And you never have to worry about figuring it out on your own. Shopify's award-winning help is there to support your success every step of the way. So you can focus on the important stuff, the stuff you like to do. Because businesses that grow, grow with Shopify. Sign up for a $1-per-month trial period at shopify.com/profiting, and that's all lowercase. If you want to start that side hustle you've always dreamed of, if you want to start that business you can't stop thinking about, if you have a great idea, what are you waiting for?

[00:31:07]

Start your store on Shopify. Go to shopify.com/profiting now to grow your business no matter what stage you're in. Again, that's shopify.com/profiting for a $1-per-month trial period. Young and Profiters, we are all making money. But is your money hustling for you? Meaning, are you investing? Putting your savings in the bank is just doing you a total disservice. You've got to beat inflation. I've been investing heavily for years. I've got an eTrade account, I've got a Robinhood account, and it used to be such a pain to manage all of my accounts. I'd hop from platform to platform. I'd always forget my Fidelity password, and then I'd have to reset my password. I knew that needed to change because I need to keep track of all my stuff. Everything got better once I started using Yahoo Finance, the sponsor of today's episode. You can securely link up all of your investment accounts in Yahoo Finance for one unified view of your wealth. They've got stock analyst ratings. They have independent research. I can customize charts and choose what metrics I want to display for all my stocks so I can make the best decisions.

[00:32:20]

I can even dig into financial statements and balance sheets of the companies that I'm curious about. Whether you're a seasoned investor or looking for that extra guidance, Yahoo Finance gives you all the tools and data you need in one place. For comprehensive financial news and analysis, visit the brand behind every great investor, yahoofinance.com, the number one financial destination. That's yahoofinance.com. Honestly, you're teaching us so much. I feel like a lot of people tuning in are probably learning a lot of this stuff for the first time. But one thing that we all are using right now is ChatGPT. Everybody has embraced ChatGPT. It feels like magic when it gives you something that a human could potentially have written. I have a couple of questions about ChatGPT. You alluded to how it works a bit, but can you give us more detail about how neural networks work in general and what ChatGPT is doing in the background to spit out something that looks like it's written by a human?

[00:33:26]

The original inspiration for neural networks was understanding something about how brains work. In our brains, we have roughly 100 billion neurons. Each neuron is a little electrical device, and they're connected with things that look under a microscope a bit like wires. So one neuron might be connected to a thousand or 10,000 other neurons in one's brain. And these neurons will have a little electrical signal, and then they'll pass on that electrical signal to another neuron. And pretty soon, one's gone through a whole chain of neurons, and one says the next word or whatever. So that electrical machine, lots of things connected to things, is how people imagine that brains work. And neural nets are an idealization of that, set up in a computer, where one has these connections between artificial neurons, usually called weights. You often hear people saying this thing has a trillion weights or something. Those are the connections between the artificial neurons, and each one has a number associated with it. And so when you ask ChatGPT something, what will happen is it will take the words that it's seen so far, the prompt, and it will grind them up into numbers, and it will take that sequence of numbers and feed that in as input to this network.
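
As a rough illustration of the idealized artificial neuron described here, this is a minimal Python sketch: each neuron takes a weighted sum of its inputs and passes it through a nonlinear "threshold" function. All the weights and inputs are made-up numbers, purely for illustration; real networks have billions of learned weights.

```python
import math

def neuron(inputs, weights, bias):
    """One idealized artificial neuron: a weighted sum of inputs,
    squashed through a sigmoid 'threshold', loosely mimicking how a
    biological neuron fires based on its incoming electrical signals."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # output squashed to (0, 1)

# Two neurons feeding a third: a miniature "network" where each
# connection is just a number (a weight).
hidden = [neuron([0.5, 0.9], [0.8, -0.2], 0.1),
          neuron([0.5, 0.9], [0.4, 0.6], -0.3)]
output = neuron(hidden, [1.2, -0.7], 0.0)
print(output)
```

Chaining layers of these simple units is all a neural net structurally is; everything interesting lives in the particular numbers the weights end up holding.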

[00:34:50]

So it just takes the words; more or less every word in English gets a number, or every part of a word gets a number. You have the sequence of numbers. That sequence of numbers is given as input to this essentially mathematical computation that goes through and says, okay, here's this arrangement of numbers. We multiply each number by this weight. Then we add up a bunch of numbers. Then we take a threshold of those numbers and so on. And we keep doing this, a few hundred times for typical ChatGPT-type behavior. And at the end, we get another number. Actually, we get another collection of numbers that represent the probabilities that the next word should be this or that. So in the example of "the cat sat on the," the next word has very high probability, say 99%, of being "mat," and maybe 0.5% of being "floor" or something. And then what ChatGPT is doing is it's saying, well, usually I'm going to pick the most likely next word. Sometimes I'll pick a word that isn't the absolute most likely next word, and it just keeps doing that.
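
That final pick-a-word step can be sketched in a few lines of Python. The vocabulary and probabilities below are invented for illustration (a real model computes them from billions of weights); the "temperature" knob is the standard way models trade off between always picking the most likely word and occasionally picking a less likely one.

```python
import random

# Hand-picked, illustrative probabilities for the next word after
# "the cat sat on the" -- not from any real model.
next_word_probs = {"mat": 0.99, "floor": 0.005, "chair": 0.005}

def pick_next_word(probs, temperature=0.0):
    """temperature 0 -> always the most likely word (greedy);
    higher temperature -> sometimes a less likely word gets picked,
    which is what gives the generated text its variety."""
    if temperature == 0.0:
        return max(probs, key=probs.get)
    # Reweight the probabilities, then sample one word at random.
    weights = {w: p ** (1.0 / temperature) for w, p in probs.items()}
    r = random.random() * sum(weights.values())
    for word, wt in weights.items():
        r -= wt
        if r <= 0:
            return word
    return word

print(pick_next_word(next_word_probs))  # -> mat
```

Running this repeatedly, appending each picked word to the prompt and recomputing the probabilities, is the whole generation loop: one word at a time.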

[00:36:03]

And the surprise is that just doing that, a word at a time, gives you something that seems like a reasonable English sentence. Now, the next question is, how did it get all those weights? In the case of the original ChatGPT, I think it was 175 billion weights. How did it get those numbers? And the answer is it was trained, and it was trained by being shown all this text from the web. And what was happening was, well, you've got one arrangement of weights. Okay, what next word does that predict? Okay, that predicts "turtle" as the next word for "the cat sat on the." Turtle is wrong. Let's change that. Let's see what happens if we adjust these weights in that way. Oh, we finally got it to say "mat." Great, those weights are closer to correct for that case. Well, you keep doing that over and over again. That takes huge amounts of computer effort. You keep on bashing it: no, no, no, you got it wrong, adjust it slightly to make it closer to correct. Keep doing that long enough, and you've got something which is a neural net, which has the property that it will typically reproduce the kinds of things it's seen.
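
The "wrong, adjust slightly, repeat" loop described here can be caricatured in a few lines. Real training adjusts billions of weights by gradient descent over enormous text corpora; this bare-bones sketch (with invented numbers) just nudges two scores until the right word wins.

```python
# The model starts out wrongly preferring "turtle" as the next word
# after "the cat sat on the". Each time it's wrong, nudge the scores
# slightly toward the correct answer and try again.
scores = {"turtle": 1.0, "mat": 0.0}
target = "mat"
learning_rate = 0.1

for step in range(100):
    predicted = max(scores, key=scores.get)   # current best guess
    if predicted == target:
        break                                  # finally says "mat"
    scores[target] += learning_rate            # push the right word up
    scores[predicted] -= learning_rate         # push the wrong word down

print(max(scores, key=scores.get))  # -> mat
```

Scaled up to billions of weights and trillions of example words, that loop is, in spirit, what the training runs do.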

[00:37:16]

Now, it's not enough to reproduce what it's seen, because if you keep going, writing a big long essay, a lot of what's in that essay will never have been seen before. Those particular combinations of words will never have been produced before. So then the question is, well, how does it extrapolate? How does it figure out something that it's never seen before? What words is it going to use when it never saw it before? And this is the part where nobody knew what was going to happen. This is where the big surprise is: the way it extrapolates is similar to the way we humans seem to extrapolate things. And presumably, that's because its structure is similar to the structure of our brains. We don't really know why, when it figures out things it hasn't seen before, it does that in a humanlike way. That's a scientific discovery. Now we can ask, can we get an idea why this might happen? I think we have an idea why it might happen. It's more or less this: we say, how do you put together an English sentence? Well, you learn basic grammar.

[00:38:15]

You say it's a noun, a verb, a noun. That's a typical English sentence. But there are many noun-verb-noun English sentences that aren't really reasonable sentences. Like, I don't know, "the electron ate the moon." Okay, it's grammatically correct, but it probably doesn't really mean anything except in some poetic sense. Then what you realize is there's a more elaborate construction kit for sentences that might mean something. And people have been trying to create that construction kit for a couple of thousand years. Aristotle started thinking about that construction kit when he created logic, but nobody got around to finishing it. But I think ChatGPT and LLMs show us there is a construction kit of, oh, if it's "blah ate blah," the first blah had better be a thing that eats things. And there's a certain category of things that eat things, like animals and people and so on. And so that's part of the construction kit. So you end up with this notion of a semantic grammar, a construction kit for how you put words together so they mean something. My guess is that's essentially what ChatGPT has discovered.
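
A toy version of that "construction kit" distinction can be written down directly. The word categories below are invented purely for illustration: ordinary grammar only checks noun-verb-noun, while the semantic grammar also checks that the verb "ate" gets a subject from the category of things that eat.

```python
# Invented, illustrative categories -- a real semantic grammar would
# need vastly richer category structure.
EATERS = {"cat", "dog", "person"}
NOUNS = EATERS | {"electron", "moon", "mat"}

def grammatical(subj, verb, obj):
    """Plain grammar: any noun-verb-noun sequence passes."""
    return subj in NOUNS and verb == "ate" and obj in NOUNS

def meaningful(subj, verb, obj):
    """Semantic grammar: the verb 'ate' also constrains its subject
    to be something that actually eats."""
    return grammatical(subj, verb, obj) and subj in EATERS

print(grammatical("electron", "ate", "moon"))  # True: noun-verb-noun
print(meaningful("electron", "ate", "moon"))   # False: electrons don't eat
print(meaningful("cat", "ate", "mat"))         # True
```

The conjecture in the passage is that an LLM's weights implicitly encode an enormous version of these category constraints, learned from examples rather than written out by hand.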

[00:39:29]

And once we understand that more clearly, we'll probably be able to build things like ChatGPT much more simply than this very indirect way of doing it, of having this neural net and bashing it over and over to make it predict words better and so on. There's probably a more direct way to do the same thing, but that's what happened. And this moment when something reaches human-level performance is very hard to predict. It happened for things like visual object recognition around the 2011 to 2012 time frame. It's hard to know when these things are going to happen for different kinds of human activities. But the thing to realize is there are human-like activities, and then there are things that we have formalized, where we've used math and other kinds of things as a way to work things out systematically. And that's a different direction than the direction that things like neural nets are going in. And that happens to be the direction that I've spent a good part of my life trying to build up. And these things are very complementary, in the sense that the linguistic interface made possible by neural nets feeds into the precise computation that we can do on that side.

[00:40:43]

How does this make you feel about human consciousness and AI potentially being sentient or having any agency?

[00:40:52]

It's always a funny thing, because we have an internal view of the fact that there's something going on inside for us. We experience the world and so on. Even when we're looking at other people, it's just a guess. I know what's going on in my mind; it's just some guess what's going on in your mind, so to speak. And the big discovery of our species is language, this way of packaging up the thoughts that are happening in my mind and being able to transmit them to you and having you unpack them and make similar thoughts, perhaps, in your mind, so to speak. So this question of where you can imagine there's a mind operating, it's not obvious even between different people. We always make that assumption. When it comes to other animals, it's like, well, we're not quite sure, but maybe we can tell that a cat had some emotional reaction which reminded us of some human emotion, and so on. When it comes to our AIs, I think that increasingly people will have the view that the AIs are a bit like them. So when you say, well, is there a there there?

[00:41:58]

Is there a thing inside? It's like, okay, is there a thing inside another person? You say, well, we can tell the other person is thinking and doing all this stuff. But if we were to look inside the brain of that other person, all we'd find is a bunch of electrical signals going around, and those add up to something where we have the assumption that there's a conscious mind there, so to speak. So I think we have always felt that our thinking and minds are very far away from other things that are happening in the world. I think the thing that we learn from the advance of AI is, well, actually, there's not as much distance between the amazing stuff of our minds and things that are just able to be constructed computationally. One of the things to realize is this whole question of what thinks, of where computational stuff is going on. You might say, well, humans do that, maybe our computers do that. Well, actually, nature does that, too. People have this saying, "the weather has a mind of its own." Well, what does that mean? Typically, operationally, it means it seems like the weather is acting with free will.

[00:43:09]

We can't predict what it's going to do. But if we ask, well, what's going on in the weather? Well, it's a bunch of fluid dynamics in the atmosphere and this and that and the other. And we say, well, how do we compare that with the electrical processes that are going on in our brains? They're both computations that operate according to certain rules. The ones in our brains we're familiar with; the ones in the weather we're not familiar with. But in some sense, in both of these cases, there's a computation going on. And one of the things that was a big piece of a bunch of science I've done is this thing called the principle of computational equivalence, which is the discovery, the idea, that if you look at different kinds of systems operating according to different rules, whether it's a brain or the weather, there's a commonality: the same level of computation is achieved by those different kinds of systems. That's not obvious. You might say, well, I've got a system that's just made from physics, as opposed to a system that's the result of lots of biological evolution, or I've got a system that just operates according to these very simple rules that I can write down.

[00:44:15]

You might have thought that the level of computation achieved in those different cases would be very different. The big surprise is that it isn't. It's the same. And that has all kinds of consequences. If you say, okay, I've got this system in nature, let me predict what's going to happen in it. Well, essentially what you're doing by saying "I'm going to predict what's going to happen" is setting yourself up as being smarter than the system in nature. It would take the system all these computational steps to figure out what it does, but you are going to just jump ahead and say, this is what's going to happen in the end. Well, the fact that there's this principle of computational equivalence implies this thing I call computational irreducibility, which is the realization that there are many systems where, to work out what will happen in the system, you have to do an irreducible amount of computational work. That's a surprise, because we have been used to the idea that science lets us jump ahead and just say, oh, this is what the answer is going to be. And this is showing us, from within science, that there's a fundamental limitation where we can't do that.
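
Wolfram's standard illustration of computational irreducibility is the Rule 30 cellular automaton: the rule is trivial to write down, but as far as anyone knows, the only way to find the state after n steps is to actually run all n steps. Here is a minimal sketch (row width and step count chosen arbitrarily for illustration).

```python
# Rule 30: each cell's next value depends only on itself and its two
# neighbors, via this fixed lookup table (30 in binary is 00011110).
RULE30 = {(1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
          (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Apply Rule 30 once, with wraparound at the edges."""
    n = len(cells)
    return [RULE30[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

cells = [0] * 31
cells[15] = 1            # start from a single black cell in the middle
for _ in range(15):      # no known shortcut: simulate every step
    cells = step(cells)
print(sum(cells))        # how many cells are "on" after 15 steps
```

Despite the eight-entry rule table, the pattern grows in a way that looks random, and no formula is known that jumps straight to step n; that gap between simple rules and unpredictable behavior is the irreducibility being described.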

[00:45:18]

That's important when it comes to thinking about things like AI, when you say things like, well, let's make sure that AIs never do the wrong thing. Well, the problem with that is this phenomenon of computational irreducibility. The AI is doing what the AI does. It's doing all these computations and so on. We can't know in advance. We can't just jump ahead and say, oh, we know what it's going to do. We are stuck having to follow through these steps. We can try to make an AI where we can always know what it's going to do, but it turns out that AI will be too dumb to be a serious AI. And in fact, we see that happening in recent times, where people say, let's make sure they don't do the wrong thing. When you put on enough constraints, it can't really do the things that a computational system should be able to do, and it doesn't really achieve the level of capability that you might call real AI, so to speak.

[00:46:14]

We'll be right back after a quick break from our sponsors. Young and profiters, I've been a full-time entrepreneur for about four years now, and I finally cracked the code on hiring. I look for character, attitude, and reliability. But it takes so much time to make sure a candidate has these qualities on top of the core skills in the job description. And that's why I leave it to Indeed to do all the heavy lifting for me. Indeed is the most powerful hiring platform out there, and I can attract, interview, and hire all in one place. With YAP Media growing so fast, I've got so much on my plate, and I'm so grateful that I don't have to go back to the days when I was spending hours on all these other inefficient job sites, because now I can just use Indeed. They've got everything I need. According to US Indeed data, the moment Indeed sponsors a job, over 80% of employers get candidates whose resumes are a perfect match for the position. One of my favorite things about Indeed is that you only have to pay for applications that meet your requirements. No other job site will give you more mileage out of your money.

[00:47:24]

According to TalentNest (2019), Indeed delivers four times more hires than all other job sites combined. Join the more than 3 million businesses worldwide who count on Indeed to hire their next superstar. Start hiring now with a $75 sponsored job credit to upgrade your job post at indeed.com/profiting. Offer is good for a limited time. I'm speaking to all you small and medium-sized business owners out there who listen to the show. This is basically free money. You can get a $75 sponsored job credit to upgrade your job post at indeed.com/profiting. Claim your $75 sponsored job credit now at indeed.com/profiting. Again, that's indeed.com/profiting, and support the show by saying you heard about Indeed on this podcast. indeed.com/profiting. Terms and conditions apply. Need to hire? You need Indeed. YAP fam, I did a big thing recently. I rolled out benefits to my US employees. They now get health care and 401(k)s. And maybe this doesn't sound like a big deal to you, but it was surely a big deal to me, because benefits were like the boogeyman to me. I thought for sure we couldn't afford it. I thought that it was going to be so complicated, so hard to set up, lots of risk involved.

[00:48:41]

And in fact, so many of my star employees have left in the past, citing benefits as the only reason why. Here I was thinking that we couldn't afford benefits when it's literally not that expensive at all, and you actually split the cost between the employee and the employer. I had no idea. I found out on JustWorks. JustWorks has been a total lifesaver for me. We were using two other platforms for payroll, one for domestic in the US, one for international. We had our HR guidelines and employee handbook on another site, and everything was just everywhere. Now, everything's consolidated with JustWorks, a tried and tested employee management platform. You get automated payments, tax calculations, and withholdings, with expert support anytime you need it. And on top of that, there are no hidden fees. You can leave all the boring stuff to JustWorks and just get to business. And with automatic time tracking, it has made managing my international hires a little bit more soothing for my soul, knowing that they're actually working and tracking their time. I mean, it's really hard to manage remote employees. It's easy to get started right away.

[00:49:55]

All you need is 30 minutes. You don't even have to be in front of your computer. You can just get started right on your phone. Take advantage of this limited-time offer. Start your free month now at justworks.com/profiting. Let JustWorks run your payroll so you don't have to. Start your free month now at justworks.com/profiting. Next, I want to talk about how the world is going to change now that AI is here and being adopted by more people. It's becoming more commonplace. How is it going to impact jobs? And also, can you touch on the risks of AI? What are the biggest fears that people have around AI?

[00:50:35]

More and more systems in the world will get automated. This has been a story of technology throughout history. AI is another step in the automation of things. When things get automated, things humans used to have to do with their own hands, they don't have to do anymore. The typical pattern of economies, like in the US, is that 150 years ago in the US, most people were doing agriculture. You had to do that with your own hands. Then machinery got built that let that be automated. And people said, well, then nobody's going to have anything to do. Well, it turned out they did have things to do, because that very automation enabled a lot of new types of things that people could do. For example, the podcasting we're doing right now is enabled by the fact that we have video, communication, and so on. There was a time when all of the automation that has now led to the telecommunications infrastructure we have wasn't there. There had to be telephone switchboard operators plugging wires in and so on. And people were saying, oh, gosh, if we automate telephone switching, then all those jobs are going to go away.

[00:51:45]

But actually what happened was, yes, those jobs went away, but that automation opened up many other categories of jobs. So the typical thing that you see, at least historically, is a big chunk of jobs that are something people have to do for themselves. That gets automated, and that enables many different new things that you end up being able to do. And I think the way to think about this is really the following: once you've defined an objective, you can build automation that achieves that objective. Maybe it takes 100 years to get to that automation, but you can, in principle, do it. But then you have the question, well, what are you going to do next? What are the new things you could do? Well, there are an infinite number of new things you could do. For the AI left to its own devices, there's an infinite set of things that it could be doing. The question is, which things do we choose to do? And that's something that is really a matter for us humans, because you could compute anything you want to compute.

[00:52:47]

And in fact, some part of my life has been exploring the science of the computational universe, what's out there that you can compute. And the thing that's a little bit sobering is to realize that, of all the things that are out there to compute, the set that we humans have cared about so far in the development of our civilization is a tiny, tiny, tiny slice. And this question of where we go from here is, well, what other slices, now that they're possible, do we want to pursue? And I think the typical thing you see is that a lot of new jobs get created around the things which are still a matter of human choice. Eventually, it gets standardized, and then it gets automated, and then we go on to another stage. As for the spectrum of what jobs will be automated, one of the things that happened several years ago now was people saying, oh, machine learning, the underlying area that leads to neural nets and AI and things like this, machine learning is going to put all these people out of jobs. The thing that was confusing to me was that I knew perfectly well that the first category of jobs that would be impacted was machine learning engineers.

[00:54:00]

Because machine learning can be used to automate machine learning, so to speak. Once a thing becomes routine, it can be automated. For example, a lot of people learned to do low-level programming. I've spent a large part of my life trying to automate low-level programming. In other words, with the computational language we built, people say, oh my gosh, I can get the computer to do this thing for me by spending an hour of my time, where if I were writing standard programming-language code, I'd spend a month trying to set my computer up to do it. The thing we've already achieved is to be able to automate away those things. What you realize when you automate away something like that is, people say, oh my gosh, things have become so difficult now. Because if you're doing low-level programming, some part of what you're doing is just routine work. You don't have to think that much. It's just like, oh, I turn the crank, I show up to work the next day, I get this piece of code written. Well, if you've automated away all of that, what you realize is most of what you have to do is figure out, so what do I want to do next?

[00:55:09]

And that's where being able to do real computational thinking comes in, because that's where it's like, how do you think about what you're trying to do in computational terms so you can define what you should do next? And that low-level, turn-the-crank programming is an example. I mean, that should be extinct already, because I've spent the last 40 years trying to automate that stuff. And in some segments of the world, it is extinct, because we did automate it. But there's an awful lot of people who said, oh, we can get a good job by learning C, C++, Python, or Java programming, that's a thing we can spend our human time doing. It's not necessary. And that's becoming more apparent at this point. The thing that is still very much the human thing is, so what do we want to do next, so to speak.

[00:56:01]

It's a good story because you're not saying, Hey, we're doomed. You're saying AI is going to actually create more jobs. It's going to automate the things that are repetitive and the things that we still need to make decisions on or decide the direction that we want to go in. That's what humans are going to be doing, shaping all of it. But do you feel that AI is going to supersede us in intelligence and have this apex intelligence one day where we are not in control of the next thing?

[00:56:32]

I mentioned the fact that lots of things in nature compute. Our brains do computation, the weather does computation, and the weather is doing a lot more computation than our brains are. So if you ask, what's the apex intelligence in the world? Already nature has vastly more computation going on than happens to occur in our brains. The computation going on in our brains is computation where we say, oh, we understand what that is and we really care about it. Whereas the computation that goes on in the babbling brook or something, we say, well, that's just some flow of water and things. We don't really care about that. So we already lost the competition of, are we the most computationally sophisticated things in the world? We're not. Many, many things are equivalent in their computational abilities. So then the question is, well, what will it feel like when AI gets to the point where it's routinely doing all sorts of computation beyond what we manage to do? I think it feels pretty much like what it feels like to live in the natural world. The natural world does all kinds of things. Occasionally, a tornado will happen.

[00:57:38]

Occasionally, this or that will happen. We can make some prediction about what's going to happen, but we don't know for sure what's going to happen or when it's going to happen, and so on. And that's what it will feel like to be in a world where most things are run with AI. And we'll be able to do some science of the AI, just like we can do science of the natural world, and say this is what we think is going to happen. But there's going to be this infrastructure, this culture, of AI society. There already is to some extent, but that will grow, with more and more things happening automatically as a computational process. But in a sense, that's no different from what happens in the natural world. The natural world is just automatic. It's not something where we can divert what it does; it's just doing what it does. One of the things I've long been interested in is how the universe is actually put together. If we drill down and look at the smallest scales of physics and so on, what's down there? And what we've discovered in the last few years is that it looks like we really can understand the whole of what happens in the universe as a computational process underneath.

[00:58:43]

People have been arguing for a couple of thousand years about whether the world is made of continuous things or of little discrete things like atoms. A bit more than 100 years ago, it got nailed down: matter is made of discrete stuff. There are individual atoms and molecules and so on. Then light is made of discrete stuff, photons and so on. Space, people had still assumed, was somehow continuous, not made of discrete stuff. And the thing we nailed down, I think in 2020, was the idea that space really is made of discrete things. There are discrete elements, discrete atoms of space. And we can really think of the universe as a giant network of atoms of space. And hopefully in the next few years, maybe if we're lucky, we'll get direct experimental evidence that space is discrete in that way. But one of the things that makes one realize is that it's computation all the way down. At the lowest level, the universe consists of this discrete network that keeps on getting updated, and it's following these simple rules and so on. It's all rather lovely. But there's computation everywhere: in nature, in our AIs, in our brains.

[00:59:58]

The computation that we care the most about is the part that we, with our brains and our civilization and our culture and so on, have so far explored. That's the part we care the most about. Progressively, we should be able to explore more. And as the computational X fields come into existence, and we get to use our computers and computational language and so on, we get to colonize more of the computational universe, and we get to bring more things into, oh yes, that's the thing we humans talk about. I mean, if you go back even just 100 years, nobody was talking about all these things that we now take for granted about computers and how they work and how you can compute things and so on. That was just not something within our human sphere. Now, the question is, as we go forward with automation, with the formalization of computational language, things like that, what more will be within our human sphere? It's hard to predict. It is, to some extent, a choice. There are things where we could go in this direction or that direction. These are things we will eventually humanize.

[01:01:09]

It's also, if you look at the course of human history and ask what people thought was worth doing: a thousand years ago, a lot of things that people think are worth doing today, people absolutely didn't even think about. A good example, perhaps, is walking on a treadmill. That would have seemed completely stupid to somebody from even a few hundred years ago. It's like, why would you do that? Well, I want to live a long life. Why do you even want to live a long life? In the past, that might not even have been thought of as an objective. And then there's a whole chain of why we are doing this. And that chain is a thing of our time, and it will change over time. And I think what is possible in the world will change. What we get to explore out of the computational universe of all possibilities will change. And no doubt there will be people who ask the question, what will be the role of biological intelligence versus all the other things in the world? And as I say, we're already somewhat in that situation.

[01:02:16]

There are things about the natural world that just happen, and some of those things are much more powerful than us. We don't get to stop the earthquakes and so on. So we are already in that situation. It's just that with the things we are doing with AI and so on, we happen to be building a layer of that infrastructure that is of our own construction, rather than something which has been there all the time in nature and that we've gotten used to.

[01:02:46]

It's so mind-blowing, but I love the fact that you seem to have a positive attitude towards it. We've had other people on the show who are worried about AI, but you don't have that attitude. It seems like you're more accepting of the fact that it's coming whether we like it or not, right? And to your point, we're already living in nature, which is way more intelligent than us anyway. And so maybe this is just an additional layer.

[01:03:11]

Right. I'm an optimistic person. That's what happens. I've spent my life doing large projects and building big things. You don't do that unless you have a certain degree of optimism. But I think also what will always be the case is that as things change, things that people have been doing will stop making sense. You see this in the intellectual sphere with paradigms in science. I built some new things in science where people at first say, oh my gosh, this is terrible. I've been doing this other thing for 50 years. I don't want to learn this new stuff. This is a terrible thing. And I think you see that there's a lot in the world where people are like, it's good the way it is, let's not change it. Well, what's happening is that in the sphere of ideas and in the sphere of technology, things change. Is it going to wipe our species out? I don't think so, but that would be a thing we would probably agree is definitively bad. Now, if we say, well, I spent a lot of time learning how to do, I don't know, something.

[01:04:21]

I became a great programmer in some low-level programming language, and by golly, that's not a relevant skill anymore. Yes, that can happen. For example, in my life, I got interested in physics when I was pretty young. And when you do physics, you end up having to do lots of mathematical calculations. I never liked doing those things. But there were other people who were like, that's what they're into, that's what they like doing. Since I never liked doing those things, I taught computers to do them for me. And me plus a computer did pretty well at doing those things. But in a sense, one automated that away. To me, that was a big positive, because it let me do a lot more. It let me take what I was thinking about and get this superpower to go places with it. To other people, it's like, oh my gosh, the thing that we were really good at, doing all these mathematical calculations by hand and so on, that just got automated away. The thing that we like to do isn't a thing anymore. So that's the dynamic that I think continues. But having said that, whenever there's powerful technology, you can do ridiculous things with it, and plenty of ridiculous things get made possible.

[01:05:31]

And the question of exactly what terrible scam will be made possible by what piece of AI, that's always a bit hard to predict. It's a computational irreducibility story, this thing of: what will people figure out how to do? What will the computers let them do? And so on. But in general terms, it is my nature to be optimistic, but I think also there is an optimistic path through the way the world is changing, so to speak.

[01:06:00]

Well, it's really exciting. I can't wait to have you back on, maybe in a year, to hear all the other exciting updates that have happened with AI. I end my show asking two questions. Now, you don't have to use the topic of today's episode. You can just use your life experience to answer these questions. So one is, what is one actionable thing our young and profiters can do today to become more profitable tomorrow? And this is not just about money, but profiting in life.

[01:06:24]

Understand computational thinking. This is the coming paradigm of the 21st century. And if you understand that well, it gives you a huge advantage. Unfortunately, it's not like you can go sign up for a computer science class and you'll learn that. The educational resources for learning about computational thinking aren't really fully there yet. And it's something which, frustratingly, after many years, I've decided I have to really build much more of these things because other people aren't doing it. And it'll be another decade before it gets done otherwise. But yes, learn computational thinking, learn the tools that are around that. That's a quick way to jump ahead in whatever you're doing, because as you make it computational, you get to think more clearly about it, and you get the computer to help you jump forward.

[01:07:16]

And where can people get resources from you to learn more about that? Where do you recommend?

[01:07:21]

Our computational language, Wolfram Language, is the main example of where you get to do computational thinking. There's a book I wrote a few years ago, An Elementary Introduction to the Wolfram Language, which is pretty accessible to people. But hopefully, well, certainly within a year, there should exist a thing that I'm working on right now, which is directly an introduction to computational thinking. You'll also find a bunch of resources around Wolfram Language that explain more how one can think about things computationally.

[01:07:53]

Whatever links we find, I'll stick them in the show notes. And next time, if you have something and you're releasing it, make sure that you contact us so you can come back on Young and Profiting podcast. Stephen, thank you so much for your time. We really enjoyed having you on Young and Profiting podcast. Thanks. Oh, boy, yeah. Bam. My brain is still buzzing from that conversation. I learned so much today from Stephen Wolfram, and I hope that you did, too. And although AI technology like ChatGPT seemed to just pop up out of nowhere in 2022, it's actually been in the works for a long, long time. In fact, a lot of the thinking behind large language models has been in place for decades. We just didn't have the tools or the computing power to bring them to fruition. And one of the exciting things that we've learned about AI advances is that there's not as big a gap between what our organic brains can do and what our silicon technology can now accomplish. As Stephen put it, whether a system develops from biological evolution or computer engineering, we're talking about roughly the same level of computational complexity.

[01:09:02]

Now, this is really cool, but it's also pretty scary. We're just creating this really smart thing that's going to get smarter. I asked him the question, do you think AI is going to have apex intelligence and take over the world? I record these outros a couple of weeks after I do the interview, and I've been telling all my friends this analogy. Every time I talk to someone, I'm like, Oh, you want to hear something cool? I keep thinking about this: AI, if it does become this apex intelligence that we have no more control over, he said it might just be like nature. Nature has a mind of its own. We can try to predict it. We can try to analyze nature. We can try to figure out what it does. Sometimes it's terrible and inconvenient and disastrous and horrible, and sometimes it's beautiful. It's so interesting to think about the fact that AI might become this thing that we just exist with, that we created, that we have no control over. It might not necessarily be bad. It might not necessarily be good. It just could be this thing that we exist with.

[01:10:04]

I thought that was pretty calming because we do already exist in a world that we have no control over. You never really think about it that way, but it's true. Speaking of AI getting smarter, let's talk about AI and work. Is AI going to end up eating our workforce's lunch in the future? Stephen is more optimistic than most. He thinks AI and automation might just make our existing jobs more productive and likely even create new jobs in the future, jobs where humans are directing and guiding AI in new, innovative endeavors. I really hope that's the case because us humans, we need our purpose. Thanks so much for listening to this episode of Young and Profiting podcast. We are still very much human-powered here at YAP and would love your help. So if you listened, learned, and profited from this conversation with the super intelligent Stephen Wolfram, please share this episode with your friends and family. And if you did enjoy this show and you learned something, then please take two minutes to drop us a five-star review on Apple Podcasts. I love to read your reviews. I go check them out every single day. And if you prefer to watch your podcasts as videos, you can find all of our episodes on YouTube.

[01:11:15]

You can also find me on Instagram at YAP with Hala or LinkedIn by searching my name. It's Hala Taha. Before we go, I did want to give a huge shout-out to my YAP Media production team. Thank you so much for all that you do. You guys are the best. This is your host, Hala Taha, aka the podcast princess, signing off.