[00:00:00]

The following is a conversation with Greg Brockman. He's the co-founder and CTO of OpenAI, a world-class research organization developing ideas in AI with a goal of eventually creating a safe and friendly artificial general intelligence, one that benefits and empowers humanity. OpenAI is not only a source of publications, algorithms, tools, and datasets.

[00:00:24]

Their mission is a catalyst for an important public discourse about our future with both narrow and general intelligence systems. This conversation is part of the Artificial Intelligence Podcast at MIT and beyond. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Greg Brockman. So in high school and right after, you wrote a draft of a chemistry textbook that covers everything from the basic structure of the atom to quantum mechanics.

[00:01:18]

So it's clear you have an intuition and a passion for both the physical world, with chemistry and now robotics, and the digital world, with deep learning, reinforcement learning, and so on.

[00:01:32]

Do you see the physical world and the digital world as different? And what do you think is the gap?

[00:01:37]

A lot of it actually boils down to iteration speed, right? I think that a lot of what really motivates me is building things. You know, think about mathematics, for example, where you think really hard about a problem, you understand it, you write it down in this very obscure form that we call a proof. But then it's in humanity's library, right? It's there forever. There's some truth that we've discovered. You know, maybe only five people in your field will ever read it, but somehow you kind of move humanity forward.

[00:02:02]

And so I actually used to really think that I was going to be a mathematician. And then I actually started writing this chemistry textbook. One of my friends told me, you'll never publish it because you don't have a Ph.D. So instead I decided to build a website and try to promote my ideas that way. And then I discovered programming. And in programming, you think hard about a problem, you understand it, you write it down in a very obscure form that we call a program.

[00:02:27]

But then once again, it's in humanity's library, right? And anyone can get the benefit from it, and the scalability is massive. And so I think that the thing that really appeals to me about the digital world is that you can have this insane leverage, right? A single individual with an idea is able to affect the entire planet. And that's something I think is really hard to do if you're moving around physical atoms.

[00:02:47]

But you said mathematics. So if you look at the wet thing over here, our mind, do you ultimately see it as just math, as just information processing? Or is there some other magic, given you've seen the biology and chemistry and so on?

[00:03:04]

Yeah, I think it's really interesting to think about humans as just information processing systems. And that, it seems like, is actually a pretty good way of describing a lot of how the world works and a lot of what we're capable of. If you just look at technological innovations over time, in some ways the most transformative innovation that we've had has been the computer, and in some ways, the Internet. What has the Internet done?

[00:03:27]

Right. The Internet is not about these physical cables. It's about the fact that I am suddenly able to instantly communicate with any other human on the planet. I'm able to retrieve any piece of knowledge that, in some ways, the human race has ever had. Those are insane transformations.

[00:03:43]

Do you see our society as a whole, the collective, as another extension of the intelligence of the human being? So if you look at the human being as an information processing system, you mentioned the Internet, the networking. Do you see us all together as a civilization, as a kind of intelligent system?

[00:03:58]

Yeah, I think this is actually a really interesting perspective to take and to think about. You sort of have this collective intelligence of all of society. The economy itself is this superhuman machine that is optimizing something, right? And, in some ways, a company has a will of its own, right?

[00:04:15]

You have all these individuals who are all pursuing their own individual goals and thinking really hard and thinking about the right thing to do. But somehow the company does something that is this emergent thing, and that is a really useful abstraction. And so I think that, in some ways, we think of ourselves as the most intelligent things on the planet and the most powerful things on the planet. But there are things that are bigger than us, these systems that we all contribute to.

[00:04:39]

And so I think it's actually interesting to think about. If you've read Isaac Asimov's Foundation, there's this concept of psychohistory in there, which is effectively this: if you have trillions or quadrillions of beings, then maybe you could actually predict what that huge macro-being will do, almost independent of what the individuals want.

[00:04:59]

And I actually have a second angle on this that I think is interesting, which is thinking about technological determinism.

[00:05:05]

One thing that I actually think a lot about with OpenAI is that we're kind of coming up on this insanely transformational technology of general intelligence.

[00:05:14]

Right. That will happen at some point.

[00:05:16]

And there's a question of how can you take actions that will actually steer it to go better rather than worse? And I think one question you need to ask is, as a scientist, as an inventor, as a creator, what impact can you have in general? You look at things like the telephone, invented by two people on the same day. What does that mean? What does that mean about the shape of innovation? And I think that what's going on is everyone's building on the shoulders of the same giants.

[00:05:39]

And so you can't really hope to create something no one else ever would. You know, if Einstein wasn't born, someone else would have come up with relativity. He changed the timeline a bit, right? Maybe it would have taken another 20 years, but it's not that humanity would never have discovered these fundamental truths.

[00:05:54]

So there's some kind of invisible momentum that some people, like Einstein or OpenAI, are plugging into, that anybody else can also plug into. And ultimately, that wave takes us in a certain direction. That's right. That's right, and this kind of seems to play out in a bunch of different ways: that there's some exponential that is being ridden, and the exponential itself, which one it is, changes. Think about Moore's Law. An entire industry set its clock to it for 50 years.

[00:06:22]

Like, how can that be right? How is that possible? And yet somehow it happened. And so I think you can't hope to ever invent something that no one else will. Maybe you can change the timeline a little bit, but if you really want to make a difference, I think the thing you really have to do, the only real degree of freedom you have, is to set the initial conditions under which a technology is born.

[00:06:42]

And so you think about the Internet, right? There were lots of other competitors trying to build similar things, and the Internet won, and the initial conditions were that it was created by this group that really valued anyone being able to plug in, this very academic mindset of being open and connected. And I think that the Internet for the next 40 years really played out that way. You know, maybe today things are starting to shift in a different direction.

[00:07:07]

But I think that those initial conditions were really important in determining the next 40 years' worth of progress.

[00:07:12]

That's really beautifully put. Another example of that I think about: you know, I recently looked at it.

[00:07:18]

I looked at Wikipedia, the formation of Wikipedia, and I wonder what the Internet would be like if Wikipedia had ads. You know, there's an interesting argument about why they chose not to put advertising on Wikipedia. I think Wikipedia is one of the greatest resources we have on the Internet, and it is extremely surprising how well it works and how well it was able to aggregate all this good information. And essentially the creator of Wikipedia, I don't know, there are probably some debates there, but he set the initial conditions and it carried itself forward.

[00:07:50]

That's really interesting. So the way you're thinking about AGI, or artificial general intelligence, is that you're focused on setting the initial conditions for the progress.

[00:07:58]

That's right. That's powerful. OK, so to look into the future: if you create an AGI system, like one that can pass the Turing test in natural language, what do you think would be the interactions you would have with it?

[00:08:13]

What do you think are the questions you would ask? Like, what would be the first question you would ask it, her, him? That's right.

[00:08:19]

I think that at that point, if you've really built a powerful system that is capable of shaping the future of humanity, the first question that you really should ask is how do we make sure that this plays out well? And so that's actually the first question that I would ask a powerful AGI system. So you wouldn't ask your colleague, you wouldn't ask, like, Ilya, you would ask the system?

[00:08:39]

Oh, we've already had the conversation with Ilya, right, and with everyone here. And so you want as many perspectives and pieces of wisdom as you can for answering this question. So I don't think you necessarily defer to whatever your powerful system tells you, but you use it as one input to try to figure out what to do. But I guess fundamentally what it really comes down to is, if you've built something really powerful, think about, for example, shortly after the creation of nuclear weapons.

[00:09:05]

Right. The most important question in the world was: what's the world order going to be like? How do we set ourselves up in a place where we're going to be able to survive as a species? With AGI,

[00:09:16]

I think the question is slightly different, right? There is a question of how do we make sure that we don't get the negative effects. But there's also the positive side, right? You imagine, you know, what will it be like, what will it be capable of? And I think one of the core reasons that an AGI can be powerful and transformative is actually via technological development. If you have something that's as capable as a human and that is much more scalable, you absolutely want that thing to go read the whole scientific literature and think about how to create cures for all the diseases.

[00:09:47]

Right. You want it to think about how to go and build technologies to help us create material abundance and to figure out societal problems that we have trouble with, like how are we supposed to clean up the environment?

[00:09:57]

And, you know, maybe you want this to go and invent a bunch of little robots that'll go out and be biodegradable and turn ocean debris into harmless molecules.

[00:10:06]

And I think that positive side is something people miss sometimes when thinking about what an AGI will be like.

[00:10:15]

And so I think that if you have a system that's capable of all of that, you absolutely want its advice about how do I make sure that we're using your capabilities in a positive way for humanity.

[00:10:26]

So what do you think about the psychology that looks at all the different possible trajectories of an AGI system, many of which, perhaps the majority of which, are positive, and nevertheless focuses on the negative trajectories? You get to interact with folks who think about this, maybe within yourself as well. You look at Sam Harris and so on. It seems to be, sorry to put it this way, almost more fun to think about the negative possibilities. Whatever it is, that's deep in our psychology.

[00:10:56]

What do you think about that, and how do we deal with it? Because we want AI to help us.

[00:11:01]

So I think there are kind of two problems entailed in that question. The first is more the question of how can you even picture what a world with a new technology will be like? Now, imagine we're in 1950 and I'm trying to describe Uber to someone.

[00:11:20]

Apps and the Internet. Yeah, I mean, that's going to be extremely complicated, but it's imaginable. It's imaginable, right?

[00:11:28]

But now imagine being in 1950 and predicting Uber, right? You need to describe the Internet. You need to describe GPS. You need to describe the fact that everyone's going to have this phone in their pocket.

[00:11:41]

And so I think the first truth is that it is hard to picture how a transformative technology will play out in the world. We've seen that before with technologies that are far less transformative than AGI will be. And so one piece is that it's just hard to even imagine and to really put yourself in a world where you can predict what that positive vision would be like.

[00:12:04]

And, you know, I think the second thing is that it is always easier to support the negative side than the positive side. It's always easier to destroy than create. And not just in a physical sense, but in an intellectual sense, right? Because, you know, I think that with creating something, you need to get a bunch of things right, and to destroy, you just need to get one thing wrong.

[00:12:27]

Yeah. And so I think what that means is that a lot of people's thinking dead-ends as soon as they see the negative story.

[00:12:34]

But that being said, I actually have some hope, right? I think that the positive vision is something that we can talk about. And just simply stating this fact of, yeah, there's positives, there's negatives, everyone likes to dwell on the negative, people respond well to that message and say, huh, you're right, there's a part of this that we're not talking about, not thinking about.

[00:12:56]

And that's actually something that I think has really been a key part of how we think about AGI at OpenAI, right? You can kind of look at it as, OK, OpenAI talks about the fact that there are risks, and yet they're trying to build the system. Like, how do you square those two facts?

[00:13:13]

So do you share the intuition that some people have?

[00:13:16]

I mean, from Sam Harris to even Elon Musk himself, that it's tricky as you develop AGI to keep it from slipping into the existential threats, into the negative?

[00:13:29]

What's your intuition about how hard it is to keep AI development on the positive track?

[00:13:36]

And what's your intuition there?

[00:13:38]

To answer that question, you can really look at how we structure OpenAI. We have three main arms: we have capabilities, which is actually doing the technical work and pushing forward what these systems can do; there's safety, which is working on technical mechanisms to ensure that the systems we build are aligned with human values; and then there's policy, which is making sure that we have governance mechanisms, answering that question of, well, whose values? And I think that the technical safety one is the one that people talk about the most.

[00:14:07]

Right. Think about all of the dystopic AI movies; a lot of that is about not having good technical safety in place. And what we've been finding is that a lot of people look at the technical safety problem and think it's just intractable. Right? This question of what do humans want? How am I supposed to write that down? Can I even write down what I want?

[00:14:28]

No way. And then they stop there. But the thing is, we've already built systems that are able to learn things that humans can't specify. Even the rules for how to recognize if there's a cat or a dog in an image: it turns out it's intractable to write that down, and yet we're able to learn it. And what we're seeing with systems we build at OpenAI, and they're still in an early proof-of-concept stage, is that you are able to learn human preferences.

[00:14:53]

You're able to learn what humans want from data. And so that's kind of the core focus for our technical safety team. And actually, we've had some pretty encouraging updates in terms of what we've been able to make work.
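As a rough illustration of what "learning what humans want from data" can mean in practice, here is a minimal sketch of fitting a reward function to pairwise human preferences. The linear reward model, the toy features, and the labeler's hidden preference are assumptions made for the example; this is not OpenAI's actual system.

# A minimal sketch (not OpenAI's actual system) of learning a reward function
# from pairwise human preferences, i.e. "learning what humans want from data".
import numpy as np

rng = np.random.default_rng(0)

def reward(w, features):
    # Scalar reward: a simple linear model, purely for illustration.
    return features @ w

def train_reward_model(pairs, dim, lr=0.1, steps=2000):
    # pairs: list of (preferred_features, rejected_features) from a human labeler.
    # Fits w so that sigmoid(reward(preferred) - reward(rejected)) is high
    # (a Bradley-Terry style preference model).
    w = np.zeros(dim)
    for _ in range(steps):
        grad = np.zeros(dim)
        for a, b in pairs:
            p = 1.0 / (1.0 + np.exp(-(reward(w, a) - reward(w, b))))
            grad += (1.0 - p) * (a - b)  # gradient of the log-likelihood
        w += lr * grad / len(pairs)
    return w

# Toy data: the "human" secretly prefers options with a larger first feature.
options = rng.normal(size=(40, 3))
pairs = []
for _ in range(200):
    i, j = rng.choice(len(options), size=2, replace=False)
    a, b = options[i], options[j]
    pairs.append((a, b) if a[0] > b[0] else (b, a))

w = train_reward_model(pairs, dim=3)
print("learned reward weights:", w)  # the first weight should dominate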

[00:15:05]

So you have an intuition and a hope that from data, you know, looking at the value alignment problem from data, we can build systems that align with the collective better angels of our nature, so align with the ethics and the morals of human beings. To say this in a different way,

[00:15:23]

I mean, think about how do we align humans, right? A human baby can grow up to be an evil person or a great person, and a lot of that is from learning from data. You have some feedback as a child is growing up; they get to see positive examples. And so the only example we have of a general intelligence is one that is able to learn from data to align with human values, to learn values. I think we shouldn't be surprised if the same sorts of techniques end up being how we solve value alignment for AGIs.

[00:15:58]

So let's go even higher. I don't know if you've read the book Sapiens, but there's an idea that, you know, as a collective, we human beings have kind of developed together ideas that we hold. There's no, in that context, objective truth. We just kind of all agree to certain ideas and hold them as a collective.

[00:16:18]

Do you have a sense that there is, in the world, good and evil? Do you have a sense that, to a first approximation, there are some things that are good and that you could teach systems to behave, to be good?

[00:16:31]

So I think this actually blends into our third team, right, which is the policy team. And this is the one aspect I think people really talk about way less than they should. Because imagine that we build super powerful systems and we've managed to figure out all the mechanisms for these things to do whatever the operator wants. The most important question becomes: who's the operator, what do they want, and how is that going to affect everyone else? And I think this question of what is good, what are those values?

[00:17:01]

I mean, I think you don't even have to go to those very grand existential places to start to realize how hard this problem is. You just look at different countries and cultures across the world, and there are very different conceptions of how the world works and of the ways that society wants to operate. And so I think the really core question is actually very concrete, and it's not a question that we have ready answers to.

[00:17:29]

Right.

[00:17:30]

It's how do you have a world where all the different countries that we have, the United States, China, Russia, and the hundreds of other countries out there, are able to continue to operate in the way that they see fit, and where the world that emerges, in which you have these very powerful systems operating alongside humans, ends up being something that empowers humans more, that makes human existence a more meaningful thing, and where people are happier and wealthier and able to live more fulfilling lives.

[00:18:06]

It's not obvious how to design that world once you have that very powerful system.

[00:18:10]

So let's take a little step back. We're having a fascinating conversation, and OpenAI is in many ways a tech leader in the world, and yet we're thinking about these big existential questions, which is fascinating and really important. I think you're a leader in that space, and it's a really important space: just thinking about how AI affects society in a big-picture view.

[00:18:31]

So Oscar Wilde said we're all in the gutter, but some of us are looking at the stars. And I think OpenAI has a charter that looks to the stars, I would say: to create intelligence, to create general intelligence, make it beneficial, safe, and collaborative. Can you tell me

[00:18:49]

how that came about, how a mission like that, and the path to creating a mission like that, came to be when OpenAI was founded?

[00:18:56]

Yeah. So I think that in some ways it really boils down to taking a look at the landscape right now.

[00:19:02]

If you think about the history of AI, basically for the past 60 or 70 years, people have thought about this goal of what could happen if you could automate human intellectual labor.

[00:19:12]

Right. Imagine you could build a computer system that could do that. What becomes possible?

[00:19:17]

We have a lot of sci-fi that tells stories of various dystopias, and increasingly movies like Her that tell maybe more of a slightly utopian vision. You think about the impacts that we've seen from being able to have bicycles for our minds in computers, and I think that the impact of computers and the Internet has just far outstripped what anyone really could have predicted. And so I think it's very clear that if you can build an AGI, it will be the most transformative technology that humans will ever create.

[00:19:51]

And so what it boils down to then is a question of, well, is there a path? Is there hope? Is there a way to build such a system? And I think that for 60 or 70 years, people got excited and ended up not being able to deliver on the hopes that people had pinned on them. And then, after two winters of AI development, people, I think, kind of almost stopped daring to dream.

[00:20:17]

Right. Really talking about AGI or thinking about AGI became almost this taboo in the community.

[00:20:23]

But I actually think that people took the wrong lesson from history. If you look back, 1959 is when the perceptron was released. This is basically one of the earliest neural networks, and it was released to what was perceived as this massive overhype. In The New York Times in 1959, you have this article saying that the perceptron will one day recognize people, call their names, instantly translate speech between languages, and people at the time looked at this and said, your system can't do any of that.

[00:20:53]

And they basically spent 10 years trying to discredit the whole perceptron direction. It succeeded, all the funding dried up, and people kind of went in other directions. In the '80s there was a resurgence. I'd always heard that the resurgence in the '80s was due to the invention of backpropagation and these algorithms that got people excited, but actually the causality was people building larger computers. You can find these articles from the '80s saying that the democratization of computing power suddenly meant you could run these larger neural networks.

[00:21:21]

And then people started to do all these amazing things. The backpropagation algorithm was invented. But the neural nets people were running were these tiny little, like, 20-neuron neural nets. Right. Like, what are you supposed to learn with 20 neurons?
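For a concrete sense of the scale being described, here is a toy sketch of a 20-hidden-unit network trained by backpropagation on XOR. The sizes, learning rate, and task are made up for illustration; real perception tasks of that era were far beyond what a net this small could capture.

# A toy sketch of a 20-hidden-unit network trained by backpropagation on XOR.
# Sizes, learning rate, and task are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

hidden = 20
W1 = rng.normal(scale=0.5, size=(2, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=(hidden, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 2.0
for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: backpropagate the squared-error gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(3).ravel())  # should approach [0, 1, 1, 0]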

[00:21:32]

And so, of course, they weren't able to get great results. And it really wasn't until 2012 that this approach, which is almost the most simple, natural approach that people had come up with in the '50s, in some ways even in the '40s before there were computers, with the McCulloch-Pitts neuron, suddenly became the best way of solving problems. And I think there are three core properties that deep learning has that are very worth paying attention to.

[00:22:01]

The first is generality: with a very small number of deep learning tools, a deep neural net, maybe some RL, you can solve this huge variety of problems: speech recognition, machine translation, game playing, all of these problems with a small set of tools. So there's the generality. There's a second piece, which is the competence: if you want to solve any of those problems, throw out 40 years' worth of normal computer vision research, replace it with a deep neural net,

[00:22:29]

and it's going to work better. And there's a third piece, which is the scalability. The one thing that has been shown time and time again is that if you take a larger neural network and throw more compute and more data at it, it will work better. Those three properties together feel like essential parts of building a general intelligence.

[00:22:48]

Now, that doesn't just mean that if we scale up what we have, we will have an AGI, right?

[00:22:52]

There are clearly missing pieces. There are missing ideas. We need to have answers for reasoning. But I think the core here is that, for the first time, it feels that we have a paradigm that gives us hope that general intelligence can be achievable. And as soon as you believe that, everything else comes into focus. If you imagine that you may be able to, and the timeline, I think, remains uncertain, but certainly within our lifetimes and possibly within a much shorter period of time than people would expect, if you can really build the most transformative technology that will ever exist, you stop thinking about yourself so much.

[00:23:29]

Right. And you start thinking about how do you have a world where this goes well, and you need to think about the practicalities of how do you build an organization and get together a bunch of people and resources and make sure that people feel motivated and ready to do it.

[00:23:45]

But then you start thinking about, well, what if we succeed, and how do we make sure that when we succeed, the world is actually the place that we want ourselves to exist in, almost in the Rawlsian veil sense of the word. And so that's kind of the broader landscape, and OpenAI was really formed in 2015 with that high-level picture: that AGI might be possible sooner than people think, and that we need to try to do our best to make sure it's going to go well.

[00:24:14]

And then we spent the next couple of years really trying to figure out what does that mean, how do we do it? Typically with a company, you start out very small, you and a co-founder, and you build a product, you get some users, you get product-market fit. At some point you raise some money, you hire people, you scale, and then you go down the road.

[00:24:33]

Then the big companies realize you exist and try to kill you. And for OpenAI, it was basically everything in exactly the opposite order.

[00:24:42]

Let me just pause for a second. You said a lot of things. Let me just admire the jarring aspect of what OpenAI stands for, which is daring to dream. I mean, the way you said it is pretty powerful. You caught me off guard, because I think that's very true.

[00:24:57]

The step of just daring to dream about the possibilities of creating intelligence in a positive and a safe way, but just even creating intelligence, is a much needed, refreshing catalyst for the AI community. So that's the starting point. OK, so then the formation of OpenAI.

[00:25:18]

Well, I would just say that when we were starting OpenAI, kind of the first question that we had is: is it too late to start a lab with a bunch of the best people possible?

[00:25:30]

That was an actual question. That was the core question of, you know, this dinner we had in July of 2015, and that was really what we spent the time talking about. Because you think about kind of where AI was: it was in that transition from being an academic pursuit to an industrial pursuit. And so a lot of the best people were in these big research labs, and we wanted to start our own, one that, no matter how much resources we could accumulate, would pale in comparison to the big tech companies.

[00:26:00]

And we knew that. And it was a question of: are we actually going to be able to get this thing off the ground? You need a critical mass. You can't just be you and a co-founder building the product.

[00:26:09]

Right. You really need to have a group of, you know, five to 10 people. And we kind of concluded it wasn't obviously impossible. So it seemed worth trying.

[00:26:19]

Well, you're also a dreamer, so who knows, right? That's right.

[00:26:22]

OK, so speaking of that, competing with the big players, let's talk about some of the tricky things as you think through this process of growing, of seeing how you can develop these systems at a scale that competes.

[00:26:39]

So you recently formed OpenAI LP, a new capped-profit company that now carries the name OpenAI. So OpenAI is

[00:26:48]

now this official company, and the original nonprofit company still exists and carries the OpenAI nonprofit name. So can you explain what this company is, what the purpose of its creation is, and how did you arrive at the decision to create it? Yep.

[00:27:05]

So OpenAI, the whole entity, and OpenAI LP as a vehicle, is trying to accomplish the mission of ensuring that artificial general intelligence benefits everyone. And the main way that we're trying to do that is by actually trying to build general intelligence ourselves and make sure the benefits are distributed to the world. That's the primary way. We're also fine if someone else does this, right? It doesn't have to be us.

[00:27:27]

If someone else is going to build an AGI and make sure that the benefits don't get locked up in one company or with one set of people, we're actually fine with that. And so those ideas are baked into

[00:27:41]

our charter, which is kind of the foundational document that describes our values and how we operate. But it's also really baked into the structure of OpenAI LP. And so the way that we set up OpenAI LP is that in the case where we succeed, right,

[00:27:59]

if we actually build what we're trying to build, then investors are able to get a return, but that return is something that is capped. And so if you think of AGI in terms of the value that it could really create, you're talking about the most transformative technology ever created; it's going to create orders of magnitude more value than any existing company, and all of that value will be owned by the world, legally titled to the nonprofit to fulfill that mission.

[00:28:26]

And so that's the structure.
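As a toy illustration of how a capped return splits value between investors and the nonprofit, here is a small sketch; the specific dollar amounts and the 100x cap multiple are hypothetical placeholders, not the actual terms.

# A toy illustration of a capped return: investors receive at most a fixed
# multiple of their investment; everything above the cap is owned by the
# nonprofit. Numbers and the cap multiple are hypothetical.
def split_value(value_created, invested, cap_multiple=100):
    investor_return = min(value_created, invested * cap_multiple)
    nonprofit_share = value_created - investor_return
    return investor_return, nonprofit_share

# If $10M is invested under a 100x cap and $100B of value is created,
# investors get at most $1B and the remaining $99B goes to the nonprofit.
print(split_value(100e9, 10e6))  # (1000000000.0, 99000000000.0)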

[00:28:30]

So the mission is a powerful one, and it's one that I think most people would agree with.

[00:28:36]

It's how we would hope A.I. progresses.

[00:28:40]

And so how do you tie yourself to that mission?

[00:28:42]

How do you make sure you do not deviate from that mission, that other incentives that are profit-driven don't interfere with the mission?

[00:28:54]

So this has actually been a really core question for us for the past couple of years. The way that our history went was that for the first year, we were getting off the ground. We had this high-level picture, but we didn't know exactly how we wanted to accomplish it. And really, two years ago was when we first started realizing that in order to build AGI, we're just going to need to raise way more money than we can as a nonprofit.

[00:29:17]

And you're talking many billions of dollars.

[00:29:20]

And so the first question is, how are you supposed to do that and stay true to this mission?

[00:29:25]

And we looked at every legal structure out there and concluded none of them were quite right for what we wanted to do. And I guess it shouldn't be too surprising: if you're going to do some crazy, unprecedented technology, you're going to have to come up with some crazy, unprecedented structure to do it in. And a lot of our conversations were with people at OpenAI, the people who really joined because they believe so much in this mission, thinking about how do we actually raise the resources to do it and also stay true to what we stand for.

[00:29:53]

And the place you've got to start is to really align on: what is it that we stand for, what are those values, what's really important to us? And so I'd say that we spent about a year really compiling the OpenAI charter. And if you even look at the first line item in there, it says that, look, we expect we're going to have to marshal huge amounts of resources, but we're going to make sure that we minimize conflicts of interest with the mission.

[00:30:15]

And aligning on all of those pieces was the most important step towards figuring out how to structure a company that can actually raise the resources to do what we need to do.

[00:30:27]

I imagine the decision to create OpenAI LP was a really difficult one, and there was a lot of discussion, as you mentioned, for a year, and different ideas, perhaps detractors within OpenAI, sort of different paths that you could have taken. What were those concerns? What were the different paths considered? What was that process of making that decision like?

[00:30:52]

So if you look actually at the OpenAI charter, there are almost two paths embedded within it. There is: we are primarily trying to build AGI ourselves, but we're also OK if someone else does it. And this is a weird thing for a company.

[00:31:06]

It's really interesting, actually. Yeah. There is an element of competition: you do want to be the one that does it, but at the same time, you're OK if somebody else does.

[00:31:16]

And we'll talk about that a little bit. That tradeoff, that's the dance.

[00:31:19]

That's really interesting. And I think this was the core tension as we were designing OpenAI LP, and really the OpenAI strategy: how do you make sure that you both have a shot at being a primary actor, which really requires building an organization, raising massive resources, and really having the will to go and execute on some really hard vision. You need to really sign up for a long period to go and take on a lot of pain and a lot of risk.

[00:31:44]

And to do that, normally you just import the startup mindset, right? You think about, OK, how do we out-execute everyone?

[00:31:51]

You have this very competitive angle, but you also have the second angle of saying that, well, the true mission isn't for OpenAI to build AGI. The true mission is for AGI to go well for humanity. And so how do you take all of those first actions and make sure you don't close the door on outcomes that would actually be positive and fulfill the mission? So I think it's a very delicate balance, and I think that going one hundred percent in one direction or the other is clearly not the correct answer.

[00:32:18]

And so even in terms of just how we talk about OpenAI and think about it, one thing that's always in the back of my mind is to make sure that we're not just saying OpenAI's goal is to build AGI. It's actually much broader than that. First of all, it's not just AGI, it's safe AGI, and that's very important. But secondly, our goal isn't to be the ones to build it.

[00:32:40]

Our goal is to make sure it goes well for the world. And so figuring out how do you balance all of those, and getting people to really come to the table and compile a single document that encompasses all of that, wasn't trivial.

[00:32:54]

So part of the challenge here is your mission is, I would say, beautiful, empowering, and a beacon of hope

[00:33:02]

for people in the research community and just people thinking about AI. So your decisions are scrutinized more than, I think, a regular profit-driven company's. Do you feel the burden of this in the creation of the charter and just in the way you operate?

[00:33:17]

Yes.

[00:33:20]

So why do you lean into the burden by creating such a charter? Why not keep it quiet?

[00:33:27]

I mean, it just boils down to the mission.

[00:33:30]

Like, I'm here, and everyone else is here, because we think this is the most important mission. Dare to dream. All right.

[00:33:36]

So do you think

[00:33:38]

you can be good for the world, or create an AGI system that's good, when you're a for-profit company? From my perspective, I don't understand why profit interferes with positive impact on society. I don't understand why Google, which makes most of its money from ads, can't also do good for the world, or other companies, Facebook, anything. I don't understand why those have to interfere. Profit isn't the thing, in my view, that affects the impact of a company.

[00:34:14]

What affects the impact of the company is the charter, is the culture, is the people inside, and profit is the thing that just fuels those people. So what are your views there?

[00:34:26]

Yeah, so I think it's a really good question, and there are some real, long-standing debates in human society that are wrapped up in it.

[00:34:33]

The way that I think about it is: just think about what are the most impactful nonprofits in the world? What are the most impactful for-profits in the world? Yes, it's much easier to list the for-profits. That's right. And I think there's some real truth here, that the system that we set up, the system for how today's world is organized, is one that really allows for huge impact.

[00:34:58]

And part of that is that for-profits are self-sustaining and able to kind of build on their own momentum. And I think that's a really powerful thing. It's something that, when it turns out we haven't set the guardrails correctly, causes problems. Think about logging companies that go into the rainforest: that's really bad, we don't want that. And it's actually really interesting to me that this question of how do you get positive benefits out of a for-profit company is actually very similar to how do you get positive benefits out of an AGI, right?

[00:35:33]

You have this very powerful system; it's more powerful than any human, and it's kind of autonomous in some ways. You know, it's superhuman along a lot of axes. And somehow you have to set the guardrails to get good things to happen, but when you do, the benefits are massive. And so when I think about nonprofit versus for-profit, I think just not enough happens in nonprofits. They're very pure, but it's just hard to do things there.

[00:35:57]

In for-profits, in some ways, too much happens.

[00:36:01]

But if it's kind of shaped in the right way, it can actually be very positive.

[00:36:05]

And so with OpenAI LP, we're picking a road in between. Now, the thing I think is really important to recognize is that the way we think about OpenAI LP is that in the world where it actually happens, in a world where we are successful and we build the most transformative technology ever, the amount of value we're going to create will be astronomical. Mm-hmm. And so then, in that case, the cap that we have will be a small fraction of the value we create, and the amount of value that goes back to investors and employees looks pretty similar to what would happen in a pretty successful startup.

[00:36:41]

And that's really the case that we're optimizing for, right? We're thinking about, in the success case, making sure that the value we create doesn't get locked up. And I expect in other for-profit companies it's possible to do something like that.

[00:36:55]

I think it's not obvious how to do it right. As a for-profit company, you have a fiduciary duty to your shareholders, and there are certain decisions that you just cannot make. In our structure, we've set it up so that we have a fiduciary duty to the charter, that we always get to make the decision that is right for the charter, even if it comes at the expense of our own stakeholders. And so when I think about what's really important, it's not really about nonprofit versus for-profit.

[00:37:23]

It's really a question of: if you build AGI, and, you know, humanity is now in this new age, who benefits, whose lives are better? And I think what's really important is to have an answer that is everyone. Yeah.

[00:37:37]

Which is one of the core aspects of the charter.

[00:37:40]

So one concern people have, not just with OpenAI but with Google, Facebook, Amazon, anybody really that's creating impact at scale, is: how do we avoid, as your charter says, enabling the use of AGI to unduly concentrate power?

[00:38:00]

Why would a company like OpenAI not keep all the power of an AGI system to itself? The charter. The charter. So how does the charter externalize itself in the day-to-day? So I think that, first, to zoom out: the way that we structured the company is so that the power for sort of dictating the actions that OpenAI takes ultimately rests with the board, the board of the nonprofit. And the board is set up in certain ways, with certain restrictions that you can read about in the OpenAI LP blog post.

[00:38:33]

But effectively, the board is the governing body for OpenAI LP, and the board has a duty to fulfill the mission of the nonprofit. And so that's kind of how we thread all these things together. Now, there's a question of the day-to-day: how do the people, the individuals who in some ways are the most empowered ones... you know, the board sort of gets to call the shots at the high level, but the people who are actually executing are the employees, the people here on a day-to-day basis, who have the keys to the technical kingdom.

[00:39:06]

And there, I think the answer looks a lot like, well, how do any company's values get actualized? I think a lot of it comes down to this: you need people who are here because they really believe in that mission, and they believe in the charter, and they are willing to take actions that maybe are worse for them but are better for the charter. And that's something that's really baked into the culture. And honestly, I think that's one of the things that we really have to work to preserve as time goes on.

[00:39:35]

And that's a really important part of how we think about hiring people and bringing people into OpenAI.

[00:39:40]

So there are people here who could speak up and say, like, hold on a second, this is totally against what we stand for, culture-wise.

[00:39:51]

Yeah, yeah, for sure. I mean, I think that's a pretty important part of how we operate. Even again with designing the charter and designing OpenAI LP in the first place, there has been a lot of conversation with employees here, and a lot of times where employees said, wait a second, this seems like it's going in the wrong direction, let's talk about it. And here's actually one thing that I think is very unique about us as a small company: if you're at a massive tech giant, it's a little bit hard for someone who's a line employee to go and talk to the CEO and say, I think that we're doing this wrong.

[00:40:28]

And, you know, you look at companies like Google that have had some collective action from employees to make ethical change around things like Maven. And so maybe there are mechanisms at other companies that work, but here it's super easy for anyone to pull me aside, to pull Sam aside, to pull Ilya aside, and people do it all the time.

[00:40:45]

One of the interesting things in the charter is this idea, and it'd be great if you could try to describe or untangle it, of switching from competition to collaboration in late-stage AGI development. It's really interesting, this dance between competition and collaboration. How do you think about that?

[00:41:00]

Yeah, assuming you can actually do the technical side of AGI development, I think there are going to be two key problems with figuring out how do you actually deploy it and make it go well. The first one of these is the run-up to building the first AGI. You look at how self-driving cars are being developed, and it's a competitive race.

[00:41:18]

And the thing that always happens in a competitive race is that you have huge amounts of pressure to get rid of safety.

[00:41:24]

And so that's one thing we're very concerned about: that multiple teams will figure out we can actually get there, but, you know, if we take the slower path that is more guaranteed to be safe, we will lose, and so we're going to take the fast path. And so the more that we can ourselves be in a position where we don't generate that competitive race, where we say, if the race is being run and someone else is further ahead than we are, we're not going to try to leapfrog.

[00:41:52]

We're going to actually work with them. We will help them succeed. As long as what they're trying to do is to fulfill our mission, then we're good. We don't have to build it ourselves. And I think that's a really important commitment from us, but it can't just be unilateral, right? I think it's really important that other players who are serious about building AGI make similar commitments. And again, to the extent that everyone believes that AGI should be something that benefits everyone, then it actually really shouldn't matter which company builds it.

[00:42:19]

And we should all be concerned about the case where we just race so hard to get there that something goes wrong.

[00:42:24]

So what role do you think government, our favorite entity, has in setting policy and rules about this domain, from research to development, from early-stage to late-stage AGI development?

[00:42:40]

So I think that, first of all, it's really important that government is in there in some way, shape, or form. At the end of the day, we're talking about building technology that will shape how the world operates, and there needs to be government as part of that answer. And so that's why we've done a number of different congressional testimonies, and we interact with a number of different lawmakers. And right now, a lot of our message to them is that it's not the time for regulation, it is the time for measurement. Our main policy recommendation is that people, and the government does this all the time with bodies like NIST, spend time trying to figure out just where the technology is, how fast it's moving, and really become literate and up to speed with respect to what to expect.

[00:43:30]

So I think that today the answer really is about measurement. And I think there will be a time and a place where that will change, and it's a little bit hard to predict exactly what that trajectory should look like.

[00:43:44]

So there will be a point at which regulation, federal regulation in the United States, steps in, and the government helps be the, I don't want to say the adult in the room, to make sure that there are strict rules, maybe conservative rules, that nobody can cross.

[00:44:02]

Well, I think there are kind of maybe two angles to it. So today, with narrow AI applications, I think there are already existing bodies that are responsible and should be responsible for regulation. You think about, for example, with self-driving cars, that you want the, you know, the National Highway Traffic Safety Administration... NHTSA, exactly... to be very good. That makes sense, right? Basically what we're saying is that we're going to have these technological systems that are going to be performing applications that humans already do. Great.

[00:44:29]

We already have ways of thinking about standards and safety for those. So I think actually empowering those regulators today is also pretty important. And then for AGI, there's going to be a point where we'll have better answers, and maybe a similar approach of first measurement, then starting to think about what the rules should be. I think it's really important that we don't prematurely squash progress.

[00:44:53]

I think it's very easy to kind of smother a budding field, and I think that's something to really avoid. But I don't think the right way of doing it is to say, let's just try to stay ahead and not involve all these other stakeholders. So you recently released a paper on GPT-2, your language model, but did not release the full model because you had concerns about the possible negative effects of the availability of such a model. Outside of just that decision, it's super interesting because of the discussion at a societal level.

[00:45:32]

The discourse it creates is fascinating in that aspect. But if you think about the specifics here first, what are some negative effects that you envisioned? And, of course, what are some of the positive effects?

[00:45:45]

Yeah, so again, I think to zoom out: the way that we thought about GPT-2 is that with language modeling, we are clearly on a trajectory right now where we scale up our models and we get qualitatively better performance. GPT-2 itself was actually just a scale-up of a model that we'd released the previous June. We just ran it at a much larger scale and we got these results, where it's suddenly starting to write coherent prose, which was not something we'd seen previously.

[00:46:17]

And what are we doing now? Well, we're going to scale up GPT-2 by 10x, by 100x, by 1,000x, and we don't know what we're going to get. And so it's very clear that the model that we released last June, I think it's kind of a good academic toy; it's not something that we think can really have negative applications, or, to the extent that it can, the positives of people being able to play with it far, far outweigh the possible harms.

[00:46:45]

Fast-forward, not to GPT-2, but to GPT-20, and you think about what that's going to be like. I think the capabilities are going to be substantive. And so there needs to be a point in between the two where you say: this is something where we are drawing the line, and we need to start thinking about the safety aspects.

[00:47:05]

And I think for GPT-2, we could have gone either way. In fact, when we had conversations internally, we had a bunch of pros and cons, and it wasn't clear which one outweighed the other. And when we announced that, hey, we decided not to release this model, then there was a bunch of conversation where various people said it's so obvious that you should have just released it, and other people said it's so obvious you should not have released it.

[00:47:26]

And I think that almost definitely means that holding it back was the correct decision. If it's not obvious whether something is beneficial or not, you should probably default to caution. And so the overall landscape for how we think about it is that this decision could have gone either way; there are great arguments in both directions. But for future models down the road, and possibly sooner than you'd expect, because scaling these things up doesn't actually take that long,

[00:47:52]

those ones you're definitely not going to want to release into the wild. And so I think we almost feel that this is a test case.

[00:47:59]

And to see: can we even design this? How do you have a society, how do you have a system, that goes from having no concept of responsible disclosure, where the mere idea of not releasing something for safety reasons is unfamiliar, to a world where you say, OK, we have a powerful model, let's at least think about it, let's go through some process? And you think about the security community.

[00:48:19]

It took them a long time to design responsible disclosure. You think about this question of, well, I have a security exploit, I send it to the company, and the company tries to prosecute me or just ignores it. What do I do? And the alternative of, oh, I just always publish my exploits, that doesn't seem good either. And so it really took a long time, and it was bigger than any individual; it's really about building a whole community that believes that,

[00:48:44]

OK, we'll have this process where you send it to the company, and if they don't act in a certain time, then you can go public and you're not a bad person; you've done the right thing. And I think that in AI, part of it is that we just don't have any concept of this. So that's the high-level picture. And so I think this was a really important move to make.

[00:49:08]

And we could have maybe delayed it for GPT-3, but I'm really glad we did it for GPT-2. And so now you look at GPT-2 itself and you think about the substance of, OK, what are potential negative applications? So you have this model that's been trained on the Internet, which means it's also going to have a bunch of very biased data, a bunch of very offensive content in there. And you can ask it to generate content for you on basically any topic.

[00:49:31]

Right. You just give it a prompt and it'll just start writing content like you see on the Internet, even down to saying "advertisement" in the middle of some of its generations.
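As a rough, self-contained illustration of "give it a prompt and it keeps writing", here is a toy autoregressive sampler. The character-level bigram table stands in for a large trained network like GPT-2; the corpus and sampling details are made up for the example.

# A toy illustration of autoregressive generation from a prompt. A character-level
# bigram table stands in for a large trained language model; corpus and sampling
# details are illustrative assumptions.
import numpy as np
from collections import defaultdict

corpus = "the quick brown fox jumps over the lazy dog. the dog sleeps."
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

rng = np.random.default_rng(0)

def sample_next(ch):
    # Sample the next character given the previous one.
    nxts = list(counts[ch].keys())
    if not nxts:          # unseen context: fall back to a space
        return " "
    p = np.array([counts[ch][n] for n in nxts], dtype=float)
    return rng.choice(nxts, p=p / p.sum())

def generate(prompt, length=60):
    # Autoregressive generation: repeatedly append a sampled next character.
    text = prompt
    for _ in range(length):
        text += sample_next(text[-1])
    return text

print(generate("the "))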

[00:49:41]

And you think about the possibilities for generating fake news or abusive content.

[00:49:46]

And it's interesting seeing what people have done with the smaller version of GPT-2 that we released. People have done things like take their own Facebook message history and generate more Facebook messages like themselves, and people generating fake politician content. There's a bunch of things there where you at least have

[00:50:08]

to think, is this going to be good for the world? There's the flip side, which is that I think there are a lot of awesome applications that we really want to see, like creative applications: if you have sci-fi authors that can work with this tool and come up with cool ideas, that seems awesome.

[00:50:25]

If we can write better sci-fi through the use of these tools. And we've actually had a bunch of people writing to us asking, hey, can we use it for a variety of different creative applications?

[00:50:36]

The positives are actually pretty easy to imagine there, you know, the usual applications are really interesting.

[00:50:46]

But let's go there. It's kind of interesting to think about a world, we look at Twitter,

[00:50:53]

where there's not just fake news, but smarter and smarter bots being able to spread it in an interesting, complex, networked way, information that just floods out us regular human beings with our original thoughts.

[00:51:10]

So what are your views of this world of GPT-20? Right.

[00:51:17]

How do we think about it? Again, it's like one of those things about being in the 50s and trying to describe the Internet or the smartphone. What do you think about that world, the nature of information? One possibility is that we'll always try to design systems that identify robot versus human, and we'll do so successfully, and so we'll authenticate that we're still human. And the other world is that we just accept the fact that we're swimming in a sea of fake news and just learn to swim there.

[00:51:49]

Well, have you ever seen the, you know, popular meme of a robot with a physical arm and pen clicking the "I'm not a robot" button?

[00:52:00]

Yeah, I think the truth is that really trying to distinguish between robot and human is a losing battle.

[00:52:09]

Ultimately, you think it's a losing battle? I think it's a losing battle ultimately. Right. In terms of the content, in terms of the actions that you could take — I mean, think about how CAPTCHAs have gone. Right. CAPTCHAs used to be very nice and simple: you'd have this image, all of our OCR is terrible, you put a couple of artifacts in it, humans are going to be able to tell what it is, and an AI system wouldn't be able to. Today, I can barely do...

[00:52:31]

CAPTCHAs. Yeah. And I think that this is just kind of where we're going. I think CAPTCHAs were a moment-in-time thing, and as AI systems become more powerful — having human capabilities that can be measured in a very easy, automated way that the AIs will not be capable of — I think that's just an increasingly hard technical battle.

[00:52:51]

But it's not that all hope is lost. Right.

[00:52:53]

And you think about how we already authenticate ourselves. Right. You know, we have systems; we have Social Security numbers if you're in the US, or, you know, you have ways of identifying individual people. And having real-world identity tied to digital identities seems like a step towards authenticating the source of content rather than the content itself. Now, there are problems with that.

[00:53:17]

How can you have privacy and anonymity in a world where the only way you can trust content is by looking at where it comes from? And so I think that building out good reputation networks may be one possible solution. But, yeah, this question is not an obvious one.
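To make the "authenticate the source, not the content" idea concrete, here is a minimal sketch (my illustration, not an OpenAI proposal) of content signing with the third-party Python `cryptography` package; a real reputation network would layer identity and trust on top of a primitive like this.

```python
# Minimal sketch: authenticate the *source* of content rather than the content
# itself. The author signs what they publish; readers verify against the
# author's public key. Illustrative only -- assumes the "cryptography" package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

author_key = Ed25519PrivateKey.generate()   # long-lived keypair = the author's identity
author_pubkey = author_key.public_key()

def publish(text: str):
    """Author signs the content; ship (content, signature) together."""
    data = text.encode("utf-8")
    return data, author_key.sign(data)

def verify(data: bytes, signature: bytes) -> bool:
    """Reader checks the signature against the claimed author's public key."""
    try:
        author_pubkey.verify(signature, data)
        return True
    except InvalidSignature:
        return False

content, sig = publish("A tweet I actually wrote.")
assert verify(content, sig)                  # provenance checks out
assert not verify(b"tampered text", sig)     # wrong content or wrong author
```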

[00:53:34]

And I think that, you know, maybe sooner than we think, we'll be in a world where — you know, today I often will read a tweet and think, do I feel like a real human wrote this, do I feel like this is genuine? I feel like I can kind of judge the content a little bit.

[00:53:47]

And I think in the future it just won't be the case. You look at, for example, the FCC comments on net neutrality. It came out later that millions of those were auto-generated, and the researchers were able to identify them using various statistical techniques.
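As an aside, here is a toy sketch of the kind of statistical technique that can surface template-generated comments (my illustration, not the researchers' actual method): machine-filled templates tend to share long word n-grams, so flagging pairs with high n-gram overlap surfaces suspiciously similar submissions.

```python
# Toy near-duplicate detector: comments generated from a shared template have
# heavily overlapping 5-grams, while independently written text does not.
from itertools import combinations

def ngrams(text: str, n: int = 5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

comments = [
    "The FCC should preserve net neutrality because it protects consumers.",
    "The FCC should preserve net neutrality because it protects innovation.",
    "I enjoy fishing on weekends and have no opinion on this proceeding.",
]

# Flag pairs whose 5-gram overlap is high enough to suggest a shared template.
for (i, a), (j, b) in combinations(enumerate(comments), 2):
    sim = jaccard(ngrams(a), ngrams(b))
    if sim > 0.3:
        print(f"comments {i} and {j} look templated (similarity {sim:.2f})")
```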

[00:54:01]

What do you do in a world where those statistical techniques don't work, where it's just impossible to tell the difference between humans and AIs? And in fact, the most persuasive arguments are written by AI — all that stuff. It's not sci-fi anymore. You read GPT-2 making a great argument for why recycling is bad for the world.

[00:54:19]

You've got to read that, huh? That's right.

[00:54:23]

Yeah, that's quite interesting. I mean, ultimately it boils down to the physical world being the last frontier of proving — like you said, basically networks of people, humans vouching for humans in the physical world — and somehow the authentication ends there.

[00:54:40]

I mean, if I had to ask you — you're way too eloquent for a human — so if I had to ask you to authenticate, to prove it: how do I know you're not a robot, and how do you know I'm not a robot?

[00:54:52]

Yeah, I think that, so far — this conversation we just had, the physical movements we did — the biggest gap between us and AI systems is physical manipulation. So maybe that's the last frontier. Well, here's another question, which is, you know, why?

[00:55:11]

Why is solving this problem important? Right. Like, what aspects are really important to us? I think that probably where we'll end up is we'll hone in on what we really want out of knowing whether we're talking to a human. And I think that, again, this comes down to identity. And so the Internet of the future, I expect to be one that will have lots of agents out there that will interact with you.

[00:55:33]

But I think that the question of, is this a real flesh-and-blood human or is this an automated system, may actually just be less important.

[00:55:43]

Let's actually go there. GPT-2 is impressive — so imagine GPT-20. Why is it so bad if all my friends are GPT-20? Why is it so important, on the Internet, do you think, to interact with only human beings?

[00:56:04]

Why can't we live in a world where ideas can come from models trained on human data?

[00:56:10]

Yeah, I think this is actually a really interesting question. This comes back to, how do you even picture a world with some new technology? And one thing that I think is important is, I would say, honesty. If you have, you know, an almost Turing Test-style sense of the technology, you have AIs that are pretending to be humans and deceiving you.

[00:56:31]

I think that, you know, that feels like a bad thing. Right? I think it's really important that we feel like we're in control of our environment — that we understand who we're interacting with, and whether it's an AI or a human, that's not something we're being deceived about.

[00:56:45]

But the flip side is, can I have as meaningful an interaction with an AI as I can with a human? Well, I actually think you can turn to sci-fi, and Her, I think, is a great example of asking this very question. One thing I really love about Her is that it really starts out almost by asking how meaningful human virtual relationships are. Right.

[00:57:04]

And then you have a human who has a relationship with an AI, and you really start to be drawn into that. Right. All of your emotional buttons get triggered in the same way as if there were a real human on the other side of that phone.

[00:57:17]

And so I think this is one way of thinking about it: we can have meaningful interactions, and if there's a funny joke, sometimes it doesn't really matter if it was written by a human or an AI. But what you don't want — and where I think we should really draw hard lines — is deception. And I think that, as long as we're in a world where... you know, why do we build AI systems at all? Right.

[00:57:39]

The reason we want to build them is to enhance human lives, to make humans able to do more things, to have humans feel more fulfilled. And if we can build AI systems that do that, you know, sign me up.

[00:57:50]

So, the process of language modeling — how far do you think it takes us?

[00:57:56]

Looking at the movie Her: do you think dialogue, natural language conversation, as formulated by the Turing Test, for example — do you think that process could be achieved through this kind of unsupervised language modeling?

[00:58:10]

So I think the Turing Test in its real form isn't just about language, right? It's really about reasoning, too. To really pass the Turing Test, I should be able to teach calculus to whoever's on the other side and have it really understand calculus and be able to go and solve new calculus problems.

[00:58:28]

And so I think that to really solve the Turing Test, we need more than what we're seeing with language models. We need some way of plugging in reasoning. Now, how different will that be from what we already do? That's an open question, right? It might be that we need some sequence of totally radical new ideas, or it might be that we just need to shape our existing systems in a slightly different way.

[00:58:50]

But in terms of how far language modeling will go, it's already gone way further than many people would have expected. Right. And I think there are a lot of really interesting angles to poke at in terms of how much GPT-2 understands the physical world. Like, you know, you read a little bit about fire underwater in GPT-2, so it's like, OK, maybe it doesn't quite understand what these things are.

[00:59:12]

But at the same time, you also see various things like smoke coming from flame, and, you know, a bunch of these things that GPT-2 — it has no body, it has no physical experience.

[00:59:22]

It's just statically read data. And I think the answer is, we don't know yet. These questions, though — we're starting to be able to actually ask them of real systems that exist, and that's very exciting.

[00:59:37]

What's your intuition? Do you think if you just scale language modeling — like, significantly scale it — reasoning can emerge from the same exact mechanisms?

[00:59:48]

I think it's unlikely that if we just scale GPT-2, we'll have reasoning in the full-fledged way. I think the type signature is a little bit wrong. Right? Like, there's something we do that we call thinking, where we spend a lot of compute — a variable amount of compute — to get to better answers. Right? I think a little bit harder, I get a better answer. And that kind of type signature isn't quite encoded in a GPT.

[01:00:15]

Right. GPT has kind of — it's like it's spent a long time, its evolutionary history, baking in all this information, getting very, very good at this predictive process.

[01:00:24]

And then at runtime, it just kind of does one forward pass and is able to generate stuff. And so there might be small tweaks to what we do in order to get the type signature right. For example — well, it's not really one forward pass, right? You generate symbol by symbol. And so maybe you generate a whole sequence of thoughts and you only keep, like, the last bit or something.

[01:00:44]

Right. But I think that, at the very least, I would expect you'd have to make changes like that. Yeah.
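A minimal sketch of that "generate a whole sequence of thoughts and only keep the last bit" tweak, with a stand-in sampler rather than any real OpenAI API: spend a variable amount of generation on scratch work, then keep only the final answer.

```python
# Variable-compute inference sketch: think out loud first, keep only the answer.
# `sample_tokens` is a placeholder for any autoregressive language model.
from typing import Callable

def answer_with_thinking(prompt: str,
                         sample_tokens: Callable[[str, int], str],
                         thinking_budget: int) -> str:
    # Phase 1: scratch work -- more budget means more compute spent "thinking".
    scratch = sample_tokens(prompt + "\nLet's think step by step:", thinking_budget)
    # Phase 2: condition on the scratch work, but keep only the final answer.
    return sample_tokens(prompt + scratch + "\nTherefore, the answer is:", 32)

def toy_model(context: str, max_tokens: int) -> str:
    # Stand-in model so the sketch runs; a real model would sample tokens here.
    return f" (continuation of {len(context)} chars, up to {max_tokens} tokens)"

print(answer_with_thinking("What is 17 * 24?", toy_model, thinking_budget=256))
```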

[01:00:49]

Yeah. Just exactly how, as you said — thinking is the process of generating thought by thought in the same kind of way, and, like you said, keeping the last bit, the thing that we converge towards.

[01:01:01]

Yeah, I think there's another piece which is interesting, which is this out-of-distribution generalization. Right? Like, thinking somehow lets us do that: we haven't experienced a thing, and yet somehow we just kind of keep refining our mental model of it. This is, again, something that feels tied to whatever reasoning is.

[01:01:20]

And maybe it's a small tweak to what we do, or maybe it's many ideas and it will take us many decades.

[01:01:25]

Yeah. So the assumption with out-of-distribution generalization is that it's possible to create new ideas.

[01:01:35]

You know, it's possible that nobody's ever created new ideas, and then, scaling GPT-2 up to GPT-20, you would essentially generalize to all the possible thoughts humans are ever going to have — just to play devil's advocate.

[01:01:52]

I mean, how many new story ideas have we come up with since Shakespeare, right?

[01:01:56]

Yeah, exactly.

[01:01:58]

It's just all different forms of love and drama and so on. OK, not sure if you've read it, but "The Bitter Lesson," a recent blog post by Rich Sutton.

[01:02:07]

I have.

[01:02:08]

He basically says something that echoes some of the ideas you've been talking about, which is that the biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately going to win out. Do you agree with this, basically? For OpenAI in general, for the ideas you're exploring, for coming up with methods — whether it's GPT-2 language modeling or OpenAI Five playing Dota — is a general method better than a more fine-tuned, expert-tuned...

[01:02:46]

Method? Yeah. So I think that, well, one thing that I think was really interesting about the reaction to that blog post was that a lot of people read it as saying that compute is all that matters, and that's a very threatening idea. Right? And I don't think it's a true idea either. It's very clear that we have algorithmic ideas that have been very important for making progress, and to really build AGI, you want to push as far as you can on the computational scale and you want to push as far as you can on human ingenuity.

[01:03:12]

And so I think you need both. But I think the way you phrased the question is actually very good, right? It's really about what kind of ideas we should be striving for. And absolutely, if you can find a scalable idea — you pour more compute into it, you pour more data into it, and it gets better — that's the real Holy Grail.

[01:03:31]

And so I think the answer to the question is yes; that's really how we think about it. Part of why we're excited about the power of deep learning and the potential for building AGI is because we look at the systems that exist, the most successful AI systems, and we realize that if you scale those up, they're going to work better. And I think that scalability is something that really gives us hope for being able to build transformative systems.

[01:03:56]

So I'll tell you — this is partially an emotional, you know, response that people often have: if compute is so important for state-of-the-art performance, then individual developers, maybe a 13-year-old sitting somewhere in Kansas or something like that — they might not even have a GPU, or maybe they have a single GPU, a 1080 or something like that.

[01:04:17]

And there's this feeling like, well, how can I possibly compete or contribute to this world of AI if scale is so important?

[01:04:27]

So if you can comment on that, and in general: do you think we need to also, in the future, focus on democratizing compute resources more, or as much as we democratize the algorithms?

[01:04:39]

So, the way that I think about it is that there's the space of possible progress — there's a space of ideas and systems that will work, that will move us forward. And there's a portion of that space — to some extent an increasingly significant portion — that does just require massive compute resources.

[01:04:58]

And for that, I think the answer is kind of clear. Part of why we have the structure that we do is because we think it's really important to be pushing the scale and to be building large clusters and systems.

[01:05:11]

But there's another portion of the space that isn't about large-scale compute. These are ideas where — again, for the ideas to really be impactful and really shine — they should be ideas that, if you scale them up, would work way better than they do at small scale, but that you can discover without massive computational resources. And if you look at the history of recent developments, you think about things like the GAN or the VAE — these are ones that I think you could come up with without having...

[01:05:39]

And, you know, in practice, people did come up with them without having massive, massive computational resources. I just talked to Ian Goodfellow.

[01:05:45]

But the thing is, the initial GAN produced pretty terrible results, right?

[01:05:51]

Right — it was only because they were smart enough to know that it was quite surprising it could generate anything at all. Do you see a world — or is that too optimistic and dreamer-like — where the compute resources are something that's owned by governments and provided as a utility?

[01:06:12]

Actually, to some extent, this question reminds me of a blog post from one of my former professors at Harvard, this guy Matt Welsh, who was a systems professor. I remember sitting in his tenure talk, right, and he had literally just gotten tenure. He went to Google for the summer and then decided he wasn't going back to academia. And in his blog post, he makes this point that, look, as a systems researcher, I come up with these cool system ideas.

[01:06:40]

Right, and build a little proof of concept. And the best thing I can hope for is that the people at Google or Yahoo! —

[01:06:47]

which was around at the time — will implement it and actually make it work at scale. Right. That's like the dream for me, right? I build the little thing and they build the big thing that's actually working.

[01:06:57]

And for him, he said, I'm done with that. I want to be the person who's actually doing the building and deploying. And I think there's a similar dichotomy here. Right? I think there are people who really find value — and I think it is a valuable thing to do — in being the person who produces those ideas, right, who builds the proof of concept.

[01:07:16]

And yeah, you don't get to generate the coolest possible GAN images, but you invented the GAN. Right? And so there's a real tradeoff there, and I think that's a very personal choice. But I think there's value in both sides.

[01:07:28]

So do you think, in creating AGI or some new models, we would see echoes of the brilliance even at the prototype level — so you'd be able to develop those ideas without scale, the initial seeds?

[01:07:44]

So, to take a look — you know, I always like to look at the examples that exist, right, to look at real precedent. So take a look at the June 2018 model that we released, the one we scaled up to turn into GPT-2. You can see that at small scale it set some records. This was the original GPT. We actually had some cool generations — they weren't nearly as amazing and stunning as the GPT-2 ones, but it was promising.

[01:08:09]

It was interesting. And so I think it is the case that with a lot of these ideas, you see promise at small scale. But there is an asterisk here, a very big asterisk, which is that sometimes we see behaviors emerge that are qualitatively different from anything we saw at small scale — behaviors that the original inventor of whatever algorithm looks at and says, I didn't think it could do that. This is what we saw with Dota. Right.

[01:08:34]

So PPO was created by John Schulman, who's a researcher here. And with Dota, we basically just ran PPO at massive, massive scale, with some tweaks in order to make it work.

[01:08:46]

But fundamentally it's PPO at the core.
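For reference, the core of PPO is the clipped surrogate objective from Schulman et al.; here is a minimal sketch of that objective (not OpenAI Five's actual training code): it limits how far each update can move the new policy away from the old one.

```python
# PPO's clipped surrogate objective in a few lines (illustrative sketch).
import numpy as np

def ppo_clip_objective(new_logp, old_logp, advantages, clip_eps=0.2):
    """Average clipped surrogate objective (to be maximized)."""
    ratio = np.exp(new_logp - old_logp)            # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1 - clip_eps, 1 + clip_eps) * advantages
    return np.minimum(unclipped, clipped).mean()   # pessimistic bound

# A sample whose probability tripled is credited as if the ratio were only 1.2:
print(ppo_clip_objective(np.log([3.0]), np.log([1.0]), np.array([1.0])))  # ~1.2
```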

[01:08:48]

And we were able to get this long-term planning, these behaviors, to really play out on a time scale that we just thought was not possible.

[01:08:58]

And John looked at that and was like, I didn't think it could do that. That's what happens when you run at three orders of magnitude more scale than you tested at.

[01:09:05]

Yeah, but it still has the same flavors — at least echoes of the expected brilliance — although I suspect that as GPT is scaled more and more, you might get surprising things. So, yeah, you're right. It is difficult to see how far an idea will go when it's scaled.

[01:09:26]

It's an open question. Also, to that point, with Dota — I mean, here's a very concrete one, right? One thing that's actually very surprising about Dota, but that I think people don't really pay that much attention to, is the degree of out-of-distribution generalization that happens.

[01:09:41]

Right, that you have this AI that's trained against other bots for the entirety of its existence. Let's take a step back — can you talk through it?

[01:09:51]

You know, the story of Dota, the story leading up to OpenAI Five and that path. And what was the process of self-play, sort of the training? Yeah, yeah.

[01:10:03]

Yeah. So with Dota — it's a complex video game, and we started trying to solve Dota because we felt like this was a step towards the real world relative to other games like chess or Go. Right. Those are very cerebral games where you just kind of have this board, very discrete moves.

[01:10:19]

Dota starts to be much more continuous time: you have this huge variety of different actions, you have a 45-minute game with all these different units, and it's got a lot of messiness to it that really hasn't been captured by previous games. And famously, all of the hard-coded bots for Dota were terrible.

[01:10:35]

It's just impossible to write anything good for it because it's so complex. And so this seemed like a really good place to push the state of the art in reinforcement learning. We started by focusing on the one-versus-one version of the game, and we were able to solve that — we were able to beat the world champions. And the learning, you know, the skill curve was this crazy exponential. Right. We were constantly scaling up, we were fixing bugs.

[01:10:59]

And, you know, you look at the skill curve and it was really very, very smooth. It was actually really interesting to see how that human iteration loop yielded very steady, exponential progress.

[01:11:09]

And two side notes: first of all, it's an exceptionally popular video game, and the side effect is that there are a lot of incredible human experts at that video game, so the benchmark you're trying to reach is very high. And the other: can you talk about the approach that was used, initially and throughout, in training these agents to play this game?

[01:11:29]

Yep. And so the approach that we used is self-play. You have two agents that don't know anything. They battle each other, they discover something a little bit good, and now they both know it, and they just get better and better and better without bound.

[01:11:41]

And that's a really powerful idea, right? We then went from the one-versus-one version of the game and scaled up to five versus five. Right. So you think about it kind of like basketball, where you have this team sport and you need to do all this coordination. And we were able to push the same idea, the same self-play, to really get to the professional level at the full five-versus-five version of the game.
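The loop Greg describes is simple to write down; here is a toy sketch (illustrative only, not the OpenAI Five pipeline) of self-play where both sides share the same parameters, so anything one side discovers immediately benefits both.

```python
# Toy self-play loop: the policy trains by playing snapshots of itself.
import random

def play_game(policy: dict, opponent: dict) -> int:
    """Stand-in for a full game; returns +1 if `policy` wins, -1 otherwise."""
    p_win = policy["skill"] / (policy["skill"] + opponent["skill"])
    return 1 if random.random() < p_win else -1

def self_play(num_iterations: int = 1000) -> dict:
    policy = {"skill": 1.0}            # shared parameters for both sides
    for _ in range(num_iterations):
        opponent = dict(policy)        # snapshot of the current self
        result = play_game(policy, opponent)
        # Stand-in for the RL update (PPO in OpenAI's case): reinforce wins.
        policy["skill"] *= 1.01 if result > 0 else 1.0
    return policy

print(self_play())   # skill keeps ratcheting upward, game after game
```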

[01:12:06]

And the thing I think is really interesting here is that these agents, in some ways, are almost like an insect-like intelligence. Right? They have a lot in common with how an insect is trained: an insect kind of lives in this environment for a very long time — or, you know, the ancestors of this insect have been around for a long time and had a lot of experience — and that gets baked into this agent.

[01:12:27]

And, you know, it's not really smart in the sense of a human, right? It's not able to go and learn calculus, but it's able to navigate its environment extremely well, and it's able to handle unexpected things in the environment that it's never seen before pretty well. And we see the same sort of thing with our Dota bots, right? Within this game, they're able to play against humans, which is something that never existed in their evolutionary environment — totally different play styles from humans versus bots — and yet they're able to handle it extremely well.

[01:12:54]

And that's something that I think was very surprising to us, something that doesn't really emerge from what we've seen with PPO at smaller scale. The kind of scale we're running this stuff at — you know, it's 100,000 CPU cores running with, like, hundreds of GPUs. It's probably something like hundreds of years of experience going into this bot every single real day. And so that scale is massive, and we start to see very different kinds of behaviors out of the algorithms that we all know and love.
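The arithmetic behind "hundreds of years of experience per day" is just massive parallelism; here is a back-of-the-envelope calculation with made-up round numbers (the per-game figures are my assumptions, not OpenAI's published ones).

```python
# Rough arithmetic: many games in parallel => centuries of play per real day.
parallel_games = 100_000        # assumption: roughly one rollout per CPU core
sim_speed_vs_realtime = 1.0     # assumption: each rollout runs about real time
game_days_per_real_day = parallel_games * sim_speed_vs_realtime
print(f"{game_days_per_real_day / 365:.0f} years of play per real day")  # ~274
```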

[01:13:28]

You mentioned beating the world expert at 1v1, and then you weren't able to win at 5v5 this year...

[01:13:38]

Yeah — against the best players in the world. So what's the comeback story? First of all, talk through that — it was an exceptionally exciting event — and what do the following months and this year look like?

[01:13:50]

Yeah. Yeah.

[01:13:51]

So, well, one thing that's interesting is that, you know, we lose all the time. Right. Because the Dota team at OpenAI, we play the bot against better players than our system all the time — or at least we used to. The first time we lost publicly was when we went up on stage at The International and played against some of the best teams in the world, and we ended up losing both games.

[01:14:13]

But we gave them a run for their money. Right? Both games were kind of 30 minutes, 25 minutes, and they went back and forth, back and forth. And so I think that really shows that we're at the professional level, and, looking at those games, we think they could have gone in a different direction and we could have had some wins. That was actually very encouraging for us.

[01:14:33]

And, you know, it's interesting, because The International was at a fixed time. Right? So we knew exactly what day we were going to be playing, and we pushed as far as we could as fast as we could. Two weeks later, we had a bot that had an 80 percent win rate versus the one that played at TI. So the march of progress — you should think of that as a snapshot rather than an end state. And so, in fact, we'll be announcing our finals pretty soon.

[01:14:56]

I actually think that we'll announce our final match prior to this podcast being released.

[01:15:02]

So we'll be playing against the world champions. And, you know, for us, it's really less about — the way we think about what's upcoming is that it's the final milestone, the final competitive milestone for the project.

[01:15:17]

Right. Our goal in all of this isn't really about beating humans at Dota. Our goal is to push the state of the art in reinforcement learning, and we've done that. We've actually learned a lot from our system, and we have, I think, a lot of exciting next steps that we want to take. And so, as a final showcase of what we built, we're going to do this match. But for us, it's not really about the success or failure of seeing whether the coin goes in our direction or against us.

[01:15:43]

Where do you see the field of deep learning heading in the next few years? Where do you see the work in reinforcement learning perhaps heading? And...

[01:15:56]

More specifically with OpenAI, all the exciting projects that you're working on — what does 2019 hold for you? Massive scale.

[01:16:05]

I will put an asterisk on that and just say, you know, I think it's about ideas plus scale. You need both.

[01:16:10]

So that's a really good point. So, in terms of ideas: you have a lot of projects that are exploring different areas of intelligence. And the question is, when you think of scale, do you think about growing the scale of those individual projects, or do you think about adding new projects?

[01:16:32]

And if you are thinking of adding new projects — or if you look at the past — what's the process of coming up with new projects, new ideas?

[01:16:40]

So we really have a lifecycle of projects here. We start with a few people just working on a small-scale idea, and language is actually a very good example of this: it was really one person here who was pushing on language for a long time. Then you get signs of life, right? And so, let's say with the original GPT, we had something that was interesting, and we said, OK, it's time to scale this. Right?

[01:17:02]

It's time to put more people on it, put more computational resources behind it. And then we just keep pushing and keep pushing, and the end state is something that looks like Dota or robotics, where you have a large team of, you know, 10 or 15 people running things at very large scale, and you're able to really have serious engineering and, you know, machine learning science coming together to make systems that work and get material results that just would have been impossible otherwise.

[01:17:29]

So that's the whole lifecycle; we've done it a number of times. Typically, end to end, it's probably two years or so to do it. You know, the organization's been around for three years or so. Maybe we'll find that we also have longer-lifecycle projects, but we work up to those.

[01:17:46]

So one team that we're actually just starting — Ilya and I are kicking off a new team called the Reasoning Team — and this is to really try to tackle how you get neural networks to reason. And we think this will be a long-term project, and one that we're very excited about.

[01:18:03]

In terms of reasoning — super exciting topic. What kind of benchmarks, what kind of tests of reasoning do you envision? What would make you sit back, with whatever drink, and be impressed that the system is able to do something — what would that look like? Theorem proving. Theorem proving?

[01:18:23]

So some kind of logic and especially mathematical logic.

[01:18:27]

I think so. Right. And I think there are kind of other problems that are dual to theorem proving in particular.

[01:18:33]

You know, you think about programming, I think about even, like, security analysis of code — these all kind of capture the same sorts of core reasoning and being able to do some out-of-distribution generalization. It would be quite exciting if OpenAI's Reasoning Team were able to prove that P equals NP. That would be very nice.

[01:18:53]

It would be very, very exciting — especially if it turns out that P does equal NP. That'll be interesting, too.

[01:18:58]

Maybe. It would just be ironic and humorous.

[01:19:04]

Uh, so what problem stands out to you as the most exciting, challenging, and impactful to work on — for us as a community in general and for OpenAI this year? You mentioned reasoning. I think that's a heck of a problem. Yeah.

[01:19:18]

So I think reasoning is an important one. I think it's going to be hard to get good results in 2019. You know, again, just like we think about the lifecycle — it takes time. I think for 2019, language modeling seems to be kind of on that ramp. Right?

[01:19:29]

It's at the point where we have a technique that works; we want to scale it 100x and see what actually happens. Awesome.

[01:19:36]

Do you think we're living in a simulation?

[01:19:39]

I think it's hard to have a real opinion about it. You know, it's actually interesting: I separate out things that can yield materially different predictions about the world from ones that are just kind of fun to speculate about. And I kind of view the simulation question as more like, is there a flying teapot between Mars and Jupiter? Like, maybe, but it's a little bit hard to know what that would mean for my life.

[01:20:02]

So there is something actionable. Some of the best work OpenAI has done is in the field of reinforcement learning, and some of the success of reinforcement learning comes from being able to simulate the problem you're trying to solve.

[01:20:17]

So do you have hope for the future of reinforcement learning and for the future of simulation — whether we're talking about autonomous vehicles or any kind of system? Do you see that scaling up — will we be able to simulate systems, and hence be able to create a simulator that echoes our real world, proving once and for all, even though you're denying it, that we're living in a simulation?

[01:20:42]

Those are kind of two separate questions, right? So, you know, at the core there is: can we use simulation for self-driving cars?

[01:20:48]

Take a look at our robotic system, Dactyl, right? That was trained in simulation, using the Dota system, in fact, and it transfers to a physical robot. And I think everyone looks at our Dota system and says, OK, it's just a game, how are you ever going to escape to the real world? And the answer is, well, we did it with a physical robot that no one could program by hand. And so I think the answer is, simulation goes a lot further than you think if you apply the right techniques to it.
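One ingredient OpenAI has described for Dactyl's sim-to-real transfer is domain randomization; here is a minimal sketch of that idea with a made-up simulator interface: re-sample the physics every episode, so a policy that copes with all the variants has a chance of coping with the one real world it never saw.

```python
# Domain randomization sketch (illustrative; the simulator API is hypothetical).
import random

def sample_randomized_physics() -> dict:
    """Draw fresh simulator parameters for this episode."""
    return {
        "object_mass": random.uniform(0.05, 0.5),     # kg
        "friction": random.uniform(0.5, 1.5),
        "actuator_delay": random.uniform(0.0, 0.04),  # seconds
        "camera_noise": random.uniform(0.0, 0.1),
    }

def train(policy_update, run_episode, num_episodes: int = 1000):
    """`run_episode` and `policy_update` stand in for the rollout and RL step."""
    for _ in range(num_episodes):
        physics = sample_randomized_physics()
        trajectory = run_episode(physics)   # experience under this variant
        policy_update(trajectory)           # policy must work across all variants

# Toy placeholders so the sketch executes end to end.
train(policy_update=lambda traj: None,
      run_episode=lambda physics: {"physics": physics, "reward": random.random()},
      num_episodes=100)
```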

[01:21:11]

Now, there's a question of, you know, are the beings in that simulation going to wake up and have consciousness? I think that one seems a lot harder to reason about.

[01:21:20]

I think that, you know, you really should think about where exactly human consciousness comes from, and our own self-awareness.

[01:21:26]

And, you know, is it just that once you have a complicated enough neural net, you have to worry about the agents feeling pain? I think there's interesting speculation to do there, but, again, I think it's a little bit hard to know for sure.

[01:21:40]

Let me just keep with the speculation. Do you think, to create intelligence — general intelligence — you need, one, consciousness, and two, a body? Do you think any of those elements are needed, or is intelligence something that's orthogonal to those?

[01:21:55]

I'll stick to the non-grand answer first. Right.

[01:21:59]

So the non-grand answer is just to look at what we're already making work. You look at GPT-2: a lot of people would have said that to even get these kinds of results, you need real-world experience, you need a body, you need grounding — how are you supposed to reason about any of these things, how are you supposed to even know about smoke and fire and those things if you've never experienced them? And GPT-2 shows that you can actually go way further than that kind of reasoning would predict.

[01:22:24]

So I think that, in terms of do we need consciousness, do we need a body — it seems the answer is probably not, right? We can probably just continue to push the kinds of systems we have. They already feel general. They're not as competent or as general or able to learn as quickly as an AGI would, but they're at least kind of proto-AGI in some way, and they don't need any of those things.

[01:22:47]

Now, let's move to the grand answer, which is: if our neural nets were conscious already, would we ever know? How can we tell?

[01:22:55]

Right. Yeah. Here's where the speculation starts to become, you know, at least interesting or fun, and maybe a little bit disturbing, depending on where you take it. But it certainly seems that when we think about animals, there's some continuum of consciousness. You know, my cat, I think, is conscious in some way — not as conscious as a human. And you could imagine that you could build a little consciousness meter.

[01:23:18]

Right. You point it at a cat, it gives you a little reading; you point it at a human, it gives you a much bigger reading.

[01:23:23]

What would happen if you pointed one of those at a neural net? And if you're training this massive simulation, do the neural nets feel pain?

[01:23:32]

You know, it becomes pretty hard to know that the answer is no, and it becomes pretty hard to really think about what it would mean if the answer were yes.

[01:23:42]

And it's very possible. For example, you could imagine that maybe the reason humans have consciousness is because it's a convenient computational shortcut. Right? If you think about it, if you have a being that wants to avoid pain — which seems pretty important to survive in this environment — and wants to, you know, eat food, then maybe the best way of doing that is to have a being that's conscious. Right?

[01:24:04]

That, you know, in order to succeed in the environment, you need to have those properties, and how are you supposed to implement them? Maybe consciousness is a way of doing that. If that's true, then actually maybe we should expect that really competent reinforcement learning agents will also have consciousness.

[01:24:19]

But, you know, that's a big if. And I think there are a lot of other arguments you can make in other directions.

[01:24:24]

I think it's a really interesting idea that even GPT-2 has some degree of consciousness. That's actually not as crazy to think about — it's useful to think about as we consider what it means to create the intelligence of a dog, the intelligence of a cat, and the intelligence of a human. So, last question: do you think we will ever fall in love, like in the movie Her, with an artificial intelligence system — or an artificial intelligence system falling in love with a human?

[01:24:55]

I hope so. There's no better way to end it than on love. So, Greg, thanks so much for talking to me. Thank you for having me.