[00:00:00]

The following is a conversation with Rajat Monga.

[00:00:03]

He's an engineer and director at Google, leading the TensorFlow team. TensorFlow is an open source library at the center of much of the work going on in the world in deep learning, both the cutting edge research and the large scale application of learning based approaches. But it's quickly becoming much more than a software library. It's now an ecosystem of tools for the deployment of machine learning in the cloud, on the phone, in the browser, on both generic and specialized hardware, TPU, GPU, and so on.

[00:00:31]

Plus, there's a big emphasis on growing a passionate community of developers. Rajat, Jeff Dean, and a large team of engineers at Google Brain are working to define the future of machine learning with TensorFlow 2.0, which is now in alpha. I think the decision to open source TensorFlow was a definitive moment in the tech industry. It showed that open innovation can be successful and inspired many companies to open source their code, to publish, and in general engage in the open exchange of ideas.

[00:01:01]

This conversation is part of the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D. And now, here's my conversation with Rajat Monga. You were involved with Google Brain since its start in 2011 with Jeff Dean. It started with DistBelief, the proprietary machine learning library, and turned into TensorFlow in 2014, the open source library. So what were the early days of Google Brain like?

[00:01:55]

What were the goals, the missions? How do you even proceed forward when there are so many possibilities before you?

[00:02:04]

It was interesting back then, you know, when I started, when we were even just talking about it, the idea of deep learning was interesting and intriguing. In some ways it hadn't yet taken off, but it held some promise.

[00:02:21]

It had shown some very promising and early results. I think the idea that Andrew and Jeff had started with was, what if we can take this work people are doing in research and scale it to what Google has in terms of compute power, and also put that kind of data together. What does it mean? And so far the results had been, if you scale the compute, scale the data, it does better. And would that work? And so that was the first year or two.

[00:02:49]

Can we prove that out? And with DistBelief, when we started the first year, we got some early wins, which is always great.

[00:02:57]

What were the wins like? What were the wins where you thought, there's something to this, this is going to be good? I think there were two early wins. One was speech, where we collaborated very closely with the speech research team, who was also getting interested in this. And the other one was on images, where we, you know, the cat paper, as we call it, that was covered by a lot of folks.

[00:03:19]

And the birth of Google Brain was around neural networks. So it was deep learning from the very beginning, that was the whole mission. So in terms of scale, what was the sort of dream of what this could become? Were there echoes of this open source TensorFlow community that might be brought in? Was there a sense of TPUs? Was there a sense that machine learning is now going to be at the core of the entire company, that it's going to grow in that direction?

[00:03:52]

Yeah, I think so. That was interesting. Like, if I think back to 2012 or 2011, the first question was, can we scale it? And in the year or so, we had started scaling it to hundreds and thousands of machines. In fact, we had some runs even going to 10,000 machines, and all of those showed great promise. In terms of machine learning at Google, the good thing was Google's been doing machine learning for a long time. Deep learning was new.

[00:04:18]

But as we scaled this up, we showed that, yes, that was possible, and it was going to impact lots of things. Like, we started seeing real products wanting to use this. Again, speech was the first, there were image things that Photos came out of, and then many other products as well. So that was exciting. As we went into that over a couple of years, externally also, academia started to, you know, there was lots of push on,

[00:04:43]

OK, deep learning is interesting, we should be doing more, and so on. And so by 2014, we were looking at, OK, this is a big thing, it's going to grow, and not just internally, externally as well. Yes, maybe Google's ahead of where everybody is, but there's a lot to do. So a lot of this needed to make sense and come together.

[00:05:02]

So the decision to open source, I was just chatting with Chris Lattner about this, the decision to open source TensorFlow, I would say, for me personally, seems to be one of the big seminal moments in all of software engineering ever. I think that's when a large company like Google decides to take a large project that many lawyers might argue has a lot of IP, just decides to go open source with it, and in so doing, lead the entire world in saying, you know what, open innovation,

[00:05:33]

this is a pretty powerful thing and it's OK to do that. I mean, that's an incredible moment in time. So do you remember those discussions happening, whether open source should be happening? What was that like? I would say, I think the initial idea came from Jeff, who was a big proponent of this. I think it came off of two big things. One was, research-wise, we were a research group. We were putting our research out there.

[00:06:06]

We were building on others' research and we wanted to push the state of the art forward. And part of that was to share the research. That's how I think deep learning and machine learning have really grown so fast.

[00:06:17]

So the next step was, OK, now would software help for that? And it seemed like there were a few libraries already out there.

[00:06:26]

Theano being one, Torch being another, and a few others, but they were all done in academia and the level was significantly different.

[00:06:35]

The other one was, from a software perspective, Google had done lots of software that we used internally, you know, and we published papers. Often there was an open source project that came out of that, where somebody else picked up that paper and implemented it, and they were very successful back then.

[00:06:55]

It was like, OK, there's Hadoop, which has come off of tech that we've built. We know the tech we've built is way better for a number of different reasons. We've invested a lot of effort in it. And it turns out we are now not really providing our tech, but we are saying, OK, we have Bigtable, which is the original thing, and we are going to now provide APIs on top of that, which aren't as good, but that's what everybody is used to.

[00:07:23]

So there was this question of, can we make something that is better and really just provide help to the community in lots of ways, where it also helps push a good standard forward.

[00:07:34]

So how does Cloud fit into that? There's the open source library, and how does the fact that you can use so many of the resources that Google provides in the cloud fit into that strategy? So TensorFlow itself is open and you can use it anywhere.

[00:07:50]

Right. And we want to make sure that continues to be the case. On Google Cloud, we do make sure that there are lots of integrations with everything else, and we want to make sure that it works really, really well there.

[00:08:02]

So you're leading the TensorFlow effort. Can you tell me the history and the timeline of the project, in terms of major design decisions, like the open source decision, but really, you know, what to include and not? There's this incredible ecosystem that I'd like to talk about, there are all of these parts, but what are just some sample moments that defined what TensorFlow eventually became?

[00:08:31]

It's, you know, I don't know if you're allowed to say history when it's just...

[00:08:35]

But in deep learning, everything moves so fast in just a few years, so is there any history? Yes, yes.

[00:08:41]

So looking back, we were building TensorFlow, I guess we open sourced it in November 2015.

[00:08:50]

We started on it in the summer of 2014, I guess, and somewhere around three to six months later, late 2014, by then we had decided that, OK, there's a high likelihood we'll open source it. So we started thinking about that and making sure we were heading down that path. By that point, we had seen lots of different use cases at Google, so there were things like, OK, yes, you want to run it at large scale in the data center.

[00:09:20]

Yes, we need to support different kinds of hardware. We had GPUs at that point. We had our first TPU at that point, or it was about to come out, you know, roughly around that time. So the design sort of included those. We had started to push on mobile, so we were running models on mobile. At that point people were customizing code, so we wanted to make sure TensorFlow could support that as well. So that sort of became part of the overall design.

[00:09:51]

When you say mobile, you mean like pretty complicated algorithms running on the phone? That's correct. So when you have a model that you deploy on the phone and run it there, right.

[00:10:01]

Already at that time, there were ideas of running machine learning on the phone? That's correct. We already had a couple of products that were doing that by then, right. And in those cases, we had basically customized handcrafted code or some internal libraries that we were using. So I was actually at Google during this time in a parallel, I guess, universe, but we were using Theano and Caffe. Yeah. Was there some degree to which you were balancing,

[00:10:27]

like, trying to see what Caffe was offering people, trying to see what Theano was offering, that you want to make sure you're delivering on whatever that is? Perhaps the Python part of the thing, maybe that influenced any design decisions? Totally. So when we built DistBelief, some of that was in parallel with some of these libraries coming up. I mean, Theano itself is older. But we were building DistBelief focused on our internal thing because our systems were very different.

[00:10:59]

By the time we got to this, we looked at a number of libraries that were out there. Theano, there were folks in the group who had experience with Torch, with Lua. There were folks here who had seen Caffe. I mean, actually, Yangqing was here as well.

[00:11:15]

What other libraries? I think we looked at a number of things, we might even have looked at a few others back then, I'm trying to remember. In fact, we did discuss ideas around, OK, should we have a graph or not? So putting all these together was definitely, you know, there were key decisions that we wanted. We had seen limitations in our prior DistBelief things.

[00:11:45]

A few of them were just that research was moving so fast, we wanted the flexibility. The hardware was changing fast, we expected to change that. So those probably were two things. And I think the flexibility in terms of being able to express all kinds of crazy things was definitely a big one then. So what about the graph decision? With the move towards TensorFlow 2.0, you know, there's more, by default it will be eager execution, sort of hiding the graph a little bit, because it's less intuitive in terms of the way people develop.

[00:12:19]

So what was that discussion like, in terms of using graphs? It seems like it was kind of the obvious choice at the time.

[00:12:27]

So I think where it came from was, like, DistBelief had a graph-like thing as well. It wasn't a general graph, it was more like a straight line thing, more like what you might think of Caffe, I guess, in that sense. But we always cared about the production stuff, like even with DistBelief, we were deploying a whole bunch of stuff in production. So the graph did come from that. When we thought of, OK, should we do that in Python,

[00:12:55]

and we experimented with some ideas where it looked a lot simpler to use, but not having a graph meant, OK, how do you deploy now?

[00:13:04]

So that was probably what tilted the balance for us. And eventually we ended up with the graph.
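To make the graph-versus-eager tradeoff they're discussing concrete, here's a minimal sketch (mine, not from the conversation) contrasting the TensorFlow 1.x graph-and-session style with the eager style that became the default in 2.0; the values and names are purely illustrative.

```python
import tensorflow as tf

# TensorFlow 1.x style: build a static graph first, then execute it in a session.
# (In TF 2.x this style lives under tf.compat.v1 with eager execution disabled.)
# tf.compat.v1.disable_eager_execution()
# a = tf.compat.v1.placeholder(tf.float32, shape=())
# b = a * 2.0                      # adds a node to the graph, computes nothing yet
# with tf.compat.v1.Session() as sess:
#     print(sess.run(b, feed_dict={a: 3.0}))   # graph is executed here -> 6.0

# TensorFlow 2.x eager style: operations run immediately, like ordinary Python.
a = tf.constant(3.0)
b = a * 2.0                        # executes right away
print(b.numpy())                   # -> 6.0
```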

[00:13:09]

And I guess the question there is, I mean, production seems to be the really good thing to focus on, but did you even anticipate the other side of it, where there could be, what is it, what are the numbers, something crazy, 41 million downloads?

[00:13:25]

Yep. I mean, was it even, like, a possibility in your mind that it would be as popular as it became?

[00:13:35]

So I think we did see a need for this a lot from the research perspective, and like, the early days of deep learning, in some ways. 41 million? No, I don't think I imagined that number back then. It seemed like there's a potential future where lots more people would be doing this,

[00:14:00]

and how do we enable that? I would say this kind of growth, I probably started seeing it somewhat after the open sourcing, where it was like, OK, you know, deep learning is actually growing faster for a lot of different reasons, and we are in just the right place to push on that, and leverage that, and deliver on a lot of things that people want.

[00:14:23]

So what changed once it was open sourced? Like, you know, with this incredible amount of attention from a global population of developers, how did the project start changing? I don't actually remember it during those times. I know looking now, there's really good documentation, there's an ecosystem of tools, there is a community, there is a YouTube channel now.

[00:14:46]

Yeah, it's very, very community driven. Back then, I guess the 0.1 version, is that the version? I think we called it 0.6 or 0.5, something like that.

[00:14:59]

I forget. What changed leading into 1.0? It's interesting.

[00:15:04]

You know, I think we've gone through a few things there. When we started out, when we first came out, people loved the documentation we had, because it was just a huge step up from everything else, because all of those were academic projects, people doing, you know, they don't think about documentation.

[00:15:20]

I think what that changed was, instead of deep learning being a research thing, some people who were just developers could now suddenly take this out and do some interesting things with it, right, who had no clue what machine learning was before then. And that, I think, really changed how things started to scale up in some ways, and pushed on it. Over the next few months, as we looked at, you know, how do we stabilize things,

[00:15:48]

as we looked at, it's not just researchers now, we want stability, people want to deploy things. That's how we started planning for 1.0, and there are certain needs from that perspective. And so, again, documentation comes up, designs, more kinds of things to put that together.

[00:16:05]

And so that was exciting, to get to a stage where more and more enterprises wanted to buy in and really get behind that. And I think post-1.0, and, you know, over the next few releases, that enterprise adoption also started to take off. I would say, between the initial release and 1.0, it was, OK, researchers of course, and then a lot of hobbyists and early interest people excited about this, who started to get on board.

[00:16:32]

And then over the next one, lots of enterprises. I imagine, with anything that's, you know, below 1.0, I guess an enterprise probably wants something that's stable. Exactly. And do you have a sense now that TensorFlow is stable? Like, it feels like deep learning in general is an extremely dynamic field,

[00:16:54]

so much is changing, and TensorFlow itself has been growing incredibly.

[00:16:59]

Do you have a sense of stability at the helm of it? I mean, I know you're in the midst of it, but... Yeah, I think in the midst of it, it's often easy to forget what an enterprise wants and what some of the people on that side want. There are still people running models that are three years old, four years old. So Inception is still used by tons of people. Even ResNet-50 is, what, a couple of years old now, or more.

[00:17:25]

But there are tons of people who use that and they're fine. They don't need the last couple of bits of performance or quality, they want some stability and things that just work. And so there is value in providing that, with that kind of stability, and making it really simple, because that allows a lot more people to access it. And then there's the research crowd, which wants, OK, they want to do these crazy things, exactly like you're seeing,

[00:17:50]

right, not just deep learning in the straight-up models that used to be there. It's not just CNNs and, you know, RNNs, maybe they are transformers now, and now it needs to combine with RL and GANs and so on. So there's definitely that area, like the boundary that's shifting and pushing the state of the art. But I think there's more and more of the past that's much more stable, and even stuff that was two, three years old is very, very usable by lots of people.

[00:18:21]

That makes that part of it a little easier. So I imagine, maybe you can correct me if I'm wrong, one of the biggest use cases is essentially taking something like ResNet-50 and doing some kind of transfer learning on a very particular problem that you have. That's basically probably what a majority of the world does, and you want to make that as easy as possible. So I would say, from the hobbyist perspective, that's the most common case, right.
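As a rough illustration of that common case (a sketch of mine, not something from the conversation), transfer learning in tf.keras typically means loading a pre-trained backbone such as ResNet-50 with ImageNet weights, freezing it, and training a small head on your own classes; the class count and dataset below are placeholders.

```python
import tensorflow as tf

NUM_CLASSES = 5  # placeholder: number of classes in your own problem

# Pre-trained ResNet-50 backbone without its ImageNet classification head.
base = tf.keras.applications.ResNet50(include_top=False,
                                      weights="imagenet",
                                      pooling="avg",
                                      input_shape=(224, 224, 3))
base.trainable = False  # freeze the backbone; only the new head is trained

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds is assumed to yield (image, label) batches with 224x224 images.
# model.fit(train_ds, epochs=5)
```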

[00:18:49]

In fact, with the apps on phones and stuff that you'll see, that's the most common case. I would say there are a couple of reasons for that. One is that everybody talks about that, it looks great on slides. Yeah, it's a visual presentation. Yeah, exactly. What enterprises want, that is part of it, but that's not the big thing. Enterprises really have data that they want to make predictions on. This is often what they used to do with the people who were doing ML, it was just regression models, linear regression, logistic regression, linear models, or maybe gradient boosted trees and so on.

[00:19:26]

Some of them still benefit from deep learning, but that's the bread and butter, like the structured data and so on.

[00:19:32]

So depending on the audience, you look at it a little bit differently. And they just have, I mean, the best case for an enterprise probably is just having a very large data set where deep learning can really shine. That's correct, that's right. And then I think the other piece, that they want or that needs to be put together, is the whole TensorFlow Extended piece, which is the entire pipeline. They care about stability across that entire thing.

[00:19:59]

They want simplicity across the entire thing. I don't need to just train a model. I need to do that every day again, over and over again.

[00:20:07]

I wonder to what degree you have a role in, I don't know, so I teach a course on deep learning, and I have people like lawyers come up to me and say, you know, when is machine learning going to enter the legal realm? The same thing in all kinds of disciplines, immigration, insurance. Often when I see what it boils down to is, these companies are often a little bit old school in the way they organize the data.

[00:20:37]

So the data is just not ready yet. It's not digitized.

[00:20:40]

Do you also find yourself being in the role of an evangelist for, like, let's organize your data, folks, and then you'll get the big benefit of TensorFlow? Do you have those conversations?

[00:20:54]

So, yeah, yeah, you know, I get all kinds of questions there, from, OK, what do I need to make this work, to, do we need deep learning? I mean, there are all these things, I already use this linear model, why would this help? I don't have enough data, let's say, you know, or I want to use machine learning, but I have no clue where to start. So it really goes all the way from people just getting started to the experts who want very specific things.

[00:21:23]

It's interesting. Is there a good answer? It boils down to, oftentimes, digitizing data. So whatever you want automated, whatever data you want to make predictions based on, you have to make sure that it's in an organized form. And within the TensorFlow ecosystem, now you're providing more and more data sets and more and more pre-trained models. Are you finding yourself also being the organizer of data sets?

[00:21:48]

Yes, I think with TensorFlow Datasets, which we just released, that's definitely come up. People want these data sets. Can we organize them and can we make that easier? So that's definitely one important thing. The other related thing I would say is, I often tell people, you know, don't think of the fanciest thing, the newest model. Make something very basic work and then you can improve it.

[00:22:12]

There's just lots of things you can do with it.

[00:22:15]

Yeah, start with the basics. Sure. One of the big things that made TensorFlow even more accessible was Keras, whenever that happened. Of course, it was standalone, sort of outside of TensorFlow. I think it was Keras on top of Theano at first only, and then Keras became on top of TensorFlow. Do you know when Keras chose to also add TensorFlow as a backend? Was it just the community that drove that initially?

[00:22:50]

Do you know if there were discussions, conversations? Yeah.

[00:22:53]

So Francois started the Keras project before he was at Google, and the first backend was Theano. I don't remember if that was after TensorFlow was created or just before. And then at some point, with TensorFlow becoming popular, there were enough similarities that he decided to create this interface and add TensorFlow as a backend. And I believe that might still have been before he joined Google. So, you know, we weren't really talking about that. He decided on his own,

[00:23:24]

and thought that was interesting and relevant to the community. In fact, I didn't find out about him being at Google until a few months after he was here. He was working on some research ideas and doing Keras as his nights and weekends project.

[00:23:41]

So he wasn't like part of the TensorFlow team. He had joined research, and he was doing some amazing research, papers in that area, and he's a great researcher as well. And at some point we realized, oh, he's doing this good stuff, people seem to like the API, and he's right here. So we talked to him and he said, OK, why don't I come over to your team and work with you for a quarter, and let's make that integration happen.

[00:24:11]

And we talked to his manager and he said, sure, a quarter is fine. And that quarter has been something like two years now.

[00:24:19]

And so he's fully on this.

[00:24:21]

So Keras got integrated into TensorFlow, like, in a deep way. Yeah. And now with 2.0, in 2.0, Keras is kind of the recommended way for a beginner to interact with TensorFlow, which makes that initial sort of transfer learning or the basic use cases, even for enterprise, super simple. Right. That's correct, that's right. So what was that decision like? That seems like, uh, kind of a bold decision as well. We did spend a lot of time thinking about that, when we had a bunch of APIs, some layers.

[00:25:02]

There was a parallel layers API that we were building, and then we decided to do Keras in parallel. So there were, like, two things that we were looking at. And the first thing we were trying to do is just have them look similar, like, be as integrated as possible, share all of that stuff.

[00:25:18]

There were also, like, three other APIs that others had built over time, because we didn't have a standard one.

[00:25:25]

But one of the messages that we kept hearing from the community was, OK, which one do we use? And they kept saying, like, OK, here's a model in this one and here's a model in that one, which do I pick? So that was sort of like, OK, we had to address that straight on with 2.0. The whole idea was, you need to simplify, you had to pick one. Based on where we were, we were like, OK,

[00:25:48]

let's see what the people like, and Keras was clearly one that lots of people loved. There were lots of great things about it. Uh, so we settled on that.

[00:26:00]

Organically, that's kind of the best way to do it. It was great, but it was surprising, nevertheless, to sort of bring in an outsider. I mean, there was a feeling like Keras might be almost like a competitor, in a certain kind of way, to TensorFlow, and in a sense it became an empowering element of TensorFlow.

[00:26:18]

That's right. Yeah. It's interesting how you can put two things together which can align, right. And in this case, I think Francois, the team, and I, you know, a bunch of us have chatted, and I think we all want to see the same kind of things. We all care about making it easier for the huge set of developers out there.

[00:26:37]

And that makes a difference. So, Python has Guido van Rossum, who until recently held the position of benevolent dictator for life, right.

[00:26:48]

So does a huge successful open source project like TensorFlow need one person who makes a final decision? You did a pretty successful TensorFlow Dev Summit just now, in the last couple of days. There's clearly a lot of different new features being incorporated, an amazing ecosystem, and so on.

[00:27:10]

How are those design decisions made? Is there a BDFL in TensorFlow, or is it more distributed and organic? I think it's somewhat different, I would say. I've always been involved in the key design directions,

[00:27:32]

but there are lots of things that are distributed. There are a number of people, Martin Wicke being one, who has really driven a lot of the open source stuff, a lot of the APIs there.

[00:27:44]

There are a number of other people who have, you know, pushed on and been responsible for different parts of it.

[00:27:51]

We do have regular design reviews. Over the last year, we've really spent a lot of time opening up to the community and adding transparency, setting more processes in place, so all of these special interest groups, to really grow that community and scale that.

[00:28:09]

I think, at the kind of scale that the ecosystem is at, I don't think we could scale with having me as the single point of decision maker. I got it.

[00:28:19]

OK, so, yeah, the growth of that ecosystem, maybe you can talk about it a little bit. First of all, when it started with deeplearn.js, when it first came out, just the fact that you can train a neural network in the browser, in JavaScript, was incredible. So now TensorFlow.js is really making that a serious, like, a legit thing, a way to operate, whether it's in the backend or the frontend. Then there's TensorFlow Extended,

[00:28:47]

like you mentioned, there's TensorFlow Lite for mobile, and all of it,

[00:28:52]

As far as I can tell, it's really converging towards being able to, you know, save models in the same kind of way.

[00:28:59]

you can move them around, you can train on the desktop and then move it to mobile, and so on.

[00:29:05]

Like, that's right. There's that cohesiveness.
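For what it's worth, here's a minimal sketch of that desktop-to-phone path using the TensorFlow Lite converter; the model is a throwaway placeholder and this is just an illustration of the general flow, not anything specific described above.

```python
import tensorflow as tf

# Placeholder model standing in for whatever you trained on the desktop.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Convert the trained Keras model to the TensorFlow Lite format for mobile.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Ship this file with the Android/iOS app and run it with the TFLite interpreter.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```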

[00:29:08]

So can you maybe give me, whatever I missed, the bigger overview of the mission of the ecosystem that's being built, and where is it moving forward?

[00:29:18]

Yeah, so in short, the way I like to think of this is, our goal is to enable machine learning, and in a couple of ways. One is, we have lots of exciting things going on in ML today. We started with deep learning, but we now support a bunch of other algorithms too. So one is, on the research side, keep pushing on the state of the art. How do we enable researchers to build the next amazing thing?

[00:29:45]

So BERT came out recently. You know, it's great that people are able to do new kinds of research, and there's lots of amazing research that happens across the world. So that's one direction.

[00:29:55]

The other is, how do you take that across to all the people outside who want to take that research and do some great things with it, and integrate it to build real products, to have a real impact on people.

[00:30:08]

And so that's the other axis, in some ways. You know, at a high level, one way I think about it is, there are a crazy number of compute devices across the world,

[00:30:20]

and we often used to think of ML and training and all of this as something you do either on a workstation or in the data center or cloud.

[00:30:29]

But we see things running on the phones, we see things running on really tiny chips. I mean, we had some demos at the developer summit. And so the way I think about this ecosystem is, how do we help get machine learning on every device that has a compute capability? And that continues to grow. And so in some ways, this ecosystem has looked at various aspects of that, growing over time to cover more of those, and we continue to push the boundaries in some areas.

[00:31:01]

We've built more tooling and things around that to help you. I mean, the first tool we started with was TensorBoard, if you want to learn just the training piece. TFX, or TensorFlow Extended, to really do your entire ML pipelines if you, you know, care about all that production stuff. But then going to the edge, going to different kinds of things. And it's not just us now. There are lots of libraries being built on top.

[00:31:32]

So there are some for research, maybe things like TensorFlow Agents or TensorFlow Probability, that started as research things, or for researchers focusing on certain kinds of algorithms. But they're also being deployed or used by production folks. And some have come from within Google, just teams across Google who wanted to build these things. Others have come from just the community, because there are different pieces that different parts of the community care about. And I see our goal as enabling even that.

[00:32:05]

Right. We cannot and won't build every single thing, that just doesn't make sense. But if we can enable others to build the things that they care about, and there's a broader community that cares about that, and we can help encourage that, that's great. That really helps the entire ecosystem, not just us. One of the big things about 2.0 that we are pushing on is, OK, we have so many different pieces, right,

[00:32:30]

how do we help make all of them work well together? So there are a few key pieces there that we are pushing on, one being the core format there and how we share the models themselves, through SavedModel and TensorFlow Hub and so on.
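As a small, hedged sketch of what sharing through that core format looks like in practice (the model and path here are made up for illustration):

```python
import tensorflow as tf

# Any built Keras model (or tf.Module) can be exported as a SavedModel directory.
model = tf.keras.Sequential([tf.keras.layers.Dense(2, input_shape=(3,))])
tf.saved_model.save(model, "/tmp/my_model")   # path is illustrative

# The same directory can be reloaded from Python, served, converted to TFLite,
# or published for reuse (TensorFlow Hub modules build on this format as well).
restored = tf.saved_model.load("/tmp/my_model")
print(restored.signatures)  # the serving signatures packaged with the model
```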

[00:32:46]

And, you know, a few other pieces that really put this together. I was very skeptical when, you know, TensorFlow.js came out, or I guess it was deeplearn.js then.

[00:32:56]

Yeah, that was first. It seemed like a technically very difficult project. As a standalone,

[00:33:02]

it's not as difficult, but as a thing that integrates into the ecosystem, it seems very difficult. So, I mean, there's a lot of aspects of this, you make it look easy, but on the technical side, how many challenges had to be overcome? A lot, and there still have to be. Yes, that's the question here, too. There are lots of things we've iterated on over the last few years, and we've learned a lot.

[00:33:26]

Often when things come together well, things look easy, and that's exactly the point. It should be easy for the end user, but there are lots of things that go on behind that. If I think about the challenges still ahead,

[00:33:41]

there are, you know, we have a lot more devices coming on board, for example, from the hardware perspective. How do we make it really easy for these vendors to integrate with something like TensorFlow? So there's a lot of compiler stuff that others are working on, and there are things we can do in terms of our APIs, and so on.

[00:34:08]

You know, TensorFlow started as a very monolithic system, and to some extent it still is. There are lots of tools around it, but the core is still pretty large and monolithic. One of the key challenges for us to scale that out is, how do we break that apart with clearer interfaces? It's, you know, in some ways it's software engineering 101, but for a system that's now four years old, I guess, or more, and that's still rapidly evolving and that we're not slowing down with, it's hard to, you know, change and modify and really break apart.

[00:34:44]

It's sort of like, as people say, right, it's like changing the engine with the car running. That's exactly what we're trying to do.

[00:34:51]

So there's a challenge here, because the downside of so many people being excited about TensorFlow and coming to rely on it in many applications is that you're kind of responsible, it's the technical debt. You're responsible for previous versions to some degree still working. So when you're trying to innovate, I mean, it's probably easier to just start from scratch every few months.

[00:35:22]

Absolutely. So do you feel the pain of that?

[00:35:26]

2.0 does break some backward compatibility, but not too much. It seems like the conversion is pretty straightforward. And do you think that's still important, given how quickly deep learning is changing?

[00:35:39]

Can you, with the things that you've learned, just start over, or is there pressure to not? It's a tricky balance. So if it was just a researcher writing a paper, who a year later will not look at that code again, sure, it doesn't matter. But there are a lot of production systems that rely on TensorFlow, both at Google and across the world. And people worry about this. I mean, these systems run for a long time.

[00:36:09]

So it is important to keep that compatibility and so on. And yes, it does come with a huge cost.

[00:36:16]

We have to think about a lot of things as we do new things and make new changes. I think it's a tradeoff, right. You might slow certain kinds of things down, but the overall value you're bringing because of that is much bigger, because it's not just about breaking the person from yesterday. It's also about telling the person tomorrow that, you know what, this is how we do things, we're not going to break you when you come on board, because there are lots of new people who are also going to come on board. One way

[00:36:49]

I like to think about this, and I always think about it this way as well: when you want to do new things, you want to start with a clean slate, design with a clean slate in mind, and then we'll figure out how to make sure all the other things work. And yes, we do make compromises occasionally, but unless you design with a clean slate and don't worry about that, you'll never get to a good place. That's brilliant.

[00:37:15]

So even though you're responsible, when you're in the idea stage, when you're thinking of something new, just put all that behind you. That's OK. That's really, really well put. So I have to ask this, because a lot of students and developers ask me how I feel about PyTorch versus TensorFlow. So I've recently completely switched my research group to TensorFlow. I wish everybody would just use the same thing,

[00:37:39]

and TensorFlow is as close to that, I believe, as we have.

[00:37:43]

But do you enjoy competition?

[00:37:48]

It's leading in many ways, in many dimensions, in terms of the ecosystem, in terms of the number of users, momentum, production level, and so on.

[00:37:57]

But, you know, a lot of researchers are now also using PyTorch. Do you enjoy that kind of competition, or do you just ignore it and focus on making TensorFlow the best that it can be?

[00:38:08]

So, just like research or anything people are doing, right, it's great to get different kinds of ideas.

[00:38:14]

And when we started with TensorFlow, like I was saying earlier, it was very important for us to also have production in mind.

[00:38:23]

We didn't want just research, right, and that's why we chose certain things. Now PyTorch came along and said, you know what, I only care about research, this is what I'm trying to do, what's the best thing I can do for this? And it started iterating and said, OK, I don't need to worry about graphs, let me just run things. I don't care if it's not as fast as it can be, but let me just make this part easy.

[00:38:46]

And there are things you can learn from that, right. They again had the benefit of seeing what had come before, but also exploring certain different kinds of spaces, and they had some good things there, building on, say, things like Chainer and so on before that.

[00:39:02]

So competition is definitely interesting. It made us, you know, this is an area that we had thought about, like I said, earlier on. Over time we had revisited this a couple of times, should we add this? At some point we said, you know what, it seems like this can be done well, so let's try it again. And that's how, you know, we started pushing on eager execution, and how do we combine those two together, which has finally come together very well with 2.0.

[00:39:29]

But it took us a while to get all the things together and so on.

[00:39:32]

So let me, I mean, ask it another way. I think eager execution is a really powerful thing that was added. Do you think it wouldn't have been, you know, Muhammad Ali versus Frazier, do you think it wouldn't have been added as quickly if PyTorch wasn't there? It might have taken longer. Yeah. I mean, we tried some variants of that before, so I'm sure it would have happened, but it might have taken longer.

[00:39:58]

Well, I'm grateful it happened. By the way, they're doing some incredible work these last couple of years. What are the things that we didn't talk about that you're looking forward to in 2.0, that come to mind? So we talked about some of the ecosystem stuff, making it easily accessible through Keras, eager execution. Are there other things that we missed? Yeah, so I would say one is just where 2.0 is, and, you know, with all the things that we've talked about, I think as we think beyond that, there are lots of other things that it enables us to do and that we're excited about.

[00:40:34]

So what it's setting us up for: there are really clean APIs, we've cleaned up the surface for what the users want. What it also allows us to do is a whole bunch of stuff behind the scenes, once we are ready with 2.0.

[00:40:47]

So, for example, in TensorFlow with graphs and all the things you could do, you could always get a lot of good performance if you spent the time to tune it, right. And we've clearly shown that, lots of people do that. With 2.0, with these APIs where we are, we can give you a lot of performance just with whatever you do, because we see it's much cleaner, we know most people are going to do things this way.

[00:41:20]

We can really optimize for that and get a lot of those things out of the box.

[00:41:25]

And it really allows us, you know, both for single machine and distributed and so on, to really explore other spaces behind the scenes, after 2.0 and in the future versions as well. So right now the team is really excited about that. Over time, I think we'll see that.
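A hedged example of the kind of thing being described: in 2.0 you write ordinary eager Python, and wrapping it in tf.function lets TensorFlow trace it into a graph behind the scenes so it can be optimized, without the user managing sessions. The function below is invented purely for illustration.

```python
import tensorflow as tf

@tf.function  # traced into a graph on first call, then reused and optimized
def mse(prediction, target):
    # Ordinary-looking TensorFlow code; no explicit graph or session handling.
    return tf.reduce_mean(tf.square(prediction - target))

x = tf.constant([1.0, 2.0, 3.0])
y = tf.constant([1.5, 2.0, 2.5])
print(mse(x, y).numpy())  # runs the traced graph, but behaves like eager code
```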

[00:41:42]

The other piece that I was talking about in terms of just restructuring the monolithic thing into more pieces and making it more modular, I think that's going to be really important for a lot of the other people in the ecosystem, other organizations and so on, that wanted to build things.

[00:42:01]

Can you elaborate a little bit on what you mean by making TensorFlow, the ecosystem, more modular?

[00:42:06]

So the way it's organized today is, there are lots of repositories in the TensorFlow organization on GitHub. The core one, where we have TensorFlow, it has the execution engine, it has, you know, the key backends for CPUs and GPUs, it has the work to do distributed stuff, and all of these just work together in a single library or binary. There's no way to split them apart easily. I mean, there are some interfaces, but they're not really clean. In a perfect world,

[00:42:38]

you would have clean interfaces where, OK, I want to run it on my fancy cluster with some custom networking, just implement this and do that. I mean, we kind of support that,

[00:42:48]

but it's hard for people today. I think, as we are starting to see more interesting things in some of these places, having that clean separation will really start to help,

[00:42:59]

and, again, going to the large size of the ecosystem and the different groups involved, enabling people to evolve and push on things more independently just allows it to scale better.

[00:43:12]

And by people, you mean individual developers and organizations? Individual developers and organizations, that's right. So the hope is that everybody, sort of, major, I don't know, Pepsi or something, users, like major corporations, would go to TensorFlow to do this kind of thing? Yeah. If you look at enterprises like Pepsi or these, I mean, a lot of them are already using TensorFlow. They are not the ones that do the development or changes in the core. Some of them do, but a lot of them don't.

[00:43:38]

I mean, there may be small pieces, but there are lots of others, some of them being, let's say, hardware vendors who are building their custom hardware and want their own pieces in there, or some of them being bigger companies, say IBM. I mean, they're involved in some of our special interest groups, and they see a lot of users who want certain things and they want to optimize for that.

[00:43:58]

So folks like that, or autonomous vehicle companies, perhaps.

[00:44:02]

Exactly, yes. So, yeah, like I mentioned, TensorFlow has been downloaded 41 million times, 50,000 commits, almost 10,000 pull requests, 1,800 contributors. So I'm not sure if you can explain it, but what does it take to build a community like that? In retrospect, what do you think is the critical thing that allowed for this growth to happen, and how does that growth continue?

[00:44:30]

Yeah, yeah, that's an interesting question. I wish I had all the answers there, I guess, so we could replicate it. I think there are a number of things that need to come together, right. One, just like any new thing, there's a sweet spot of timing,

[00:44:52]

what's needed, you know, does it grow with what's needed? In this case, for example, TensorFlow is not just growing because it is a good tool, it's also growing with the growth of deep learning itself. So those factors come into play. Other than that, though, I think just listening to the community, what they need, being open, like in terms of external contributions, we've spent a lot of time in making sure we can accept those contributions

[00:45:22]

well, that we can help the contributors in adding those, putting the right processes in place, getting the right kind of community, welcoming them, and so on. Like, for the last year, we've really pushed on transparency. That's important for an open source project. People want to know where things are going, and we're like, OK, here's a process where you can know that, here are RFCs, and so on. So thinking all of that through,

[00:45:47]

there are lots of community aspects that come into that. As a small project,

[00:45:53]

it's maybe easy to do because there are, like, two developers and you can do those. As you grow, putting more of these processes in place, thinking about the documentation, thinking about what developers care about, what kind of tools they would want to use, all of these come into play.

[00:46:12]

I think so. One of the big things, I think, that feeds the fire is people building something on TensorFlow, and, you know, somebody implements a particular architecture, does something cool and useful, and they put that on GitHub, and so it just feeds this growth. Do you have a sense that with 2.0 and 1.0 there may be a little bit of a partitioning, like there is with Python 2 and 3, that there will be code bases in the older versions of TensorFlow that will not be converted easily? Or are you pretty confident that this kind of conversion is pretty natural and easy to do?

[00:46:54]

So we're definitely working hard to make that very easy to do. There's lots of tooling that we talked about at the developer summit this week, and we continue to invest in that tooling. You know, when you think of these significant version changes, that's always a risk, and we are really pushing hard to make that transition very, very smooth.

[00:47:15]

I think, so at some level, people want to move when they see the value in the new thing. They don't want to move just because it's a new thing; some people do, but most people want a really good thing. And I think over the next few months, as people start to see the value, we'll definitely see that shift happening. So I'm pretty excited and confident that we will see people moving. As you said earlier, this field is also moving rapidly,

[00:47:41]

so that will help, because we can do more things, and, you know, all the new things will clearly happen in 2.x, so people will have lots of good reasons to move.

[00:47:48]

So what do you think TensorFlow 3.0 looks like? Are things happening so crazily fast that even the end of this year seems impossible to plan for, or is it possible to plan for the next five years? I think it's tricky.

[00:48:07]

There are some things that we can expect, in terms of, OK, change, yes, change is going to happen. Are there some things that are going to stick around and some things that are not going to stick around? I would say the basics of deep learning, the, you know, convolutional models or the basic kinds of things, they'll probably be around in some form still in five years. Will RNNs stay? Very likely, based on where they are.

[00:48:37]

We'll have new things, probably, but those are hard to predict. And directionally, some things that we can see, as, you know, things that we're starting to do right now with some of our projects, right, 2.0 is combining eager execution and graphs, where we're starting to make it more like just your natural programming language, you're not trying to program something else. Similarly, with Swift for TensorFlow, we're taking that approach, can you do something ground up?

[00:49:05]

Right. So some of those ideas seem like, OK, that's the right direction. In five years, we expect to see more in that area. Other things we don't know as well: will hardware accelerators be the same? Will we be able to train with 4 bits instead of 32 bits?

[00:49:25]

And I think the TPU side of things is exploring that. I mean, TPUs are already on version three. It seems that the evolution of TPUs and TensorFlow are sort of co-evolving, almost, in terms of both, they're learning from each other and from the community and from the applications where the biggest benefit is. That's right. You've been trying, with eager, with Keras, to make TensorFlow as accessible and easy to use as possible. What do you think, for beginners, is the biggest thing they struggle with?

[00:49:56]

Have you encountered that? Or is it basically what Keras is solving, is it that eager execution, like we talked about?

[00:50:03]

Yeah, for some of them, like you said, right, beginners want to just be able to take some image model, they don't care if it's Inception or ResNet or something else, and do some training or transfer learning on their kind of model. Being able to make that easy is important. So in some ways, if you do that by providing them simple models, with, say, TensorFlow Hub or so on, they don't care about what's inside that box, but they want to be able to use it.

[00:50:31]

So we are pushing on, I think, different levels. If you look at just a component that you get, which has the layers already smooshed in, the beginner probably just wants that. Then the next step is, OK, look at building with layers, because if you go to research, then they are probably creating custom layers themselves or doing their own thing there. So there's a whole spectrum there.
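To illustrate the research end of that spectrum (my own sketch, not code from the interview), a custom layer in tf.keras is just a subclass that defines its own weights and forward pass, and it drops into a model next to the built-in layers.

```python
import tensorflow as tf

class ScaledDense(tf.keras.layers.Layer):
    """Toy custom layer: a dense transform with a learnable output scale."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Weights are created lazily, once the input shape is known.
        self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
                                      initializer="glorot_uniform")
        self.scale = self.add_weight(shape=(), initializer="ones")

    def call(self, inputs):
        return self.scale * tf.matmul(inputs, self.kernel)

# Usable like any built-in layer.
model = tf.keras.Sequential([ScaledDense(8), tf.keras.layers.Dense(1)])
print(model(tf.random.normal((2, 4))))
```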

[00:50:52]

And then providing the pre-trained models seems to really decrease the time from when you're trying to start, so you could basically, in a Colab notebook, achieve what you need. So I'm basically answering my own question, because I think what TensorFlow delivered on recently makes it trivial for beginners. So I was just wondering if there were other pain points you're trying to ease, but I'm not sure there are... No, those are probably the big ones. I mean, I see high schoolers doing a whole bunch of things now, which is pretty amazing.

[00:51:25]

It is both amazing and terrifying.

[00:51:27]

Yes.

[00:51:28]

In the sense that when they grow up, some incredible ideas will be coming from them. So there's certainly a technical aspect to your work, but you also have a management aspect to your role with TensorFlow, leading the project, a large number of developers and people.

[00:51:47]

So what do you look for in a good team? What do you think? You know, Google's been at the forefront of exploring what it takes to build a good team, and TensorFlow is

[00:51:59]

one of the most cutting edge technologies in the world, so in this context, what do you think makes for a good team? It's definitely something I think a fair bit about. I think, in terms of, you know, the team being able to deliver something well, one of the things that's important is cohesion across the team, so being able to execute together on things. At this scale, an individual engineer can only do so much.

[00:52:31]

There's a lot more that they can do together, even though we have some amazing superstars across Google and in the team. But often the way I see it is, what the team generates is larger than, you know, each individual put together. And so how do we have all of them work together?

[00:52:53]

The culture of the team itself, hiring good people, is important. But part of that is, it's not just that, OK, we hire a bunch of smart people and throw them together and let them do things. It's also that people have to care about what they're building, people have to be motivated for the right kind of things. That's often an important factor. And, you know, finally, how do you put all of that together with a somewhat unified vision of where we want to go?

[00:53:25]

So are we all looking in the same direction, or is each of us going all over? And sometimes it's a mix.

[00:53:32]

Google's a very bottom-up organization in some sense, also research even more so, and that's how we started. But as we've become this larger product and ecosystem, I think it's also important to combine that well with a mix of, here's the direction we want to go in, there is exploration we do around that, but let's keep going in that direction, not just all over the place.

[00:54:00]

And is there a way you monitor the health of the team? Sort of, is there a way to know you did a good job,

[00:54:08]

the team is good? Like, I mean, you're sort of saying nice things, but it's sometimes difficult to determine.

[00:54:15]

Yes. How aligned.

[00:54:17]

Yes, because it's not binary. It's not like, there are tensions and complexities and so on. And there's the other element of the superstars, you know, even at Google, such a large percentage of work is done by individual superstars, too. So there's that tension, and sometimes those superstars can be against the dynamic of a team. Have those tensions come up? I mean, I'm sure with TensorFlow it might be a little bit easier because the mission of the project is so

[00:54:47]

sort of beautiful, you're at the cutting edge, it's exciting. Yeah. But have you had to struggle with that? Have there been challenges? There are always people challenges in different kinds of ways. But I think we've been, what's been good is getting people who care and, you know, have the same kind of culture, and that's Google in general to a large extent. But also, like you said, given that the project has had so many exciting things to do,

[00:55:15]

there's been room for lots of people to do different kinds of things and grow, which does make the problem a bit easier, I guess,

[00:55:22]

and it allows people, depending on what they're doing, to find the room around them, and that's fine.

[00:55:29]

But yes, we do care that, whether a superstar or not, they need to work well with the team, across the board. That's interesting to hear. So it's, like, superstar or not, the productivity broadly is about the team. Yeah, yeah.

[00:55:47]

I mean, they might add a lot of value, but if they're holding the team back, then that's a problem.

[00:55:51]

So in hiring engineers, it's so interesting, the hiring process. What do you look for? How do you determine a good developer or a good member of a team from just a few minutes or hours together?

[00:56:05]

So, again, no magic answers, I'm sure. Yeah, yeah. I mean, Google has a hiring process that we've refined over the last 20 years, I guess, and that you've probably heard and seen a lot about, so we do work with the same hiring process, and that's really helped. For me in particular, I would say, in addition to the core technical skills, what does matter is their motivation in what they want to do, because if that doesn't align with where we want to go, that's not going to lead to long term success for either them or the team.

[00:56:43]

And I think that becomes more important the more senior the person is, but it's important at every level. Like, even the most junior engineer, if they're not motivated to do well at what they're trying to do, however smart they are, it's going to be hard for them to succeed.

[00:56:56]

Does the Google hiring process touch on that passion? Like, trying to determine it? Because I think, as far as I understand, maybe you can speak to it, the Google hiring process sort of helps with the initial, like, it determines the skill set: is your puzzle solving ability, problem solving ability good? But I'm not sure, it seems hard to determine whether the person has, like, a fire inside them, that they've just got to do this thing, really,

[00:57:25]

it doesn't really matter what, it's just some cool stuff,

[00:57:27]

and I'm going to do it. That, I don't know. Is that something that ultimately ends up being determined when they have a conversation with you, or once it gets closer to the team?

[00:57:38]

So one of the things we do have as part of the process is just a culture fit, like, part of the interview process itself, in addition to just the technical skills. Each engineer, or whoever the interviewer is, is supposed to

[00:57:52]

rate the person on the culture and the culture fit with Google and so on. So that is definitely part of the process. Now, there are various kinds of projects and different kinds of things, so there might be variants in the kind of culture you want there and so on, and yes, that does vary.

[00:58:09]

So, for example, TensorFlow has always been a fast moving project, and we want people who are comfortable with that.

[00:58:17]

But at the same time now, for example, we are at a place where we are also a very full fledged product, and we want to make sure things that work really, really work, right. You can't cut corners all the time. So balancing that out, and finding the people who are the right fit for those, is important. And I think those kinds of things do vary a bit across projects and teams and product areas across Google, and so you'll see some differences there in the final checklist.

[00:58:43]

But a lot of the corporate culture comes along with just the engineering excellence and so on.

[00:58:50]

What is the hardest part of your job? Take your pick, I guess. It's fun, I would say, right? Hard, yes. I mean, lots of things at different times, I think that does vary. So maybe clarify that difficult things are fun?

[00:59:08]

Yeah, when you solve them, right? Yes.

[00:59:12]

It's fun in that sense. I think the key to a successful thing across the board, and in this case it's a large ecosystem now, but even for a small product, is striking that fine balance across different aspects of it. Sometimes that's how fast you go versus how perfect it is. Sometimes it's how you involve this huge community: who do you involve, or do you decide, OK, now's not a good time to involve them because it's not the right fit?

[00:59:46]

You know, sometimes it's saying no to certain kinds of things. Those are often the hard decisions. Some of them you make quickly because you don't have the time; some of them you get time to think about, but they're always hard ones.

[01:00:01]

Both choices are pretty good with those decisions. What about deadlines? Do you find TensorFlow to be driven by deadlines, to the degree that a product might be, or is there still a balance? Because, like, recently you had the Dev Summit, and it came together incredibly well. It looked like there were a lot of moving pieces and so on, so did that deadline make people rise to the occasion, releasing the TensorFlow 2.0 Alpha? I'm sure that was done last minute as well.

[01:00:36]

I mean, up to the... Yes, up to the last point. Yes.

[01:00:41]

Again, you know, it's one of those things where you need to strike a good balance. There's some value that deadlines bring; they do bring a sense of urgency to get the right things together, instead of, you know, getting the perfect thing out.

[01:00:54]

You need something that's good and works well. And the team definitely did a great job in putting that together, so I was very amazed and excited by how everything came together. That said, across the board, we try not to set artificial deadlines. We focus on the key things that are important, figure out how much of that is important, and then we develop in the open. You know, internally and externally, everything's available to everybody.

[01:01:24]

So you can pick and look at where things are. We do releases at a regular cadence, so it's fine if something doesn't necessarily make it this month; it'll end up in the next release in a month or two.

[01:01:35]

And that's OK.

[01:01:37]

But we want to keep moving as fast as we can in these different areas, because we can iterate and improve on things. Sometimes it's OK to put things out that aren't fully ready; we'll make sure it's clear that, OK, this is experimental, but it's out there if you want to try it and give feedback. That's very, very useful. I think that quick cycle and quick iteration is important. That's what we often focus on, rather than saying here's a deadline by which you get everything into 2.0.

[01:02:06]

Is there pressure to make that stable? Like, for example, WordPress 5.0 just came out, and there was a lot of pressure to deliver it, and it was a little late. But they said, OK, well, we're going to release a lot of updates really quickly to improve it. Do you see TensorFlow 2.0 in that same kind of way, or is there pressure that once it's 2.0, once you get to the release candidate and then you get to the final, that's going to be the stable thing?

[01:02:38]

So it's going to be stable, in the sense that, just like 1.x was, every API that's there is going to remain and work. It doesn't mean we can't change things under the covers, and it doesn't mean we can't add things. So there's still a lot more for us to do, and we'll continue to have more releases. So in that sense, I don't think we'll be done in, like, two months when we release this.

[01:03:02]

I don't know if you can say, but, you know, there are no external deadlines for TensorFlow 2.0. Are there internal deadlines, artificial or otherwise, that you're trying to set for yourselves, or is it whenever it's ready?

[01:03:19]

So we want it to be a great product. And that's a big, important piece for us.

[01:03:26]

TensorFlow is already out there. We have, you know, 41 million downloads of 1.x, so it's not like we have to rush this out. Yeah, exactly. A lot of the features that we're really polishing and putting together are already out there, so we don't have to rush just for that. So in that sense, we want to get it right and really focus on that. That said, we have said that we are looking to get this out in the next few months, in the next quarter.

[01:03:50]

And, you know, as far as possible, we'll try to make that happen.

[01:03:56]

Yeah, my favorite line was "Spring is a relative concept." I love it. Yes. Spoken like a true developer.

[01:04:03]

So, you know, something I'm really interested in is your previous line of work: before TensorFlow, you led a team at Google on search ads.

[01:04:13]

I think this is a very interesting topic on every level, including the technical level, because at their best, ads connect people to the things they want and need. Yeah. And at their worst, they're just these things that annoy the heck out of you, to the point of ruining the entire user experience of whatever you're actually doing.

[01:04:36]

And so they have a bad rep, I guess. At the other end, connecting users to the thing they need and want is a beautiful opportunity for machine learning to shine: huge amounts of personalized data that you map to the thing people actually want without annoying them. So what have you learned from this at Google, which is leading the world in this aspect?

[01:05:01]

What have you learned from that experience, and what do you think is the future of ads? It does take me back. Yes, it's been a while, but I totally agree with what you said.

[01:05:15]

I think search ads, the way they were always looked at, and I believe they still are, are an extension of what search is trying to do.

[01:05:24]

The goal is to make the world's information accessible. It's not just information, but maybe products or, you know, other things that people care about. And so it's really important for ads to align with what the users need. And, you know, in search ads, there's a minimum quality bar before an ad will be shown. If you don't have an ad that hits that quality bar, it will not be shown, even if we have one.

[01:05:52]

And OK, maybe we lose some money there. That's fine. That is really, really important, and that is something I really liked about being there.

[01:06:01]

Advertising is a key part. I mean, as a model, it's been around for ages, right? It's not a new model.

[01:06:09]

It's been adapted to the Web and, you know, became a core part of search and of many other search engines across the world. I do hope we handle it well, though. Like I said, there are aspects of ads that are annoying: if I go to a website and an ad just keeps popping up in my face, not letting me read, that's clearly going to be annoying.

[01:06:30]

So I hope we can strike that balance, showing a good ad where it's valuable to the user and provides the monetization for the service, whether that's search or a website. All of these do need the monetization for them to provide that service.

[01:06:55]

But it has to be done with a good balance between

[01:07:01]

showing just some random stuff that's distracting versus showing something that's actually valuable.

[01:07:07]

So do you see ads, moving forward, continuing to be a model that funds businesses like Google, as a significant revenue stream? Because that's one of the most exciting, but also limiting, things about the Internet: nobody wants to pay for anything, and advertisements at their best are actually really useful and not annoying. Do you see that continuing and growing and improving, or do you see more Netflix-type models where you have to start to pay for content?

[01:07:45]

I think it's a mix. I think it's going to take a long while for everything to be paid on the Internet, if at all. Probably not. I mean, I think there's always going to be things that are sort of monetized with things like ads. But over the last few years, I would say we've definitely seen that transition towards more paid services across the Web and people are willing to pay for them because they do see the value. I mean, Netflix is a great example.

[01:08:09]

I mean, we have YouTube doing things. People pay for the apps they buy. More people, I find, are willing to pay for newspaper content, for the good news websites across the Web.

[01:08:23]

That wasn't the case even a few years ago, I would say. And I see that change in myself as well, and in lots of people around me. So I'm definitely hopeful that we'll transition to that mixed model, where maybe you get to try something out for free, maybe with ads, but then there's a clearer revenue model that helps go beyond that.

[01:08:46]

So speaking of revenue, how is it that a person can use a TPU in Google Colab for free?

[01:08:55]

So I guess the question is, what's the future of TensorFlow in terms of empowering, say, a class of 300 students, like the ones I teach at MIT?

[01:09:11]

What is going to be the future of them being able to do their homework in TensorFlow? Like, where are they going to train these networks? Right. Right. What does their future look like with TPUs, with cloud services, and so on? I think there are a number of things to answer there. With open source, you can run it wherever you want. You can run it on your desktop, and desktops always keep getting more powerful, so maybe you can do more. My phone is, I don't know, how many times more powerful than my first desktop?

[01:09:39]

You could probably train a network on your phone, though.

[01:09:41]

That's right. So in that sense, the power you have in your hands is a lot more.

[01:09:47]

Clouds are actually very interesting from, say, a student's or a course's perspective, because they make it very easy to get started. I mean, the great thing about Colab is you go to a website and it just works. No installation, nothing; you're just there and things are working. That's really the power of the cloud as well. And so I do expect that to grow. Again, you know, Colab is a free service.

[01:10:14]

It's great to get started, to play with things, to explore things. That said, you know, with a free service you can only do so much.
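To make that concrete, here is a rough sketch of the kind of first cell one might run in a free Colab notebook to see what hardware the runtime provides. It assumes a reasonably recent TensorFlow 2.x; the exact TPU-initialization calls have shifted between releases, so treat it as illustrative rather than definitive.

```python
# Rough sketch, assuming a TensorFlow 2.x runtime such as a free Colab notebook.
# TPU-initialization APIs have moved between releases, so details may differ.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

try:
    # If the notebook runtime has a Cloud TPU attached, connect to it.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="")
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
    print("TPU cores:", strategy.num_replicas_in_sync)
except (ValueError, tf.errors.NotFoundError):
    # No TPU runtime attached; fall back to the default CPU/GPU strategy.
    strategy = tf.distribute.get_strategy()
    print("No TPU found; using", type(strategy).__name__)
```

Any model built inside strategy.scope() will then run on whichever accelerator was found.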

[01:10:24]

So just like we were talking about, you know, free versus subscription: yeah, there are services you can pay for and get a lot more. Great.

[01:10:32]

So if I'm a complete beginner interested in machine learning and TensorFlow, what should I do? Probably start by going to our website and playing there. tensorflow.org is meant for that audience: click on things like our tutorials and guides. That stuff is right there; you can click through, open it in Colab, and do things with no installation needed. You can get started right there.
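To make the "no installation needed" point concrete, here is a minimal sketch of the kind of first model a beginner could run in one of those Colab notebooks. It follows the general shape of the public Keras quickstarts rather than any specific tutorial, and assumes TensorFlow 2.x with its bundled Keras API.

```python
# Minimal first-model sketch in the spirit of the beginner tutorials
# (not copied from any specific guide). Assumes TensorFlow 2.x.
import tensorflow as tf

# A small built-in dataset, so there is nothing to install or download by hand.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A tiny fully connected classifier built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)   # train for a few passes over the data
print(model.evaluate(x_test, y_test))   # report test loss and accuracy
```

Running a couple of dozen lines like these in a notebook cell is roughly what "get started right there" looks like in practice.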

[01:10:50]

OK, awesome. Thank you so much for talking today. Thank you, Lex. This was great.