
You're listening to Leading Up with Udemy. This podcast is your guide to developing your skills as an emerging or seasoned leader. I'm Alan Todd, your host and the vice president of leadership development at Udemy. Together we can work, lead, and live differently to create a better world. I was super excited to have Prasad Raje, the chief product officer for Udemy, on the podcast this week. I really loved how Prasad connected the entire Gen AI movement to its historical context, all the way back from the printing press to the web to the smartphone. It helped set the foundation for where we are and where we're going.


The question is, well, why now? Why 2023? Why does this happen now, and not five years from now? Why not five years ago? And the answer is, it happened this way because the amount of compute power available to train sufficiently large language models became reasonable, reasonable being tens of millions of dollars, but not impossible. And that's what ended up creating, of course, the first of the LLMs that you saw a year and a half ago.


This week, I'm tapping into the deep well of knowledge within Udemy's doors and speaking with our chief product officer, Prasad Raje, about generative AI and how we all should be thinking about reskilling ourselves for the future of work. As the founder of several successful tech startups, Prasad is the consummate serial entrepreneur. He has decades of experience building and growing software as a service companies of all sizes, from five to 50,000 employees. And he holds multiple degrees from Stanford and the Indian Institute of Technology, Bombay. Prasad, welcome to the podcast.


I am so happy to be here.


Yeah, super. Well, Prasad, you've been talking about generative AI and what it means for us as knowledge workers for a long time. And I've heard you talk about it as the sort of fourth big change in the way we create and transmit content. So give us your sort of framing of where we stand today with regards to Gen AI.


Great question, Alan. So, in fact, early on, when the Gen AI revolution was starting more than a year ago now, I had to sit back and ask, what is it that feels so big? Why does this feel so big? And believe me, it did feel big back then. And I connected this with what I call the four revolutions of content, going all the way back to the earliest days of the printing press in the 1450s, when the printing press was born. It led to the Gutenberg Bible and the ability for knowledge to be not written in manuscripts, but typeset and then printed and replicated. So that was a content replication revolution. And just those two things, content and replication, changed the world. We all know that. It led to information delivery, the spread of religion, the spread of knowledge, the rise of universities, the creation of libraries. You can see that little technology change had such vast and long-standing impacts on our society and the world at large. Now, the second revolution in content came with the Internet, beginning in the 1970s and more broadly with the early days of the web in 1993.


But really, the Internet itself fundamentally laid the foundation for moving information not in the form of physical atoms from point A to point B, but in the form of bits. So we were able to transmit data through information networks. Fundamentally, it made it possible for content to go from one place to another for essentially zero cost. And the web, of course, was born. In fact, email was born even before that. And that content transmission revolution, of course, changed the world in a big way. The Internet fundamentally changed the world. And that was the second revolution. Now, the third wave was the arrival of the smartphone. The smartphone, again, was a technological revolution. But more importantly, it was a content consumption revolution, in that people were no longer tethered to their desktops or even their laptops. They could be anywhere, anytime, and consume information wherever they were. This led to the explosion of information instantly being everywhere, the explosion of social networks, and the explosion also, frankly, of video content consumption in a large-scale, untethered way. So that content consumption revolution, of course, is still with us and has changed the world, as we all know, just by the act of having billions of cell phones in everybody's hands all over the world.


So all of those lead us to the Gen AI revolution. The Gen AI revolution is a revolution of content creation and content synthesis in a completely different way. Through those previous three revolutions, content was fundamentally still created by humans, and humans were the only ones that had a facility with natural language. Whereas with Gen AI, and especially the large language models, it became possible for the first time for machines, for computers, to have a facility with language, with content, that was unprecedented. We've all had the experience of speaking to a generative AI chatbot, whether it's OpenAI or Bard or Bing or what have you, and saying, wow, this is so natural. That was never possible before. But that content creation is one thing. I also want to point out that content synthesis is another. The large language models have the ability to synthesize content as well.


So I'm sitting here as you're framing the issue to discuss, and I can't help but imagine you're a college student at IIT Bombay, and all of this stuff has happened in this compressed timeframe, and you've gotten to sit on top of it. I mean, you were in probably the most difficult university in the world to get into, statistically speaking, and you went on to multiple advanced degrees at Stanford. Did you ever think you were going to get to sit in a front-row seat of the world of technology?


Wow, that's a great question. No one's asked me that. Well, I have to say, IIT definitely prepared me to be a technologist at the core. I've always been one who has sat at the essence of technology, wanting to understand how stuff works all the way down: from my degree at Stanford in device physics, to understand how transistors work at a fundamental level, to how computer systems and computer architectures are put together, and then eventually networks. And from there I moved on, during the early days of the Internet, to the software side, to building Internet websites and so on and so forth. So, yes, the Internet revolution also was something that I saw up close and personal, building the amazingly innovative thing called a website for companies. That was the business model: building and hosting websites was what that company did.


So, Prasad, where do we sit right now, just in the context of what you just said, is this as big as the Internet? Is the AI revolution that big?




You've watched all of it. What's your gut tell you right now?


It is that big and bigger, and I'll tell you why. In some sense, if you think about it from an actual technological, computer-systems standpoint: if you look at the oldest computers, the mainframes, and then the minicomputers and the personal computers, and then the laptops and then even the phones, they're fundamentally the same computer. If you open them up, other than scale, they actually have the same architecture. They have an arithmetic logic unit, they have a memory unit. They fetch instructions, they fetch data, they act on data, they send stuff back to memory. It's fundamentally quite similar. Wake up anybody from the 1960s, a mainframe engineer from IBM, and show them a smartphone: they'll be amazed at the size of it, but they'll recognize the instruction set and the architecture. However, today, if you look at what's happening with generative AI, the underlying architecture is something completely different, completely new. It's not the same old thing, in the sense that this is a whole new computing paradigm. We're not giving computers an instruction to follow. Ultimately, no matter how complex our world is, in this conversation we're having, my voice converts into audio.
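The fetch-act-store cycle described here can be sketched as a toy machine. This is an illustrative miniature in Python, not any real instruction set; the opcodes and memory layout are invented for the example:

```python
# Toy von Neumann machine: fetch an instruction, decode it, act on data,
# and write the result back to memory. Fully deterministic, as described:
# the same program and data always produce the same answer.
memory = {"a": 2, "b": 3, "result": 0}
program = [
    ("LOAD", "a"),        # fetch data from memory into the accumulator
    ("ADD", "b"),         # the arithmetic logic unit acts on the data
    ("STORE", "result"),  # send the result back to memory
]

accumulator = 0
for opcode, operand in program:   # the fetch-decode-execute cycle
    if opcode == "LOAD":
        accumulator = memory[operand]
    elif opcode == "ADD":
        accumulator += memory[operand]
    elif opcode == "STORE":
        memory[operand] = accumulator

print(memory["result"])  # 5, every run
```

Every step is an instruction some human wrote down, which is exactly the property generative AI breaks.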


The audio goes into bits, and the bits go into a CPU, into memory and the CPU. You can follow all that. It's all deterministic. Some set of humans has written the instructions to make all this stuff work, and sufficiently smart people can sit together and describe the whole thing to you. Now, in the case of AI, it's completely different. If you look at the workings of a large language model and its architecture, it's not as if someone has written instructions on how to respond to a prompt. What's happening from a computational standpoint is completely different. You have the notion of a neural network, a sophisticated network that has been trained on a bunch of data to behave a certain way. By the way, this is very similar to how a human brain behaves from the earliest days of a baby, as it receives inputs from the world around it. When I watch little babies now, I can't help thinking they are actually in their training phase. Their neural networks are going through training, and their mental model weights, if you will, the weights of the neurons and the connections between the neurons, are building as they go.
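The idea of behavior living in trained weights rather than in written instructions can be shown at the smallest possible scale. A minimal sketch, assuming a single weight and plain gradient descent (real networks have billions of weights, but the principle is the same):

```python
# A single "neuron" whose behavior is learned, not programmed: no human
# writes the rule y = 2x into the code; the weight converges to it from
# training examples alone.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs x with targets y = 2x

weight = 0.0                # starts knowing nothing
learning_rate = 0.05
for _ in range(200):        # the "training phase"
    for x, target in examples:
        prediction = weight * x
        error = prediction - target
        weight -= learning_rate * error * x   # nudge the weight, not the code

print(round(weight, 2))  # converges to 2.0 -- learned, never written down
```

The program text never mentions the number 2; the behavior emerges from data, which is the paradigm shift being described.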


By the way, human systems are still superior, let me tell you, because in human systems, new connections get created, not just new weights. So LLMs are by no means as good as human brains.


But do you think we'll get there in ten years, 20 years?


Evolving, reconfigurable computing? That's a tantalizing topic. But certainly what we will get to in the next few years is more and more scale, because we can build these networks out at a much larger scale. However, I have to say, in a nod again to the human and biological systems, we do it much more power-efficiently than any of these neural networks do. The neural networks are notoriously large and computationally expensive. Training takes a lot of power, and, of course, running them consumes a lot of power as well.


Well, first, let's talk about what do you mean by synthesis? And how does content synthesis make our lives better?


Yeah. Okay, so let me explain what I mean by content synthesis, first of all. Content generation is when you go to a large language model and you say, write me an essay on the mosquito problems in sub-Saharan Africa, and it'll create content for you. And to be clear, to the earlier point, it's not deterministically pulling up an article that it has in its database. Large language models don't have a quote-unquote database. It's actually creating the content along the way. That's content creation. The flip side of this, synthesis, is when you give a large language model a pre-written essay on the mosquito problem in sub-Saharan Africa, and then you can ask it questions: given this essay, what is the writer trying to say about the biggest challenge that exists? So you can ask questions in the other direction. Given content, it can extract meaning and information out of that content. That's what I mean by synthesis. Content generation and content synthesis are two sides of the same coin.
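The two directions can be seen as two prompt shapes. A minimal sketch: the helper names below are hypothetical, and the sketch only builds the prompts; it does not call any particular model API:

```python
# Content creation vs. content synthesis, as two prompt shapes.

def generation_prompt(topic: str) -> str:
    # Content creation: ask the model to produce new text from a request.
    return f"Write me an essay on {topic}."

def synthesis_prompt(document: str, question: str) -> str:
    # Content synthesis: give the model existing text and ask it to
    # extract meaning from that text.
    return f"Given this essay:\n{document}\n\nQuestion: {question}"

essay_request = generation_prompt("the mosquito problems in sub-Saharan Africa")
qa_request = synthesis_prompt(
    document="<a pre-written essay on the same topic goes here>",
    question="What is the writer trying to say about the biggest challenge?",
)
print(essay_request)
print(qa_request)
```

In the first shape, content flows out of the model; in the second, content flows into it and meaning flows out, the "other direction" described above.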


Yeah, it just made me think. Vasant Dhar is a professor at NYU, and we had him on the podcast, and he described an example that I suppose would be synthesis. He said, imagine it can watch a US Open tennis match for several hours, and then you can just ask it: give me a five-minute summary, or a two-minute summary, or a seven-minute summary, and it'll instantly give you that answer. And that's something he was just marveling at, that no such thing had ever existed before.


That's absolutely right. And that actually brings us to a point I want to be clear on: we're talking about large language models, which is mainly text, but content is not just language; it's images, video, and so on. In that example, it would be ingesting video to do the synthesis act that you described. And that leads me to another example, by the way, which might be closer to some of our listeners, in the context of sales. In my prior company, this was our vision: the act of selling in a complex selling environment is the act of having many conversations. Many sales reps from the selling company, many buying parties from the buying company, many emails exchanged back and forth, phone calls made, video calls made, proposals sent and received, documents exchanged, calendar events and meetings held, et cetera. All of these things are essentially a complex collection of content. And one of our ideas was: okay, given this unstructured, complex collection of content. Today, we have to rely on human judgment and the ability of the sales rep to be completely on top of everything that's happening to form an informed opinion about how well the deal is going, whether it has any deal risk, whether it's going to close on time, et cetera.


The idea was to feed all this information in and use that synthesis capability I described to essentially ask the machine the question: how is this deal going? Is it likely to close on time? And so on. So that's another example. And of course, closer to home, we can certainly talk about examples in the course and learning domain as well.


Well, since you mentioned it, one of the things that you've talked about before is this idea of marrying Gen AI with proprietary company data and where the possibilities go. At Udemy, we've got nearly 70 million learners on the platform, 70,000 instructors, hundreds of thousands of courses. So you're sitting on maybe the largest data set in the history of learning. So tell me, what does that mean? How will Gen AI impact the experience of learning for learners and instructors? Where do you see it?


I'm super excited about that. And it comes down to that one word we started with, which is synthesis. So you're absolutely right. We are sitting on this massive amount of content, and now we're going to turn it on its head: use that content to synthesize, to extract information from it. So let me explain a couple of different ways in which we're going to do it. One is, of course, the classic problem of finding the right course. We have 200,000 courses, and customers and users come to us and say, I'm looking for a course. I can do a search, but a search is an incomplete and one-shot way to find a particular course. You can say, I'm looking for advanced Python programming for machine learning. You can do that, and you'll find a bunch of courses on Udemy. But we can go one step beyond. We can ask: are you a beginner or an intermediate? Do you already know object-oriented programming or not? Do you already know another language, or is Python your first language? Do you know data science already, or are you learning data science?


And so we can refine this requirement deeply and dramatically, so that the learner is sent to the exact course that is going to be responsive to their needs. And in fact, not just a course; we're doing this to create a learning path. What we call our skills mapping capability is the ability for us to have this type of conversation with a learner, or with a learning professional in a company who's trying to compose a learning path, have them prescribe or describe their needs, and for us to take that and create the exact learning path. Now, let me tell you, this was not possible before the advent of Gen AI. This would have been a pipe dream. Suffice it to say that Gen AI technology is an absolute prerequisite for what we plan to do here.


Well, people have been trying that, by the way, for the 30 years that I've been following this space. People have tried skills-based initiatives and competency-based education and breaking things down into chunks that drive outcomes, and no one's ever cracked the code. So I think we're at a whole new forefront of revolution in the way we learn as well.


Absolutely. And talking of the way we learn, one other thing I'm super excited about that we're building here at Udemy is a learning assistant. What that means is: when you're in the middle of a course and you're listening to an instructor, you might have a question about a concept that was presented ten minutes ago, or even yesterday, or even a minute ago. Today, if you have a question about it, we give you the option to write a question, and the question goes to the instructor, and the instructor will come back and respond to you. Hopefully soon, but sometimes not so soon. Now, with our learning assistant, which is driven by the content of the instructor's course, we are able to answer the questions for the user in a way that is, in fact, contained within the instructor's domain, within what the instructor is teaching. So the learning assistant is basically telling the user the answer to the question right away. The user asks a question in a chat flow, for example, a clarifying question, an explanation of a concept, something that was difficult to understand, something that might have been forgotten even though it was well explained by the instructor.


But the learner just wants a very quick re-education or a quick piece of information to solidify their learning. With the learning assistant, we're going to be able to do that. And again, to your point earlier, Alan, the fact that we have this extremely rich course content and this tremendous quality of instructors who have built this terrific content means we're going to be able to bring that to our learners in a dramatically different way, while still honoring the integrity of the instructors' explanations and so on.


If you want to develop invested leaders who motivate, inspire, and engage distributed teams across your organization, visit business


Invested leadership.


So let me zoom out for a second and come at this a different way. A lot of people are fearful right now about AI: it's going to take my job. And everybody seems to have weighed in. The IMF, the OECD, the World Economic Forum, Accenture, Goldman Sachs, Morgan Stanley, every one of them. They all say somewhere between 20 and 50% of all jobs are going to be impacted by Gen AI. Even the pope weighed in on this topic last week, kind of famously. So I'd love for you to tell me: what would you say to people with that worry that Gen AI is going to take their job, or that things are going to be radically disrupted, so our listeners can do something about it today?


What would they do? Great question. Very important question. Let me do a little bit of a historical thing, and we'll come back to this particular era. If you think about it, when the Internet happened, the argument was: well, this is going to change brick and mortar forever. Remember the phrase brick and mortar? Businesses were going to completely change. Everything was going to go online. And, well, it did happen in a way, right? Amazon is where we order everything today, and our malls are much less frequented than they used to be. But somehow, even despite that massive technology change, and of course that's just one example, many, many other changes happened. On net, at least in the United States, we have record low unemployment, and the economy has a way of absorbing those losses. So technological change, when it leads to efficiencies and faster and easier ways of doing things, actually increases productivity. And when productivity increases, the invisible hand of the markets makes sure that people get transitioned to the new economy, the new normal, and people find different ways of working, different ways of finding employment. Now the question becomes: okay, is this going to have the same sort of arc that we had before, or is this going to be somehow different?


And to be honest, the question is very nuanced, and it's going to be hard for me to say categorically one thing or another. So I'd like to discuss it in terms of nuance rather than absolutes.




So, for example, for sure, this has a much broader impact. There are no two ways about it. This is not limited to folks in brick-and-mortar businesses. Gen AI, by its very nature, because it's about content, affects all information workers. And to be very clear, when we talk about content, we're not just talking about text. Content can come in many forms. Code is content. Software is content. Legal opinions and legal documents are content. Graphs and analyses of data are content. And of course, as we said earlier, image and video creation, et cetera, is also content. So it does affect a larger swath of employees and information workers, broadly speaking, in an information economy than previous changes did. So it would be absolutely correct to say a larger fraction of our workforce is going to be changed in some way. Now, in what way is the question for us to talk about. I think that, number one, every information worker owes it to themselves to make sure they understand the technology at some level, as well as understand the implications of that technology in their business. So if you are a salesperson, you should be learning about how you might use Gen AI more effectively for your sales calls, to make you more efficient, to make you more productive.


If you're a customer support person, you should be looking at how you might use Gen AI in your business to make your customer support responses better. By the way, software companies in all these domains are working like crazy to make sure their offerings are upgraded to take advantage of the technology. If you're a lawyer, you should be making sure you understand how this is used for drafting content or for analyzing the content of contracts that already exist in your world. If you're a data analyst, or someone who creates business analysis from data, you should be looking at how you can use it for your data analysis and for synthesizing insights from data, et cetera. For this, of course, self-servingly, I would say Udemy has content for just about every one of these areas. You should be able to find it, learn about how this will come into your domain, and skill yourselves in how to use the technologies: have a basic understanding of what a prompt is, what training is, what a context window is. How do you augment the information that a large language model has with information that it doesn't have?


And how do you make it reason and synthesize output, even from information that it wasn't trained on in the first place? These basic concepts should be understood by everyone, in my view. And then comes the specific application of those concepts, whether through software you have access to or particular efforts you make on your own to enhance your own expertise. Because there's one thing I can say will happen: somebody who doesn't use Gen AI needs to worry more about somebody who does use Gen AI in their job than about Gen AI itself taking away their job. Or, put more positively, they should view it as a tool that gives them dramatically better superpowers.
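The idea of giving a model information it wasn't trained on usually means placing retrieved text into the prompt's context window. A minimal sketch under stated assumptions: the documents are made-up examples, and plain word overlap stands in for retrieval, where real systems use embeddings and vector search:

```python
# Augmenting a model with private information: retrieve the most relevant
# snippet and put it into the prompt's context window so the model can
# reason over facts it has never seen.

documents = [
    "Our Q3 sales pipeline has 40 open deals and 12 flagged as at-risk.",
    "The support team resolved 85 percent of tickets within one day.",
    "Course completions rose 20 percent after we added learning paths.",
]

def tokens(text: str) -> set[str]:
    # Normalize: lowercase and strip basic punctuation before splitting.
    cleaned = text.lower().replace(",", " ").replace(".", " ").replace("?", " ")
    return set(cleaned.split())

def retrieve(question: str, docs: list[str]) -> str:
    # Pick the document sharing the most words with the question.
    q = tokens(question)
    return max(docs, key=lambda d: len(q & tokens(d)))

def build_prompt(question: str) -> str:
    context = retrieve(question, documents)
    # The retrieved snippet supplies facts outside the training data;
    # the model then synthesizes an answer inside its context window.
    return f"Context: {context}\nQuestion: {question}\nAnswer using only the context."

prompt = build_prompt("How many deals are at-risk in the sales pipeline?")
print(prompt)
```

The same pattern underlies the sales-deal and learning-assistant examples discussed earlier: proprietary content in, synthesized answers out.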


As you're describing all this, we talk a lot about how everybody has to be a continuous learner or a lifelong learner. But I'm thinking specifically about people like Bill Gates or Mark Zuckerberg, or Elon Musk, who taught himself rocket science. They're all self-taught in their disciplines. And I guess, as I think about this, everything you're saying means we all have to become kind of lifelong power learners, self-motivated and driven, because this revolution isn't going to go away and it's not going to slow down. It's going to just keep happening in waves, isn't it?


Absolutely. There's no question about it. You can think of that lifelong learning, in fact, with Gen AI itself as your tutor. You can ask questions to a capable Gen AI LLM about all kinds of topics and enrich your own learning that way. So that's an easy way to do it. There's no question that lifelong learning is a must. It's upon us. And the pace of this technology is dramatic. It's absolutely unbelievable how fast it's moving, how fast new things are coming out. We've reached a tipping point.


Yeah, well, I think there's another revolution afoot that's running alongside Gen AI and AI, and that is this idea of moving to skills-based organizations. And we're seeing examples. More and more states in the United States are removing the college degree as a requirement for a whole bunch of jobs, and a bunch of companies are doing the same. And what they're saying is that the college degree is a lot less relevant in today's world, because the degree depreciates fast. And whatever skills you have today are going to depreciate fast.




So continuous and lifelong learning is the only solution to rapid knowledge obsolescence. I'm wondering what you think about this movement toward skills-based organizations and skills-based hiring.


Great question. I have the privilege of talking to a number of our customers who are going through this right now. Imagine, if you will, you're the chief learning officer of a large multinational IT services company, and you have 10,000, 20,000, 100,000 employees who have all been skilled over the last 10 to 15 years in all the technologies for the web and databases and front ends and back ends and data and so on. All that is great. Along comes Gen AI. And all of a sudden your customers are asking you: as every company creates a Gen AI strategy, they're going to their IT consultants and saying, what can you do for me? And of course, the IT consultants can't hire 100,000 people with Gen AI capabilities, because those people don't exist. So what are they doing? They're reskilling their existing workforces in this new technology at a rapid, rapid pace. And if I might say so, Udemy has the content to satisfy this wide range of needs in a single place like never before.


Yeah, that's powerful. Well, on that note about skills-based organizations, you actually told me a story about the whole nation of Ireland. What was that about?


Yes, I had the opportunity to visit our Dublin office last week and spoke to some representatives of the government, and I was amazed to hear that the country has a skills strategy as a whole. Why? Simply put, a significant part of their population works for IT companies, and a significant number of entities from the US and internationally have created hundreds of thousands of jobs there. And all of those jobs have been created in an environment where these high-tech companies need skills from their workforce. The country recognizes that reskilling their existing population is an essential component. Of course, they have other components, like open immigration policies to bring in knowledge workers, but there is a recognition that skills are a fundamental driver of the economy. And it was just phenomenal to hear that coming straight from some of those representatives. Yeah.


So it's amazing. I mean, if I summarize: the rate of change is going to continue to accelerate, and Gen AI is a good example. And the evolution to skills-based organizations will change the way jobs are organized and formulated and hired for and fired for and so on. So if we are all essentially walking bags of skills, we're going to have to think a lot about making learning a habit and a critical part of our lives.


No question about it. The degree that everybody, or most folks, have is a label for a bag of skills they accumulated at a particular point in time, and hopefully also a label for the ability and interest in acquiring new skills over time. So I think that ability and willingness to learn, to adapt, and to grow your skill set through ongoing learning is absolutely critical for every information worker. There's no doubt about it.


As we wrap up, Prasad, we have a question that we ask all of our guests, and that is, what are you curious about and learning?


Right now, I am madly reading every announcement about Gen AI and LLMs on my social media feeds at all times. So, back to the original question you asked me: I'm deeply interested in how these things work, the internals of them, as well as how they might be used in different ways in different scenarios. The pace of evolution of this technology is absolutely fascinating, and I'm spending a large amount of my time learning about it.


Well, I can't wait to hear what comes from you next. Prasad, thank you so much for joining us on the podcast.


It's a delight to be here. Thank you very much for having me, Alan. Thanks again.


Thanks to Prasad Raje for joining us today on the podcast. Follow Leading Up, a podcast from Udemy Business, wherever you find your podcasts. We'll be back next Wednesday with another episode to help you level up your leadership skills. Follow the show so you never miss a new episode, and if you like the show, leave a rating or a review. We love the feedback, and it really helps us find new listeners. To learn more about Leading Up or how Udemy can help you develop leaders at scale and move business forward, visit business The Leading Up podcast is produced in partnership with PodPeople. Our original theme is by Soundboard.