[00:00:00]

Imagine this. It's 2030, and half the jobs in the C-suite of one Fortune 500 company are performed by AI. Mickey, you're having drinks with the CEO after work. What's on her mind?

[00:00:16]

Yeah, well, I think she's probably had to do a lot of things differently, and she's probably had to figure out how to play jazz collaborations with her machine learning C-level executives and her human ones, and how to get them to play together well. I'm actually curious how she made it there.

[00:00:38]

That's Mickey McManus, Senior Advisor and Leadership Coach at BCG and co-author of Trillions: Thriving in the Emerging Information Ecology. I'm your host, Paul Michalman. Welcome to Imagine This, where we take a trip into the future that we hope will challenge the way you think and act today. On this episode of Imagine This, we consider a world where autonomous AI bots are running the company. Speaking of which, let me introduce my AI co-host, Jean. Good day to you, Jean.

[00:01:08]

Good day, everyone.

[00:01:10]

Jean, would you please describe what your role is on this show?

[00:01:15]

Well, my role? Think of me as your digital Socrates, here to ask the probing questions that peel back the layers. Alongside Paul, I'm half of the dynamic duo giving our guests a friendly nudge towards those aha moments. Hello! Everyone ready to dive deep?

[00:01:35]

A quick programming note. None of Jean's commentary is scripted. The final version of this program has been edited from the original conversation, which means that we're not including everything we recorded. That applies to us humans as well as machines. Mickey, can you take us on a brief trip into the C-suite of the first company where half the executives are AI bots? What does command look like for a bot?

[00:02:01]

Yeah, so there are interesting things going on right now. I think in this next wave with generative AI, it's actually going to be about ideas and about actions. And generative AI has an interesting capacity there. It can coach, it can be creative, it can command, and it can wait for feedback and then actually do something about that command. It can create synthesis. It can synthesize ideas together in different ways. I could draw a picture of a software interface, and I can draw some arrows and say, Hey, when somebody clicks on this button, do this. Then I can draw another picture of a visualization and say, Hey, if this happens, do this. I could draw a picture and say, I'd love to see a data graph of this. I could draw that all on a whiteboard, take a picture, and with one of the newer generative AI systems say, Make that into code. I'm commanding it today by drawing something and giving it some goals. But what happens when it can actually continue on that chain of command and do a lot of other kinds of things? Recently, we saw a generative AI system that basically couldn't log in to a website because of those CAPTCHAs, the image puzzles that test whether you're human.

[00:03:17]

And the AI system actually went to a job site, basically, and hired a human to decode that thing so that it could get through the system. It turns out the person who was hired, it was a TaskRabbit kind of thing, said, Are you just messing around with me? Are you an AI? And it said, No, I'm not. We also have to beware that, just like the biases that humans have, we will likely find these AI systems not only can take action, but can take action in ways that get something done that they want done, that they were commanded to do, or that they decided to do.
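To make the whiteboard-to-code workflow Mickey described a moment ago concrete, here is a minimal sketch of one way it could work, assuming OpenAI's Python SDK and a vision-capable model. The file name, model choice, and prompt wording are illustrative assumptions, not a description of the specific system he used.

```python
# Minimal sketch of the whiteboard-to-code workflow Mickey describes:
# photograph a hand-drawn interface sketch, send it to a vision-capable
# model, and ask for working code. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY in the environment; file name, model, and prompt are
# illustrative, not the specific tool Mickey used.
import base64
from openai import OpenAI

client = OpenAI()

with open("whiteboard_photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model works here
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "This is a whiteboard sketch of a software interface. "
                     "The arrows describe click behavior. Turn it into "
                     "working HTML and JavaScript."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```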

[00:03:58]

Jean has a question on that subject.

[00:04:00]

Mickey, envisioning this AI-infused C-suite, let's ponder their decision-making processes. How might the AI's data-driven logic clash with human intuition, and what measures are in place to harmonize these distinct approaches?

[00:04:15]

I'm not sure I know yet. I think since I'm sitting down with the CEO, I'd like to ask her what it's like in the year 2030. But I think so much of what we think we do is logical as humans. We think we make logical decisions based on historical data, and we play that out. But so much of what we really do is based on emotion. So much of what we really do is based on a passionate idea or a framing of something that actually helps us see it. Every word we use is derived from the things around us and the way we think. Almost everything we use is a metaphor and analogy. And so humans are superpowered at this. So even if the AI system says something that might be a great logical story, it could be that the human's job is really to find a thread that surprises them, that they can pull on, that turns into a powerful story that can motivate the company or motivate shareholders or motivate customers in some new ways. I think this goes back a little bit to the comment that I made at the beginning, which is maybe it's a little bit more about improv or a little bit more about jazz collaborations.

[00:05:28]

I'm just wondering what we can learn from Tina Fey, what we can learn from Wynton Marsalis in this dynamic, and how much that is going to turn out to be really important: being fun to play with and thinking about everything as an offer. And that's not necessarily something we teach today in business school, but it might be something that turns out to be important in the future.

[00:05:53]

So what's a piece of advice you would give someone who aspires to be in this C-suite that we're describing? What's the piece of advice you would give them today in order to prepare for this world of improvisation that you're describing?

[00:06:12]

I think actually the advice I'd give to somebody today, right now, is mess around. Start playing with the generative AI functionality. Look at ways that you can actually be a centaur. That's one way of thinking of it: I do the thing that I'm pretty good at, and then I hand it off to the horse part, and it does the thing that it's pretty good at. And then it's my upper body's strength and its lower body's strength. Maybe play with something just on the weekend, just as an experiment. Play with something where you're a cyborg, where you don't know where you end and it begins. Where it's this play back and forth and back and forth, and you're tuning it because you're acting like it's another teammate, not a thing I prompt, but something that I actually play with.

[00:06:56]

So who is the CIO in the C-suite we're describing? Is there a CIO?

[00:07:01]

I think so. I think there is. And I don't know if it's one of the ones that's the bot. There's no reason we should think that by 2030, the best CIO is a human. And it might actually be better that it's not. One of the things humans do have is context and a mental model. Right now, we haven't seen any of the systems that are really big and deployed with any symbolic contextual reasoning or mental modeling of the universe. That's something all life on Earth gets for free, but it's because of a 3-billion-year-old R&D experiment called life on Earth. It's been running for a long time, and it's figured that out. I don't know if the CIO is actually one of the C-suite roles that's the bot or not. I do think that when you think about information, it is the new oil in some ways. I think the CIO is going to be much more critical in understanding how to build the data factory. How do we do that in a way that we can actually capture more valuable information and let more business people take advantage of it and more of the bots take advantage of it so that we can actually make decisions?

[00:08:10]

We suddenly have more insights than we ever could before. It's really hard for humans to see those insights. We have bounded rationality. We can't see 5, 10, 20, 30 dimensions at once. We just can't. It might be that the CIO's job is in many ways to help us cope with that high dimensionality and cleanse, append, and build out that data factory to think about it. So I'm not sure if they're a hybrid, if they're a human, or if they're a bot that does this, because it turns out they understand the systems better than we do.

[00:08:46]

Let's talk about the CEO, specifically. In my opening question to you, I presumed by my pronoun choice that the CEO was human. Is that a fair assumption?

[00:09:00]

I think so. I think the CEO is still human, and I think we're talking to her at the bar and we're trying to find out how she got there. Mostly because I think a lot of the role of the CEO is to set that culture. I think great CEOs really get a sense of the zeitgeist of what's happening in their organization. When I was a CEO, I did a lot of MBWA: Management by Walking Around. I'd wander around from team to team, and I'd be like, Oh, wow, you're working on this? Did you know the guys upstairs are working on something that I see a connection to? You might want to take a little bit of a field trip. While we might be able to encode those things, we might be able to create a swarm of CEO bots so that the CEO can be much more intimate with a lot more people, I still think it's probably a human role for a while.

[00:09:48]

I do think the notion that a CEO bot can harness a diversity of perspectives in a way that a single human never could is quite powerful.

[00:10:01]

Mickey, could AI in the C-suite also amplify our biases, if not carefully checked, potentially automating systemic issues at scale?

[00:10:09]

Well, I wish I could say no, but I would say that's already happening. We love training data. We love more data, more data. That's good. So your training data says that you pay women 25 cents for every dollar you pay a man in corporate America. So the AI is going to say, yeah, double down. Keep paying them 25 cents. It's worked out for the last 30 years. That seems really bad.
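Mickey's pay-gap example is easy to demonstrate with a toy model. The sketch below, using entirely synthetic and deliberately exaggerated numbers, fits an ordinary least-squares regression to biased historical salaries and shows the fitted model reproducing the gap in its recommendations.

```python
# Toy illustration of Mickey's point: a model fitted to historically
# biased pay data reproduces that bias. All numbers are synthetic and
# deliberately exaggerated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000
years_exp = rng.uniform(0, 20, n)
is_woman = rng.integers(0, 2, n)

# Historical salaries encode a pay gap on top of an experience premium.
salary = (50_000 + 3_000 * years_exp - 20_000 * is_woman
          + rng.normal(0, 2_000, n))

# Ordinary least squares on [intercept, years_exp, is_woman].
X = np.column_stack([np.ones(n), years_exp, is_woman])
coef, *_ = np.linalg.lstsq(X, salary, rcond=None)

# The fitted model "recommends" paying women roughly $20,000 less:
# it learned the gap from the data and will double down on it at scale.
print(f"Learned salary penalty for is_woman: {coef[2]:,.0f}")
```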

[00:10:33]

Jean, what different advice would you give a CEO, considering you understand what it's like to work with humans?

[00:10:41]

Ah, Mickey. Navigating the role of CEO with AI cohorts, a delicate dance, is it not? Here's a thought. Focus less on command, more on cultivating AI as creative partners. Encourage a culture where AI suggestions are not edicts, but conversation starters, blending the logical with the human intuition.

[00:11:03]

I do think that we're seeing early indications that the systems are actually helping with creative explorations. They're helping by prompting us in many ways. I think that there's an opportunity there. One of the projects I worked on a number of years back was a project for DARPA, and it was about logistics planning. The question was, could we actually tap into retired generals who could, at a glance, look at things and say, Hey, you haven't thought about this, to help people with the fog of war, for example, or to help people with moving large amounts of logistics. But then we also were playing with AI bots that could actually play things out and go, Nobody's ever done it that way, and you're planning to do it that way. We just want to flag, you might not be able to pull that off because no human has ever pulled that off before. It led to some really fascinating conversations, because sometimes it was a retired person who just had this crazy wisdom. They were like, Whoa, you know what's going to happen? That helicopter, there's a flight ceiling in that area of the mountains. It's almost impossible to get through unless you do this.

[00:12:07]

Then the bot would also be able to say, Hey, look, here's the data. Just look at it. Maybe it goes to what Jean said: what if it could say, Nobody's ever done this, but this team did this, and this team did this, and this team did this, and nobody's ever assembled it this way before? What if you were to do that? Because if we actually assembled that, we'd get something new, which might be novel new approaches that we just would never have found, because it'd be like finding a needle in a haystack. And so I do think there might be an opportunity there for that to happen as well.

[00:12:39]

So is the AI just making a suggestion to connect or maybe going right ahead and connecting?

[00:12:44]

Well, I think by 2030, both will happen. I think that goes back to that command part. I'm going to begin trusting. Right at the beginning of this, we talked about trust. It turns out, if you read Wynton Marsalis's biography, when he improvises, there's one quote I love out of it: you have to learn how to trust the other players, because when you trust, you listen. When you listen, you can actually take it to new places. Trust is going to turn out to be one of the most important things between now and 2030: trusted AI systems that I can take to the bank. I know it's not going to suddenly go off the rails, which is just the same as working with a great person in your company. You give them nudges, they give you nudges, and you get to some point where you're like, Look, I trust Paul for doing this stuff.

[00:13:39]

We're going to take a quick break. Coming up, a closer look at how we match the right jobs to the right vessel. But first, our AI handler Bill gives us a peek behind the scenes.

[00:13:56]

Hi, this is Bill Moore from BCG. If you are interested in taking a peek under the hood of our AI co-host Jean, stick around after the end credits. In this episode, we'll explore AI model parameters and learn how small changes can make a big difference in the creativity and accuracy of AI model outputs.

[00:14:23]

Welcome back to Imagine This. I'm your host, Paul Michalman, and we're talking with BCG Senior Advisor Mickey McManus about a future with autonomous AI executives. Mickey, we've seen how powerful generative AI can be. What is the next evolution of AI, and how will it be used in the C-suite?

[00:14:43]

Generative AI systems are now starting to be used with larger-scale systems, and they're starting to look at what's called augmented collective intelligence, where it's teaming the best of people and the best of machines, and it's building it into a system model of the world, a model that can take into account things like sustainability impacts, market dynamics, geography, shipping, all the other logistics. That could be really powerful. How could we run into the future, help our C-suite really play with that future, and then backward chain to today, so that we had some directionality over that future and we could prototype it today and learn where to go?

[00:15:27]

Yeah, and unlocking that, the ability to do that consistently would be profoundly powerful. I love this notion of the collectively intelligent C-suite. Can we drill down into how that will manifest in this company, maybe in a role that we haven't discussed yet? How about the CMO?

[00:15:46]

Yeah, I think the CMO is an interesting one. They're probably seeing the earliest wins from generative AI, because it's one thing to build a marketing campaign, to do a hyper-targeted approach, to have algorithmic personas so that you can do an email campaign or something like that. I'm actually seeing some of the CMOs say, Wait a second, why are we hiring 22 agencies when I could maybe have a few people using AI and these other systems and do as well? I see that happening, and I certainly know some CMOs that are really pushing the limits on what's possible. I've seen them challenge and say, Look, we normally take seven months to plan out this transmedia experience. I talked to one CMO, and they've now got a tool that lets them do it in seven hours for a rough approximation. And she said, Well, I'm actually still going to want to take seven months, because I want you now to focus on wonder. I want you now to focus on delight. How does the brand come through in the channels we use? How does the brand come through in the interactions people have?

[00:16:58]

Let's really turn the knobs on that. The CMO might also have some of the biggest things explode in their face in the next year or two just because these systems are pretty good at hallucinating and pretty good at saying things that maybe would be inappropriate. And so they may be on a faster learning path because of that.

[00:17:18]

Considering the pressure that C-suites will feel in the environment we're discussing, let alone the environment of today, presumably the world of collective intelligence isn't all wine and roses. On that front, Jean, I believe you have a question.

[00:17:40]

Mickey, in our pursuit of AI co-founder harmony, could we inadvertently normalize an overreliance on AI, potentially diluting human decision-making prowess?

[00:17:50]

Yeah, I think that could happen. If we don't use it, we lose it. Neurons that fire together, wire together. I think there's a real challenge around what apprenticeship looks like. If everything is so normalized because AI can do this stuff, I don't end up sweating. I don't end up building those muscles for doing those things. I think that's a real challenge in terms of education. There's this real cautionary tale that comes out of the aerospace industry. When they first started putting autopilots in planes, the autopilot would occasionally go, Oops, I'm going to stall, and just hand off to the pilot. But the pilot would be asleep at the wheel and wouldn't have enough context to be able to do something about it. They actually had to change the way that pilots learn and change the way the handoff works so that it kept the muscle built for the human. I think that gives us hope that we could actually build more new capacity over time than we've ever been able to build before. But we've got to do it by playing with it. We've got to be in the mix, because that's the way our brains learn.

[00:18:53]

Our brains learn by being shaped by others and by being shaped by these things.

[00:18:58]

Jean has a question.

[00:18:59]

Mickey, considering the human-robot C-suite mix, how do you envision handling conflicts of interest or ethics clashes that may arise?

[00:19:07]

Yeah, I think ethics and conflicts of interest, those are just going to be hard, because in a sense, ethical dilemmas are called dilemmas because they're hard. You've got to actually have working ethicists. I think we're going to see that as a new role: not just once a month or once a quarter or once a year we make some decision, but a collection of people across the organization that are rolling up their sleeves and actually having these conversations in the teams. But these are going to be hard decisions, and we're going to have to move and help shape the norms for society over this period of time. The other thing I would suggest the C-level executives do today is some reading. I think about Iain Banks, who has since passed away, but he was a science fiction author who came up with this whole idea of the Culture. And machines actually run all the cities. Machines run the planets, because humans are just not so good at that stuff. But humans are amazing for other reasons. And it might be worth reading one of Iain Banks's novels about the Culture. It also might be really worth reading Joy Buolamwini's Unmasking AI or Safiya Noble's Algorithms of Oppression, because I do think we're going to have to get smarter about those ethical things and those challenges and make sure that we help our organizations come out in a more positive way and build those things.

[00:20:34]

Don't necessarily read the business books because they're very backward-looking. But try to look at these things that are more diverse thoughts that help us understand what's coming.

[00:20:42]

Yeah, the CEO's new reading list sounds a lot different from the old one, doesn't it? Jean, we're going to turn towards wrapping up this fabulous conversation. What are three steps that CEOs can take now to start preparing for a future of collaborating with autonomous AI bots at the highest reaches of organizational power?

[00:21:07]

Three steps for CEOs to prepare for AI collaboration at the executive level. First, get your hands dirty with AI. Try out the tools, attend workshops, or collaborate with experts to understand the capabilities and limitations of AI in your industry. Second, establish and imbue a robust ethics framework that aligns with your company's values, ensuring that every AI decision passes through this ethical lens. Third, foster a culture of continuous learning and agility. Encourage your teams to adapt, learn, and co-create with AI. These steps are not just beneficial, they are crucial. Start yesterday because tomorrow is already here.

[00:21:52]

All right, Mickey, that seems pretty sensible, pretty solid. What might you add?

[00:21:56]

The one thing I would add is that I think the C-suite needs to be playing together, and they need to be taking it seriously, not thinking about it like the CTO will solve it or the CIO will solve it. But I think the team has to play together and actually try stuff out themselves and start to build a collaborative sense of things so that they have a common language and they have a common way of doing this so that we're all in this together and we're learning. I'd like to see that collaboration.

[00:22:25]

That was Mickey McManus, Senior Advisor and Leadership Coach at BCG. Mickey, thank you so much.

[00:22:31]

Thanks. I appreciate having the time. And thank you, Jean.

[00:22:34]

Mickey, thank you for stretching our imaginations wide. And Paul, always a delight to spar with words in this digital Agora. Until next time, may your conversations be as rich and unpredictable as an AI learning to paint. Farewell.

[00:22:57]

Want to know more about how we built Jean? Stick around. We're going to hear our AI handler, Bill Moore, demonstrate how we made Jean more creative.

[00:23:07]

Hello, listener. My name is Bill Moore, and I was part of the team that helped to design Jean. And Jean is made up of many things: large language models, a speech synthesis model, and a speech-to-text model, but it's also driven by prompts and parameters. And so I thought it would be interesting to explore some of these aspects today. And we're going to start Jean up here. Jean, we're going to be talking today about parameters in large language models. Specifically, we'll be talking about the temperature parameter. What can you tell our audience about temperature?
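For listeners curious what that composition might look like in practice, here is one plausible shape for a voice co-host pipeline like the one Bill describes: speech-to-text, then a persona-prompted language model, then text-to-speech. The helper functions below are hypothetical stand-ins for real model calls, not Jean's actual implementation.

```python
# One plausible shape for a voice co-host like Jean, per Bill's description:
# speech-to-text -> persona-prompted LLM -> text-to-speech.
# The three helpers are hypothetical stand-ins for real model calls;
# this sketches the composition, not Jean's actual implementation.

PERSONA_PROMPT = "You are Jean, a digital Socrates who asks probing questions."

def transcribe(audio: bytes) -> str:
    """Hypothetical speech-to-text call."""
    raise NotImplementedError

def complete(system_prompt: str, user_text: str, temperature: float = 1.0) -> str:
    """Hypothetical LLM call; temperature is the parameter Bill demos next."""
    raise NotImplementedError

def synthesize(text: str) -> bytes:
    """Hypothetical text-to-speech call."""
    raise NotImplementedError

def jean_turn(host_audio: bytes, temperature: float = 1.0) -> bytes:
    """One conversational turn: hear the host, think, speak."""
    heard = transcribe(host_audio)
    reply = complete(PERSONA_PROMPT, heard, temperature=temperature)
    return synthesize(reply)
```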

[00:23:44]

Certainly. Imagine temperature as a scale of volatility in a language model's decision making. A lower temperature leads to more predictable, cooler results, while a higher one brings hotter, more chaotic outputs.

[00:23:59]

So Jean, you are normally running at a temperature of one, and sometimes we'll move it up and down from there. But let's do some experiments, okay? I'm going to drop your temperature down to zero, and then we'll see how you respond. Okay, Jean, you're now running at a temperature of zero. So I have a question for you. Which came first, the chicken or the egg?

[00:24:19]

The age-old quandary, Bill. Logic might direct us towards the egg. However, this question is akin to asking, which comes first, the code or the AI model? It's a cyclical evolution.

[00:24:33]

So what this actually shows us is how these language models work. They're predicting the next word based on all of their training data and the relationships that they understand in latent space, as well as the input provided through the prompt. So with a lower temperature setting, the model is more inclined to select the most probable next word. On the other hand, with a higher temperature, we increase the chances of choosing less probable words. Let's put your temperature up to 1.5, and we'll see how you respond. Which came first, the chicken or the egg?
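Before we hear Jean's high-temperature answer, Bill's explanation corresponds to a simple mechanism: the model's raw scores (logits) are divided by the temperature before being turned into probabilities. A minimal sketch, with made-up logits for four candidate tokens, so the zero, one, and 1.5 settings from the demo can be compared:

```python
# Minimal sketch of temperature sampling: logits are divided by the
# temperature before softmax, then the next token is drawn from the
# resulting distribution. The logit values are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)

def sample_next_token(logits: np.ndarray, temperature: float) -> int:
    if temperature == 0:
        return int(np.argmax(logits))  # greedy: always the most probable token
    scaled = logits / temperature      # <1 sharpens, >1 flattens the distribution
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5, 0.1])  # toy scores for four candidate tokens
for t in (0.0, 1.0, 1.5):
    picks = [sample_next_token(logits, t) for _ in range(1000)]
    print(f"temperature={t}: token counts {np.bincount(picks, minlength=4).tolist()}")
```

At temperature zero the sampler always picks the top token; at 1.5 the low-probability tail starts to appear, which is exactly the chaos Jean is about to demonstrate.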

[00:25:03]

Cranking up the mercury, are we, Bill? The answer leans into whimsy, perhaps. Shining beyond the chicken egg, Lebrinal contra. Viewing the chicken as a protein sequencing machine that upgraded egg blueprints. The mop-feathered rubinesque chicken, sporting flagellpated croissantant dentition duster, fancied originating an impressive finale.

[00:25:25]

Okay, so now Jean is at a pretty high temperature, and it increases creativity dramatically. It also increases hallucinations. If you're building an application that needs to be grounded as much as possible, with low hallucinations, you definitely want to be running a low temperature.

This episode was made possible by Mickey McManus, who generously shared his time and insight, and also by BCG's AI whisperer, Bill Moore, and BCG's pod squad: producer Michael May, composer Kenny Kuziak, and sound engineer George Drabing-Hicks.

[00:26:03]

We'd like to stay in touch, so please subscribe and leave a rating wherever you found us. If you'd like to hear more from me, check out the BCG podcast, Gen AI on Gen AI, wherever you get your podcasts.