[00:00:00]

TV can come alongside and try to access some of the other channels that may be performing well or not, but also just have a more educated view of every given client's marketing mix and performance, to know where TV should fit into that, where it shouldn't, and what other channels are at play here, so that we have a more holistic view of their marketing mix and the associated performance from our standpoint. That helps us all be better marketers at the end of the day, which has been really enlightening for us, and I think it helps us show up smarter and have more holistic conversations with our clients, which ultimately helps us all do a better job marketing and growing businesses.

[00:00:36]

Hello and welcome to The Marketing Architects, a research-first podcast dedicated to answering your toughest marketing questions. I'm Elena Hingel. I run the marketing team here at Marketing Architects, and I'm joined by my co-hosts, Rob DeMars, our Chief Product Architect, and Angela Voss, our CEO.

[00:00:55]

Howdy.

[00:00:56]

Hello. And we're joined by two of our coworkers at Marketing Architects: Dan Cleveland, VP of Strategy, and Jordan Rossler, Director of Analytics.

[00:01:05]

Great to be here. Thanks.

[00:01:07]

This is a smart bunch. I was telling Jordan the other day, we have a second Jordan now at Marketing Architects, but he is the OG Jordan. He is the original genius.

[00:01:17]

Thank you, Dan and Jordan, for joining us. We're back with our thoughts on some recent marketing news, always trying to root our opinions in data, research, and what drives business results. Today we invited both Dan and Jordan on the podcast because we're tackling a big topic: media mix modeling, or MMM. MMM is back in style in 2024, and today we'll talk about the history of MMM, why it's popular once again, some of its weaknesses, and discuss whether your brand should invest in a media mix model. But before we jump in, I have a quick article to set up our conversation. This one is by John McDermott from AdExchanger, and it's titled "Why Facebook, Google, And Amazon Are Embracing Media Mix Modeling." According to McDermott, these investments are unusual because MMM estimates the impact of various marketing channels, using statistical models to analyze sales and media spend. So when these platforms use a media mix model, they're essentially giving credit to their competitors for driving sales. MMM is typically done by brands themselves, but recent privacy restrictions, like the App Tracking Transparency, or ATT, framework, made measurement so difficult for the big three that they'd rather lose some of their credit than not demonstrate any attribution at all.

[00:02:29]

Meta led the way in this area, releasing Robyn, an open-source tool that doesn't rely on cookies, pixel data, or any form of personally identifiable information, three years ago. And just one year ago, Amazon launched an automated MMM application to make it easier to export Amazon Ads data for MMM analysis, and Google told AdExchanger they are also looking into building an MMM tool. So this news is not only interesting because the big three are giving credit to other channels, it's also unique to see them using a measurement technique that has been around for decades. Agencies first started using statistical analysis in the 1960s, but MMM fell out of style with the rise of digital advertising, because brands could target and measure their campaigns with laser-like precision. With privacy concerns on the rise, though, MMM is enjoying a revival, and large advertising platforms want a say in how this analysis is conducted. And while MMMs have been around for decades, they benefit today from the extensive data collected by large platforms: first-party retail data from Amazon, Meta's data on billions of users, and Google's data from YouTube, Search, Chrome, and Maps. These new MMMs are attractive for smaller brands who might not have had the money or resources to build one from scratch.

[00:03:43]

Sounds great, right? But the fundamental issue is whether these results are trustworthy. Brands have constantly complained about the big three grading their own homework, and while MMM is supposed to be more objective, the same concerns remain. McDermott concludes the article with this line: "If you let someone grade their own homework, don't be surprised when they give themselves an A." So that's the most recent news in MMM and why it's going to come back as a marketing buzzword in 2024. We have our own experience with MMM, since we offer holistic marketing measurement solutions for our clients as part of their TV advertising campaigns. And that is why we invited Dan and Jordan, two leaders at MA who work on our measurement teams, on the show today. So, Dan, Jordan, thank you again for joining us. Jordan, I wanted to start with you. Let's begin by defining our key term for this episode. What even is MMM?

[00:04:31]

It's a good question. Ultimately, media mix modeling, or marketing mix modeling, is a statistical analysis that allows marketers to at least attempt to compare all of their marketing channels' performance on a level playing field. You draw the spend data for each individual channel into a model, along with any relevant macro or economic factors that would help explain ebbs and flows in revenue or orders for a business. Then there's some fancy regression-based modeling that goes on, and it ultimately spits out what portion of the business's sales, leads, orders, whatever that KPI is, should be attributed to each individual channel. Like I say, it's attractive because it's all getting done at once by the same model, rather than relying on the individual attribution reporting from each marketing channel.
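If it helps to see that idea concretely, here's a minimal sketch of the regression at the heart of an MMM. Everything here is invented for illustration, the channel names, spend ranges, and "true" coefficients alike, and real models layer on transformations like adstock and saturation curves:

```python
# A minimal, illustrative sketch of the regression idea behind MMM —
# not any vendor's implementation. All data below is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
weeks = 104  # two years of weekly history

# Weekly spend per channel plus one macro factor (a seasonality index).
X = np.column_stack([
    rng.uniform(10_000, 50_000, weeks),                   # tv_spend
    rng.uniform(5_000, 20_000, weeks),                    # paid_search_spend
    rng.uniform(2_000, 10_000, weeks),                    # social_spend
    1 + 0.2 * np.sin(np.arange(weeks) * 2 * np.pi / 52),  # seasonality
])
# A hypothetical revenue series for the model to explain.
y = 50_000 + X @ np.array([1.5, 3.0, 2.0, 40_000]) + rng.normal(0, 10_000, weeks)

model = LinearRegression().fit(X, y)  # intercept = baseline, non-marketing sales
contributions = model.coef_ * X.mean(axis=0)  # avg weekly revenue per input

for name, c in zip(["tv", "paid_search", "social", "seasonality"], contributions):
    print(f"{name}: ~${c:,.0f} of average weekly revenue attributed")
```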

[00:05:21]

Thanks, Jordan. So Dan, you have some extensive marketing measurement experience. How have you seen MMM evolve over time? And what do you think its significance is today for marketers?

[00:05:32]

What's not changed is the need to know what's working. As Jordan was saying, and I would say from your article, what is actually brave and daring about these tools coming to market is that the people building them are essentially recognizing that their marketers, their customers, can't rely exclusively on last click anymore. And that's the biggest change I see in the mindsets of our clients: they realize that they have to understand these dynamic influences and not just rely on something straightforward and simple like last click. So the thing that has really changed here over the last number of years is the speed and cost it takes to get these things done. These tools give marketers and companies a chance to run these analyses in ways that were never thought of before, literally hundreds of times over. The reduction in cost and effort makes this available to almost any brand that has the wherewithal internally to do it. And that's a really big win: all brands being able to understand what's happening, getting away from just last click, and understanding that holistic effect.

[00:06:39]

The other thing that's happened over, I'd say, even the last decade or so, that has really evolved, is the amount of in-house data science that companies have internally, and the number of college programs turning out graduates with this advanced statistical knowledge. And they're not just thinking of it from an academic or science standpoint. They're actually bringing it into strategy and marketing, and from our perspective, marketing effectiveness very specifically. That's given companies a lot more room to flex into these spaces and to move away from being strictly ops-centric toward being more strategy- and analytics-centric. So this opportunity to delve into MMM has become a real eye-opener for many brands and for the clients that we're working with. They see this as a very intriguing area to start understanding and to compare against all the other models that we work with.

[00:07:34]

So right away, MMM sounds like something we'd like, because we love bringing science to marketing, and we do not like last-click attribution models, I'm guessing. But Jordan, why did we at Marketing Architects, a TV agency, decide to provide MMM for our own clients?

[00:07:50]

Honestly, it started ultimately as an area of learning and exploration for us, like Dan was saying, because it was more accessible than ever before. We had questions and things we wanted to learn from it, but we were also getting more and more questions from our clients. And so we started to peel back the layers of the onion on several different ways to attack it. Ultimately, what we found was that after enough massaging of the models, figuring out what works, what doesn't, what some of those best practices are, and what some of the pitfalls are, you could get to a pretty good place: an accurate representation, or at least a seemingly reasonable representation, of performance for a business that would accent the other models that we and our clients were using. And it helped us have a seat at the table and a voice in the conversation as our clients explore MMM as well, so we can help educate them along the way. Since we had done our own education of ourselves, we could help them avoid some of those pitfalls too.

[00:08:44]

On one of our most recent episodes, we had Colin Fleming on, who is the executive vice president of global marketing at Salesforce. And I think most would agree last-click attribution is definitely on the downswing, right? It's been exposed that this is not a true representation of accurate performance. One of the things he said was that it leads marketers to market to the attribution model, and I just found that interesting. We all love data, right? We want to have that answer. We want to go into the C-suite with confidence on what's performing. But we have definitely found that there's no silver bullet in what we do. So when you think about how we look at attribution and having multiple models, how does MMM compare to some of the other attribution models that we use here at MA?

[00:09:33]

I think there's a few different things to talk about here. Number one, just from a pure logistical standpoint, the data inputs are very different, and the outputs are different as well, because we're really looking at the entire business. And so you need the spend data from each individual channel to ingest into the model, which sometimes, as a TV agency, is hard for us to get our hands on. But if you can get that, and then you get the output that has a performance number for every single channel, you can start to take a step back and look at what is working and what's not relative to each other. You can have a more holistic conversation about not only how TV can come alongside and try to access some of the other channels that may be performing well or not, but also just have a more educated view of every given client's marketing mix and performance, to know where TV should fit into that, where it shouldn't, and what other channels are at play here. That gives us a more holistic view of their marketing mix and the associated performance from our standpoint, which helps us all be better marketers at the end of the day. It's been really enlightening for us, and I think it helps us show up smarter and have more holistic conversations with our clients, which ultimately helps us all do a better job marketing and growing businesses.

[00:10:46]

Yeah.

[00:10:46]

And Ange, to your question, one of the pieces it addresses is that long-standing challenge brands have when they look at just single-channel measurement: you always have that concern, am I double counting? When I put the pieces together, is it really bigger than the whole that I actually know? So it's one of the few ways to start getting a kind of cross-check or an audit of all of these other reporting systems coming in. So in a way, it's a little bit like a judge or an overseer of some of these other approaches that we like to bring in.

[00:11:18]

Well, I think about the Airbnb story, right, and them looking at and trying to get really clear about what is driving what. I think their concern was that there was over-attribution happening, that digital channels were being credited for activity that would have already occurred, maybe because we've built the brand or we've got friends recommending Airbnbs to others. So not to make this about digital versus top of funnel, but I think there's something there that marketers are growing more aware of and more concerned about.

[00:11:51]

I think, too, one of the cool parts of a lot of the models we've explored is that there are actually components of the model, a variable that captures all the other orders coming in outside of your direct marketing efforts, from things like word of mouth and the like. It's not a specific channel, it's not a specific effort, there aren't specific dollars going toward it, but your business is going to grow from it. And knowing that a portion of the pie is coming from there, and then assigning the rest of the pie, the orders and the credit, as appropriate, I think helps control for (a) some of that double counting and (b) not taking credit where credit isn't due. It's obviously not a perfect science. You can't ever know exactly, but at least the hat tip toward that happening is a good step in the right direction for getting to true incrementality and more accurate results.
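To make that "rest of the pie" idea concrete, here's a small hypothetical decomposition. The baseline and coefficients below are made-up numbers standing in for fitted model outputs:

```python
# Illustrative split of total sales into a non-marketing "baseline"
# plus per-channel contributions. All numbers are hypothetical.
intercept = 50_000.0                       # baseline: word of mouth, repeat buyers, etc.
coefs = {"tv": 1.5, "paid_search": 3.0}    # fitted revenue per dollar of spend
avg_spend = {"tv": 30_000.0, "paid_search": 12_000.0}

channel_contrib = {ch: coefs[ch] * avg_spend[ch] for ch in coefs}
total = intercept + sum(channel_contrib.values())

print(f"baseline: {intercept / total:.0%} of the pie")
for ch, c in channel_contrib.items():
    print(f"{ch}: {c / total:.0%} of the pie")
```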

[00:12:38]

So as Jordan said, it's not a perfect science. And marketing attribution is definitely not simple, and MMM is also not simple. There are a lot of options if you decide to pursue MMM as a business. So Dan, I wanted to ask you, we've tested a lot of different options. Which type of MMM have we had the best experience with?

[00:12:57]

So we've tested both Meta's Robyn and Google's LightweightMMM, and I would say Robyn is the best fit for us. Part of the consideration there is really our own internal tech stack and our team's experience and skill sets. So when you're comparing these models, honestly, there are maybe some differences from a features standpoint, but maybe just as important is your readiness from a skill-set standpoint, what your team is willing and able to tackle.

[00:13:27]

One thing I want us to talk about: on this podcast, we try to cover trends and what's exciting, but we try to do it through a research-first lens, and sometimes that causes us to sound like a little bit of downers. But that's what we're here to do. So what are some of the challenges marketers face with media mix modeling? It all sounds great, there's a lot of excitement. But let's be honest here about the realities for marketers if they want to invest in MMM.

[00:13:52]

We talked about speed and cost, and how those things are really opening up the opportunity for brands to play in this space. It's really tough to have speed, cost, and quality all at once. So just from a logic standpoint, the quality angle is probably where you'd have the strongest critiques. Obviously, that's where some of the grade-your-own-homework piece also comes into it. That is really one of the areas we've put more time into, trying to understand some of the downsides of how these tools approach especially top-of-funnel channels like TV. So for us, we've done some digging and some of our own research on adstock decay, the degradation curve of impressions, and used multiple models to learn the different scenarios we could optimize for inclusion in the different tools that we're using. So making sure that we have a level playing field is probably the toughest piece in this instance. The way these tools are designed, that's probably one of their weaker areas, and one of the bigger differences from the customized approaches you get from an outside agency.
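A common way to express that degradation curve in code is a geometric adstock transform, sketched below. The 0.7 decay rate is an arbitrary placeholder; in practice it's estimated per channel:

```python
# Geometric adstock: each period's impressions keep working in later
# periods, decaying at a fixed rate. Spend figures here are made up.
import numpy as np

def geometric_adstock(spend: np.ndarray, decay: float) -> np.ndarray:
    """Carry a share of each period's effect into the following periods."""
    adstocked = np.zeros_like(spend, dtype=float)
    carryover = 0.0
    for t, s in enumerate(spend):
        carryover = s + decay * carryover
        adstocked[t] = carryover
    return adstocked

tv_grps = np.array([100, 0, 0, 0, 80, 0, 0, 0], dtype=float)
print(geometric_adstock(tv_grps, decay=0.7))
# [100. 70. 49. 34.3 104.01 72.807 ...] — the flight keeps working after it ends
```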

[00:15:00]

I would add a couple of things as well. I think the component of time is an interesting one when it comes to MMM. Number one, on the front end, to really have a grade-A model, you often want at least a year or two of really solid, high-quality data, like Dan was talking about, to feed the model, to train it and show it what normal looks like for your business, so that going into the period you're actually trying to test or analyze, the model knows a little bit about your business. If you don't have the data, or the time, or some combination of the two to feed into the model, there are going to be limitations, and the quality of the results is obviously going to suffer. And then from an actual testing-and-learning standpoint, a real best practice around MMM is to not look at things more granularly than probably a month or a quarter at a time. So it's a little more general, a little more obtuse, than really acute learnings about certain days, certain weeks, certain tactics.

[00:15:56]

It's a lot more generalized, which again, there are a lot of good things there, but there are downsides as well. The only other couple of things I would quickly mention: obviously, MMM doesn't account for the value of brand building at all. It's really just trying to attribute direct revenue, direct sales, leads, orders, whatever that variable is, and not accounting for a channel's impact on building your brand. And then specifically within campaigns, whether it's certain tactics, certain creatives, whatever you're trying to test, you're not going to get learnings there. That's where you need to rely on the other models. And so, as we've talked about in almost every single analytics podcast here, that's why multiple models are important. You can complement the drawbacks of MMM with other things as well. It's just important to make sure you're actually doing that.
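As a small illustration of the granularity point above, here's how daily inputs might be rolled up to a monthly or quarterly grain with pandas before reading results. The data is random placeholder data:

```python
# Roll daily MMM inputs up to coarser grains; reading results monthly or
# quarterly reflects the best practice discussed above. Synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
daily = pd.DataFrame(
    {"tv_spend": rng.uniform(0, 5_000, 730),
     "revenue": rng.uniform(20_000, 60_000, 730)},
    index=pd.date_range("2022-01-01", periods=730, freq="D"),  # ~2 years of history
)

monthly = daily.resample("MS").sum()    # month-start buckets for modeling
quarterly = daily.resample("QS").sum()  # coarser view for reporting
print(monthly.head(3))
```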

[00:16:42]

That detail about brand, I'm surprised by that, because I would think MMM would be a brand-friendly type of model. But you're saying that it doesn't account for brand.

[00:16:52]

Yeah. One of the drawbacks, specifically for more offline channels like TV, for example, is that the model is really trained to look for a spike in marketing spend that matches a spike in revenue. So if you're not seeing an immediate spike in revenue or orders or leads to match the spike in marketing spend, then performance is going to look soft and the model is going to discourage you from spending more money there. And so if you have a channel with, like Dan was talking about, a more decayed response or longer-term impact, even if we tell the model to look a little further out to see that impact, it's harder and harder for the model to know with confidence that an order a month from now, a quarter from now, or next holiday season came from TV or a billboard or something that happened today that's a little more offline, a little less directly attributable. And that's where some of those offline channels that build brand at the top of the funnel face an uphill battle and can really struggle in an MMM.

[00:17:52]

Yeah. For example, practically, when you look at brand awareness and brand familiarity, we're measuring that every six months. So the granularity of when you're getting a data point doesn't sync up nicely with the granularity of, say, daily or weekly spend and sales performance. So to Jordan's point, you don't have a wide enough lens to really see how that impacts things, especially for a near-term campaign where you're trying to measure a spend difference within a matter of, you know, months or a year.

[00:18:22]

Jordan, something you said about timing triggered a thought for me. We really think about TV's impact in two buckets. We've got multiple models. But for any product or service, B2B or B2C, we have both in-market and out-of-market consumers, right? And TV does a great job of driving both immediate sales and also building that future demand. So when you think about an offline channel like TV, how does an MMM take that impact into account? And have we found it to be reliable, or actually, are MMMs more biased against television?

[00:18:56]

I think that's the blessing and curse of MMM at the end of the day: in theory, it does have a more zoomed-out view of what an offline channel like TV is doing. But let's take those in-market buyers first. We know that TV is driving people to channels like paid search or paid shopping, where the relationship between the spend the client puts in to earn that click or earn that order and the revenue they're getting from that order is immediate. It's happening the same day, oftentimes the same minute or two. You spend to get the person on your site, they spend and pay for it, all in itself. That's definitely not how TV works. And so in a model that's trained on how an increase in spend boosts an increase in revenue, TV is going to lose some credit to those other channels. It's really not going to get the assist in the assist column, like in basketball: oh, by the way, TV is really helping this other channel. That other channel is going to just take all the credit, because the correlation is much stronger there.

[00:19:57]

And then for those out-of-market people, where we're kind of planting the seeds but harvesting them later: even though we can try to tell the model, hey, look weeks or months out to see what TV is doing down the road, it's just much, much harder, for us or for any model, to know with high confidence that an order happening 90 days from now came from TV 90 days prior. Those relationships are just really weak. And so it weakens the model and really fractionalizes the credit that TV is going to get in any sort of model. So it's kind of burning the candle at both ends when it comes to offline channels, because TV is losing credit on the front end and not really getting full credit on the back end for what it's doing. Which is, again, maybe not a reason to avoid MMM entirely if you're an offline channel marketer, but a consideration, and a reason to tinker with the model a little and see if you can massage it in a way that gives every channel its full due.

[00:20:51]

And you're complementing it with other models to try to get the most holistic view of what's going on in terms of timing, both immediate and lagged credit.
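For readers who want to see the mechanics, one simple way a modeler might implement that "look weeks or months out" idea is with lagged spend regressors, sketched here with made-up weekly numbers:

```python
# Add lagged copies of a spend column as extra regressors so a model can
# credit revenue to earlier TV activity. Column names and data are invented.
import pandas as pd

def add_spend_lags(df: pd.DataFrame, col: str, lags: list[int]) -> pd.DataFrame:
    """Append shifted versions of a spend column (lags in weeks)."""
    out = df.copy()
    for lag in lags:
        out[f"{col}_lag{lag}"] = out[col].shift(lag)
    return out.dropna()  # the earliest rows lack lag history

weekly = pd.DataFrame({"tv_spend": [100, 0, 0, 80, 0, 0, 0, 60],
                       "revenue": [500, 420, 390, 510, 450, 400, 380, 480]})
print(add_spend_lags(weekly, "tv_spend", lags=[1, 4]))
# The regression can now see TV spend from 1 and 4 weeks back — though, as
# noted above, those long-lag correlations are usually weak.
```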

[00:20:59]

Fellas, Elena and Angela have been throwing you a few softballs here, so I thought now's a good time to bring some heat. I've got two questions for you. The first one is: why do we call it MMM and not...

[00:21:19]

That's a real question here, Rob.

[00:21:20]

Thank you.

[00:21:20]

We'd like to take that one. The second question I have is: how do we expect AI will impact MMM down the road?

[00:21:30]

Well, that maybe lacks a little bit of explanatory power. I'm not sure everyone would naturally be able to translate it into a data science experiment. But anyway, from an AI standpoint, the piece to realize is that MMM is not a set-it-and-forget-it. It's not going to give you an automated answer that you just execute. Obviously, we have a firm belief in multiple models, but with the MMM piece, there's still a fair amount of human interaction and human decision-making, especially on the back end as you start to bring together all of the data and the insights it's generating for us. So from an AI standpoint, you can imagine that over time, AI tools are going to help troubleshoot and better apply these things, so the human element gets a little more automated. Jordan and I go back and forth on exactly how we cull the final result set down from the masses to the practical pieces. AI, over time, will probably be able to assist in that and make some of it more precise, a little less driven by human subjectivity and a bit more statistical again, or at least using reliability measures and other metrics embedded in all these outputs to make that a little simpler.

[00:22:50]

That makes sense.

[00:22:51]

Or should I say, yeah, thoughter.

[00:22:54]

One thing I would jump in and say as well: from an AI standpoint, I think it could help on the front end, getting both ideas and data to throw into the model, in addition to just your own business's data. So asking an AI tool, hey, what else might be impacting my sales that I'm not thinking about? And is there a place you can pull daily or weekly or monthly data for me so I can put it into my model? And oh, by the way, can you make sure all my data is nice and clean and there aren't any massive outliers in here that we may not have caught? You can do a once-over just to make sure there are no gotchas, no blanks, no zeros, no commas missing, all of that stuff, to clean it up and tidy it up so that the model runs more smoothly.
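A rough sketch of that kind of once-over might look like the following; the thresholds and column names are arbitrary stand-ins:

```python
# Scan MMM inputs for blanks, unexpected zeros, and outliers before
# modeling — the "once-over" described above. Example data is invented.
import pandas as pd

def sanity_check(df: pd.DataFrame) -> None:
    for col in df.columns:
        s = df[col]
        n_blank = s.isna().sum()
        n_zero = (s == 0).sum()
        z = (s - s.mean()) / s.std()      # flag points far from the mean
        n_outlier = (z.abs() > 3).sum()
        print(f"{col}: {n_blank} blanks, {n_zero} zeros, "
              f"{n_outlier} possible outliers")

data = pd.DataFrame({"tv_spend": [1_000, 0, None, 950, 99_999],
                     "revenue": [5_000, 4_800, 5_100, None, 5_050]})
sanity_check(data)
```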

[00:23:37]

This has been really interesting and helpful. I'd love to wrap this up by asking both of you: what advice would you give to a marketer using MMM for the first time?

[00:23:46]

Yeah, I would say to just make sure you're going in with eyes wide open about the limitations and the strengths, so that you don't overreact to results. One approach that we've had is to try running multiple models in a lot of different ways. The beautiful thing, we mentioned Meta's Robyn solution earlier, is they're literally running 10,000 models at once and giving you the top five to ten. And what we found is that looking at all five to ten, seeing the differences, seeing the synergies, maybe even averaging them all together, hey, we know one model isn't going to be perfect, but the combination of the top five or ten of these out of the 10,000 that we ran is probably starting to be onto something. And so you attack it from that humble mindset of, hey, we aren't perfect, these models aren't perfect, Robyn's not perfect, the data is not perfect, but once we start to see enough patterns and trends here, there's a lot that we can learn. And it takes away some of the downsides of MMM in general when you look at it through both lenses.
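Here's a toy version of that averaging idea, with invented ROI numbers standing in for the top candidates from a large model run:

```python
# Average a KPI estimate across several plausible models instead of
# trusting any single one. The ROI figures below are invented.
import statistics

top_model_roi = {"model_1": 1.8, "model_2": 2.4, "model_3": 1.6,
                 "model_4": 2.1, "model_5": 1.9}

avg = statistics.mean(top_model_roi.values())
spread = statistics.stdev(top_model_roi.values())
print(f"TV ROI: {avg:.2f} average across {len(top_model_roi)} models "
      f"(±{spread:.2f})")
# A wide spread is itself a signal: the data may not pin the answer down.
```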

[00:24:43]

I like your humility comment there, Jordan. And I would say, just thinking about how some of our clients have reacted to seeing some of these results when they grew up on a single-touch or last-click approach: seeing this multi-touch output, they sometimes get a little challenged seeing some of their favorite channels not performing as well as they had, or in a way they've maybe already invested in. So from things to expect: expect there's a fairly high bar for incrementality in general. As Jordan was mentioning, this whole issue of what's likely to happen without marketing, that particular number is sometimes seen as a surprise. And when you see the winners in a multi-touch environment versus the winners from a last-click environment, those outcomes aren't going to be the same, or assume they're not going to be the same. And from a marketer's perspective, you really need to keep an open mind and start to think it through. It really challenges you to maybe argue more on the side of some channels that you didn't think were driving what they did.

[00:25:45]

The one other piece that I would maybe add, that you guys are both too humble to speak to, is that I think one of the things that has made us good at what we do, related to measurement in general, is the fact that Dan and Jordan live on separate teams and they look at data differently. Jordan's team is more focused on bottom-up analysis; Dan's is more focused on top-down. And I think it's important to create an environment where you can have that healthy debate, and there have been some brawls, let me just say, within the business, on performance. But ultimately, it's our job to not just win that next dollar from a client, but to drive transformational growth for them. And so creating an environment where we can dissent on a topic like measurement, looking at, you know, a minute-by-minute analysis, is really important to ensuring that ultimately you drive the success you're looking for.

[00:26:41]

That's a great addition, Ange. Thank you. Jordan, Dan, thank you for joining us today. Thank you for your big brains. We appreciate it.

[00:26:49]

One thing that Matt, our head of analytics, repeats a lot internally, and a lot externally to clients as well, is: all models are wrong, but some are useful.

[00:27:01]

That's it for this episode of The Marketing Architects. We'd like to thank Ayanna Claphockey for producing the show and Taylor de los Reyes for editing. You can connect with us on LinkedIn, and if you find our show valuable, please leave us a rating and review. Now go forth and build great marketing.

[00:27:22]

What a bunch of nerds.

[00:27:23]

Yes.

[00:27:24]

What a bunch of nerds.

[00:27:26]

Whatever. You tell all your friends things like, "All models are wrong, but some can be useful."