[00:00:00]

You ready to solve everyone's anxieties about polling? Yeah. Hello and welcome to the 538 Politics podcast.

[00:00:19]

I'm Galen Druke.

[00:00:21]

I'm Nate Silver, and this is Model Talk.

[00:00:28]

It's our first post-election Model Talk. This may be our last Model Talk for a while. Maybe we'll talk about Georgia, and maybe we'll have future Model Talk episodes to further unpack the polls once we get more data. But where are you in the recovery process?

[00:00:45]

You know, yesterday started to feel more like post-election. I felt a little bored yesterday, I'll put it like that. Right? Oh, wow. Whatever other feelings you have during the election, you rarely feel boredom. And yesterday I felt a little bored.

[00:00:58]

I don't think I've hit that yet, but I look forward to the point in time where I will feel bored.

[00:01:03]

Nothing will make you more bored than reading discussion of what network Trump will form. Nothing could be more boring than that topic. A lot of people love talking about media gossip, but it literally just puts me to sleep.

[00:01:20]

OK, well then let's get into something perhaps more substantial, or at least more relevant to what we came here to do. As of our taping, it looks like Republicans have 50 seats in the Senate while Democrats have 48, meaning that the two Georgia runoff elections will decide control of the Senate. Of course, Joe Biden has won the election. It looks like his national popular vote margin will be somewhere between four and five points. That's a bit off from where the polling average had him before the election, which was at about eight and a half points.

[00:01:58]

And then when it comes to the House, while polls were pointing in the direction of Democrats picking up some seats, it looks like they'll lose somewhere between six and nine seats but retain their majority in the chamber. So we've got a lot to talk about here.

[00:02:13]

We can talk about polling, we can talk about modeling. And a lot of different people have shared their takes on polling so far this cycle. So I'm just going to ask you broadly: how do you think polls did in 2020?

[00:02:26]

They're OK-ish. You end up with a three or four point polling error for the presidency in national polls and in key swing states. I mean, the simple fact is that polls miss on average by about three points, so three and a half or four is pretty normal. I mean, I know there's a careful balance to strike. If the polling industry said, OK, well, three or four points isn't a big deal, let's just keep doing what we're doing, then odds are that you would have whatever problems they had this year.

[00:02:55]

Plus, on top of that, new problems, and then polls might get really, really far off. Right? So if I were a pollster, then I'd say we have to look at what happened here: why weren't we exactly on the mark? As someone who analyzes polls and models the uncertainty, this was very normal. It's very common. We're not even on the edge of the distribution, right? We're not saying, oh my gosh, this only happens once every 20 years.

[00:03:18]

Right? It's just very much in the thick part of the distribution. I think part of what pollsters do (and again, if you read the story on 538 right now, it's very sympathetic to the difficulties of doing polling) is a communication strategy. They sometimes want to make it seem like their instruments are more exact than they really are, because there are many things that make polling difficult, and so you just hope to get close.

[00:03:41]

And they usually do get close, including this year, for the most part. I mean, some states were worse than others. But the idea that we found the problem and now we've fixed it: I think they did fix the problem of education weighting. Now you have the correct share of the population that's non-college-educated, but there are a whole bunch of new problems that crop up that you didn't necessarily anticipate ahead of time, some of which may be permanent and some of which may be temporary, some of which may be related to covid or Trump or mail voting or whatever else.

[00:04:06]

I mean, first of all, obviously I think you have to cover the story, right? In the grand scheme of this election, the polling, A, was not that bad, and B, is like the fifteenth most important story. And so the fact that there's this big thing about the polls, in an election where the polls more or less got the direction of the outcome right (not fantastic, but still), at some point people just like to beat up on polls. And the pollsters, I think, are too willing (again, I'm not a pollster) to fall on their swords and be like, oh, woe is me, woe is me.

[00:04:35]

You know, that's not going to help. You can't let other people define the questions, right? You can't let other people define the terms.

[00:04:43]

Yeah, OK, that's totally fine. But first of all, we're here to cover polling, and there are tons of other things regarding this election that we'll talk about on other podcasts, that we have already talked about, and that other formats will cover. But I think the question here is: there was an underestimating of Trump's support in 2016, and there was probably an even bigger underestimation of Trump's support in this election. And so people are curious whether there's something about the way that polls are conducted, or the way that Trump's support is structured, that means that polls aren't doing a very good job of capturing his support.

[00:05:20]

And I think that's the question that pollsters are trying to answer right now. And I wouldn't characterize that as people falling on their swords; I'd just say people are curious, because if they have a sense that there's something wrong with the methodology that they're using, or the instruments that they have to gauge public opinion, they want to get it right. And so I think this can be an honest and open and curious conversation. Do you right now have a sense of why polls might be underestimating Trump's support, beyond just education weighting?

[00:05:48]

I think there are a lot of different theories. First of all, I think a problem this year is that they actually underestimated support for Republicans in Congress by more than Trump, which might steer you away from Trump-specific explanations. So the one theory that I think is least persuasive is one form of the shy Trump voter theory, where voters are failing to express their preferences because they're afraid it's socially unacceptable to say they support Trump. Right? You know, when Susan Collins beats her polls by a lot more than Trump, I don't think it's socially unacceptable to say that you support Susan Collins.

[00:06:22]

You know, this very moderate establishment Republican. Right? That's one theory that I wouldn't say to throw out, but it is less credible. I mean, look, one theory that I think is relatively persuasive is that it's just harder to get certain types of people on the phone. The phrase "social trust" gets used, meaning people who trust institutions less, who trust the media less. Because people who answer polls are kind of freaks; like, if only eight or 10 percent of people answer polls, you're kind of a freak.

[00:06:50]

If you'll take 15 minutes to answer a poll, please do it, I guess, if you're out there listening. But maybe people who are less trusting of a stranger calling them are less likely to answer the phone, and they're also more likely to vote for Trump or Republicans. There's a correlation between lower social trust and voting for Trump. And if those people are also not answering the phone as much, then that can be hard to weight your way out of, I think, for polls.
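
To make that mechanism concrete, here is a minimal simulation sketch. It is not anything 538 or any pollster actually runs, and every number in it is invented: low-social-trust voters both lean Trump and answer the phone half as often, so the people who do pick up skew toward Biden even if the demographic mix looks fine.

```python
import random

random.seed(538)

# Toy electorate, with invented numbers: 40% of voters are "low trust,"
# low-trust voters back Trump 60/40, and they answer the phone half as
# often as everyone else.
N = 200_000
voters = []
for _ in range(N):
    low_trust = random.random() < 0.40
    votes_trump = random.random() < (0.60 if low_trust else 0.40)
    answers_poll = random.random() < (0.03 if low_trust else 0.06)
    voters.append((low_trust, votes_trump, answers_poll))

def trump_share(group):
    return sum(votes for _, votes, _ in group) / len(group)

respondents = [v for v in voters if v[2]]

print(f"True Trump share:        {trump_share(voters):.3f}")       # ~0.48
print(f"Share among respondents: {trump_share(respondents):.3f}")  # ~0.45
# The respondents skew high-trust, so the raw poll understates Trump by a
# few points even though nothing about education or region is "wrong" in
# the sample, which is why weighting on demographics alone can't rescue it.
```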

[00:07:15]

Is there a counterbalance there, where people who are more likely to trust the media, or who are enthusiastic about responding to polls, are overrepresented, and those people end up voting more for Democrats?

[00:07:28]

Yeah, I mean, that's the flip side, another way to frame it, right? It's maybe not a shy Trump voter but a gregarious, excited Biden voter. There are some theories, too: people are at home during the pandemic, they have more time to answer polls, they're not out as much, or they'll take a phone call more often. So in these states, like South Carolina, where it looks like Lindsey Graham is going to considerably beat his polls.

[00:07:49]

Right. It's a state where Democrats were way more enthusiastic, but there just aren't enough Democrats to elect a Democrat in South Carolina in most years unless you see really lopsided turnout. So it may just have been that the Democrats were very charged up and were answering polls at unusually high rates, as opposed to Republicans at an unusually low rate, or it could be the two together.

[00:08:08]

Right. They could work kind of in tandem. Yeah.

[00:08:10]

So keep in mind that in 2018 the polls were extremely good, which, A, is kind of a problem if you have this demise-of-polling narrative, and B, that was a year where, by most measures, you had equal enthusiasm on both sides. So maybe in a year like that you don't have a problem. I mean, the thing is, with enthusiasm, people always say, well, you know, that's what the polls say, but this side's more enthusiastic, so they might do better.

[00:08:35]

You know, actually, the side that's more enthusiastic often underperforms its polls, because those people are more likely to answer polls. In 2012 there was a whole debate about whether the Romney voters were more enthusiastic than the Obama voters. And guess what? Obama beat his polls by a fair amount in 2012. So maybe you want to hedge against the side whose enthusiastic supporters are more likely to answer surveys. So there could be some issues in who is or isn't actually responding to polls.

[00:09:01]

And what's important here: we talk about weighting by education, right? This is a little bit different, in the sense that the question is whether the non-college-educated white people you're reaching on the phone are representative of the broader population of non-college-educated white people. And if they aren't, you can weight as much as you want, but you're not going to fix the problem. So that's what we're talking about there, in terms of it mattering who responds to polls.

[00:09:24]

I'm not sure.

[00:09:25]

That you can't fix the problem? You just probably have to use fancier statistics. I'm not sure it's unsolvable.

[00:09:31]

Oh, not that it's unsolvable, but weighting alone doesn't necessarily fix the problem, right? If the population of non-college-educated white voters that you're reaching in a poll is just more likely to support Biden than the non-college-educated voters out in the broader public, weighting just makes that sample you reach on the phone a larger chunk of the ultimate poll result. But do you want to go further into that, in terms of what kind of statistical methods might be able to address the problem?

[00:10:01]

It's a little hard to describe. I mean, you can make imputations of it. So you could weight for different things. If it's social trust, you can ask questions like (it's a little rude to ask this, maybe) how many friends do you have? And if you find that the population you're surveying has more social connections than the actual American population on the whole, then you can weight for that. You can ask things like: how many rallies have you gone to? How much have you participated by donating money?

[00:10:28]

Right. And if it turns out, as most polls find, that the people you reach are way over-participating in these activities compared to the American population, then you can also weight for that, so the people who have lower participation rates are weighted more heavily. So I think there are ways around it; it requires some creativity and asking a few more questions than pollsters did before. But yeah, I mean, is part of the problem here also just that if leadership within the Republican Party, or Trump in particular, trashes polls, then that candidate's supporters just won't be interested in responding to polls, period?

[00:11:00]

And so is part of the equation just, I guess, trying to get the broader public to trust polls or care about polls? Maybe, I don't know.

[00:11:08]

That seems pretty conjectural. And again, we didn't have any problems in the 2018 midterms. So, yeah, I mean, it's interesting to look at whether polls that attribute themselves to a big news organization, ABC or The New York Times or whatnot, do worse than other polls. It's worth testing, I suppose. Yeah.
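
Here is a minimal sketch of the "ask a few more questions and weight on them" idea Nate floated a moment ago: reweight respondents so that a social-trust proxy matches an assumed population target. The target and sample numbers are made up for illustration, not real benchmarks.

```python
# Toy post-stratification on a social-trust proxy (say, a "how many close
# friends do you have?" question). The population target and the sample
# numbers are assumptions for illustration only.
population_share = {"low_trust": 0.40, "high_trust": 0.60}

sample = {
    "low_trust":  {"n": 250, "trump_share": 0.60},  # under-represented group
    "high_trust": {"n": 750, "trump_share": 0.40},
}

total_n = sum(g["n"] for g in sample.values())

unweighted = sum(g["n"] * g["trump_share"] for g in sample.values()) / total_n

weighted = 0.0
for group, g in sample.items():
    raw_share = g["n"] / total_n
    weight = population_share[group] / raw_share   # target share / sample share
    weighted += raw_share * weight * g["trump_share"]

print(f"Unweighted Trump share: {unweighted:.3f}")  # 0.450
print(f"Weighted Trump share:   {weighted:.3f}")    # 0.480
# The catch flagged in the conversation: this only helps if the low-trust
# respondents you did reach vote like the low-trust voters you didn't.
```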

[00:11:25]

And again, we mentioned this multiple times, but we're still waiting for better exit poll data and verified voter surveys and things like that.

[00:11:32]

Yeah, the exit polls, like the AP VoteCast survey and the Edison exit poll, don't really tell the same story about what happened in the election, which is kind of a big problem. Yeah, yeah. And all these people, on the one hand they're like, oh, polls are dead, and then they're like, oh, well, you know, there's this crosstab of college-educated white women in Nebraska from some exit poll, which has all types of issues.

[00:11:51]

I mean, look, I think one of my special powers is that I can distinguish good-faith versus bad-faith attacks. And with polls (let's just talk about the polls for now), there are a lot of bad-faith actors that come into play whenever there's an opportunity to criticize polls or more objective, quote unquote, methods. You can tell who some of the bad-faith actors were, because they were the ones who were very quick on Wednesday morning to get their stories up about how the polls had failed when, actually, many states were going to turn out to be right based on the blue shift from mail voting.

[00:12:31]

Right, as of Wednesday morning, at midnight. But eventually, with Pennsylvania, Wisconsin, Michigan, Georgia: so many of those people who were in quickly with their takes, and that's usually a sign of bad faith. And so on the one hand it's important, and I was very curious about this, but there is a degree of impossible expectations, where people expect polls to be exactly right. Now it's not even enough to get the winner.

[00:12:56]

Right. You have to be exactly right on the margin now. To which, OK, fair enough. But there's a lot of bad-faith criticism.

[00:13:02]

Well, I think what people are reacting to more is: yes, polling got the winner, quote unquote, right at the presidential level, but it doesn't seem to have necessarily at the Senate or House level.

[00:13:13]

So I think it got the winner right, modulo a couple of upsets. In the House, we saw congressional district polling that was pretty out of whack. I haven't looked at the House much, but in the Senate there are only two upsets so far, and those are pretty mild upsets. Tillis was like a two-to-one underdog, and we barely had Gideon favored in Maine. Now, that does blend some fundamentals in with the polls, so the polls themselves may have made Collins a slightly heavier underdog, but we're not necessarily talking about there being major upsets.

[00:13:41]

It's just that in a close election, oftentimes the toss-ups break in one direction, and that's kind of what happened in the Senate. Part of why I get annoyed, Galen, is that one of these years, hopefully after I've hung up my polling spikes and I'm doing something else at some exotic resort, playing poker or eating sushi or something, right?

[00:14:02]

Hopefully I'm not going to be... You're taking the whole 538 team with you when you go to your exotic resort to play poker, right?

[00:14:07]

I don't know where this place is. Yeah. One of these years you're going to have like a seven-point polling error or something, right? But this year we're kind of still winding up in the thick part of the probability distribution, and people are freaking out. It's like, oh my gosh, one party won most of the toss-ups. Well, that's really normal. It's what normally happens. An error of three or four points here is normal. So, if you think that this year

[00:14:26]

there's an unacceptable degree of error in polling, then don't look at the polling, because you're going to have this happen all the time, and it's happened all the time in the past. And in fact, sometimes it's worse and sometimes it's better. But if you think it's unacceptable, or that you got no information from the polls even though they kind of told you who was going to win the election, then, yeah, don't look at polls anymore. That's OK, because you're not going to fix things so that they're magically super precise every year.

[00:14:50]

They may have a pretty good year half the time, and then a quarter of the time they miss way off in one direction, and a quarter of the time they miss in the other direction, or something kind of in between.

[00:15:00]

Well, is part of this maybe false expectations of how accurate the polls are, based on recent elections? Essentially, the average polling error, if you go back to the '90s, I think we've found the national polling error is somewhere in the range of two to three points, whereas if you include more data, then it gets quite a bit bigger. And of course, the polling error in this election at the national level was somewhere in the range of four points.

[00:15:28]

And so there really was a larger polling error this year if you compare it to just recent data. That's fair,

[00:15:34]

I guess, although there was a polling error in 2016, and in 2012, actually, there was a bigger polling error than people realize. Yeah. If you go back and look all the way back to 1936, which is kind of what we do, then, look, it gets a little tricky, because for some years we only have Gallup, and when you only have one poll, that could lead to more error than when you have a polling average. But crunching all those numbers, we figure that the average polling error is like 3.2 points or something like that; that's our estimate.

[00:16:02]

And in fact, this year we kind of added a provision because of mail voting. We thought it was going to be higher, so we thought the average error would be higher than usual. So that's what, like four points, basically. And so, I mean, obviously, you know, I'm trying to just open up the fourth wall here, right? Part of it, too, is I feel like, boy, you know, I thought we did a really good job of anticipating where the polling error would be, and the weird ways in which it's correlated to some degree between different states.

[00:16:27]

So if your job is estimating polling error, then I feel like we made a lot of good decisions, right? Including one decision being that we're calibrating off this longer history of polls and not just off more recent data. And that's based on my view that polling is difficult in this epoch, my view that when response rates go down, you're going to have more issues. Despite that, you could have a fair amount of confidence that Biden was going to win, because you could have a pretty big screw-up and Biden would still come out ahead.
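
One way to put rough numbers on the "Biden could withstand a 2016-style miss" logic. This is a back-of-the-envelope sketch, not the 538 model: treat the error on the margin as roughly normal, back a standard deviation out of an average absolute error, and ask how often a given lead survives. It ignores fat tails and the cross-state correlation the real model worries about, and the leads tested are toy numbers.

```python
import math

def win_probability(lead_pts, mean_abs_error_pts):
    """P(true margin > 0) if the polling error on the margin is roughly
    Normal(0, sigma), with sigma backed out of the mean absolute error
    via E|X| = sigma * sqrt(2/pi)."""
    sigma = mean_abs_error_pts * math.sqrt(math.pi / 2)
    z = lead_pts / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Toy leads, checked against a ~3-point "normal" year and the ~4-point
# average error assumed for a heavy mail-voting year.
for lead in (2.5, 5.0, 8.5):
    print(f"lead {lead:>4} pts: "
          f"P(win) ~ {win_probability(lead, 3.0):.2f} at 3-pt avg error, "
          f"{win_probability(lead, 4.0):.2f} at 4-pt avg error")
```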

[00:16:57]

You had a pretty big screw-up, maybe actually just kind of a touch below that. I mean, it's kind of weird, because there are all these states where Biden was going to win by like half a point or a point; well, there are several states where he won by that amount. So it's not like it hinged on just one state; he'd have had to lose three of these four states: Wisconsin, Pennsylvania, Georgia, Arizona. But I'm not sure Pennsylvania will end up being that super close.

[00:17:19]

The provisionals are coming in pretty strongly for Biden. But that notwithstanding, it's a little bit weird to say exactly how close this election was. It wasn't that close in the popular vote; it's not that close in the Electoral College margin. It is close in the tipping-point state, although there's more than one state near the tipping point.

[00:17:34]

So you mentioned earlier on that the covid pandemic could have some effect on the accuracy of the polls. And if you think about what's different in 2020 compared with 2018, when the polls were pretty accurate, the pandemic is of course one of those things. If you look at one state that is in the midst of a really bad outbreak, Wisconsin, that's one of the states where polling was least accurate, off by somewhere in the range of seven or eight points.

[00:18:01]

Is there any credence to that? And what exactly about covid would make polling less accurate?

[00:18:06]

So there are a couple of things. One could be that covid creates differentials in who is physically at home or in a position where they're able to respond to polls. So here's one stereotype, right: Democrats are knowledge workers who can work from home, and Democrats are more careful about what activities they do under covid. Therefore, it's easier to reach a Democrat on the phone than a Republican, who may be out at a restaurant or may have a working-class job where they can't work from home.

[00:18:35]

That could be a big issue. Another issue would have to do with mail voting, where some votes get spoiled or lost in the mail. It's not a huge amount; say you have a one percent ballot spoilage rate and a one percent loss rate on votes that are cast by mail. Then maybe you have issues that can cause challenges for likely voter screens, because if you count all these people who voted early as having voted, which you should, but you exclude some people who are going to turn out on Election Day, your likely voter model gets screwed up.

[00:19:04]

So there are various challenges with respect to covid.
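
To put an order of magnitude on the spoiled-or-lost-mail-ballot point, here is a tiny calculation with entirely made-up rates and margins; it is a first-order approximation, not a claim about the actual 2020 numbers.

```python
def margin_hit_from_lost_mail(mail_share, loss_rate, mail_margin_pts):
    """Rough first-order hit to the overall margin, in points, if a fraction
    of mail ballots is spoiled or lost.
    mail_share: mail ballots as a share of all votes (0-1)
    loss_rate: fraction of those ballots spoiled or lost (0-1)
    mail_margin_pts: margin among mail voters, in points (positive = Biden)."""
    return mail_share * loss_rate * mail_margin_pts

# Toy inputs: mail is half the vote, 2% of it is spoiled or lost, and mail
# voters favor Biden by 40 points.
print(f"~{margin_hit_from_lost_mail(0.5, 0.02, 40.0):.2f} points off Biden's margin")
# -> ~0.40 points: real, but small next to a three-to-four-point polling error.
```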

[00:19:06]

Yeah, and I should say that pollsters did say that once the pandemic hit, they were getting higher response rates. So it did seem like they were having more luck reaching people once people started working from home or spending less time doing whatever kind of recreational activities they might do in non-covid times. So there is some evidence that actually lends credence to the idea that covid did change the polls in some way. You mentioned turnout, and covid may be one reason that turnout models could have been off, but Republican turnout was massive in this election, just like Democratic turnout was.

[00:19:45]

What can we say at this point about turnout models, and how does real turnout compare with the expectations going into the election?

[00:19:53]

So turnout is very much in line. You know, I'm currently showing that turnout is going to wind up being 158 million. I think it may be almost exactly the number that our final forecast predicted, which is like 157 or something. I mean, one issue, actually, is that polls could be more explicit, I think, about what turnout they expect: are they assuming 140 million or 160 million or 180 million, or what? So it's a little bit less transparent than it could be.

[00:20:18]

Yeah, I wonder a lot about lower-propensity (meaning less vote history) Hispanic voters, and maybe also Black and Asian voters. We know that the Hispanics who have turned out in the past few elections are mostly voting Democratic, you know, 70 percent or whatever. But in, like, some counties in South Texas, which are 95 percent Hispanic, you have huge booms in turnout, and almost all those new voters seem to have gone to Trump. I'm sure there's some vote switching there, too.

[00:20:45]

It's not just a matter of turnout. So if you have a pool of Hispanic voters, Black voters, Asian voters who haven't always voted, who aren't nearly as reliably Democratic as the ones who have, and you have trouble reaching them on the phone, then you can have problems. Because you might say, OK, well, it's hard to reach Hispanics, but we have these Hispanics, and there are a few we're going to exclude with our likely voter screen because they haven't voted before.

[00:21:06]

Right. And then you're using this kind of non-representative sample of Hispanic and Black and Asian voters to represent the entire population, and that can be a problem. And the fact that pollsters are so focused on the white working class and not thinking very much about non-white voters, voters of color: that's something that has been a source of error in the other direction, right, where polls underestimate how well Democrats do among voters of color and so underestimate Democrats.

[00:21:30]

But if there is just so much focus on the white working class, and not the Hispanic working class or the Asian-American working class in these districts in Los Angeles and California, right.

[00:21:39]

Then you might have problems. It doesn't seem like the polling error was regionally correlated in this election. I mean, polling was pretty accurate in Minnesota but not very accurate in Wisconsin. It was very accurate in Georgia and then not very accurate in Florida. And so when you're looking from region to region, you don't seem to find answers just in geography to explain polling error.

[00:22:01]

Yeah, and Biden roughly performed to his polls and won Georgia, whereas he underperformed in Florida and lost Florida. That's actually pretty normal, though. And I remember there was this discourse when the 538 model came out in August; people were like, oh my gosh, all your maps are too weird, right? Your maps are too weird. Why do you have a map where Biden wins Georgia but not Florida? Don't you know that there's a uniform swing?

[00:22:25]

No, it's actually kind of normal that you have several different vectors, where maybe on the one hand polls are underestimating white working-class support for Trump, maybe Hispanic working-class support for Trump; maybe there are some covid effects in some states; maybe there are effects with respect to mail voting in some states. You kind of wind up with a patchwork where overall the polls underestimate Trump, and there may be some regional factors: in the Southwest in general, Colorado, New Mexico, Nevada, Arizona, the polls were fine.

[00:22:50]

In California and in New England, the polls were mostly fine, except in northern Maine. But you have these different factors that create these messy correlations, which is the way our model actually works. You do wind up with weird maps sometimes. It's why our model actually gave Georgia a decent chance, like a four or five percent chance, of being the tipping-point state, even though that seemed counterintuitive to people. So, yeah, I mean, it's all very normal.

[00:23:11]

I mean, again, part of my impatience here, I guess, is that these are the things that we spent a ton of time looking at, the late nights with the Red Bull and the early mornings and whatnot for the model. So, you know, I felt very prepared for an outcome like this, and I also would have felt very prepared for an outcome like we saw in 2016. We work hard to have models that describe the plausible universe of things that can happen.

[00:23:36]

And this election was not outside that universe. One question I have about all of this, looking back: does this also mean that throughout the Trump presidency, or at least this year, when we were looking at polling data about Trump's approval rating, or how Americans viewed the social unrest in American cities over the summer, or how Americans viewed Trump's response to covid or his handling of the economy, does that mean that that polling was also likely off by some amount and underestimating Trump on those metrics as well?

[00:24:11]

Probably. Actually, one thing that I think is not persuasive: you sometimes hear pollsters say, well, with horse-race polling you have to estimate turnout, whereas other polling maybe goes to the adult population, right? I mean, maybe that's one of, like, 10 explanations for the polling error. But I think in general, the horse-race polling is of course where you are accountable, right? You can test the horse-race polling against actual results. So if that polling is off,

[00:24:37]

you don't get some pass on your other polling. So please don't make that argument; it's not persuasive and I will criticize you for it. But there are a couple of things to keep in mind. One is that Trump actually did get an uptick in his approval rating toward the end of the race, which was a little weird. I don't know why you'd see an uptick in approval and not an uptick for him in the horse race. It's interesting; just leave that there as a placeholder.

[00:24:57]

Trump did appear to close somewhat strongly with undecided voters. If you look at AP VoteCast, their exit poll, or the Edison exit poll, Trump won late deciders by a fairly healthy margin. Now, it's not very many people in the electorate, but if you do the math, that is worth half a point, maybe a point, to Trump. And so if the error was three and a half points, we can explain maybe half a point of it with undecideds breaking to Trump.

[00:25:26]

And you have three more points to explain. But there maybe was a bit of a shift in the Senate races. I think you actually did not have as much polling in states like Maine and South Carolina and Montana in the home stretch of the campaign. So maybe after the Amy Coney Barrett confirmation there were some shifts, or maybe voters said, OK, well, Biden is going to win, so maybe we want some balance. So maybe there were some late shifts in the Senate, and that might explain some of the gap there, I think, potentially.
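
A quick sanity check on the late-decider arithmetic, with placeholder shares and margins rather than the actual AP VoteCast or Edison numbers: the contribution to the miss is simply the late deciders' share of the electorate times their margin toward Trump.

```python
def late_decider_contribution(late_share, late_margin_to_trump_pts):
    """Points of pro-Biden polling error explained by undecideds breaking late.
    late_share: fraction of the electorate deciding in the final days (0-1)
    late_margin_to_trump_pts: Trump-minus-Biden margin among them, in points."""
    return late_share * late_margin_to_trump_pts

total_miss = 3.5                                   # rough national miss, in points
explained = late_decider_contribution(0.04, 15.0)  # 4% late deciders, Trump +15
print(f"Late deciders explain ~{explained:.1f} points; "
      f"~{total_miss - explained:.1f} points left to account for.")
# -> roughly the "half a point, maybe a point" figure described above.
```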

[00:25:51]

And so, to put it more pointedly, should we reconsider our conclusions about the way that covid shaped Americans' views of Trump's presidency, or the ways that the racial justice protests and some of the unrest in American cities shaped Americans' views of Democrats and Republicans, and so on? As with the polling error, we have to be kind of open-minded about what happened, why Trump or Republicans were underestimated. Should we also have that same open-mindedness looking back earlier this year, thinking about how Americans viewed everything that's happened in 2020? Maybe part of the premise that we should establish here is: compared to what?

[00:26:31]

Right. Polling compared to, like, some Bret Stephens column or something like that? I would take polling that's five times less accurate, that misses by 15 points.

[00:26:38]

Well, no, I just mean still based off the data, but thinking, OK, maybe Americans were not as eager to prioritize covid precautions over stimulating the economy as it originally seemed. Or maybe Americans are not as supportive of the Black Lives Matter movement, or as open to Biden's relative hesitancy to condemn protesters or rioters, or whatever.

[00:27:03]

I mean, let's talk about covid. If people who are taking more precautions about covid are staying at home and are therefore more reachable on the phone, that could definitely skew covid polls. Whether it skews Trump polling or not, maybe less so, maybe some. But it could definitely skew covid polls, as could social desirability bias, if it's seen as virtuous, which it is, right? Because if you're not going out and about, you're not just protecting yourself, you're also creating less risk of transmitting it to others.

[00:27:34]

I mean, there's obviously kind of a giant gulf between people's stated preferences about covid in polls, which are very pro-lockdown, and their revealed preferences based on their behavior. I mean, we're all kind of literally in our bubbles. But I do think there is some gap between the way people are behaving and the way that polls are capturing what they say they prefer about covid, which may not actually reflect people being hypocritical. It could reflect the fact that you're getting the homebodies who are hunkering down and not getting the people who are going out and doing outdoor dining or whatever.

[00:28:09]

Could we say anything similar about support for the Black Lives Matter movement and Americans' reaction to things like defund the police, etc., which is now becoming a debate within the Democratic Party over whether or not that hurt them in this election?

[00:28:24]

I don't know if you can quite put that in the same category as covid. But there are social desirability questions about that: you want to be seen as not being racist or whatnot. And on questions like those, you actually see more of an effect than you do on the actual top line of the presidential race, because if you say you support Trump, you can have lots of reasons to support Trump.

[00:28:42]

Right. But I think covid is just in a unique category, because it literally affects someone's availability to take an interview. I mean, in theory, if you're out protesting, you have the opposite problem; you're not going to be reachable on the phone if you're at a protest. But covid kind of profoundly affects people's behavior, and when some people are living life as, quote unquote, normal and other people are, quote unquote, locking themselves in, that pretty profoundly and potentially affects survey research.

[00:29:12]

All right. Well, we're going to get more data. And as we get it, we'll talk more about it and share that with our listeners. Before we go, I do want to get to some listener questions.

[00:29:22]

But first, today's podcast is brought to you by ESPNU Radio on SiriusXM. ESPNU Radio on SiriusXM is where you need to be if you live and breathe college football.

[00:29:34]

We're talking live games, the latest rankings, conference and team news, analysis from experts like Mark Packer, Greg McElroy and EJ Manuel, all leading up to championship specials and more. Take SiriusXM with you and always have the latest college football info you need. Listen right now on the SiriusXM app or online at siriusxm.com. Listen on ESPNU; that's E-S-P-N-U. Today's podcast is also brought to you by Nutrafol. Eighty million men and women in the US experience thinning hair.

[00:30:09]

Yet it's still not really openly talked about, which can make going through it feel scary or stressful.

[00:30:14]

And that just adds to the problem in a time when self care is more important than ever. Every day is an opportunity to skip damaging styling tools and chemicals and focus on better hair growth from within.

[00:30:25]

Visit nutrafol.com and take their hair wellness quiz for customized product recommendations that put the power to grow thicker, stronger hair in your hands. When you subscribe, you'll receive monthly deliveries so you never miss a dose. Shipping is free, and you can pause or cancel any time. In clinical studies, Nutrafol users saw thicker, stronger hair growth with less shedding in three to six months. You can grow thicker, healthier hair by going to nutrafol.com and using promo code 538 to get 20 percent off their best offer anywhere: 20 percent off at nutrafol.com, spelled N-U-T-R-A-F-O-L dot com, promo code 538, for hair as strong as you are.

[00:31:08]

All right, we got plenty of questions, a lot of them about polling accuracy, which we've talked about quite a bit, and we laid out some theories there. Nobody should take this conversation as the final word on any of this. We're all still curious. And as I already mentioned, as we get more data, we'll share it with you. But we're going to answer some questions that were not necessarily pertaining to the accuracy of polling, but broader questions, some about the model, etc.

[00:31:33]

So the first question is: which state ended up being the tipping-point state?

[00:31:38]

We don't know yet, because it's close enough. I think it'll wind up being Wisconsin, actually, after all. Pennsylvania probably will get up to 1.5-ish points for Biden.

[00:31:50]

There's an outside chance that whatever absentee ballots in Georgia are left could make Georgia the tipping point state. But I think probably Wisconsin. Next question comes from Jesse.

[00:32:00]

A common aphorism in scientific modeling is, quote, "All models are wrong, but some are useful." He goes on to say: there has already been some discussion about whether 538's election model was right or wrong, and it seems clear that the outcome of the election was within the range of possibilities the model predicted. But is the model useful, considering the range of polling error seen in recent elections and the relatively large range of outcomes the model showed as plausible?

[00:32:28]

Can the model be precise enough to provide meaningful conclusions? Questioning our raison d'être! I think these models are more necessary than ever. It's always true that there's polling error; people just kind of ignored it, or you had a couple of lucky years, like 2018 or whatever. It's absolutely necessary to measure and quantify how wrong polls can be and to be robust to different outcomes. I mean, the fact is, the basic story of the election that we emphasized time and time again is that Biden is a fairly heavy favorite (not a cinch, but a fairly heavy favorite) because he could withstand a 2016-style polling error, or perhaps a bit larger, and still come out ahead.

[00:33:06]

That's kind of exactly what happened. So I think it's very important to quantify polling error in ways that are rigorous, in ways that are empirical, in ways that are thoughtful about how the polling error can correlate in different states or not. I think the worst thing in the world would be to not use probabilities. That's where the knowledge comes in. That's the value that the models add that polls lack, really.
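
Here is a stripped-down illustration of the "errors correlate across states" point. It is nothing like the real 538 machinery, and every margin, electoral-vote total, and error size below is invented: each simulation draws one shared national error plus a state-specific error, and we count how often the polling leader still reaches 270.

```python
import random

random.seed(2020)

# Assumed final polling margins (Biden minus Trump, in points) and electoral
# votes for a handful of close states; placeholder numbers only.
states = {
    "WI": (8.0, 10), "PA": (5.0, 20), "MI": (8.0, 16),
    "AZ": (3.0, 11), "GA": (1.0, 16), "FL": (2.5, 29),
}
SAFE_BIDEN_EVS = 232  # EVs treated as safe outside these states (toy assumption)

def p_biden_270(n_sims=20_000, national_sd=3.0, state_sd=3.0):
    wins = 0
    for _ in range(n_sims):
        shared_err = random.gauss(0, national_sd)  # one error hitting every state
        evs = SAFE_BIDEN_EVS
        for margin, ev in states.values():
            if margin + shared_err + random.gauss(0, state_sd) > 0:
                evs += ev
        wins += evs >= 270
    return wins / n_sims

print(f"Correlated errors:  P(Biden >= 270) ~ {p_biden_270():.2f}")
print(f"Independent errors: P(Biden >= 270) ~ {p_biden_270(national_sd=0.0, state_sd=4.2):.2f}")
# With a shared national component, swing-state misses come bundled together,
# so the underdog's chances are fatter than an independent-errors model implies.
```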

[00:33:27]

Do you think it would be useful if polls even just listed a range instead of an actual number? Is that stupid?

[00:33:33]

No. You know, people have to get used to the fact that... like, I'm tired of all this gamesmanship around how we show people things. I mean, we work really hard to contextualize information, and we're aware that some people learn from visualizations, some people learn from numbers, some people learn from words, some people learn from stories and narratives and examples, like, let's compare this to 2016. But at some point, a number is a number.

[00:33:54]

We can't be phobic of numbers. We just have to understand it's a number that's inexact. I think people kind of understand that in their daily lives, for the most part. You know, when you say, oh, well, we've got to replace our radiator, how much is it going to cost? A few hundred bucks. No one is going to be like, oh my gosh, you didn't give me an exact number,

[00:34:14]

I can't handle it anymore. I mean, people have to not be so numbers-phobic. And again, a lot of the hard work is in measuring under which circumstances polls are relatively more or less uncertain, when you have complicated mathematical things like the Electoral College, how robust the candidate's lead might be. So, no. You know, I would love it if people were like, oh well, you can't do polls anymore; I'd get to go right to my island and play poker and eat sushi.

[00:34:36]

But unfortunately, I think that what institutions like 538 do is pretty freaking important to the public's understanding of public opinion. Understanding the uncertainty and the models and the probabilities is not the only way, or even necessarily the main way, that we cover the story; covering it in a lot of different ways is. It's all part and parcel, though, of an overall coverage plan. And if you start being like, oh, we won't show the probabilities, we won't run a model or a group of models, because we're not sure about the probabilities...

[00:35:03]

First of all, it's just chicken. Second of all, that's where you actually really do contribute to the public's understanding. And it's hard work to understand how the public understands probabilities; it's hard work to convey it in the right way. But fundamentally, that's the currency that we use to talk about uncertainty. It is the probability. And if you can't talk about that, then the discussion becomes completely incoherent and anti-intellectual at some point.

[00:35:27]

Yeah. And I will reiterate what you said earlier, which is: when people question polling's usefulness or modeling's usefulness, what we're really trying to do here is understand what the broader American public thinks, and that's a worthwhile goal. And so trying to make polling better is great. Saying that we should throw polling in the trash seems counter to even democratic values, which hold that what people actually think matters. Our alternative may be to parachute into Punxsutawney for a week and talk to a bunch of people there.

[00:35:59]

But we're not actually going to get any better a sense of what Pennsylvanians writ large think if we do that, and probably our overall view of what Pennsylvanians think would be much worse than if we just conducted a poll. And so trying to get it right: awesome. Throwing it all in the trash and saying that the scientific method and sampling, period, are done seems counter to what a curious, open, democratic society should want.

[00:36:22]

Well, some people are powerful pundits, right? They don't want to be held accountable. They don't want to be held accountable for their incorrect opinions. And so they would love it if polling were so ambiguous that you could never be disproven. I would say, too, that although counting wins and losses is not the best way to evaluate polls (you should look at the margin), if you do have polls that call, quote unquote, 40 or 50 states right, and OK, I guess Alabama's an easy call, but they call, you know, 12 of 14 swing states correctly,

[00:36:51]

that's clearly quite useful. That's clearly very useful. Is it perfect? No. But to say that polls were not useful in this election is just kind of crazy, I think. And again, what's the alternative? I don't know. I mean, we try to build models based on fundamentals, so-called. If we had not looked at polling at all, then I don't know what our model would have said. I don't think it would've been better; it would have probably been a fair bit worse. On fundamentals, our model thought that Trump should roughly tie in the popular vote, which means our fundamentals model probably implicitly had Trump favored to win the Electoral College.

[00:37:24]

So thank God for polling to move us away from that.

[00:37:27]

Yeah, and none of this negates the fact that, as journalists writ large, we should be covering policies and candidate behavior and ideas and a lot of other things that have nothing to do with polls, public opinion, or modeling. But as far as horse-race coverage goes, we think this is the best way to do it, and we still think it's the best way to do it, regardless of criticisms post-2020.

[00:37:49]

So, to get into a little bit more detail here: our presidential model is, by the end of the race, pretty much a pure polls-only model. It really is just taking the polls, taking them at face value, and modeling the uncertainty. Our congressional models blend polls more with other indicators, at least in the Classic and Deluxe versions that we usually talk about more. And there were a lot of races where, for Congress, those fundamentals helped a lot.

[00:38:16]

Like, our model was never very bullish on Jaime Harrison, or on Al Gross in Alaska (a little bit more bullish there, but we never got over 50 percent). So, you know, in terms of Model Talk type stuff, we have the midterm coming up first, and for the midterm we really like the blend that we have of polls with other indicators. One question is, hey, maybe you want to hedge a bit more in your presidential forecast by blending polls with other stuff like you do for Congress. We have four years to think about that issue.

[00:38:47]

But, yeah, in the races for Congress, our model was saying, oh, actually, the polls are a little better for Democrats than most other indicators are; therefore, there's some reason to hedge. So that's interesting.
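
A cartoon of the polls-versus-fundamentals blend being described here. The real Classic and Deluxe weighting is more involved, and the weight rule below is invented: lean on a fundamentals prior when polling is sparse, and lean on the polling average as polls accumulate.

```python
def blended_margin(poll_margin, fundamentals_margin, n_polls, k=5.0):
    """Blend a polling average with a fundamentals estimate. The polls' weight
    rises with the number of recent polls; k is an arbitrary knob here."""
    poll_weight = n_polls / (n_polls + k)
    return poll_weight * poll_margin + (1 - poll_weight) * fundamentals_margin

# Toy Senate race: the polling average says D+4, fundamentals say D-2.
for n in (1, 5, 20):
    print(f"{n:>2} polls -> blended margin {blended_margin(4.0, -2.0, n):+.1f} for the Democrat")
# With one poll, the fundamentals pull the estimate toward the Republican;
# with 20 polls, the blend sits close to the polling average.
```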

[00:38:57]

All right. We got a little sidetracked there, because that's such a big and relevant question to everything we do here. But for a bit of a lighter question, from Yaro: why did all the networks call Pennsylvania at the same time, after one vote drop that was only a few thousand votes?

[00:39:15]

Is it lighter? Look, who was the philosopher who said: OK, if you have to pick a time and a place to meet someone in New York tomorrow, and you're not able to communicate, you do it at 12:00 p.m. at Grand Central Station. It's a focal point. In Pennsylvania, getting outside the 0.5 percent recount margin kind of became a focal point.

[00:39:38]

And that recount margin was half a percent. So basically that one vote drop got Pennsylvania to half a percent in Biden's favor, and that's why all the networks called it. But sorry, go ahead with the philosophy.

[00:39:48]

Yeah, look, there is a degree of public perception, or politics if you want to call it that, here. Clearly, Pennsylvania was callable 24 hours sooner than it was called, if not even a little bit before that. Clearly, we knew that these mail votes were going to come in very, very heavily for Biden, and we knew roughly how many mail votes there were. Clearly, once he pulled ahead in Pennsylvania the previous night, it was pretty safe to say that he would expand his lead.

[00:40:19]

But, you know, the networks are striking a tough balance here. On the one hand, Trump is making these fantastical and incorrect claims; on the other hand, you don't want to look like you're making a call that's premature, based on a statistical projection instead of actual votes being counted, even though that's kind of what they do for other states. You certainly don't want to have even the one-in-1,000 chance that you'll wind up being wrong.

[00:40:41]

So it was called at what, at the end of the day, was an appropriate time to call it. I would have probably called it a little sooner if I were running a decision desk, and I think there's not much doubt about Pennsylvania. Other states: Arizona got called too early by Fox and the AP. I think they'll live to talk about it, right? But that was a premature call, for sure. The Pennsylvania call was a little bit late. But look, it was interesting, because the fourth wall between decision desks and the rest of the media is not as tall as it once was.

[00:41:12]

I mean, the reason why these walls exist is because of 2000, when there was kind of a race among the networks to call Florida, and therefore the election, right? And they prematurely called Florida twice. That's a colossal screw-up. And so the desks are given more independence; they're not worried about what other networks are doing. But at some point on Saturday morning, probably before that, probably midday Friday, the main story about the election became: why hasn't the race been called yet?

[00:41:38]

It became about the decision desks and the networks, and there's no way that they're not reading Twitter or getting messages from their friends or emails. There's no way they're not understanding, now, that their timing on when to call a race is itself a story. You know, I don't know if there was coordination between different shops; I assume this will get reported out at some point. I kind of deliberately do not contact the decision desk at ABC, so I am walled off from it.

[00:42:11]

Sometimes you kind of hear things through the rumor mill or whatnot, or through intermediaries. But I'm trying to respect their process. But I'll put it like this: Pennsylvania was callable for a long time. People needed an excuse to call it as these votes very slowly trickled in, and the recount margin was as good an excuse as any.

[00:42:32]

All right. Well, I'm just going to ask a couple more questions here. We did get a lot more listener questions than that, so maybe we can not retire Model Talk quite so soon and continue this as we get more data and keep answering those questions. But we did get several questions about incumbency advantage. This one is a little more nuanced. One listener asks, a question for the pod nerds: is all the talk of how unusual it is for incumbent presidents to lose re-election justified, considering the rather small number of data points?

[00:43:02]

So, after this race ended up being closer than some expected, people are talking about, oh, well, it's really hard to beat an incumbent anyway. So let's ask: is that good analysis?

[00:43:12]

That's a great question. In our model we actually go all the way back to 1880 to look at how powerful incumbency is. And if you go back to 1880, incumbency is maybe not as powerful as it was during parts of the mid-20th century in the United States. In fact, we assume that there's only like a two-point advantage from incumbency, or something like that. Also, in races for Congress, incumbency seems to have been on the decline, and there you have a lot more data points to measure it.

[00:43:38]

So it is real, but less than it used to be, although this year you had a lot of incumbents who charged back and won at the end. But yeah, I'm not sure that incumbency is that powerful an advantage anymore, which I guess means that Joe Biden can't sleep so easily about getting re-elected in 2024, if he is to run in 2024. One other thing to note, though: both Obama in 2012 and Trump in 2020,

[00:44:01]

the last two times you had an incumbent on the ballot, beat their polls. So could there be a pro-incumbent polling error? Maybe, because people from the challenger's party are more geared up and excited. That's a possibility.

[00:44:16]

And lastly, is there a way to prove or disprove some of the voter fraud conspiracy theories that we've seen, using statistical analysis?

[00:44:27]

I haven't seen anything rise to the level of being worth people's time to disprove.

[00:44:35]

Fair enough. Just to reiterate, a lot of the litigation is based on nothing, which is what the courts have basically said so far.

[00:44:42]

Yeah, I'm a big believer in being careful about what battles you pick and what you dignify, and in the Streisand effect (look up the Streisand effect if you don't know it): you can draw more attention to very shitty analysis by trying to debunk it. So I'd be careful with that. I do think we want to look, by the way, for the polls and overall, at what the effects of mail voting were, how many ballots were rejected, how many ballots were lost, because it has implications for what the parties tell their voters going forward.

[00:45:09]

And it may explain a little bit of the polling error, potentially, too. So looking at the effects of mail voting, I think, is worthwhile. All right. Well, we've been going for a while, so let's leave things there, and we can figure out when we want to do this again, maybe before too long. But that's it for now. So thank you. Thank you for entertaining a lot of my questions and our listeners' questions, and hopefully they helped answer a lot of what's been on people's minds in terms of polling and forecasting.

[00:45:38]

All right. Thank you, Galen. Talk to you soon. Talk to you soon. My name is Galen Druke. Tony Chow is in the virtual control room. Claire Bennett and Gary Curtis are on audio editing. You can get in touch by emailing us at podcasts@fivethirtyeight.com. You can also, of course, tweet us with any questions or comments. If you're a fan of the show, leave us a rating or review in the Apple Podcasts store, or tell someone about us.

[00:45:59]

Thanks for listening and we'll see you soon.