[00:00:00]

I just wanted to let you know that given it's Friday, I wore my orchestra just for you. Oh, thank you. Which our listeners on YouTube will be able to appreciate. I'm a big fan of the orchestra.

[00:00:19]

Hello and welcome to the FiveThirtyEight Politics podcast. I'm Galen Druke. I'm Nate Silver, and this is Model Talk.

[00:00:30]

I love that enthusiasm. We've now got just twenty five days until Election Day. How are you feeling? I don't know how to answer that question. Are you sleeping? Some days I sleep in.

[00:00:43]

Most days I can't. Most days, even if I don't have a commitment early in the morning, my brain wakes me up. There are new polls and stuff, new headlines. Right, today I kind of slept in a little bit. All right, that's good. But yeah, I don't know. I mean, from a forecasting standpoint, this is one of the less stressful elections, right? Either the polls are really wrong or they're not, you know what I mean?

[00:01:06]

And if they're not really wrong, then Biden's probably going to win. And we can't really do anything about the polls being really wrong except quantify the chance that they'll be really wrong. Now, there are other things that are outside the scope of our model, like what if there are massive attempts to keep votes from being counted? Right. We've talked about this stuff, and I think people can get a little paranoid about it. But all the things that a regular citizen would worry about are the things that I would worry about.

[00:01:31]

But they're kind of not really in the scope of our forecast.

[00:01:35]

To start things off, I'm going to lay out where things stand according to our forecasts, and then we can dig into what's changing and why, and also answer some listener questions. We've got a ton of listener questions. So our presidential forecast shows Biden with an 85 percent chance of winning the election. And our national polling average for the presidential race shows Biden leading by 10 points. As of today, that's the first time our national polling average for the presidential race has shown Biden leading by double digits.

[00:02:05]

Our Senate forecast shows Democrats with about a 70 percent chance of winning that chamber, depending on which forecast model you look at. And then our newly launched House forecast shows Democrats with a 94 percent chance of maintaining their control of the House. So a lot has happened recently. We had the debates. We've had Trump's covid diagnosis and the general White House outbreak. We had Trump's announcement that he would not try to pass a stimulus package before the election. Just yesterday, the FBI foiled a far-right terrorist plot to kidnap Governor Gretchen Whitmer and start a civil war.

[00:02:43]

Before all that, there was the Trump tax return story and the Supreme Court vacancy.

[00:02:48]

There is so much going on. Is it possible to discern what's responsible for the movement towards Biden and Democrats as of late?

[00:02:57]

Well, when all the news happens on top of one another, it becomes hard to be super precise about what's causing it. But there's reason to think that both the first debate and Trump's covid diagnosis were harmful to Trump. For the debate, we say that because, first of all, there were polls of debate watchers asking, what did you think? And most polls of those debate watchers found people thought Biden did better, some by substantial margins. So you might expect the winner of the debate to gain ground in head-to-head polls, to gain a point or two, typically.

[00:03:28]

On top of that, you have lots of polls showing that Americans think that Trump didn't do an adequate job of taking precautions around covid, even after he's gotten sick. He's been pretty cavalier about some things, like whipping the mask off when he returns to the White House and stuff like that. So that seems to be hurting him as well. That's a bit less certain. But, yeah, I mean, look, because the polls have been so stable, it seems very dramatic.

[00:03:53]

I guess when you go from a seven-point Biden lead to a ten-point Biden lead, a three-point swing is not very big. If roughly one in every seventy-five Americans switches from Trump to Biden, then Biden's lead grows by three points. Right? It's only one out of seventy-five people. It's pretty small. So with these cataclysmic events it's like, oh my gosh, a double-digit number, but things haven't really shifted that much.
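The arithmetic here can be sketched quickly: each voter who switches sides moves the margin by twice their share of the electorate, since the old candidate loses a vote and the new one gains it.

```python
def margin_swing(switch_share: float) -> float:
    """Change in margin (percentage points) when switch_share of the
    electorate flips from one candidate to the other. One switcher
    moves the margin twice: minus one for the old side, plus one
    for the new side."""
    return 2 * switch_share * 100

# Roughly one in every seventy-five voters flipping toward Biden:
swing = margin_swing(1 / 75)
print(round(swing, 1))  # 2.7 points, so a 7-point lead becomes roughly 10
```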

[00:04:14]

Is there any precedent for a candidate losing a 10 point lead in three weeks?

[00:04:22]

No, not in the presidential race, but we don't have that large a sample size, and there are other precedents that are relevant. So in 1980, polls at this point were very close in the presidential race, with Ronald Reagan only narrowly ahead, and Reagan wound up winning by 10 points. So it's a big swing. That would be enough, if it went in Trump's direction, to maybe give him the Electoral College. In nineteen forty-eight, I believe the final Gallup poll had Dewey up by five points or so, and Truman won by four points.

[00:04:51]

So that's a nine-point swing. So these things have happened. They're pretty rare, right? You don't have many cases where everything lines up perfectly and a candidate blows a 10-point lead. You know, look, we don't have that large a sample size. So the model's trying to infer from a handful of data points, and we can't be that literal about whether someone has blown a 10-point lead. I mean, not many candidates have had a 10-point lead to blow.

[00:05:11]

You mentioned the Electoral College, which brings us to a question that we've spent a lot of time talking about over the past couple of years. Biden has a 10 point lead nationally, of course.

[00:05:21]

Are we seeing similar movement in state polling? What's the gap like between national polls and the likeliest tipping point state at this point?

[00:05:30]

Yeah, I'd say we have not actually seen quite the spectacular numbers in state polls for Biden that we've seen in national polls. We've seen certainly very good numbers. And there are exceptions, like the Quinnipiac polls that had Biden up double digits in Florida and in Pennsylvania. But there are still a lot of state poll results that look kind of like they did before, which were consistent with a seven- or eight-point lead. Our polling average of national polls has Biden up 10 nationally.

[00:05:58]

Our forecast has Biden projected to win by eight because it's not a snapshot, it's a forecast, and there are a couple of important differences. One is that it's still hedging a bit toward the model's economic and incumbency prior, which would point toward a closer race. Two is that it hedges a bit on polling just after a debate. So it waits a little bit to see if that shift is sustained. We're now getting into the period where it kind of has been sustained, but that matters a little bit.

[00:06:22]

And third is that it mostly is based on state polls, not national polls. The state polls might point toward more like an eight- or nine-point Biden lead, I would guess, which we're discounting to eight. So an eight-point projection instead of a 10-point lead, which is not trivial, I guess, in the scheme of things, but also not that large a difference.

[00:06:40]

Yeah. To what extent does the model take into consideration that the opposite happens: that instead of reverting a bit to the mean based on some of the fundamental indicators, the bottom falls out for Trump?

[00:06:53]

Yeah, so here's one way to look at it. Our model currently shows a thirty-five percent chance of Biden winning by double digits. Since right now he's on the cusp at 10 points in national polls, although again we think it's probably more like nine points based on state polls, it's kind of saying, hey, there's a thirty-five percent chance that things get even worse for Trump and a sixty-five percent chance that things tighten a bit. Or again, since that's based on state polls, not national polls, maybe it's more like 40-60: a 40 percent chance that things get worse for Trump and a 60 percent chance they get better. That's the model's interpretation of the data.

[00:07:27]

We've had a lot of listeners notice that in our projected possibilities for the election, there's kind of a significant bump in the likelihood of Biden getting around four hundred plus votes in the Electoral College. Is that kind of the threshold where Biden starts winning Georgia and Texas? And how likely do you consider that? Is that the thirty-five percent chance we're talking about?

[00:07:53]

So there's a group of states that are kind of conventionally agreed upon to be competitive. Of those states, the ones that are probably hardest for Biden to win are Georgia, Ohio, Iowa and Texas. If Biden were to win all of those states, then he has four hundred and thirteen electoral votes, along with all the states that are already kind of in the lean-Biden category. So that includes Florida, Arizona, whatnot. Right. But if Biden wins all the kind of conventional competitive states, he gets to four hundred and thirteen electoral votes.

[00:08:24]

If he wins all of them except Texas, he gets three seventy-five. So that's why you see kind of a cluster of outcomes: either three seventy-five or four thirteen are fairly likely outcomes if Biden is having a good night nationally. Once you get past Texas, then there's kind of a big gap before the next states that Biden could win, and to the extent there are states there, they're kind of small, low-electoral-vote states. South Carolina is nine electoral votes.

[00:08:51]

Alaska is three. Kansas is six. Right. So it's four thirty-one if he wins all those, and after that it gets even harder. But that's kind of like where we are. We don't think it's going to get to five hundred and something electoral votes; you kind of cap out, even with a landslide, in the low four hundreds, probably.

[00:09:09]

I mean, what is the likelihood of that? Is that the thirty-five percent chance that you were talking about before, or is that less likely than thirty-five percent for four thirteen? No.

[00:09:17]

Well, I mean, if Texas is the key here, Biden has a 30 percent chance, three zero, of winning Texas. And then South Carolina, he has a 15 percent chance; Kansas, eight percent; Alaska, twenty-two percent, you know. So maybe, I mean, if you take Texas as the marker of a true epic landslide, it's a 30 percent chance. If you take Georgia as the marker, Georgia is actually a tossup right now; in our forecast it's 50-50.

[00:09:43]

So I've buried the lede a little bit for today's Model Talk. And that's perhaps because while it's new, it's not maybe worthy of the lead. But we launched our House forecast this week, and it was not all that suspenseful. You know, Democrats have, according to the model, a ninety-four percent chance of holding that chamber.

[00:10:03]

But does our forecast think it's likelier that Democrats will gain seats or lose seats in the House?

[00:10:10]

So actually, our model says it should be very close to the current composition on average. It forecasts Democrats to have two hundred and thirty-five seats after the election. They currently have two hundred and thirty-two, although note that there are five vacancies. So basically it would predict no change on average. One thing actually about the House, as compared to twenty eighteen: then we had this very, very broad map with lots and lots of competitive races.

[00:10:34]

It's a narrower map this year. The GOP has not made that many efforts to put seats in play, potentially. On the other hand, Democrats won a lot of the races that were close, that were plausible to win. They won those, generally speaking, in twenty eighteen. So it's not as expansive a map, at least as far as we know. On the other hand, in a presidential year, you don't have as much polling of House races.

[00:10:57]

So there can always be races that are competitive, but we don't realize it. I mean, the model tries to account for that: if there's not polling in a race, it knows there's more uncertainty. So it's smart in that respect, but there could be kind of some races flying under the radar.

[00:11:10]

What's the likelihood that Democrats win a trifecta of the presidency, the House and the Senate?

[00:11:16]

By the way, we're going to make some of this information publicly available. We can see the joint probabilities, but it's around 60, six zero, percent currently. If Democrats have a seventy percent chance of winning the Senate, most of that is in cases where they have won the presidency. And they are pretty big favorites to win the Senate conditional on winning the presidency. So the net effect is like a 60 percent chance, roughly.
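The conditional logic Nate describes can be sketched with rough joint-probability arithmetic. This is illustrative only, not the actual 538 simulation: the 0.80 conditional Senate probability is my assumption standing in for "pretty big favorites conditional on winning the presidency," and the House is treated as roughly independent because it is near-certain.

```python
# Headline probabilities from the episode, plus one assumed conditional.
p_pres = 0.85               # P(Dem president), from the forecast
p_senate_given_pres = 0.80  # assumed: "pretty big favorites" conditionally
p_house = 0.94              # P(Dem House), from the forecast

# Chain rule, treating the House as approximately independent:
p_trifecta = p_pres * p_senate_given_pres * p_house
print(round(p_trifecta, 2))  # 0.64, in the "around 60 percent" range
```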

[00:11:36]

All right. I want to get to some of our listener questions because we got literally hundreds of listener questions since our last model talk.

[00:11:44]

And let's begin with this, probably one of your favorite questions, Nate, but something that we've heard about a lot from listeners. So the question is, how does Biden's position right now compare with Clinton's in 2016? Because Clinton did get up to an 85 percent chance of winning the election in October of twenty sixteen, which is where Biden is at now. So what's the difference?

[00:12:08]

Well, there is no difference in the sense that Trump wins 15 percent of the time either way. But there are different reasons why Clinton was at 15 percent versus Biden at 15 percent. One is that our model does actually assume that there is a bit more uncertainty this year. If you remember, we look at factors that include things like economic uncertainty. We look at how many major headlines there are in The New York Times. Right.

[00:12:33]

And because there's so much crazy news, you know, the model still does assume that things could flip around a little bit. We don't know which direction. Number two, the model assumes that because of mail voting, there's a bit more unpredictability in turnout. Unpredictability in turnout also implies unpredictability in the margins. So there are reasons to think that there's a bit more error this year, potentially. At the same time, Biden's lead is much larger than Clinton's was.

[00:13:01]

She led by six or seven points; he leads now by 10 points. You know, the model is discounting that Biden lead, by the way. It's saying, hey, we really think it's going to be an eight-point win, not a 10-point win. If we kind of get another couple of weeks and Biden sustains a 10-point lead in national polls, and that 10-point lead translates in a clear way into state polls in a way that it maybe hasn't yet, then the model will get more confident.

[00:13:24]

It might get Biden up to ninety five percent. But yeah, it's a little bit of an apples to oranges comparison, I think.

[00:13:32]

Actually, I think relatedly, given what you just said: we got another question that asks, is there a minimum chance the model would give to President Trump in the situation which you mentioned? If polls keep moving towards Biden, is there a baseline probability that Trump would win beyond which the model will not go?

[00:13:51]

I mean, technically no. Like, I think we give Biden a 100 percent chance of winning Washington, D.C., for example. So the model has what we call fat tails, which sounds like something weird. I don't know what it sounds like; I think for our listeners it's exactly what it sounds like.

[00:14:10]

Instead of having a normal distribution, where the tails of the bell curve get very, very thin, it uses a Student's t distribution, where there is still some probability several standard deviations away from the mean. You know, instead of having a one-in-one-thousand chance of a four-standard-deviation error, it might be one in one hundred or something instead. This is a very technical explanation. What it basically means is, yeah, once you get up to like ninety-seven or ninety-eight percent or something, it's pretty hard for the model to get beyond that.
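The fat-tail point can be illustrated numerically with only the standard library, using the closed-form CDF of a Student's t with three degrees of freedom. The choice of three degrees of freedom is purely for illustration here (it is not a parameter quoted in the episode); the contrast with a normal distribution is what matters.

```python
import math

def normal_tail(x: float) -> float:
    """P(Z > x) for a standard normal, via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def t3_tail(x: float) -> float:
    """P(T > x) for a Student's t with 3 degrees of freedom,
    using its closed-form CDF."""
    u = x / math.sqrt(3)
    cdf = 0.5 + (u / (1 + u * u) + math.atan(u)) / math.pi
    return 1 - cdf

# A 4-standard-deviation "polling error":
print(normal_tail(4.0))  # ~3e-5: vanishingly rare under a thin-tailed normal
print(t3_tail(4.0))      # ~0.014: about 1 in 70 under these fat tails
```

The same-sized error goes from a once-in-tens-of-thousands event to a roughly one-in-seventy event, which is why the forecast caps out well short of certainty.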

[00:14:42]

I don't think we're quite yet in the fat-tail part of the curve. If Biden's at eighty-five, or if he gets to ninety-five, it's not quite that either. But look, it's a pretty conservative forecast, we think, in certain ways at least, provided that the kind of assumptions of the model are satisfied, which importantly include that, hey, there's some reasonable effort to count everyone's votes. But I don't know, we're not going out on a limb here, I don't think, at all, because he's ahead by 10 points.

[00:15:07]

Our fat tails are what make us conservative. Yeah, ten points.

[00:15:11]

We're, by the way, modeling this on elections going back to nineteen thirty-six. If you based it on recent elections, you'd be a bit more confident. That includes Dewey versus Truman, whatever else. But Biden is way, way, way ahead in the polls. Can Trump do something to make the race tighter? Actually, our model kind of assumes that the race will get a little tighter. I don't know if I personally have a feeling either way.

[00:15:31]

Right. I mean, I think Trump is in a little bit of a downward spiral right now. He's saying a lot of crazy things, and voters seem to have discovered that these things are crazy, but we'll see. Maybe he is very good in one of these debates (I think there's only one debate left at this point) and, you know, maybe a covid vaccine is announced, although I don't think people would trust Trump to be the bearer of that news.

[00:15:52]

I mean, I don't know. It's not easy to imagine him winning. But, you know, part of doing a model is to say, hey, we're not going to try to overthink this too much, and we think, after having thought about all these things very carefully, that there is a 15 percent chance that he wins somehow despite all his problems.

[00:16:05]

We've got a question from Katherine, which is an oldie but goodie and something that we should reiterate because it gets asked a lot. How does the model take into account early voting, if at all? In the past, we've gotten this question, but we've never seen quite as much early voting as this year. By some estimates, 60 percent of the vote will be cast by November 3rd. So is there any way that we're trying to take into account early voting this time around?

[00:16:33]

No. And it's a little hard to know what effect it might have on the polls, right? I mean, in theory, polls should account for early voting. But how do you do that? I don't know. In a poll, for example, should you take someone who has already voted as being more certain to vote than a quote-unquote likely voter? I don't think polls do that for the most part. You could argue they could, because some people who are likely voters don't actually wind up voting.

[00:16:57]

On the other hand, you can have ballot spoilage from mail votes at a higher rate than from in-person votes, though with early in-person voting you don't have any more ballot spoilage than with late in-person voting; if anything, you have more time to correct a defect, and the polling place is less crowded. But no, I mean, we basically rely on polls to account for early and mail voting. If millions of people have already voted, then in theory a late shift is less likely. On the other hand, the people who vote early usually are pretty strong partisans.

[00:17:25]

The people who are undecided tend to wait. And so therefore you're kind of maybe locking in votes you already had. But the short answer is no, we don't do anything specific for early voting.

[00:17:35]

And perhaps a related question. Alex asks, is there an adjustment based on current vote-by-mail rejection rates by state or district? We've heard a lot about certain states rejecting mail-in ballots already, particularly broken down by demographics. There was a lot of attention paid, in North Carolina, to a higher rejection rate for Black voters than white voters. Does our model take into account any of that information?

[00:18:00]

Well, let me put it this way. It does in a way: we assume that because of mail voting there is more uncertainty. But let me explain why I think it's a bad idea to assume that necessarily hurts Democrats. So generally speaking, when you talk to campaign operatives, including Republican operatives, they want people to vote by mail, or at least they don't mind it. They might prefer voting early in-person, but generally they want to encourage mail voting.

[00:18:28]

The reason why is that voting by mail is more convenient, and there's no chance that something comes up on Election Day that results in you not voting. Right? So you blow a tire on your car, or you get sick, especially in a time of covid, or your kid has an emergency, or you just kind of, you know, you're hung over on a Tuesday and forget about it because you've been drinking heavily on a Monday night.

[00:18:50]

The line is too long at the polling place. There's a problem at the polling place. They don't recognize you. You go to the wrong polling place. Right. So generally speaking, even though there is a higher rate of ballot spoilage by mail, guaranteeing that you vote is a good trade-off. Right. So maybe two percent of the ballots get spoiled by mail; by the way, half a percent or something get spoiled in person anyway. But if you're five percent more likely to vote or something, then it's a positive tradeoff on balance.
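The trade-off arithmetic can be made explicit with the rough numbers cited: a mail ballot is always cast but spoils two percent of the time, while an in-person voter misses Election Day five percent of the time and spoils half a percent of the time. These figures are the episode's ballpark estimates, not measured rates.

```python
# Probability a vote is actually counted under each method,
# using the rough numbers from the discussion.
p_counted_mail = 1.00 * (1 - 0.02)        # ballot always cast, 2% spoiled
p_counted_in_person = 0.95 * (1 - 0.005)  # 5% never make it, 0.5% spoiled

print(round(p_counted_mail, 3))       # 0.98
print(round(p_counted_in_person, 3))  # 0.945, so mail wins on balance
```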

[00:19:16]

And in states that switch to mail voting, generally speaking, turnout increases. So there's a chance of an error in either direction. Right. You're going to have some states, probably, where there is a lot of ballot spoilage and a lot of people voting by mail for the first time. Right. And that could cause Republicans to beat their polls. It could also be true, though, that you already have these locked-in Democratic votes, and among likely voters who are mostly voting in person, five percent of them don't show up.

[00:19:39]

And so therefore Democrats overperform their polls. So it's a source of uncertainty, and the model does account for that. But it isn't necessarily like you have to add a point to the GOP in every state. It could be the reverse. Again, you could look at different states that have more cumbersome rules, like Pennsylvania, where to avoid a naked ballot you have to put your ballot in a secrecy envelope and then put that in the other envelope that you send back to the board of elections.

[00:20:03]

Right. That could be an issue. It's something that I kind of worry about. But there's also downside here for Republicans, with all these kind of banked Democratic votes. If you then have a less-than-spectacular Election Day turnout, it becomes a big problem for the GOP.

[00:20:18]

So Jay, one of our listeners, noticed that there seems to be an odd disconnect between President Trump's approval ratings and his poll numbers recently. I mean, in the past couple of days, his approval rating has started to take a downturn as well. But as Biden's lead has been growing in the national polls, Trump's approval had actually been narrowing the gap, improving somewhat. What's up with that is essentially Jay's question.

[00:20:43]

Well, I think when you get into the heat of an election campaign, approval and the ballot become the same thing. No one's going to say, oh, I disapprove of Trump but I'm going to vote for him anyway, or vice versa. Whereas kind of early in the election cycle, people will say, you know, I'm a Republican, but there's this thing that I would like to see the president do better on. So there are other models that use approval rating as an input.

[00:21:03]

We think it's a mistake, for that reason. We think approval rating is actually kind of more of a lagging indicator of the vote preference than the vote preference itself, obviously. And it happened in 2012, too, where Obama's approval rating flipped in the final month or two to kind of match his margin in head-to-head polls versus Romney. One other thing: there's actually also a bit of a gap between approval and favorability, where there are a handful of voters who have a negative personal impression of Trump even though they approve of his presidential conduct.

[00:21:34]

Those voters actually seem mostly to prefer Biden. There aren't a lot of them, but there is a little bit of a gap there, in which Trump's favorability rating is lower than his approval rating by a point or so.

[00:21:44]

So essentially the approval rating and the national popular vote polls are converging.

[00:21:52]

So, yes, his approval has been going up, but it's matching basically where the polls are. So we got a question from Joseph that zeroes in on a specific Senate race, and that is in North Carolina: Cunningham versus Tillis. And the question is, has that race changed at all since the sexting scandal? And of course, Cunningham, the Democratic candidate there, acknowledged that he was sexting a political strategist. He's married. You know, I think in past forecasts we've had like a lever that we can pull when there's a scandal in a race that makes things uncertain or handicaps someone.

[00:22:28]

How is our forecast and how is the polling processing that sexting scandal?

[00:22:33]

So there's no lever to pull, but we actually do have, as part of our fundamentals forecast for congressional races, a scandal variable. And if a candidate is involved in a scandal, and this clearly qualifies, then that affects the prior that we use for the race. And in this case, Cunningham did actually fall a couple of points based on that scandal news emerging. However, in races with a lot of polling, the model mostly defers to the polling anyway.

[00:23:02]

Right. And polling there has not shown a big impact. We didn't make too many changes to the midterm model for the congressional model this year. But one thing we did find is that the impact of scandals is less than it used to be, because of hyper-partisanship. But in that race in particular, there have been a number of polls conducted since the sexting scandal. And I mean, they look like the polls beforehand: some polls show a toss-up, some polls show Cunningham ahead.

[00:23:27]

But that's kind of been the story all along. Doesn't seem to have shifted things that much.

[00:23:31]

We have a more esoteric question next. Henry writes: I know that 538 evaluates its models by looking at many races over time and seeing if the models are well calibrated, meaning when we give a candidate a twenty-five percent chance of winning, if we look at all of our forecasts over time, does that happen twenty-five percent of the time? And we've talked about this on the podcast before; in general, that is the case.

[00:23:56]

Our models are well calibrated. But when we ask the specific question, how would you define success or failure for these twenty twenty models in isolation, is there a way of doing that without just kind of adding it to our whole corpus of forecasts?

[00:24:11]

Not really. I mean, for the House, for Congress, you have races that are a little bit more independent from one another. So you can kind of see, hey, are your House probabilities well calibrated in all 435 districts, although even there there are correlated errors. So, no, you can't. The only way you can really tell very much from one forecast is if you have an extremely confident forecast that doesn't come true. So the models that had Trump with a ninety-nine percent chance of losing in 2016, you can pretty categorically say those were wrong. And I can present a more kind of Bayesian explanation if people want.

[00:24:46]

Right. But like, if a model that's being run for the first time says 99 percent and the one percent happens, then the odds are that model was wrong. But apart from that, you just can't learn very much from one year's worth of forecasts, which is frustrating, but it's just the truth.
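The calibration check being described can be sketched in a few lines: pair each forecast probability with whether the event happened, bucket forecasts by probability, and compare each bucket's stated probability with the observed win rate. The forecast data here is made up for illustration.

```python
from collections import defaultdict

# Hypothetical (forecast probability, did the candidate win?) pairs.
forecasts = [
    (0.25, False), (0.25, False), (0.25, True), (0.25, False),
    (0.80, True), (0.80, True), (0.80, True), (0.80, False),
]

# Group outcomes by the probability that was forecast.
buckets = defaultdict(list)
for prob, won in forecasts:
    buckets[prob].append(won)

# Well-calibrated means the observed rate tracks the stated probability.
for prob in sorted(buckets):
    outcomes = buckets[prob]
    observed = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%} -> observed {observed:.0%} (n={len(outcomes)})")
```

With real forecasts, correlated errors across races mean these buckets are not independent samples, which is exactly the caveat Nate raises.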

[00:25:04]

Next question: why are undecided voters assigned evenly in our forecast? So essentially what this question gets at is, we forecast the popular vote on Election Day in specific states and nationally as well. And in order to do that, we have to make assumptions about how undecided voters will break. And I guess this listener is asking, why does it seem as though they're assigned evenly instead of breaking in one direction or another?

[00:25:35]

Because empirically, that's what works the best. There we go. Love a short answer. Next question.
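The even allocation being asked about amounts to a one-liner. A hedged sketch, not 538's actual code, with made-up topline numbers:

```python
def project_even_split(biden: float, trump: float):
    """Project final vote shares by splitting the undecided share
    evenly between the two candidates, which leaves the margin
    between them unchanged."""
    undecided = 100.0 - biden - trump
    return biden + undecided / 2, trump + undecided / 2

# Hypothetical poll: 52-42 with 6 percent undecided.
print(project_even_split(52.0, 42.0))  # (55.0, 45.0): the 10-point margin holds
```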

[00:25:41]

If Jaime Harrison is nearly even in the polls, why does the model still have him as a big underdog? And this relates to the South Carolina Senate race, where Jaime Harrison is challenging Senator Lindsey Graham.

[00:25:56]

So, it has him at a twenty-five percent chance; I wouldn't call that a big underdog. So a couple of things. There's just the slightest, tiniest little Lindsey Graham lead in the polling average. I mean, not by much, really, maybe a tiny lead. But there are other factors. We look at the fundamentals, and in a state like South Carolina, even in what may be a very Democratic year, it's still a pretty red state. And so if you're using any dose of the fundamentals at all, you can kind of see races where it seems like, you know, Democrats are coming close and then they just don't get there.

[00:26:26]

And in these red states there have been a lot of Senate races like that in recent years. Also, you know, we use expert forecasts in the so-called deluxe version of our model, which is the default version. Those have shifted some; I think some forecasters now have the race as a toss-up, but others still have it as lean Republican. So, yeah, basically the model is saying, you know what, polls are very close here, maybe the slightest Lindsey Graham edge.

[00:26:48]

The fundamentals are more strongly for Lindsey Graham. So he winds up being a three-to-one favorite. I will say, if we keep getting polls showing a toss-up there, then the model will weight those polls more and more and the priors less and less. But yeah, in a red state, sometimes Democrats have trouble sealing the deal, and likewise in a blue state for Republicans, if you look back at House races. And also these states sometimes don't have terribly accurate polling as compared with swing states. I really like this next question, which we got from Theo.

[00:27:18]

It is: which data, in your experience, do political pundits most overvalue and undervalue in terms of its predictive power in forecasting the election? So if we're watching cable TV, folks are talking about different indicators, explaining why their candidate is going to win or lose, or there are activists promoting one party's candidate over another, or just general analysis in the media ecosphere. What's over- and underutilized?

[00:27:48]

So let me talk about a couple of pet peeves, things that start to get cited at this point in the campaign that I think are overutilized. I think people overutilize, and this gets to a question from earlier, data on early voting. I think people try to parse that in ways that aren't always justified by the evidence. You know, look, Democrats have a huge lead in early voting based on states that break those numbers down by partisanship. They also have a huge lead among early voters in the polls.

[00:28:17]

And so it's not telling you something that the polls don't show. Another category of things: reports of what internal polls say don't add, in my view, a lot of value, provided you have public polling. Right. If you don't have anything from the public polls, then it would make sense to give some credence to reports of internal polls. That's why in races for the House, for example, we have to use internal polls; look, there's not much public polling in House races. But kind of the average internal poll that is publicly released is usually biased toward the party that conducts it.

[00:28:48]

And it's not as accurate as nonpartisan polls. You also sometimes hear reports or rumors of internal polls, and, you know, that's often just spin fed to gullible reporters. If you hear reports of internal polls, someone wants you to see that report, and you have to understand their motivations. Sometimes it's transparent, sometimes it's not. But I don't want to have to play poker when I'm looking at polls and decode somebody's motivation. I'd rather just look at nonpartisan public polls, where it's more straightforward, or at the very least internal polls whose actual numbers are publicly released.

[00:29:23]

We do use those in our model, but we know empirically roughly how much to adjust them.
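The adjustment Nate describes can be sketched as a simple shift. This is a toy illustration only; the bias figure and function below are assumptions for the sake of the example, not 538's actual numbers or method.

```python
# Toy sketch of discounting a publicly released internal poll.
# The average bias figure is a hypothetical placeholder, not 538's estimate.
ASSUMED_INTERNAL_POLL_BIAS = 4.0  # points toward the sponsoring party (illustrative)

def adjust_internal_poll(margin_for_sponsor: float,
                         bias: float = ASSUMED_INTERNAL_POLL_BIAS) -> float:
    """Shift a sponsor-released internal poll toward the opponent by the
    empirically estimated average bias."""
    return margin_for_sponsor - bias

# A campaign releases an internal poll showing its candidate up 6 points;
# after the adjustment, the model would treat it as roughly +2.
print(adjust_internal_poll(6.0))  # 2.0
```

The point is just that a known, roughly constant bias can be corrected for, which is why such polls still carry some information.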

[00:29:28]

Here's another good question, which is kind of fun. If we lived in a country like Australia with turnout rates of over 90 percent, how much less uncertainty would there be in the model?

[00:29:40]

That's an interesting question, because we would be relying less on pollsters' turnout models, because you don't have to decide who's a likely voter.

[00:29:47]

They almost all are. Yeah, I know. So in general, the lower the turnout in an election, the harder it is to predict, because then you're predicting two dimensions. Right? Are you accurately capturing who people will vote for, without any bias in your sample, and also who's going to turn out? So, yeah, it would be a little bit easier. I mean, turnout is somewhat high in American presidential elections.

[00:30:06]

But, yeah, it would make things easier. I don't know, it might reduce the uncertainty by 10 percent, to throw a number out there at random; that would be my intuition. Maybe some Australian listeners can tell us how accurate polls are in Australia.

[00:30:16]

Yeah, we have periodically received requests that we cover an Australian election. We've meandered across the pond to UK elections; we haven't gotten to Australian elections quite yet, but maybe someday. We do have a good amount of Australian listeners. We get emails a lot from Australia. So shout out to everyone in Aussie land, we love you.

[00:30:37]

Let's take some rapid fire questions before we wrap up here.

[00:30:43]

First question: at this point, how much of the president's odds to win in the forecast is tied to his polling versus the amount of time until Election Day?

[00:30:51]

So I haven't recently run the forecast where I tell it it's Election Day. My guess, though, is that given Biden's current lead, Trump would have around a five percent chance in an election held today, according to our model. So if he has a 15 percent chance overall, that means two thirds of it is from the chance that the race will tighten, versus the remaining one third from there being a massive, massive polling error.
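Nate's back-of-envelope decomposition works out as follows (the 5 percent and 15 percent figures are his rough guesses from the conversation, not exact model output):

```python
# Decomposing Trump's overall win probability into its two sources,
# per the rough figures given in the conversation.
p_today = 0.05    # chance in an election held today (pure polling error)
p_overall = 0.15  # model's overall chance with time still remaining

from_tightening = (p_overall - p_today) / p_overall  # share from the race tightening
from_poll_error = p_today / p_overall                # share from a massive polling error

print(round(from_tightening, 2), round(from_poll_error, 2))  # 0.67 0.33
```

That is, about two thirds of Trump's remaining chance comes from the possibility that the polls move before Election Day, and one third from the polls simply being very wrong.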

[00:31:17]

Next question: what kind of results would a 2016-style polling error produce at this point?

[00:31:24]

You would still get a pretty big Biden win if the state polls were as wrong as they were in twenty sixteen in exactly the same states. Right? So you have a big error in Wisconsin, for example, and Biden still wins with three hundred and nineteen electoral votes, and not even that close particularly. You know, that would be enough for Trump to claw back victory in North Carolina. He'd win Ohio, he'd win Georgia, he'd win Iowa. He's ahead in Texas anyway, so he'd win Texas.
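The exercise Nate describes amounts to subtracting each state's 2016 poll miss from the candidate's current margin there. The margins and error figures below are purely illustrative placeholders, not the actual 2016 errors or current polling averages:

```python
# Hypothetical sketch of the "repeat the 2016 polling error" exercise.
# All numbers are illustrative, not real polling data.
current_biden_margin = {"WI": 8.0, "PA": 7.0, "MI": 8.0, "NC": 2.0, "IA": 1.0}
error_2016 = {"WI": 7.0, "PA": 5.0, "MI": 4.0, "NC": 3.0, "IA": 6.0}  # how much polls overstated Dems

# Shift each state's current margin by its own 2016 miss.
adjusted = {s: current_biden_margin[s] - error_2016[s] for s in current_biden_margin}
biden_holds = sorted(s for s, m in adjusted.items() if m > 0)
print(biden_holds)  # ['MI', 'PA', 'WI']
```

Under these toy numbers, narrow leads in states with large 2016 misses flip, while the bigger Upper Midwest leads survive, which mirrors the scenario described above.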

[00:31:49]

But Michigan, Pennsylvania and Wisconsin would hold for Biden even with a twenty sixteen style polling error. Which states are being underpolled?

[00:31:58]

Actually, one great thing about this election is there's been a lot of good high quality polling. You know, I wouldn't mind seeing another poll in Wisconsin. It's actually kind of emerged as a tipping point state. I mean, it kind of was thought to be one for a period of time. Biden's polling was worse in Pennsylvania than Wisconsin; that's now kind of reversed itself. Certainly, Nevada is never all that robustly polled. You know, but there are also some of these more exotic states.

[00:32:23]

Right. I would like to see a high quality poll of Alaska.

[00:32:26]

Everyone always wants to see an Alaska poll, or loves an Alaska poll. New Hampshire, I was going to say.

[00:32:32]

But New Hampshire, we actually have had several polls recently. I'm a little bit curious about Texas. There are a lot of polls of Texas, but it's not always the highest quality pollsters; a lot of partisan polls, I mean by that. Right? So I wouldn't mind seeing what would happen if The Upshot or CNN went back into Texas, or if ABC News/Washington Post went into Texas. I think you probably have some people you might be able to call if you want to see an ABC News Texas poll.

[00:32:58]

We've already decided which states we're going into. I don't remember which is which.

[00:33:02]

You know, I mean, Iowa doesn't get a ton of polling. It gets some, and it's not that important in the tipping point calculations. So I don't know, I feel pretty good about the amount of polling. It's just that we are a little overweighted in general toward national polls as opposed to state polls. Right? The state polls do not convincingly seem to show a 10 point Biden lead. So just a random assortment of state polls would be interesting.

[00:33:27]

All right. Final question here is simply: what is up with Rasmussen? That's the extent of the question. But I assume what they mean is that historically, Rasmussen has put out really good polls for President Trump, relative to the rest of the polling, oftentimes showing him tied nationally. They put out a poll this week showing President Trump down by double digits. I think it was 11 points. Is that correct? Yeah, eleven or twelve or something.

[00:33:54]

So I guess the question is, what's up with Rasmussen? Why did they put out that poll? Are they just trying to cover their ass in case there is a blowout, so they can point to at least one poll before Election Day?

[00:34:05]

I don't want to diagnose it too much. I mean, the mere fact that, like, we're trying to psychologize it... It's not Scott Rasmussen anymore; he's a decent pollster, a decent guy. I don't want to try to psychologize it too much. You know, maybe they're trying to cover their asses. Maybe they're trying to prime people for a big Trump comeback. It's a weird polling firm. Obviously, their polls in general have a strong GOP, Trump-leaning bias, and now and then they'll put out a poll that's, like, kind of way in the other direction.

[00:34:32]

I don't want to diagnose why. They're a little different than, like, the Trafalgar Group, which also tends to have very GOP-leaning polls, but in a very predictable way. It's like: take the 538 polling average and add six points for Trump, and that's the Trafalgar poll. Whereas with Rasmussen, it bounces around more, and I try not to overthink this stuff too much.

[00:34:52]

All right. Well, we'll leave things there. Do you have any final thoughts? What's weighing on your mind as we approach the three weeks until Election Day?

[00:35:01]

My anxiety, Galen. I know, and we kind of have this election anxiety on top of anxiety still about the pandemic and what winter is going to look like in the northeast, where it's cold. Usually I would be looking forward to some type of vacation. I don't know if there's going to be a safe place to go on vacation this year.

[00:35:19]

I'm sort of thinking about that, especially now that we don't have to go into the office. After Election Day, we can abscond to Mexico. Mexico will let us in, right?

[00:35:27]

I think they're one of the only countries that will let us in. Yeah, thank you, Mexico. Shout out to Mexico. Friends of the pod, Mexico. [laughing] I don't know, man.

[00:35:36]

It's uncertainty multiplied by anxiety wrapped up in an enigma of stress, or whatever. But it is true that, like, there's not a lot of ambiguity in the polls about who's winning right now. Yeah. And I will say, as it pertains to election administration, there are essentially different policies being litigated at the state level over how people can vote, what kinds of ballots will be counted and for how long after Election Day they can keep coming through the mail, things like that.

[00:36:06]

Next week, 538 is going to be starting essentially a live blog that's going to be rolling through Election Day, looking at some of these cases and the intricacies of how people can vote in different states around the country. So I would be on the lookout for that. I think that's going to be a really interesting project to spend time on and to observe over the coming weeks. You know, especially if this polling doesn't change all that much, I think we can still expect a lot of activity on that front.

[00:36:34]

And then, of course, we also have Supreme Court nomination hearings next week. So lots to cover, but we'll leave things there. So thank you, Nate. Thank you, Galen. My name is Galen Druke. Tony Chow is in the virtual control room. Claire Bidigare-Curtis is on audio editing. You can get in touch by emailing us at podcasts at fivethirtyeight dot com. You can also, of course, tweet us with any questions or comments.

[00:36:56]

If you're a fan of the show, leave us a rating or review in the Apple Podcasts store or tell someone about us. Also, go subscribe to us on YouTube. Thanks for listening, and we'll see you soon.