[00:00:06]

Hello and welcome to the FiveThirtyEight Politics podcast. I'm Galen Druke. I'm Nate Silver. And this is Model Talk.

[00:00:18]

All right. A bit less enthusiasm than usual.

[00:00:23]

It's been a few weeks since we've gotten together to dig into the details of the presidential forecast. And we also just got a whole bunch of new polling. Listeners also have a lot of questions. So we're going to dig through it all. Now's the time. As of Thursday afternoon, the forecast shows Biden with about a 70 percent chance of winning and Trump with a 30 percent chance. And our national polling average shows Biden leading Trump by seven point three points.

[00:00:52]

On Monday, we said that we didn't have enough recent polling to fully characterize the race since the conventions and the violence in Kenosha and Portland. What did we learn from the most recent slate of polling?

[00:01:05]

There were almost 20 different national polls, I believe, that have been released since the conventions. So there's a lot of data, and it shows that Trump did not get much of a bounce, maybe a little bounce. Trump is currently behind by seven point three. He was behind by eight point four before either convention. Biden moved up a tiny bit and then it shifted back a bit toward Trump. But overall, Trump has moved maybe a point closer to Biden nationally than he was before, roughly seven and a half instead of eight and a half.

[00:01:39]

And to have only gained a point after your convention when you were down eight and a half is, like, not great. We've seen past conventions where, you know, in 2008, John McCain and Sarah Palin really pulled into a tie with Obama after their convention. In 2016, Hillary Clinton got a pretty big bounce from her convention, going from like a three point lead to like a seven point lead. So this is really on the low end as far as conventions that shook up the race in either direction.

[00:02:08]

And how has that affected our forecast model, if at all?

[00:02:13]

Not very much, in part because our model, number one, kind of builds in an adjustment for the convention bounce, and also it kind of hedges a bit. So let me explain each one. We adjust the polls by looking at conventions historically and trying to guess how big the convention bounce will be. And they've gotten smaller and smaller over time with increased partisanship. We also assume for this year, and this is one of the things you have to make an educated guess about, that because of the virtual nature of the conventions, you might not have a traditional, typical convention bounce.

[00:02:48]

So we just said, well, it's kind of halfway between a real convention and not, and therefore we're going to assume the convention bounce will be half as large as before.
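
To make that mechanic concrete, here is a minimal sketch of a bounce adjustment along the lines Nate describes. This is an illustration, not FiveThirtyEight's actual code: the baseline bounce size, the "half as large" discount for virtual conventions, and the linear two-week fade are all assumed numbers.

```python
# Hypothetical sketch of a convention-bounce adjustment; NOT FiveThirtyEight's
# actual code. The bounce size, the "half as large" discount, and the linear
# two-week fade are assumptions made for illustration.

HISTORICAL_BOUNCE_PTS = 4.0  # assumed size of a traditional bounce, in points
VIRTUAL_DISCOUNT = 0.5       # "half as large as before" for virtual conventions
FADE_DAYS = 14.0             # assumed number of days for the bounce to fade out

def expected_bounce(days_since_convention: float) -> float:
    """Anticipated bounce remaining in the nominee's margin, in points."""
    size = HISTORICAL_BOUNCE_PTS * VIRTUAL_DISCOUNT
    remaining = max(0.0, 1.0 - days_since_convention / FADE_DAYS)
    return size * remaining

def adjusted_margin(raw_margin: float, days_since_convention: float) -> float:
    """Discount a poll taken shortly after a convention by the expected bounce."""
    return raw_margin - expected_bounce(days_since_convention)

# A poll showing the nominee +9 three days after a convention would be treated
# as roughly +7.4 once the anticipated bounce is stripped out.
print(adjusted_margin(9.0, 3.0))  # 9 - 2.0 * (1 - 3/14) = ~7.43
```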

[00:02:57]

So how big of a bounce did the forecast assume the candidates might get?

[00:03:03]

So the model assumed, if you start at eight, that Biden's lead would bounce up to around 10 and then fall to like seven. So it's actually very close to what's happened.

[00:03:16]

Look at you. Biden did not quite get up to 10. He got up to nine point something. But yeah. So the guess that there would be this kind of half convention bounce seemed about right.

[00:03:26]

The Oracle got it right. I mean, sometimes, but in any case.

[00:03:31]

So the forecast more or less saw this coming. So does the forecast not change as a result of that? Basically, yeah.

[00:03:39]

Biden's bounce was a little bit smaller than the model thought. And so it dipped down a little bit when Biden gained maybe only half a point or a point in the polling average, but then it wound up in the same place the model expected it to wind up. These are all pretty small shifts, right? It went from like Biden at 72 percent to 67 percent. Now it's back up to 70 percent or something, pending constantly arriving new polls.

[00:04:05]

I mean, even if it had gone from like seventy-two to sixty-seven and stayed there, we're not talking about a major change.

[00:04:11]

But from what I understand, the forecast basically says we expect the candidates to get a convention bounce. And so we're not going to take that into consideration in terms of who ultimately is likely to win unless it's a durable bounce. So it has to wait maybe two weeks or something like that to show that the movement in the polls is real and not just a fleeting effect.

[00:04:32]

So there are kind of two defense mechanisms that the model has. One is actually to adjust the polls for the anticipated convention bounce, which is what we've been talking about. And two is to just hedge, right, where it basically says here is the snapshot that we had before the DNC, and we're going to use that snapshot as part of our forecast even though we have newer data, because around the conventions, sometimes polls change and then revert back to where they were before.

[00:05:02]

It uses some of the old preconvention polling average. In this case, the old average and the new adjusted average kind of wind up being the same, right? Because if it says, OK, Trump's minus seven point three, we adjust that to eight point three, and the previous average was eight point four. It kind of doesn't matter what the formula is anyway, but that also serves as a hedge. That other hedge, by the way, will also apply after the debates.

[00:05:32]

So if Trump has a great first debate and all of a sudden the polling average goes from Biden plus eight to Biden plus four, then the model will hedge a little bit, not a ton, but a little bit, until it knows that the gain Trump has made has been sustained.
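
As an illustration of that second hedge, here is a minimal sketch under our own assumptions: blend the pre-event snapshot with the newer, bounce-adjusted average, trusting the new data more as it is sustained. The linear weighting and the two-week window are guesses, not FiveThirtyEight's published parameters.

```python
# A minimal sketch of the "hedge": blend the pre-event snapshot with the
# newer adjusted average, shifting weight toward the new data over time.
# The linear weights and two-week window are our assumptions.

def hedged_margin(pre_event_margin: float,
                  current_adjusted_margin: float,
                  days_since_event: float,
                  window_days: float = 14.0) -> float:
    """Weighted blend that moves from the old snapshot toward the new data."""
    w_new = min(1.0, days_since_event / window_days)
    return w_new * current_adjusted_margin + (1.0 - w_new) * pre_event_margin

# In the case discussed, the pre-DNC snapshot (Biden +8.4) and the new
# adjusted average (+8.3) nearly agree, so the blend barely matters.
print(hedged_margin(8.4, 8.3, days_since_event=7.0))  # 8.35
```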

[00:05:49]

So we've gotten limited state polling, but in particular, a Monmouth poll from Pennsylvania came out yesterday showing Biden's lead down to three points. Our average shows him leading by around four points in the state. And given that our forecast at this point sees Pennsylvania as the likeliest state to be the tipping point, what does that say about the expected gap between the popular vote and the Electoral College at this point in time, at the beginning of September?

[00:06:17]

So I think it's always a mistake to focus on any one poll in general. Biden's gotten some, let's call them good but not great, state polls. He got a good-for-him set of state polls in North Carolina, Arizona and Wisconsin from Fox News, an OK result from Monmouth in North Carolina, and then, frankly, a pretty mediocre poll from Monmouth in Pennsylvania, where the median turnout scenario had Biden up by only about two points among likely voters.

[00:06:46]

Whatever the poll shows, and it's pretty close, it is one 400-person poll in the totality of data. It's a little perplexing that Biden keeps getting these kind of mediocre polls in Pennsylvania and then pretty good polls in Wisconsin. You would think that Wisconsin would be the harder state to win back. But, you know, the polling continues to show Wisconsin having a larger Biden margin than Pennsylvania. But look, the reason why Trump has a 30 percent chance and not a 15 percent chance or a 10 percent chance is because of the Electoral College.

[00:07:17]

There is still a pretty big gap between where the tipping point states are polling and where the popular vote is. Right.

[00:07:24]

So you see Biden with leads of, let's call it four to six points, in the tipping point states versus seven and a half points in our national average. That's pretty different, right?

[00:07:35]

If it's really like a four or five or maybe a six point race, then Trump's not out of it.

[00:07:41]

Trump could close that gap by doing well in a debate, and then maybe if the polls are a little bit off, you have a second term for President Trump. And so the Electoral College is so important. Otherwise you're kind of doing the mental math and you can be like, OK, well, Biden's up seven or eight. That's bigger than or equal to Barack Obama's margin. That's like Bush over Dukakis, roughly, pretty close to it, and not far from Reagan over Carter, which was nine point seven points or something.

[00:08:05]

Right.

[00:08:06]

But still, while an eight point win would be a blowout, a landslide by most people's terminology, all of a sudden at five points it's a competitive race because of the Electoral College. Right? There are scenarios where Biden could win the popular vote by four and a half points and still lose the Electoral College. Past about five points, it becomes mathematically more difficult, although not impossible. But, you know, that's keeping Trump with much livelier odds than he would have otherwise.

[00:08:36]

Yeah, it seems like there's a thin line between what's a blowout and what is a tight race, given what we're seeing in the Electoral College. I want to read the numbers that you posted yesterday in terms of what percent chance Biden has of winning given his national polling lead. So you said: Biden leading by zero to one point, he has just a six percent chance of winning the Electoral College; one to two points, a twenty-two percent chance of winning the Electoral College; two to three points,

[00:09:03]

a forty-six percent chance, so still below a majority chance. Three to four points is where he gets a 74 percent chance of winning the Electoral College; four to five points, an eighty-nine percent chance; five to six points, a ninety-eight percent chance. And it goes up from there. And so it's really that divide you see between maybe three points and five points where you go from a really tight race, where Biden is not necessarily favored to win.

[00:09:29]

All of a sudden you could have something looking like a landslide in the Electoral College. Why exactly is that? Can we tease out that thin line between the tight race and the electoral blowout?
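
Treated as a simple lookup, those numbers look like the sketch below. The whole-point bins come from the figures Galen just read; arranging them as a table, and the crude extrapolation outside the stated range, are our simplifications, not model output.

```python
# Biden's chance of an Electoral College win given his national popular-vote
# margin, per the numbers read above. The bucketing and the extrapolation
# outside the listed range are our own simplifications.

EC_WIN_PROB_BY_LEAD = {
    (0, 1): 0.06,
    (1, 2): 0.22,
    (2, 3): 0.46,
    (3, 4): 0.74,
    (4, 5): 0.89,
    (5, 6): 0.98,
}

def ec_win_probability(national_lead_pts: float) -> float:
    """Return the Electoral College win probability for a national lead."""
    for (lo, hi), prob in EC_WIN_PROB_BY_LEAD.items():
        if lo <= national_lead_pts < hi:
            return prob
    # Rough extrapolation beyond the table: near-certain above six points,
    # near-zero for a deficit.
    return 1.0 if national_lead_pts >= 6 else 0.0

# A roughly 4.5-point popular-vote lead maps to about an 89 percent chance.
print(ec_win_probability(4.5))  # 0.89
```

Note how steep the middle of that table is: the jump from a two-point lead to a four-point lead roughly quadruples Biden's chances, which is the "thin line" Galen is asking about.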

[00:09:40]

So let's look at it this way. Right now, Biden is leading in our national polling average by seven point three points, which is almost exactly the margin, I think it was seven point two or seven point three, that Obama beat McCain by in 2008.

[00:09:54]

So one way to look at it is to say, where is Biden doing better than Obama?

[00:10:00]

Because, by definition, mathematically, if he's doing better in some places, he has to be doing worse in others to counteract that.

[00:10:06]

So one group of states where Biden is doing better is very blue states. He is ahead in our California polling average by like thirty-two points instead of twenty-five. He's ahead in Massachusetts by like thirty-three points instead of twenty-five. New Jersey, Connecticut, Maryland, Washington state: all of a sudden it's no longer a 15 point lead but a 20-something point lead, you know, 25 or 30 points.

[00:10:31]

These are really blue states where Democrats are winning by gargantuan margins. That helps them not at all in the Electoral College.

[00:10:38]

So kind of wasted votes. The other group of states where Biden is doing better is in the Sunbelt. Biden will almost certainly do better than Obama in Texas. He may win Texas; it's pretty close. Actually, Obama did OK in Georgia, I think, but Biden will very likely do better than Obama in Arizona, for example. Georgia obviously is trending more purple. But those states don't actually help Biden all that much either, unless they actually become what we call the tipping point state.

[00:11:09]

So, you know, again, Texas is very competitive. It would not be a surprise if Joe Biden wins Texas, although our model has Trump as a slight favorite there, as do some other models.

[00:11:20]

But if Biden wins Texas, he's probably already won Arizona and Florida and North Carolina. And so it's not at the tipping point. It's superfluous.

[00:11:30]

Now, if Texas shifts further in a couple of years, then that becomes more of a problem for Republicans, and you start to erode the current Electoral College advantage they have.

[00:11:40]

But a vote in Texas is actually less important than a vote in the average state, our model figures.

[00:11:48]

It's more important than a vote in New York or Washington, D.C., but there are a lot of people there, and it's not a very good return on investment, although it's better than it used to be. Also, and it's a small thing, Biden is gaining because Democrats have gained ground with Mormons. You make up a little ground in, like, Utah and Idaho. You're not going to win those states, though, probably.

[00:12:10]

And so it's something like: Biden has to win by, say, three or four points in order to hold the Midwest. But then all of a sudden, if he's winning by six or seven points, he's past the point where it's close, and he's actually winning some of those Sunbelt states, which puts him into something like blowout territory.

[00:12:29]

Yeah, I mean, at six or seven or eight, that's when you start to win Georgia and North Carolina. Arizona you've probably already won at that point, but you start to win Georgia, North Carolina, Texas. You might also win back Ohio and Iowa.

[00:12:46]

So, yeah, there are a bunch of states that are currently polling, with a seven point national lead, at somewhere between Biden plus two in North Carolina and Trump plus two in Iowa. If Biden beats his polls by a couple of points, that whole group probably goes Biden, and all of a sudden you have a really impressive looking map. But they're not really tipping-point states, except maybe North Carolina on the fringe.

[00:13:13]

We focus a lot on the polling here, but when it comes to the fundamentals, in terms of the economy or the trajectory of covid cases, how is that shaping our forecast at the moment?

[00:13:24]

covid cases themselves only figure into our model in a very indirect way, having to do with how the states are correlated. So leave that aside for now.

[00:13:34]

The economic news has generally been pretty decent relative to this obviously unprecedented downturn that we took when the economy closed under covid. But yeah, one thing that's helping Trump in our forecast is that our model assumes the race will tighten because economic conditions will not necessarily look that bad by November. And there's a jobs report coming out on Friday morning; you may know the results by the time you're listening to this. That will figure into our forecast, and the stock market figures into our forecast.

[00:14:06]

Right. But in general, if you were assuming that you're going to have some type of typical recession, where you have a slump that lasts for months or years, that's not what's happening here, right? You don't usually get this kind of very sharp uptick just a couple of months later.

[00:14:27]

And, by the way, if you look at approval rating polls for Trump on the economy, his numbers are pretty decent. They're like forty-nine percent approve, 48 percent disapprove, something like that.

[00:14:37]

So we think that the economy is perversely maybe more a strength for Trump than a weakness, that voters buy the argument that, like, things were going great until covid came along. They may blame Trump for not handling covid

[00:14:55]

well, and that's one reason why he's down seven or eight points. But they buy his excuse that covid is not an indictment of his economic management per se.

[00:15:05]

Also, remember, people got a lot of money in their pockets back in the spring. That is starting to wear off, and that could have an effect potentially. But if you give people money, and in some cases they were actually making more income than they had before, then that will affect economic perceptions as well.

[00:15:19]

All right. I want to get to some listener questions, but first, today's podcast is brought to you by DraftKings. The football season is coming up, with the reigning champs set to take the field to kick off the season. There's no better way to get in on all the action than with DraftKings, the leader in one-day fantasy sports. To celebrate week one of the football season, DraftKings is putting you in the center of the action with two shots at a one million dollar top prize. DraftKings is easy: draft your team, stay under the salary cap, and pile up points for yards, touchdowns and so much more.

[00:15:52]

With all this cash up for grabs, there's no better place to get in on all of the action than with DraftKings. Download the top-rated DraftKings app now and use promo code five three eight to get a free shot at a million dollar top prize, and for a limited time, get your share of one hundred million dollars in prizes when you enter DraftKings' free survivor pool.

[00:16:12]

That's promo code five three eight to get in on all the action over at DraftKings Sportsbook. Minimum five dollar deposit required; other terms, conditions and eligibility restrictions apply. See draftkings.com for details. Today's podcast is also brought to you by Calm. 2020 has been a lot, and we could all benefit from less stress and more sleep in our lives. It's so important to take care of ourselves and invest in our well-being during times of anxiety. Calm is an app designed to help you ease stress and get the best sleep of your life.

[00:16:45]

And when you relieve anxiety and improve your sleep, you feel better in every part of your life. Calm has a whole library of programs designed for healthy sleep, like soundscapes, guided meditations, and over 100 sleep stories narrated by soothing voices like Stephen Fry, Kelly Rowland and Laura Dern. For listeners of the show, Calm is offering a special limited-time promotion of 40 percent off a Calm premium subscription at calm.com/538. That's 40 percent off

[00:17:14]

unlimited access to Calm's entire library, and new content is added every week. Get started today at calm.com/538. Again, that's calm.com/538.

[00:17:28]

The numbers, not the letters. We have been inundated by questions from listeners, lots of really fantastic questions, and we're going to get to as many as we can. But of course, we'll be back with more Model Talk in the future; if we don't get to your question today, hopefully we will get to it then. And given that, we'll try to keep these answers relatively concise. Let's start with Chris, who asks: what would the now-cast, RIP, say today?

[00:17:54]

Essentially, the now-cast was our forecast that used to say what the model would project if the election were happening right now. So, if the election were tomorrow, what would our forecast say? I don't know, because we put the now-cast in kind of a broom closet and don't talk to it anymore.

[00:18:12]

I would think it would be in the vicinity of Biden being at 90 percent to win the Electoral College or something like that. I mean, look, even given the possibility of polling error and Trump's Electoral College advantage, a seven-and-a-half point lead in national polls on Election Day is reasonably solid. It's not rock solid. But if Biden is up by seven and a half points on November 2nd and we're doing the last version of Model Talk, right,

[00:18:44]

I think we'd say, OK, there would have to be a pretty big polling screw-up, or the Electoral College is just going to have to fall just right for Trump.

[00:18:51]

And it's possible. A one-in-ten chance wouldn't be nothing; that's decently high. But you're getting a little bit more out on the tail.

[00:19:00]

Speaking of polling errors, we got a question from a reporter who asked that her name not be used. But I'm so happy that we got a question from a reporter. And the question is, can you explain weighting by education? How is weighting by education factored into the current polls that we're seeing?

[00:19:18]

So to back up a little bit here, one dirty little secret about polling is that if you just kind of randomly call people on the phone, you will not get a truly random sample. Women are more likely to answer the phone than men, older people more than younger people, white people more than Black and Hispanic people. So you have to weight your poll to population demographics, to basically say, OK, we know we only got five percent Black respondents in a state where they're going to be twelve percent of turnout.

[00:19:49]

Therefore, let's count every Black respondent about two and a half times. Right? That's basically what happens.
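
Here's a minimal one-variable sketch of that weighting, using the 5-versus-12-percent example Nate just gave. Real pollsters rake across several variables at once; the other group shares below are invented for illustration.

```python
# A one-variable sketch of demographic weighting: weight for each group =
# target population share / observed sample share. The 5% vs. 12% Black
# figures come from the example above; the other shares are invented.

def demographic_weights(sample_shares: dict, population_shares: dict) -> dict:
    """Compute per-group weights that correct the sample toward the population."""
    return {group: population_shares[group] / sample_shares[group]
            for group in sample_shares}

sample = {"Black": 0.05, "white": 0.80, "Hispanic": 0.07, "other": 0.08}
population = {"Black": 0.12, "white": 0.70, "Hispanic": 0.10, "other": 0.08}

print(demographic_weights(sample, population))
# {'Black': 2.4, 'white': 0.875, 'Hispanic': ~1.43, 'other': 1.0}
```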

[00:19:55]

One of the variables that also kind of distinguishes who answers a poll and who doesn't is news consumption and education level: if you are more highly educated, you read more news and you're more likely to be interested in taking a survey.

[00:20:11]

You're also a bit more likely to turn out to vote. But still, there is a bias in polls toward people who are bigger news consumers and more educated, in the sense of having a college education. The education of life might be different, but, you know, more formal education. And it used to be that education was not very predictive of which party you would vote for. But now it is, where more educated people vote primarily Democratic.

[00:20:39]

So if you don't weight based on education, or some proxy of education, then you risk having too many Democrats in your sample. Many pollsters do weight for education now; only a few did beforehand, though more were made aware of it by 2016. Some other pollsters do not weight for education but have other mechanisms they use to try to avoid the bias that would be introduced by not doing that. And then a few pollsters are kind of oblivious and are doing what they always did, probably at the risk of overrating Democrats again.

[00:21:19]

Now, is this reason to think that there would be a Democratic bias in the polls? I don't necessarily think so. Why? Because you may have some pollsters that kind of wind up overcompensating in the other direction. So they will correctly weight for education, but maybe they will also use, like, a really different likely voter screen that actually isn't helpful. They're like, well, we've got to make sure we're not going to underrate Trump again. So they do three things when they only need to do one thing and wind up missing in the other direction.

[00:21:49]

There are also, frankly, some automated polls, IVR polls, that are very dodgy methodologically. Some of them don't call cell phones. There's one pollster, the Trafalgar Group, that tries to make a shy Trump voter adjustment, which is very dodgy.

[00:22:07]

So it may be a case where you have a few straggler live-caller polls that don't think about this education thing.

[00:22:16]

And then you have a few spammy, crappy robo polls that are kind of Republican leaning and don't even bother to fix that. And it kind of winds up canceling out.

[00:22:28]

And the follow-up question here was, does this have any relationship to shy Trump voters? And I think the answer here is no. Weighting by education takes care of people in the population who are difficult for pollsters to reach, whereas shy Trump voters usually refers to people who support Trump but are unwilling to tell pollsters so. Right.

[00:22:49]

I mean, if you actually just randomly dialed numbers and made no demographic weighting whatsoever, you'd probably wind up with a really Trumpy

[00:23:01]

sample, like a lot of old white people who sit by their phone. Younger people are hard to get on the phone, even if you make repeated contact, but older people with landlines are more likely to answer. It's just kind of the way it used to be in this country, right? You pick up the phone, you answer a stranger's phone call. So it's not a matter of shy, per se. It's a matter of there being different biases in who you get on the phone.

[00:23:26]

And even if everyone's perfectly honest, then you're not going to get a totally, truly random sample. And you have to do several things to try to weight your sample to true population demographics.

[00:23:38]

And just to get this out of the way, because people have been curious, is there any reason to expect that respondents wouldn't be honest with pollsters?

[00:23:45]

Not particularly. There is some evidence of social desirability bias, where if you believe something that you think a stranger on the phone might be offended by, then you might not say it. But generally speaking, people aren't embarrassed by their presidential choices. They are happy to talk about them. And, you know, there was a big theory in 2008 that a lot of people would say they were voting for Obama but wouldn't, because they were racists, but they didn't want this pollster to think, OK, this person is not going to go for the African-American candidate.

[00:24:24]

And Obama kind of hit his polls spot on, or actually outperformed them a bit. So that was one example of social desirability bias not actually having an effect. I mean, part of what's weird is, number one, the idea that Trump voters are shy.

[00:24:40]

I don't know. Have you met some Trump voters? I don't think they're particularly shy about their fandom for Trump. If anything, they may be more demonstrative about it. So maybe there are shy Biden voters. Also, the idea of, like, OK, you're going to go ahead and take this poll and lie to the pollster, but then you're going to give your honest answer in the voting booth? There's just not actually much evidence for this.

[00:25:04]

We've also looked at this internationally, because the U.S. is a small sample, right? You haven't had a candidate like Trump before. Internationally, though, there have been many nationalist parties throughout Europe, for example. And if you look at all the polls in Europe involving right-wing nationalist, racial-identity-politics parties, they do not do any better than polls show, even though there's this kind of constant myth that this right-wing party in Germany or Denmark is going to beat its polls. It's just not true over a pretty large sample.

[00:25:35]

One other thing, too, is that a lot of polls now are not done by phone anyway.

[00:25:41]

They're done online. So are you also not going to reveal your real opinion online? Right. And there's not much of a gap between what the online polls say and what the telephone polls say.

[00:25:53]

So the shy Trump voter theory is under-evidenced, and that theory kind of paints a weird view of Trump supporters that I don't think really matches any anecdotal or empirical evidence. And although we need to be very aware of the possibility of systematic polling error, meaning the polls are off in the same direction in every state, all the swing states, the assumption that has been much more robust over time is that that can go in either direction. It's absolutely possible that we wake up on election morning and Trump beats his polls and wins again.

[00:26:30]

Right. It's also absolutely possible that we wake up the day after the election and Biden has won by 12 points instead of eight and won South Carolina or something, and we're like, what the hell was that? And it turns out that maybe pollsters weren't doing enough to capture Black or Hispanic voters, or maybe they were being very careful because of 2016 and actually wound up overcompensating. So I don't mind if you come up and talk to me about shy Trump voters as one of many possible hypotheses rather than as a definite thing.

[00:27:03]

But if you're like, oh, well, Trump's going to win because of shy Trump voters, I tend to think that you have a sophomoric view of elections, where you read just enough to think that you have some proprietary knowledge, but you don't actually read the really knowledgeable people who have looked at this pretty carefully and said there's not really much evidence that it's a thing.

[00:27:24]

Let's get on to some more listener questions. The next one is: do you ever worry the model's tails might be too fat? The questioner notes that some of the edge cases are pretty far out there, and we got multiple questions about this. One person noted that there's a variation on our forecast homepage where New Jersey goes for Trump and then every other state goes for Biden, because we show some of the examples that the forecast simulates. And so essentially, people just want to know, like, WTF is up with some of those edge cases?

[00:28:01]

I don't know about the New Jersey one. I mean, in development, we actually had a bug where, like, one in every thousand simulations a blue state would become red. So the New Jersey one sounds weird, but all the other stuff is very deliberate. So, number one, it used to be that it was pretty hard to predict how things would shake out state to state. From like 2000 onward to 2016, you had kind of the same map.

[00:28:28]

But, you know, that's actually a bit unusual historically, and we want our model to be robust to different regime changes. Some year there's going to be a realignment, and it may or may not be detected ahead of time. So we need our model to be robust and not just assume, OK, let's calibrate our model off of that recent era, when the states were very predictable and the polls were pretty good.

[00:28:57]

You had some issues in 2016, obviously, right? If you calibrate off just the recent data, then maybe Trump would be at eighty-five percent to lose or something instead of 70 percent. But we don't think that's a good idea. It's a very small sample, number one.

[00:29:12]

Number two, we are living in somewhat unprecedented conditions. And it's true that so far the polls have been stable. And if the polls remain stable, like I said, Biden's chances will probably go up to 85, 90, 92 percent or something.

[00:29:26]

But we think, especially before the conventions, and a bit less so now, it's hard to make too many presumptions about what things will look like. And we kind of just go ahead and say, OK, let's look at all the data we have. We have national polls going back to 1936 and state polls going back to 1972 or '76. And if you look at all the data we have, then it would compel you to be a little bit cautious.

[00:29:53]

But again, it's also that you shouldn't just look at this big national lead that Biden has.

[00:29:59]

I mean, Biden is only ahead by three or four points, projected, in the tipping point states. Not in the polling, actually, but projected. Right? If the model expects the race to tighten by a point or so, then where he's up, you know, four or five, that becomes three or four. So say a three or four point effective projected Election Day margin in early September has about a 70 percent chance of holding up.

[00:30:27]

That's what you get empirically the way we do our model, and I think intuitively that kind of holds up. Between the possibility that things change further in Trump's direction and the possibility of polling error on Election Day, right, about a 70 percent chance that Biden wins seems like a pretty reasonable answer.

[00:30:43]

So our next question comes from Chloe, who I actually went to college with, so shout-out to Chloe.

[00:30:49]

She asks, what is Nate's biggest concern about the model? And then we got a similar question. That was, what do you think is the model's biggest shortcoming? Another way to put it might be if you had unlimited time and resources. What would you change about the model?

[00:31:06]

I don't have too many concerns about it, honestly. Well, I mean, OK, if you want me to be honest, and we're kind of pulling back the curtain a little bit here, in no election that I can remember has the model had this kind of latitude.

[00:31:22]

There's a humongous amount of running room that the model has between, on the one hand, prediction markets that have the race as a toss-up, which is ridiculous, and on the other hand, other models that have Biden at 85 to 90 percent, which we think is maybe not ridiculous, but which we don't believe reflects modeling best practices and don't believe reflects the kind of environment we're in. It's almost like a conditional forecast, where if your view of these other things is true, then, OK, I can see how you can get there.

[00:31:54]

But we don't know about those things in advance, in an environment where you have a small sample size and unprecedented conditions. And by the way, you know, the models weren't so accurate in 2016 either, right? If you think, OK, now we've solved this riddle; well, OK, how come you had this mediocre performance from the polls and projections in 2016? Not necessarily a great track record, right?

[00:32:13]

So, like, because there's a lot of running room in there, there's a lot of room where you could be concerned about your forecast.

[00:32:20]

Does that make you concerned? Well, it makes me less concerned, right. Because, like, because the question was, what are you most concerned about,

[00:32:26]

Not what are you least concerned about?

[00:32:29]

I'm not really that concerned. I mean, I'm concerned for the actual election itself. I'm concerned about what happens if you have a close result and it's disputed, or a not-so-close result and Trump refuses to concede. What if there is some big snafu with mail balloting? Would you try to account for some of that?

[00:32:46]

We do look at that. I mean, we're assuming that turnout, and therefore the margins, are less predictable because of covid and mail voting. But I don't know. I don't think the model is the most important thing for the country right now. You know what I mean?

[00:33:01]

I think that's right. Fair enough. Let's move on, then.

[00:33:05]

I want to get to some rapid fire questions. So in a word or a sentence, let's run through these questions.

[00:33:14]

Does the model account for Comey-letter-like events or other unforeseeable October surprises? Sure, yeah.

[00:33:22]

I mean, the Comey letter is kind of a normal October surprise. And look, you know, ordinary news events, and even somewhat extraordinary news events, are part of why Biden's at 70 and not higher right now.

[00:33:36]

Next question comes from a listener: does the model run every X number of hours, or when there's a new set of polls? How frequently does it refresh?

[00:33:46]

It runs whenever we enter a new poll. And then it also will run at 7 p.m. every night, just to make sure it's run at least once per day. But now there are several polls a day, so it usually runs constantly. You know, there are also other inputs; technically speaking, every tick of the S&P 500 affects the model, but we don't update it for economic data alone unless some major variable has been updated. So mostly we're running it eight or ten times a day, when we're entering a new poll,

[00:34:12]

plus once for the early evening crowd. Next question: which state is most likely to flip from Democrats to Republicans in a scenario where Biden wins the presidency? I'm guessing New Hampshire and Minnesota. Is that correct?

[00:34:27]

Yeah, because, you know, they were pretty close last time and Biden can afford to lose New Hampshire and Minnesota. Well, actually, let me take that back. I think probably the states that could flip the most are New Hampshire and Nevada, because Nevada is kind of a one-off, right? It depends on the exact math. But if Biden kind of wins back those Midwestern states, he can afford to lose, I think, either Nevada or New Hampshire, but not both.

[00:34:53]

There are some maps that produce a 269-269 tie, so it comes down to, like, does he also win the congressional districts in Maine and Nebraska? But New Hampshire and Nevada are weird states in that they kind of march to their own drummer a bit. Minnesota: if Biden loses Minnesota, then, well, did he win Wisconsin? Did he win Michigan? That gets a little bit dicier, right? That's a case in which the Sunbelt ended up being super strong for Biden and the polls were pretty mismatched in expecting that the upper Midwest would be closer than the Sunbelt.

[00:35:28]

Right. There'd be a pretty sharp reversal. Yeah. All right. Next question: where would you most like to see new polls from right now?

[00:35:36]

I mean, we just got a lot of new polls, but I guess: is there a particular state that you want to see polling from?

[00:35:41]

I'd love to see some high quality polls in Minnesota and Nevada. I mean, the thing is, it's kind of a chronic issue now, right? It's kind of a chronic lack of high quality state polls, period. It's never like you get to a point where you're like, oh, there are five high quality polls of this state, I feel saturated.

[00:35:58]

You know, I would still take more polls of Pennsylvania and Florida, just because they're so important to our model. Everywhere, really.

[00:36:06]

I was going to say it would be fun to see a poll of Texas. There were a bunch earlier in the year; there have been fewer recently. But Texas is actually not likely to be the tipping point state, so maybe it matters a bit less. But I don't know. I mean, we actually have a fair number of polls of Pennsylvania; it's just such an important state. And it's still kind of hard for me to reconcile why Biden is doing surprisingly well in Wisconsin and not very well in Pennsylvania.

[00:36:30]

Just kind of weird. But if I could make two such requests, I'd say Minnesota and Nevada. All right. Next question is, how are you considering the impact of third party candidates in the forecast? I was just looking at the forecast from 2016 the other day, and we actually had Gary Johnson listed with his chances of winning the Electoral College. Of course, it was very low. But, you know, people who are looking at the forecast this year can tell that we're not projecting the odds of a Libertarian or Green Party candidate winning the Electoral College.

[00:37:09]

Are they still affecting the forecast?

[00:37:11]

Not exactly. We have what we call a named third party candidate, right? If a candidate, A, is polling in the mid single digits nationally, B, is included in most polls, and C, is on the ballot in most or all states, then we will model them explicitly. Gary Johnson met those criteria last time around. Nobody does this year.
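
As a sketch, that three-part test could look like the check below. The numeric thresholds are our guesses; FiveThirtyEight hasn't published exact cutoffs here.

```python
# A sketch of the "named third party candidate" test as Nate describes it:
# (a) polling in the mid single digits nationally, (b) included in most polls,
# (c) on the ballot in most or all states. Thresholds are assumed, not
# FiveThirtyEight's published cutoffs.

def is_named_third_party(national_polling_pct: float,
                         share_of_polls_included: float,
                         share_of_states_on_ballot: float) -> bool:
    """Return True if a third-party candidate would be modeled explicitly."""
    return (national_polling_pct >= 4.0            # "mid single digits" (assumed)
            and share_of_polls_included >= 0.5     # "included in most polls"
            and share_of_states_on_ballot >= 0.5)  # "on the ballot in most states"

# Gary Johnson in 2016 would have met the criteria; nobody does in 2020.
print(is_named_third_party(6.0, 0.9, 1.0))    # True, a Johnson-like candidate
print(is_named_third_party(1.5, 0.3, 0.35))   # False, a 2020-style third party
```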

[00:37:38]

So the model will reserve some vote for other candidates, but it doesn't do anything particularly fancy with that. Now, keep in mind, if a Libertarian candidate or Kanye West or whoever is on the ballot in certain states, then a pollster can ask about them. And so if Kanye West being on the ballot in some swing state hurts Biden by a point, which I think, by the way, is far from clear, it could go either way, then that would affect Biden's standing in the polls.

[00:38:06]

And so therefore the model would account for that implicitly. But, you know, third party candidates are not a big factor this year. By the way, there also aren't any really highly relevant third party candidates in races for Congress this year, either.

[00:38:19]

Would you ever consider making the model open source? I know the answer to this.

[00:38:24]

It's no because this is how he makes his money. Come on, guys.

[00:38:27]

I mean, we provide a very detailed methodology, we provide all the inputs to the model, and we provide all the outputs, right?

[00:38:35]

And, you know, let's be perfectly honest, right: there are people that have reverse engineered versions of the 538 model without having the code. But no, I mean, the code is proprietary. Last question here.

[00:38:52]

When are the House and Senate forecasts coming out?

[00:38:55]

They're coming out soon. I am kind of finishing up an initial version of them today. One thing about the House and Senate is that there is just a ton of data that we use in the House and Senate model. In some ways, it's a much more rich-data, big-data exercise than a presidential election, where you just have 12 examples or so. For races for Congress, you have like four hundred and seventy races every year, which are somewhat independent from one another.

[00:39:24]

Right. Lots of information you can use. And so it takes a while to actually wrangle the data. We're going to do a couple of things. I mean, we really liked our midterm model in 2018; it performed really well and we designed it pretty carefully. So we're not making a lot of changes. There are a couple of things with respect to, like, how the model deals with house effects, and our polling averages maybe accounting for the effects of partisanship a bit more effectively.

[00:39:49]

There are a couple of things around the margin where we may introduce some changes, but they'll be pretty minor. We'll also assume for Congress, as for the presidency, that there's a little bit more potential for error on Election Day because of covid and mail voting. Not a ton more, but it makes things a little bit more error prone.

[00:40:09]

And I'll say we did get a number of questions about, for example, Fivey Fox, the new fox character that helps FiveThirtyEight readers understand the forecast.

[00:40:20]

We also got questions about the presentation in general, why there's no map this year, things like that. And I actually want to have a special Model Talk episode where we bring on some of the FiveThirtyEight folks who helped design the forecast interactive, and we can kind of talk through some of those questions then. But I have heard your questions and we will get to them in due time. Our final comment, though, comes from Keenan. Keenan says people should stop hating on Nate's underlit mid-century modern masterpiece.

[00:40:49]

So that's advice from Keenan. Thank you, Keenan, I appreciate it. It's not underlit. It's lit for people living in the house, right? It's lit for people who want to be looking outside. It's not lit for

[00:41:04]

podcast television, for YouTube. Fair enough. People should not design their houses so that they're well lit for YouTube videos.

[00:41:14]

I really am looking forward to when you have our design and interactives people on. I think, philosophically, 2020 is a weird election. It's a weird time to be putting out an election forecast, and so maybe some visualizations that are a bit more unexpected and surprising and a little bit more non-linear, right?

[00:41:38]

Like that is intentional to some extent, you know what I mean?

[00:41:44]

Basically, we wanted to reflect the chaos of twenty twenty in our forecast model.

[00:41:48]

It's been simplified, actually, I think. Right. Yeah. But it's telling a little bit more of a story, it has a little bit more of a through line, than kind of just being a dashboard. And we might revert back to more of a dashboard look in 2022, when it's a midterm. You know, frankly, the viewers we have for midterm elections are different. They're much more interested in all the detail. Midterms themselves are more detailed.

[00:42:09]

Right.

[00:42:10]

By the way, if you don't know: if you go down on our forecast page, you can see a section that says "download data," and you'll find several additional files that include more detailed data than we publish in the interactive. And we're adding more files to that all the time. So some of the info that hardcore users wanted is still there, but it's in a place where you can download a nice CSV file with all that info, in fact, all the info going back in time.

[00:42:39]

So we keep adding to that. Still, we are going to add a few more elements.

[00:42:43]

Keep in mind, this is the first time ever that we've tried to forecast the presidency, the House and the Senate in the same year. Frankly, it looks like it may be a bit anticlimactic, because based on

[00:42:57]

initial runs of the House model, as well as everybody else's House model and the expert forecasters and whatnot, the House does not look super competitive. But we are trying to give you everything.

[00:43:06]

This year, we're skipping governor's races.

[00:43:08]

So sorry, sorry, to Indiana and North Carolina, and the weird states, you know, Vermont and New Hampshire, where you have your every-two-year gubernatorial elections. Sorry, Vermont and New Hampshire: no gubernatorial forecasts this year, but we are doing everything else.

[00:43:23]

All right.

[00:43:23]

Well, clearly, you're very busy considering everything you've just mentioned, so we'll leave it there. Thank you. Thank you, Galen. My name is Galen Druke. Tony Chow is in the virtual control room. You can get in touch by emailing us at podcasts at fivethirtyeight.com. You can also, of course, tweet at us with any questions or comments. We will have future Model Talk episodes, so if your questions weren't answered this time around, you know, email them again, tweet them, etc.

[00:43:47]

If you're a fan of the show, leave us a rating or review in the Apple Podcasts store, or tell someone about us. Also, subscribe to FiveThirtyEight on YouTube. Thanks for listening, and we'll see you soon.