G. Elliott Morris, Allan Lichtman, and Nate Silver.
“It’s gotten so bad that people have been telling me that somehow I can’t do predictions because I am wearing a wig,” said Allan Lichtman from his home outside Washington, D.C., where he is enduring yet another election cycle of political scientists and election forecasters deriding him as a fraud and a flimflammer. “So let me just say that of course I am not. If you have seen pictures of me, yes, I am 77 and I have a complete and full head of hair. But somehow critics have fastened on to my hair as if that is somehow relevant to my predictions. I mean, it is ridiculous.”
Like a kind of quadrennial cicada, Lichtman emerges every presidential-election season as the progenitor of “The 13 Keys to the White House,” a predictive checklist that, to Lichtman’s credit, has called the winner of every presidential election but one since 1984, even predicting, nearly alone among presidential forecasters, that Donald Trump would win in 2016. (Lichtman did suggest at the time that Trump would win the popular vote, which he did not; he has since clarified that his forecast only includes the ultimate Electoral College victor. His one miss was 2000, when the Electoral College outcome hinged on a small number of contested ballots in Florida.)
Lichtman’s forecast is unique in that it sometimes comes out more than a year before the election and in that while it relies on some objective inputs, such as incumbency and the lack of a third-party challenger, much of it relies on criteria that come only from Lichtman’s well-coiffed head. Who is to say definitively whether a candidate has charisma or has faced a major scandal?
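Mechanically, the system is simple: each key is a true-or-false statement about conditions favoring the party holding the White House, and by Lichtman's published rule, the incumbent party loses when six or more keys are false. A minimal sketch, with placeholder key names and answers rather than Lichtman's actual scoring:

```python
# Sketch of a checklist forecast in the style of the "13 Keys."
# The key names and answers are illustrative placeholders.

def keys_forecast(keys: dict[str, bool]) -> str:
    """By Lichtman's published rule, the incumbent party loses
    if six or more of the thirteen keys are false."""
    false_keys = sum(1 for holds in keys.values() if not holds)
    return "challenger" if false_keys >= 6 else "incumbent party"

# Hypothetical scoring: 7 keys true, 6 false -> challenger favored.
example = {f"key_{i}": (i <= 7) for i in range(1, 14)}
print(keys_forecast(example))  # challenger
```

The subjectivity Lichtman's critics complain about lives entirely in how the booleans get filled in, not in the arithmetic.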
But if Lichtman, a professor at American University, became even more of a lightning rod this year, it’s not only because of his hair. It is also because, after years of growing strife in the nerdy reaches of election prediction, this is the year there seems to be an all-against-all war among the participants — with name-calling, sniping, reputational attacks, and daily hostile exchanges on social media. That is partly because anything related to the presidential election seems to devolve into strife and partly because forecasting has become a high-stakes game in which everybody is vying for the crown as the one with the best formula who got it most right.
Lichtman has always shown a fondness for publicity, but he came front and center over the summer for maintaining — contrary to what polls, election modelers, and anybody watching would say — that Joe Biden would have defeated Trump had he remained the Democratic nominee. The controversial and heterodox assessment led to even more media coverage for Lichtman, an indefatigable interviewee who, in September, cut a video for the New York Times featuring him doing push-ups and running sprints in workout clothes to demonstrate his 13 keys and who ended our call early when a film crew showed up at his home (he promptly called back).
“Ridiculous,” judged Alan Abramowitz, a political scientist at Emory University. “Sketchy. He doesn’t even pretend that it is a statistical model. How do you measure ‘charisma’ or ‘major domestic policy achievements’? You can just score these predictors however you want.”
In 1988, Abramowitz devised his own model, called “Time for Change,” which cut Lichtman’s 13 keys down to three: the growth rate of the economy in the second quarter of the election year, the incumbent president’s approval rating, and the length of time the incumbent party has been in office.
This model has called every election since 1992; in 2016, when it predicted that Trump would win, Abramowitz personally disavowed it. (This year, the model gives the edge to Kamala Harris.)
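At bottom, a model like this is a small linear regression: predicted incumbent-party vote share as a weighted sum of the three inputs. The sketch below uses made-up coefficients for illustration, not Abramowitz's published estimates:

```python
# Sketch of a "Time for Change"-style forecast: a linear model of the
# incumbent party's share of the two-party vote. All coefficients are
# placeholders for illustration, not Abramowitz's published estimates.

def incumbent_vote_share(q2_gdp_growth: float,
                         net_approval: float,
                         first_term_incumbent: bool) -> float:
    """Predicted two-party vote share (%) for the incumbent party."""
    return (47.0                           # baseline share (placeholder)
            + 0.5 * q2_gdp_growth          # reward for a growing economy
            + 0.1 * net_approval           # reward for popularity
            + 4.0 * first_term_incumbent)  # first-term incumbency bonus

# Hypothetical inputs: 2% Q2 growth, net approval of -10, open seat.
print(round(incumbent_vote_share(2.0, -10.0, False), 1))  # 47.0
```

Unlike the keys, every input here is a published number, which is why Abramowitz can call his approach a statistical model and Lichtman's a judgment call.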
Regardless of their more recent results, what both Abramowitz and Lichtman do is an improvement on the forecasts of yore, which were practically indistinguishable from magic. There was the Redskins Rule, which stated that if the Washington football team won on the Sunday before Election Day, then the incumbent party would remain in power. It held for 60 years, withstanding even Al Gore’s tempting of fate by rooting for his home state’s Tennessee Titans in their Sunday–before–Election Day game against Washington; the Titans lost, and that Tuesday so did Gore. The Weekly Reader’s clip-and-send poll of school-age children in social-studies class got every presidential winner right but two between 1940 and 2008, and pseudo-sagacious Washington hands used to insist that it was more accurate than actual polls since adults were willing to lie to pollsters but kids just repeated what they heard at home. There have been other indicators that armchair prognosticators have said reveal the true preferences of the American voter: Halloween-mask sales, a Cincinnati bakery that sells cookies with the competing candidates’ likenesses on them and has gotten the winner right every year but one since 1984, and even some macaque monkeys that prefer to stare longer at a photo of the losing candidate.
But Lichtman and Abramowitz are also throwbacks of a sort. The most closely followed models are now built by statisticians who compile vast troves of information — not just economic measures but thousands of head-to-head polls from pollsters with varying track records, approval ratings, state-by-state demographic information, and voting histories for those demographics. The newer generation of modelers builds complex programs that are constantly fed new data throughout the run-up to an election in an effort to create the most accurate possible prediction of the outcome.
Nate Silver, now publishing at his own Substack, Silver Bulletin, helped popularize this approach in 2008, when, using a model that, unlike Lichtman’s or Abramowitz’s, relied on actual polls, he predicted how 49 out of 50 states would turn out. In 2012, a year when Republicans believed that surveys showing Mitt Romney losing were so wrong that they launched a website, unskewedpolls.com, Silver did one better, nailing all 50 states.
Silver’s success — he now makes more money on Substack subscriptions than he did as the in-house modeler for either the Times or ABC — has spawned a host of imitators. ABC hired the former Economist modeler G. Elliott Morris to replace him. To replace Morris, The Economist hired Andrew Gelman, a renowned statistician at Columbia University. There are also smaller outfits — like Decision Desk, Split Ticket, and CNalysis — that each have their own models as well as aggregators that provide simple averages of public polls.
How a modeler weights any given bit of data — a poll from a given pollster or even the occasional social-scientific factors like the ones Abramowitz and Lichtman utilize — helps determine the overall prediction and contributes to whether he or she is ultimately judged to have gotten the election right. Split Ticket, for example, factors in the likelihood that states with similar demographic and geographic profiles tend to move together. Silver’s model has mostly stuck to its 2008 formula, regardless of what the external noise says at any given moment, and is known for placing more weight on polls with larger sample sizes, regardless of the track record of the pollster. (Of course, these are just probabilities, so even the question of what right means here is a slippery one. If you predict that the roll of a die will produce a number less than or equal to five and the roll is a six, is the model wrong or did you just get unlucky? If another model predicts a six, is it right or just lucky?)
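One standard way out of the "right or just lucky" bind is to score probabilistic forecasts over many events rather than one, for example with the Brier score, the mean squared error between the stated probability and the 0-or-1 outcome (lower is better). A minimal sketch using the die example from above:

```python
# Scoring probabilistic forecasts with the Brier score: the mean
# squared error of the stated probability against the 0/1 outcome.
# Over many rolls, an honest P = 5/6 forecast for "roll <= 5" beats
# an overconfident P = 1.0 forecast, even though the honest forecast
# "misses" roughly one roll in six.

import random

def brier(prob: float, outcomes: list[int]) -> float:
    """Mean squared error of a fixed probability forecast."""
    return sum((prob - o) ** 2 for o in outcomes) / len(outcomes)

random.seed(0)
rolls = [random.randint(1, 6) for _ in range(10_000)]
outcomes = [1 if r <= 5 else 0 for r in rolls]  # did "<= 5" happen?

calibrated = brier(5 / 6, outcomes)   # honest forecast
overconfident = brier(1.0, outcomes)  # "a six is impossible"
print(calibrated < overconfident)  # True
```

This is why modelers insist on being judged across many races and many cycles: a single election is one roll of the die.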
In the hothouse atmosphere of this presidential election, each of the non-Silvers is looking for their own breakout moment, hoping that nailing the election could lead to the kind of fame (or, in the eyes of some, notoriety) and riches that Silver has found — and which he consolidated in 2016 when he gave Trump considerably better odds of winning than any of his direct peers. That might be difficult, though, given how bunched the models currently are. In the past couple of weeks, nearly all of the well-regarded models have shown the same thing: that Harris is favored over Trump but only very slightly — somewhere in the 52 percent to 60 percent range. As of Tuesday, Silver has Harris as a 55 percent favorite to win, a smidgen above a coin toss.
There were moments over the summer when there was more separation. Before Biden dropped out, Morris’s model at ABC (which still retains the name Silver gave it, FiveThirtyEight) gave the president a 50-50 chance, while other models were showing that Biden was doomed. This was because the FiveThirtyEight model relied at that moment more heavily on what are known in political science as “the fundamentals” of an election — the state of the economy, for example, or the power of incumbency — and downplayed polls. Fellow modelers, in particular Silver, found this explanation ridiculous since it ignored the fact that Biden’s poll numbers were cratering in the wake of the June debate. FiveThirtyEight took its model offline entirely for a while, and when it was back up and live, it gave Harris a slightly better chance to win than it gave Biden.
FiveThirtyEight’s decision to take its model down was seen by most political modelers as necessary, though in the words of one polling aggregator, it happened because “Nate basically bullied them into it.” “The FiveThirtyEight model literally was so broken that they had to take it offline for a month. And so I don’t think you can trust anything from that forecaster,” Silver said recently on Vanity Fair’s Inside the Hive podcast. “I don’t think you can publish a model, have it be broken for a month, and everyone on the internet — all these smart nerds are like, ‘Yeah, this model is broken.’ And the guy refuses to fix it … I think they should have to be ignored this cycle.”
Morris declined to participate in this story, but Lichtman has been amping up his own beef with Silver, writing on X that Silver “doesn’t have the faintest idea how to turn the keys” and that “he’ll be wrong again in trying to analyze the keys.” To which Silver, sounding like a B-movie villain, responded that the historian was playing with a power he does not fully grasp. “Allan let’s just say the little tricks you’ve played with the Keys in the past will come back to haunt you!” he wrote on X. “The Keys shall be respected: they will outlast this little rivalry of ours.”
After the Democratic convention, other models showed Harris with a solid lead, while Silver’s made Trump a 64 percent favorite. Silver explained that it was because Harris did not get the typical postconvention polling bounce. A more technical reason Silver’s model gave Harris the odds it did at that moment is how it weights polls — leaning toward recency and not automatically filtering out methodologically fraught and partisan pollsters if their findings have a history of accuracy. “I don’t see how Kamala Harris could be winning in the polls but losing in Nate Silver’s model,” quipped Tom Bonier, a Democratic political strategist. “You’re wrong as always, Tom,” Silver replied to Bonier last month in another of their frequent back-and-forths on X.
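The recency weighting described here can be sketched as an exponentially decaying average, in which each poll's influence halves every fixed number of days. The half-life and the poll numbers below are hypothetical, not Silver's actual parameters:

```python
# Sketch of recency-weighted poll averaging: each poll's weight decays
# exponentially with its age. The 14-day half-life and the poll figures
# are hypothetical, not Silver's actual model parameters.

import math

def recency_weighted_average(polls: list[tuple[float, int]],
                             half_life_days: float = 14.0) -> float:
    """polls: (candidate_share_pct, days_before_today) pairs."""
    weights = [math.exp(-math.log(2) * age / half_life_days)
               for _, age in polls]
    return (sum(w * share for w, (share, _) in zip(weights, polls))
            / sum(weights))

# A fresh 46% poll counts far more than a month-old 50% poll,
# so the average lands much closer to 46 than to 48.
polls = [(46.0, 0), (50.0, 30)]
print(round(recency_weighted_average(polls), 1))
```

A real model layers further adjustments on top, such as pollster track record and sample size, but the decay curve is why a model tilted toward recency reacts faster, and more violently, to a missing convention bounce.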
These findings — at least temporarily — made Silver a hero on Fox News and with Elon Musk, who shared Silver’s findings with his nearly 200 million followers. Democrats accused Silver of being in the pocket of Peter Thiel, the billionaire Republican donor who has invested in the prediction marketplace Polymarket, which brought in Silver as an adviser. That Silver does talk openly about betting, and betting on political races specifically, does make for at least the appearance of a conflict of interest, but one that has little to do with his role advising a betting platform and more to do with the fact that he can move betting markets with a tweet.
If Silver is a lightning rod, he is more inclined to turn up the heat in the political hothouse than turn it down. In response to an academic paper that took aim at his theory of election-modeling, he wrote on X that he was “always happy to bet for real $$$ against boring academics who can’t model for shit and say stuff like that election forecasting is impossible.” (“Silver’s response to our paper was not thoughtful,” Justin Grimmer, a political scientist at Stanford and one of the authors of the paper, told me. “He didn’t seem to even have read it.”)
The fact that Silver or any political modeler is even necessary has mostly to do with how polling has changed in a hyperpartisan, highly technologized era. Arguably, modelers fill an important new market niche: they give consumers of political media a tool that purports to capture all the key new information about a race, in a world where there is always a lot of new information.
Not that Lichtman is swayed. “I don’t follow the polls at all. The polls are a snapshot; they are not a predictor,” he said. “They are abused as predictors by clerks like Nate Silver, who is clear that he does not have a theory of how politics works.”