7 Comments
Jul 29 · Liked by Andre Cooper

Saw this reported a couple of months ago in the Postrider too (https://thepostrider.com/allan-lichtman-is-famous-for-correctly-predicting-the-2016-election-the-problem-he-didnt/). Glad to see more people catching on to the fact that he hasn't actually been right but still tells everyone he has.

Jul 26 · edited Jul 26

Good piece, but it still lacks some details for a complete argument. Whichever way you cut it, electoral or popular, the model predicted 9 of the 10 most recent elections correctly. Yes, the model should pick one way of determining the winner and stick with it, but it is still better than any other seemingly non-lucky system I've seen. In the last 10 elections (since the model was created), it never mispredicted an election in which the winner carried both the popular vote and the Electoral College.

In terms of its subjectivity, one example you bring up is charisma. Sure, this is subjective, and it would be hard to make a program or machine answer this question. However, consider the feeling and excitement that Obama created with his "change" and "believe" campaign: it *felt* like he generated real optimism for the nation. While this is a feeling, it is a feeling that enough people in the country shared (I didn't like him at the time but still felt it, though I was a young teenager). He was also the first Black president, and that alone drove many new people to vote; I would count some of that impact as charisma. And it is fine to "sneak in" polls, because it isn't sneaking them in! It's just using hard evidence to gauge whether people think he's charismatic. Aren't the types of poll questions the model is supposed to avoid ones like "Which candidate do you prefer?" and "Which candidate will you vote for?"

If you are looking for a theory that is like physics, good luck — but this is *far* from astrology.

I agree Obama was charismatic and that he did create a feeling of hope and excitement, but this does not meet the definition of Lichtman's charisma key, which is often misunderstood.

Reading this on November 22nd is very refreshing! Good work!

The fact that Nate Silver predicted 2016 wrong was enough for me. People were hanging on his polling as if it were the word of the Almighty, myself included back then. Now I know: we cannot trust polls or pollsters, no matter how scientific or credible they may seem.

author

While all we remember now is "Allan Lichtman said Trump would win" and "Nate Silver said Hillary would win", this obscures critical details. If you go back and read what Lichtman and Silver wrote in 2016 (their predictions, their rationales, and their post-election analyses of their own performance), Silver actually comes across far better, in my opinion.

First off, Allan Lichtman explicitly predicted Trump would win the popular vote, which he actually lost by a staggering margin.

Second, Silver spent the entire run-up to the election arguing with the other election forecasters, some of whom gave Clinton a 90%+ chance of winning, that they were severely underestimating the chances of a Trump victory. I wrote a piece on this (https://goodreason.substack.com/p/nate-silvers-finest-hour-part-1-of), and it really is shocking to see everyone attack Nate before the 2016 election for being too bullish on Trump, and to see him call them "so fucking idiotic and irresponsible" for underrating Trump. Then, partly because we lumped all the forecasters together, when Trump won, everyone attacked him for not being bullish enough on Trump.

But wait, you say, Silver did predict that Trump would lose, right? While he gave Trump way, way better odds of winning (30%) than any other polling-based forecaster, yes, he did say it was more likely that Clinton would win. But he was and is always emphatic that his goal is to estimate the probability of each candidate winning, not simply to provide a binary answer. I know, I know: to a lot of people, this sounds like a cop-out. But this is the standard statistical approach to prediction, and it's one that actually helps you quantify your uncertainty.
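
To make that concrete, here is a minimal sketch (mine, not Silver's actual methodology; the probabilities are just the rough figures above) of how probabilistic forecasts are scored with the Brier score, where 0 is a perfect score and lower is better:

```python
# Brier score: squared error between the forecast probability and the outcome.
# Illustrative only -- the probabilities are the rough figures discussed above.

def brier_score(forecast_prob, outcome):
    """forecast_prob: probability assigned to the event; outcome: 1 if it happened, else 0."""
    return (forecast_prob - outcome) ** 2

trump_won = 1  # the 2016 outcome

# Silver gave Trump roughly a 30% chance; some forecasters gave him under 10%.
print(brier_score(0.30, trump_won))  # ~0.49
print(brier_score(0.10, trump_won))  # ~0.81

# A hedged 30% forecast scores much better than a confident 10% forecast when
# the "unlikely" event happens -- that's the sense in which Silver's
# probabilistic call was stronger even though both forecasts favored Clinton.
```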

Lichtman, in contrast, hides his uncertainty along with any bad results. But he still has lots of uncertainty: his 2016 prediction came with a dozen caveats he could use as outs in case Trump lost. Where Nate Silver is upfront about his uncertainty and admits when his prediction was wrong, Lichtman is very uncertain beforehand, then acts as if he had total certainty after the fact, and straight up misrepresents the fact that he actually got 2016 wrong.

Perhaps you need to read his book for a better understanding of his model and the keys, rather than a slimmed-down version from Newsweek. When has Lichtman changed keys after an election? He switched to predicting just the winner after the 2016 election, stating that recent demographic changes give Democrats an advantage in the popular vote in close elections. As Lichtman states, the keys are predictors, not absolutes, and thus far nothing has outperformed them.
