Models and their perils

A good article on RedState on polling methodology, with comparisons to modeling of climate, housing markets, and baseball:

Consider an argument Michael Lewis makes in his book The Big Short: nearly everybody involved in the mortgage-backed securities market (buy-side, sell-side, ratings agencies, regulators) bought into mathematical models that valued MBS as low-risk, based on historical data that didn't go back far enough to capture a collapse in housing prices. And it was precisely such a collapse that destroyed all the assumptions on which the models rested. But the people who saw the collapse coming weren't the people who built better models; they were the people who questioned the assumptions in the existing models and figured out how heavily those models depended on their unquestioned assumptions.

UPDATE: I was mistaken in one of my comments below about the "Unskewed Polls" methodology. They do attempt to conform poll responses to a turnout model; they just use an unusual model.
6 comments:
I yield to none in my respect for Mr. Taleb's work, so let us remember his warning. It isn't that a better model is needed; it's that there are areas where modeling is inherently unreliable and will remain so.
The question about the polls isn't whether they can be wrong, but why anyone thinks they won't be. They are giving us results that are highly suspect, except that they can be explained by the turnout assumptions built into the models. I would hazard a guess that the models are bad this year. I would expect very high turnout across the board, but I have no idea if it'll end up D+8 or R+2.
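Since the disagreement here comes down to turnout assumptions, a minimal sketch may help show what "conforming poll responses to a turnout model" involves. Everything in it, party shares, candidate labels, and the D+8 and R+2 electorates, is invented for illustration; this is not the actual methodology of any pollster.

```python
# Hypothetical sketch: reweight respondents so the sample's party mix
# matches an assumed electorate. All numbers and labels are invented.

def reweight(responses, turnout_model):
    """Return weighted candidate shares under a given turnout model.

    responses: list of (party, candidate) pairs from the raw sample.
    turnout_model: dict mapping party -> assumed share of the electorate.
    """
    n = len(responses)
    # Observed share of each party in the raw sample.
    sample_share = {}
    for party, _ in responses:
        sample_share[party] = sample_share.get(party, 0.0) + 1.0 / n
    # Each respondent gets weight (assumed share / observed share) for
    # his party, so the weighted sample matches the turnout model.
    weight = {p: turnout_model[p] / sample_share[p] for p in sample_share}
    shares = {}
    for party, candidate in responses:
        shares[candidate] = shares.get(candidate, 0.0) + weight[party] / n
    return shares

# A raw sample that happens to be D+20: 6 Democrats for candidate A,
# 4 Republicans for candidate B.
raw = [("D", "A")] * 6 + [("R", "B")] * 4

# The identical responses produce different toplines under different
# turnout assumptions, which is why the assumed model matters so much.
print(reweight(raw, {"D": 0.54, "R": 0.46}))  # a D+8 electorate
print(reweight(raw, {"D": 0.49, "R": 0.51}))  # an R+2 electorate
```

The point of the toy: the raw responses never change; only the assumed electorate does, and that assumption alone moves the topline from A+8 to B+2.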
Sorry, that's Mr. Nassim Nicholas Taleb, if I was unclear. His critique of modeling is very solid.
I had an argument with someone on Climate Change based on this very premise. I stated that I could remember in the 70's when all the climate "experts" were forecasting a global ice age (brought on by pollution, of course), and that we needed to stop burning fossil fuels RIGHT NOW or we would all freeze to death (and starve, as the breadbasket would ice over and there would be no food for anyone). And now, they're saying "oh, those models were wrong, but they've got it right now!" And his position was "Of course they're right now, computers are better than in the 70's." Which is completely missing the point. They were SURE their models were right in the 70's, but were wrong. To assume they are right now is just more of the same.
Yes, the computers aren't the point. Our models are more complex now, but we haven't made any progress on the general human problem of failing to question inconvenient assumptions. No model is better than its built-in assumptions.
My husband's work used to be modeling temperatures in and around the space station. Management always resisted getting decent data from actual flights, but the modeling engineers knew that their models would be useless without that check. You can build clever models that will match any historical temperature distributions you care to start with, but they won't help the astronauts avoid burns or freezing hands unless the models conform to reality well enough to make accurate predictions. The global warming climate models have done a horrible job of prediction, no matter how good they may be at back-fitting a historical curve.
My question for any modeler is this: if you go back and put in the actual data, does the model produce the actual results (market, meteorology, animal distribution and reproduction)? No? Why not? The answers I've heard vary from "because we never thought X would happen" to "don't worry about it. Just trust us." Sorry, but this Red is not going to put her money into "Just trust me."
LittleRed1
That's an excellent check for a model. It should be a standard test, but hubris and greed don't allow the patience for preliminary trials before a model is put to use.
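The check described in the comments above, feed the model the actual inputs and compare its output to the actual outcomes, can be sketched in a few lines. The "model" here is a deliberately naive trend fit, and all the data are invented: it back-fits its calibration period perfectly, then misses badly once the real series stops following the trend.

```python
# Hypothetical sketch of the hindcast check. The model and data are
# invented; the point is that fitting history is not predicting.

def fit_trend(years, values):
    """Ordinary least-squares line through (years, values)."""
    n = len(years)
    mx = sum(years) / n
    my = sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(years, values))
             / sum((x - mx) ** 2 for x in years))
    return slope, my - slope * mx

def hindcast_error(model, years, actual):
    """Largest gap between the model's output and what actually happened."""
    slope, intercept = model
    return max(abs(slope * x + intercept - a) for x, a in zip(years, actual))

# Calibrate on a decade of smoothly rising (invented) data...
train_years = list(range(1990, 2000))
train_vals = [0.3 + 0.02 * (y - 1990) for y in train_years]

# ...then test against a later decade where the series plateaus,
# data the model never saw.
test_years = list(range(2000, 2010))
test_vals = [0.5] * 10

model = fit_trend(train_years, train_vals)
print(hindcast_error(model, train_years, train_vals))  # near zero: back-fit
print(hindcast_error(model, test_years, test_vals))    # large: prediction fails
```

Passing the first check (reproducing the data it was tuned on) is cheap; passing the second (predicting data it never saw) is the test that matters, and it's the one the modelers in the thread are accused of skipping.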