Some of you may have followed
a long discussion at AVI's place on the validity of weather models. I learned quite a bit about what weather predictors think they are doing, and why their models are so bad. I'm not sure it's worth your time to read through it, but essentially they're confident enough in computer modeling, even though they can only estimate the initial conditions, that they think they can run computer models of the weather that are as accurate as computer models of gambling games. The probability model they're using is simplistic and non-Bayesian.
Because it's non-Bayesian, it's impossible to distinguish between 'the model worked, but the unlikely event occurred' and 'the model was full of crap.' If the event you predicted happened, the model was accurate. If an event the model said was unlikely happened instead, the model was still right -- it predicted some chance of something else happening, after all. The models are never wrong.
7 comments:
"Let's be clear" (with apologies to AVI)
What does it mean for a model to be wrong?
No model can predict the weather down to the level of a city block: temperature, wind speed, rain for every minute of every day. So to measure whether one model is better than another, you have to use some sort of averages, and some proxies. For example: what was the average wind and rainfall over 12 hours across the N county measuring stations?
Then you have to figure out some way of weighting the various features of the model: model A got the rainfall average within 25% and B within 45%, but B got the average temperature better than A--how do you compare them?
When you have some measure for distinguishing how much better one model did than another for a particular day, you then have to remember that much of the difference is essentially random. To really compare two models you need to compare ensembles of events--how well did this model do on 15 years of historical data compared to some other model?
TL;DR: all models are wrong/incomplete, but some are less wrong on average.
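The averaging-and-ensembles recipe above can be sketched with a proper scoring rule such as the Brier score (the mean squared error of the stated probability against the 0/1 outcome). This is only an illustration, not anyone's actual verification pipeline; the forecasts and outcomes below are invented.

```python
# A minimal sketch of comparing two probabilistic forecast models with the
# Brier score. Lower is better; 0 would be a perfect forecaster.
# All numbers here are made up for illustration.

def brier_score(forecasts, outcomes):
    """Average (p - outcome)^2 over a set of forecast/outcome pairs."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical daily "chance of rain" from two models, and what happened.
model_a = [0.9, 0.8, 0.2, 0.1, 0.7]
model_b = [0.6, 0.6, 0.4, 0.4, 0.6]
rained  = [1,   1,   0,   0,   1]   # 1 = it rained that day

score_a = brier_score(model_a, rained)
score_b = brier_score(model_b, rained)
print(f"Model A: {score_a:.3f}  Model B: {score_b:.3f}")
```

No single day falsifies either model, but over an ensemble of days the lower average score identifies the "less wrong" one, which is exactly the point about needing years of historical data.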
The Texas electric grid regulators, ERCOT, are scrambling to explain why no one thought it was worth the considerable expense of winter-proofing critical elements of the power generation system. One of the explanations is that global warming experts simultaneously advised them that they should be planning for a general trend of warmer weather and a general trend of colder weather. A model that explains everything explains nothing.
Weather forecasting is clearly a lot better now than it was a few decades ago, but relying on it as holy writ is silly, and that's just the relatively reliable short-term forecasts that take into account very large, stable patterns.
To figure out what the weather is, is easy. You just pull the shades and see what the sky looks like. If it's dark and gloomy, maybe rain or other precipitation. Cracking the front door will let you know whether you need a light sweater or a heavy coat.
If you do those things, you too can be a weather predictor.
James:
"No model can predict the weather down to the level of a city block: temperature, windspeed, rain for every minute of every day."
You can read the discussion if you want; it was, I think, more enlightening for me than for others. Yet as I said there, all I ask from it is whether or not I'll get wet if I go riding later today. In fact, the accuracy is poor enough that I never leave without my leathers strapped to the bike, in case it rains after all.
Also, as I admitted, I do live in a particularly challenging climate: an Alpine rain forest subject, alternately, to Gulf of Mexico influxes of warm wet weather and cold weather coming off the Great Plains and Canada. It rains between 90 and 140 inches a year here, so you could probably improve the accuracy of the model just by setting the percentage to 75% and leaving it there.
Still, when people want to plan our lives around one or another of these models, I look at things like that Weather Channel map. How good are these things, really? You can't repeat the experiment to see whether the percentages were right and a low-probability event just happened to come up. Sometimes you get things like this map, which looks to have been completely wrong. So how much should we depend on expert models?
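There is a standard partial answer to "you can't repeat the experiment": a calibration check. You can't rerun one day, but over many forecasts you can bucket days by the stated probability and ask whether the "75% chance of rain" days actually rained about 75% of the time. A rough sketch, with invented data:

```python
# Sketch of a forecast calibration check. Forecasts are grouped into
# probability buckets; for each bucket we compute the fraction of days
# it actually rained. All data below is invented for illustration.
from collections import defaultdict

def calibration(forecasts, outcomes, bucket_width=0.25):
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        # Snap each forecast probability to the nearest bucket center.
        buckets[round(p / bucket_width) * bucket_width].append(o)
    # Observed rain frequency per bucket.
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}

forecasts = [0.75, 0.75, 0.75, 0.75, 0.25, 0.25, 0.25, 0.25]
rained    = [1,    1,    1,    0,    0,    0,    1,    0]
print(calibration(forecasts, rained))
# {0.25: 0.25, 0.75: 0.75} -- a well-calibrated (if coarse) forecaster
```

A forecaster whose 75% days rain only half the time is miscalibrated, and you can say so without ever rerunning a single day. Of course this takes a long record of forecasts, which circles back to the data problem raised below.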
Mike:
I always liked Lewis Grizzard's concept of 'the weather dog.' "If the dog comes back wet, say it's going to rain. If he don't come back at all, it'll be windy."
Where we live, the upstate of SC, the weather can be kind of weird because of the confluence of the mountains and the foothills. Whatever the weather is in Atlanta, that's probably what we're going to get, but not always. We also have a pretty good delineation line for our weather. It's called the I-85 corridor. During winter, anything north or west(sic) of 85 might be frozen precipitation while anything south or east of 85 will be wet, generally speaking.
For instance, tomorrow we've got a 75% chance of mixed precipitation. The daytime temp is going to be high 30s to low 40s. Could be snow or sleet...or it could just be a really cold rain, or just a gloomy, cloudy day.
Grim, we're talking about two different classes of models. One is short term prediction, which is confounded by the day-to-day chaotic variations. Their predictions are getting somewhat better, but if you're on the boundary of some activity the prediction can vary from hour to hour--I've seen that; you have too.
You test the short-term prediction model against the history of the area. We've had good instrumentation for the past couple of decades, so you have lots of measurements to test your model against. So you have a _really_ hard problem, but also lots of data to test it on.
Long term predictions are theoretically easier, since you can average out the turbulent day-to-day variations and just look for the longer fluctuations and correlations.
However, we don't have multiple centuries of measurements to test the models against. Go back more than a few decades and you don't have (e.g.) upper altitude temperature measurements at all, and nothing whatever that can serve as a proxy measurement for that. You know what the rainfall was in Europe, maybe, but not in Africa--where's the "global"? In short, you have a potentially much easier problem (but still _not_ trivial!), but almost no data to work with.
And yes, you need centuries of data to be able to understand trends.
And I suppose there's a third class in between, where you try to beat the Old Farmer's Almanack for the general trends for the next year. I don't think they've got a good handle on El Niño/La Niña yet.