What's funny is the idea that your average smart Yalie uses something better than rules of thumb to weigh essentially unquantifiable risks for political purposes. If you're of a rigorous turn of mind, you can get a pretty good handle on risks in repetitive situations that are susceptible to statistical analysis. You can't get anything like a rigorous handle on risks from models of the behavior of chaotic systems that have never met the gold standard of predictions confirmed by observations (and no fair back-fitting with previously unidentified critical factors). The best anyone could ever get out of an emerging science of prediction is a gut feel, an instinct for where to focus future research.
Richard Feynman analyzed the failure of the Challenger shuttle. He found that people had been sharpening their pencils to an absurd degree, fooling themselves into thinking they had pinpointed the risk to several decimal places. In fact, they were piling probability assumption on probability assumption, when no single assumption had a solid empirical basis:
It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask "What is the cause of management's fantastic faith in the machinery?" . . . There is nothing much so wrong with this as believing the answer! Uncertainties appear everywhere. . . . When using a mathematical model careful attention must be given to uncertainties in the model. . . .
There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting.

He concluded with one of my favorite statements, a truly reliable rule of thumb: "For a successful technology, reality must take precedence over public relations, for nature cannot be fooled."
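Feynman's throwaway arithmetic is worth checking for yourself. Here is a quick back-of-envelope sketch in Python; the figures are his, the variable names and the script are mine:

flights = 365 * 300            # one Shuttle launch per day for 300 years -> 109,500 flights
p_management = 1 / 100_000     # management's estimate of loss per flight
p_engineers = 1 / 100          # working engineers' estimate

print(f"Flights: {flights:,}")
print(f"Expected losses at 1 in 100,000: {flights * p_management:.1f}")  # about 1, i.e. "lose only one"
print(f"Expected losses at 1 in 100:     {flights * p_engineers:,.0f}")  # about 1,100

At the management figure you would expect to lose roughly one orbiter over the whole 300-year campaign; at the engineers' figure, over a thousand. That three-orders-of-magnitude gap is exactly the disagreement Feynman was pointing at.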
4 comments:
A good rule, although here as everywhere the heuristic is not perfectly reliable. Sometimes a good PR stunt can fool nature!
There doesn't seem to be a publicly available rule of thumb to distinguish the cases where a pronouncement is:
1) Solidly backed by experiment and theory
2) Within the bounds of a current consensus that works OK. It might be wrong, but that would be a big deal.
3) Badly scrambled: either this is speculative and not really in the consensus or the reporter garbled it. http://www.bbc.co.uk/news/science-environment-21499765
4) Within a current consensus that doesn't work worth beans though we pretend it does (some fads in psychiatry come to mind).
5) What a noble mind is here o'erthrown.
That whole last paragraph of the blockquote might as well have been talking about AGW.
There's a reason that the great prophets always fought mightily with self-doubt: it's that very doubt that inured them against taking their own word for how great they were.
douglas -- yes, the current model fad is making me tear my hair out. Is this the first time most of these folks have ever tried to use models?
James -- I loved the link at (3). Fortunately, if we start setting aside public money now, we've got a billion years for it to grow at interest so we'll be able to afford a solution.