I confess, though, that reading their remarks in context hasn't much cleared things up. The paper was rejected because it pointed out an inconsistency in an important recurring feature of climate models, which the reviewer considered a "false comparison" because rational people always understood that no consistency was to be expected on that point. Sorry, not helping.
Mark Steyn is on the case, as usual, with a fine piece about the "Clime Syndicate," entitled "The Descent of Mann." He has not, to put it mildly, reacted to the Michael Mann lawsuit by describing his adversary with more gentleness or caution.
The rejected paper put its finger on the sore spot: the unjustifiable assumption that CO2, a weak greenhouse gas, has suddenly become a gas that dominates even its much stronger cousin, water vapor, by way of what is often called "forcing" or "sensitivity," meaning the assumption that a positive feedback loop is driving greenhouse warming to spiral out of control. There is no physical explanation of why the feedback should be positive, when Nature abounds with far more examples of negative feedback loops tending to equilibrium. The positive sign is inferred entirely from historical data, then plugged into computer models to generate predictions. The problem is that the historical data don't particularly support a positive sign: at best they support widely varying estimates of the feedback's magnitude, and they can with equal rationality be read as supporting a negative feedback loop. Nor does the positive-feedback assumption produce a predictive model that matches observation, particularly during the last inconvenient 17 years, which have seen an inexplicable pause in the inevitable warming that is sure to be followed by the apocalypse.
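The difference the sign makes can be shown with a toy linear-feedback iteration. This is purely illustrative, not any published climate model; the function name `equilibrium_warming`, the feedback factor `f`, and all numbers are invented for the sketch:

```python
# Toy iteration: a fixed direct forcing dT0 is amplified (0 < f < 1),
# damped (f < 0), or driven into runaway growth (f >= 1) by a linear
# feedback factor f. Illustrative only; the numbers are made up.

def equilibrium_warming(dT0, f, steps=1000):
    """Iterate dT <- dT0 + f*dT; converges to dT0 / (1 - f) when |f| < 1."""
    dT = 0.0
    for _ in range(steps):
        dT = dT0 + f * dT
    return dT

base = 1.0  # hypothetical direct warming from CO2 alone, degrees C

print(equilibrium_warming(base, 0.5))   # positive feedback amplifies: 2.0
print(equilibrium_warming(base, -0.5))  # negative feedback damps: ~0.667
```

With a negative `f` the iteration settles below the direct forcing, which is the equilibrium-seeking behavior the paragraph says Nature usually exhibits; only with `f` at or above 1 does the loop actually "spiral out of control."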
Here are the peer reviewer's comments explaining why a paper pointing out problems with various models' feedback assumptions would be "unhelpful":
COMMENTS TO THE AUTHOR(S)
The manuscript . . . test[s] the consistency between three recent "assessments" of radiative forcing and climate sensitivity . . . . The study finds significant differences between the three assessments and also finds that the independent assessments of forcing and climate sensitivity within AR5 are not consistent if one assumes the simple energy balance model to be a perfect description of reality. . . . . The finding of differences between the three "assessments" and within the assessments . . . are reported as apparent inconsistencies. The paper does not make any significant attempt at explaining or understanding the differences, it rather puts out a very simplistic negative message giving at least the implicit impression of "errors" being made within and between these assessments . . . . Summarising, the simplistic comparison of [forcing ranges] . . ., combined with the statement they they are inconsistent is less then helpful, actually it is harmful as it opens the door for oversimplified claims of "errors" and worse from the climate sceptics media side. One cannot and should not simply interpret the IPCCs ranges for AR4 or 5 as confidence intervals or pdfs and hence they are not directly comparable to observation based intervals (as e.g. in Otto et al). In the same way that one cannot expect a nice fit between observational studies and the CMIP5 models.

Oh. Well, all right, then. The silly author expected a nice fit between observational studies and models. Can't be printing unfair criticism like that! Especially if he's some kind of wet-behind-the-ears arriviste or a well-known looney denialist:
For a decade, [the author] was director of the Max Planck Institute of Meteorology. For another decade, he was Director of the European Centre for Medium-Range Weather Forecasts. He's won the Descartes Prize, and a World Meteorological Organization prize for groundbreaking research in numerical weather prediction. Over the years, he and Michael Mann have collaborated on scientific conferences.

That's what peer review is for: to elevate the tone.
Well, it is an error to assume that truth and model were supposed to coincide.
I am still waiting for full disclosure of Mann's original data, together with disclosure of the manipulations used to derive the information used to support the AGW theories.
ReplyDeleteIn my experience, no researcher loses his original data.
Further, in my experience, any published data are supposed to be readily derivable from the paper itself.
The AGW proponents are not acting like scientists.
Valerie
If you were to get full disclosure, you'd get these...people...explaining the logic behind their cherry-picking of Siberian tree-ring data; they'd explain why their models should be taken seriously when those models can't simultaneously predict the past and the present; they'd explain the logic of substituting data from one weather station for those of another that had failed several years prior--a normal thing to do until the failed station can be repaired or replaced--except that the failed station is in the interior of Australia, and its substitute is 700 miles away on the north Australian coast.
They'd also explain how ice core data for CO2, that greenhouse gas harbinger of runaway global heating, taken from cores in Greenland and Antarctica lag global warming by 800-1500 years, thereby actually confirming the increasing health of the planet as life thrives and...exhales CO2. They'd also explain their logic in getting to that runaway heating from increasing CO2 via the cascade effect of increasing the amount of atmospheric water vapor, another greenhouse gas--except that while CO2 has risen, slightly, in the last 60 years (to a level still 1/5th-2/5th the levels of pre-Ice Age warmer times, off and on as far back as 15M years), atmospheric water vapor has fallen over the same period.
Etc.
Fat chance. There's too much money to be made in the Climate Pseudo-Science Charade.
Eric Hines
The AGW proponents are not acting like scientists.
This is my #1 fundamental problem with taking them seriously. The Scientific Method is about achieving repeatable observations of a tested hypothesis by any third party. If you hide your data or your procedures and demand that others "trust you," then you are not engaging in science. And a "peer review" process that allows only selected others to review your work is, once again, not peer review. These people are snake oil salesmen.
Nobody expects a model to match data exactly--but a sneaky, defensive reaction to questions about how to explain the divergences is not a good sign.
I'm not asking for an exact match. One that even comes approximately close would be appreciated. But theirs doesn't.
I'd be happy(er) with models that could predict plausibly, and that weren't so sensitive to such a broad range of inputs and input values.
Nearly every input of import (and there are too many for the models to be viable) sits on the cusp of a strange attractor: the values that make the models "work" are themselves far too sensitive to really small changes in value.
Eric Hines
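The sensitivity Mr. Hines describes can be sketched with the textbook logistic map, the standard classroom demonstration of sensitive dependence on initial conditions. This is not taken from any climate model; the function name `divergence` and the parameter values are invented for the illustration:

```python
# Two trajectories of the logistic map x -> r*x*(1-x), started a mere 1e-10
# apart. In a stable regime the gap stays tiny; near a chaotic regime it
# grows to order one, so the output tells you nothing about the input.
# Purely illustrative of sensitive dependence, not a climate model.

def divergence(x0, r, eps=1e-10, n=60):
    """Run two trajectories from x0 and x0 + eps; return the largest gap seen."""
    x, y = x0, x0 + eps
    gap = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        gap = max(gap, abs(x - y))
    return gap

print(divergence(0.2, 2.5))  # stable regime: the gap stays microscopic
print(divergence(0.2, 3.9))  # chaotic regime: the gap grows to order one
```

A model whose important inputs all live near the chaotic regime behaves like the second call: tiny measurement uncertainties are amplified until the prediction carries no information.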