Dedicated to the balanced discussion of global warming
RealClimate – May 15, 2007
RealClimate is a great site. If you are not reading it regularly, you should be. Their articles tend to be much more technical than what I put here on a daily basis, and the frequent commenters tend to really know the science and chemistry behind the weather. I know that a lot of professionals consider RealClimate part of their weekly reading regimen. That being said, I don’t always agree with everything the editors post (what would be the fun in that?). They certainly have a theme and a message for their site, and one should keep that in perspective, but since I encourage everyone to have an open mind on these very technical and critical matters, reading a well-written site like RealClimate is part of the process.
Their latest post (as of this writing) is one that is particularly dear to my heart. My long-time readers will know that I frequently call for more research and better modeling techniques on the issue of global climate change and global warming. Their article analyzing Jim Hansen’s GISS model predictions is very well written and easily understandable by the average person. Please check it out.
In the article, the author describes a variety of model scenarios that were run and then compares them to real-world observations. I think public discussion on this subject is very important, and I encourage more scientists to do this outside the veil of the scientific community. My concerns about the accuracy of models are confirmed by this analysis: the three scenarios described in the first comparison are off by roughly 50%, 10%, and 25%. The second comparison deals with changes, so percentage error is probably not the best gauge, but to be consistent, that analysis shows errors of approximately 50%, 10+%, and 10+%.
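As a back-of-the-envelope illustration, this kind of percent-error comparison is easy to reproduce. This is only a sketch: the "observed" value and scenario forcings below are invented placeholders chosen to mirror the sizes of the quoted errors, not the actual GISS numbers.

```python
# Hedged sketch: the style of percent-error comparison discussed above.
# The observed and scenario forcing values are invented placeholders,
# NOT the real GISS forcings.

def percent_error(modeled, observed):
    """Signed percent error of a modeled value against an observation."""
    return 100.0 * (modeled - observed) / observed

observed_forcing = 1.0  # hypothetical, W/m^2
scenarios = {"A": 1.55, "B": 1.10, "C": 0.72}  # hypothetical, W/m^2

for name, value in scenarios.items():
    err = percent_error(value, observed_forcing)
    tag = "overestimate" if err > 0 else "underestimate"
    print(f"Scenario {name}: {err:+.0f}% {tag}")
```

A positive sign here means the scenario ran hotter (higher forcing) than observed, a negative sign means it ran cooler.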
It is interesting that even the observed measurement has some ambiguity in it, since there is no “standard” way of computing global averages. This is yet another reason models cannot be judged precisely: we don’t have a gold standard to measure them against.
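To make the "no standard way of averaging" point concrete, here is a minimal sketch showing how two common averaging conventions, a plain mean versus a cosine-of-latitude area weighting, produce different "global" numbers from the same data. The station values are invented for illustration.

```python
import math

# Invented (latitude in degrees, temperature anomaly in deg C) pairs --
# purely illustrative, not real station data.
stations = [(0.0, 0.4), (45.0, 0.6), (80.0, 1.2)]

# Convention 1: unweighted mean of all stations.
simple_mean = sum(t for _, t in stations) / len(stations)

# Convention 2: weight each station by cos(latitude), so stations near
# the poles (which represent less of the globe's surface area) count less.
weights = [math.cos(math.radians(lat)) for lat, _ in stations]
area_mean = sum(w * t for (_, t), w in zip(stations, weights)) / sum(weights)

print(f"unweighted: {simple_mean:.3f}  area-weighted: {area_mean:.3f}")
```

With the high-latitude station running warmest, the area-weighted convention reports a smaller "global" anomaly than the plain mean, even though both use exactly the same measurements.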
It is wonderful that these comparisons are being made, as they will only fuel the very important research into what needs to be done to efficiently address the climate, if anything needs to be done at all. I repeat my call for more effort and money to be spent in this area, so that if we need to spend billions (trillions?) of dollars, we are sure those dollars have the highest impact.
There is an often-repeated saying: “Close enough for government work.” In this case, I think we should hold the politicians to a higher standard, and we need models that can more accurately predict observed occurrences. Many of you will be familiar with another saying: “Close only counts in horseshoes and hand grenades.”
At Jim Hansen’s now famous congressional testimony given in the hot summer of 1988, he showed GISS model projections of continued global warming assuming further increases in human produced greenhouse gases. This was one of the earliest transient climate model experiments and so rightly gets a fair bit of attention when the reliability of model projections is discussed.
In the original 1988 paper, three different scenarios were used A, B, and C. … The details varied for each scenario, but the net effect of all the changes was that Scenario A assumed exponential growth in forcings, Scenario B was roughly a linear increase in forcings, and Scenario C was similar to B, but had close to constant forcings from 2000 onwards.
…the scenario closest to the observations is clearly Scenario B. The difference in scenario B compared to any of the variations is around 0.1 W/m2 – around a 10% overestimate (compared to > 50% overestimate for scenario A, and a > 25% underestimate for scenario C).
Given the uncertainties in the observed forcings, this is about as good as can be reasonably expected.
Firstly, what is the best estimate of the global mean surface air temperature anomaly? GISS produces two estimates – the met station index (which does not cover a lot of the oceans), and a land-ocean index (which uses satellite ocean temperature changes in addition to the met stations). The former is likely to overestimate the true global surface air temperature trend (since the oceans do not warm as fast as the land), while the latter may underestimate the true trend, since the air temperature over the ocean is predicted to rise at a slightly higher rate than the ocean temperature.
The bottom line? Scenario B is pretty close and certainly well within the error estimates of the real world changes. And if you factor in the 5 to 10% overestimate of the forcings in a simple way, Scenario B would be right in the middle of the observed trends.
But can we say that this proves the model is correct? Not quite. … Is this 20-year trend sufficient to determine whether the model sensitivity was too high? No. Given the noise level, a trend 75% as large would still be within the error bars of the observation (i.e. 0.18+/-0.05), assuming the transient trend would scale linearly. Maybe with another 10 years of data, this distinction will be possible. However, a model with a very low sensitivity, say 1 deg C, would have fallen well below the observed trends.
Hansen stated that this comparison was not sufficient for a ‘precise assessment’ of the model simulations and he is of course correct. However, that does not imply that no assessment can be made, or that stated errors in the projections (themselves erroneous) of 100 to 400% can’t be challenged.
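The “within the error bars” check in the excerpt can be sketched in a couple of lines. The 0.18 and 0.05 figures come straight from the quoted text (deg C per decade); the scaling factors are just examples of trends that do and do not fall inside the bars.

```python
# Sketch of the error-bar check quoted above. The observed trend and its
# uncertainty (0.18 +/- 0.05 deg C/decade) are taken from the excerpt.
OBS_TREND = 0.18
OBS_ERR = 0.05

def within_error_bars(model_trend, obs=OBS_TREND, err=OBS_ERR):
    """True if a modeled trend is consistent with the observation."""
    return abs(model_trend - obs) <= err

# A trend 75% as large as observed is still consistent...
print(within_error_bars(0.75 * OBS_TREND))  # prints True
# ...but a trend far below the observation (e.g. a quarter of it) is not.
print(within_error_bars(0.25 * OBS_TREND))  # prints False
```

This is exactly why the excerpt concludes that twenty years of data cannot yet rule out a modestly lower sensitivity, while a very low-sensitivity model is already excluded.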
This is only a small part of the original article and you should click through and read the original posting.
Did you know that you can have these articles emailed to you? Click on the “Subscribe to email” link in the upper right corner, fill out the details, and you are set. No one will see your email address and you won’t get more spam by doing this.