Dedicated to the balanced discussion of global warming
Strap on your advanced math hats for this article. It is fairly deep, but I feel it is important for everyone to understand. The reason I am discussing this article is the very troublesome statement by the IPCC that the range of possible temperatures is so large (a spread of approximately 3 deg C between the low and high estimates), while other scientists quote an even larger range (some say 10 deg C between their high and low estimates). I fully understand that this is not an exact science (hence my problem), but we are not talking about the weather this weekend for my golf outing either – we are discussing an average temperature for the entire planet, a system with a massive amount of thermal inertia.
The quick summary of the article is that the various climate modeling techniques employed today do not have a very good mathematical basis and are inherently inaccurate at predicting temperature. We need better modeling techniques if we are going to have a reasonable basis for claiming that we understand whether global climate change is occurring.
The equilibrium climate sensitivity refers to the equilibrium change in average global surface air temperature following a unit change in the radiative forcing. This sensitivity, denoted here as λ, therefore has units of K/(W/m²).
Instead of the above definition of λ, the global climate sensitivity can also be expressed as the temperature change ΔTx2 following a doubling of the atmospheric CO2 content. Such an increase is equivalent to a radiative forcing of 3.8 W/m². Hence, ΔTx2 = 3.8 W/m² × λ.
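To make the relation between the two definitions concrete, here is a small Python sketch (using the 3.8 W/m² doubling forcing quoted in the text):

```python
# Convert between the two expressions of climate sensitivity used above.
F_2XCO2 = 3.8  # radiative forcing of doubled atmospheric CO2, in W/m^2

def delta_t_x2(lam):
    """Warming for doubled CO2: dT_x2 = 3.8 W/m^2 * lambda, lambda in K/(W/m^2)."""
    return F_2XCO2 * lam

def sensitivity(dt_x2):
    """Invert the relation: lambda = dT_x2 / 3.8 W/m^2."""
    return dt_x2 / F_2XCO2

# The IPCC range dT_x2 = 1.5 to 4.5 K corresponds to:
print(f"lambda = {sensitivity(1.5):.2f} to {sensitivity(4.5):.2f} K/(W/m^2)")
# prints: lambda = 0.39 to 1.18 K/(W/m^2)
```

So the quoted IPCC temperature range is equivalent to a factor-of-3 uncertainty in λ itself.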
The IPCC states (in fact, a dozen times in their third scientific report) that the climate sensitivity is likely to be in the range of ΔTx2 = 1.5 to 4.5 K. Below, we’ll try to understand where this number comes from and why it is so uncertain (at least for IPCC climatologists who rely on global circulation models).
Let us try to estimate the sensitivity of a Black Body Earth. That is, suppose Earth would have been a perfect absorber of visible and infrared radiation (and therefore also a perfect emitter of those wavelengths). What would its temperature be?
In equilibrium, the total absorbed flux equals the radiation emitted; hence,

σT⁴ = S,

where S = F₀/4 is the average flux received by a unit surface area on Earth, σ is the Stefan–Boltzmann constant, and F₀ is the solar constant.
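Numerically, assuming a typical solar constant of F₀ ≈ 1366 W/m² (an assumed value, not from the text), the black-body equilibrium works out as follows:

```python
# Black-body Earth: absorbed flux S equals emitted flux sigma * T^4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
F0 = 1366.0        # solar constant, W/m^2 (assumed typical value)

S = F0 / 4.0                 # average flux per unit surface area, W/m^2
T = (S / SIGMA) ** 0.25      # equilibrium temperature, K
lam = T / (4.0 * S)          # sensitivity dT/dS = T / (4S), K/(W/m^2)

# T comes out near 279 K and lambda near 0.2 K/(W/m^2)
print(f"S = {S:.0f} W/m^2, T = {T:.0f} K, lambda = {lam:.2f} K/(W/m^2)")
```

A black-body Earth would thus sit near 279 K (about 6 deg C), already in the right ballpark for the real planet.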
In reality, Earth is not a perfect absorber, nor is it a perfect emitter. Writing the albedo as a (the fraction of sunlight reflected) and the effective emissivity as ε, the equilibrium condition becomes (1 − a)S = εσT⁴. The sensitivity now obtained is given by:

λ₀ = T / (4(1 − a)S)
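Plugging in typical values (albedo a ≈ 0.3, and an effective emissivity ε ≈ 0.61 chosen here so that T lands near the observed 288 K; both numbers are illustrative assumptions, not taken from the original article):

```python
# Gray-body Earth: (1 - a) * S = eps * sigma * T^4.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S = 1366.0 / 4.0   # average incoming flux, W/m^2 (solar constant / 4)
A = 0.30           # albedo (assumed typical value)
EPS = 0.61         # effective emissivity (illustrative; tuned to give ~288 K)

absorbed = (1.0 - A) * S                  # ~240 W/m^2 actually absorbed
T = (absorbed / (EPS * SIGMA)) ** 0.25    # equilibrium temperature, K
lam0 = T / (4.0 * absorbed)               # no-feedback sensitivity, K/(W/m^2)

# T comes out near 288 K and lambda_0 near 0.30 K/(W/m^2)
print(f"T = {T:.0f} K, lambda_0 = {lam0:.2f} K/(W/m^2)")
```

With feedbacks ignored, this λ₀ ≈ 0.30 gives ΔTx2 ≈ 3.8 × 0.30 ≈ 1.1 K, well below the IPCC range; the gap is what the feedbacks discussed next must account for.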
For example, suppose we impose a positive radiative forcing (e.g., we double the atmospheric CO2 content). As a result, the global temperature will increase. A higher global temperature implies more water vapor in the atmosphere. But water vapor is an excellent greenhouse gas, so the added vapor acts as an additional positive forcing, which tends to increase the temperature further (i.e., “a positive feedback”).
Next, the higher water content implies that more clouds are formed. Clouds have two competing effects. They act as a blanket (i.e., they reduce the effective emissivity), which increases the temperature (more positive feedback). But clouds are also white, and thus increase the reflectivity of Earth (they increase the albedo), which tends to reduce the temperature (i.e., a negative feedback).
Other feedbacks include those of ice/albedo, dust, lapse rate, and even different feedbacks through the ecological system (e.g., see the Daisy World for a nice theoretical example).
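The standard way to fold such feedbacks into the sensitivity is the relation λ = λ₀ / (1 − f), where f is the summed feedback factor; a sketch (the numerical values below are illustrative, not from the article):

```python
# Feedbacks amplify (f > 0) or damp (f < 0) the no-feedback sensitivity
# via lambda = lambda_0 / (1 - f).
LAMBDA_0 = 0.30  # no-feedback sensitivity, K/(W/m^2) (gray-body estimate)
F_2XCO2 = 3.8    # forcing of doubled CO2, W/m^2

def sensitivity_with_feedback(f):
    """lambda = lambda_0 / (1 - f); note it diverges as f -> 1."""
    return LAMBDA_0 / (1.0 - f)

for f in (-0.5, 0.0, 0.5):  # net-negative, zero, net-positive feedback
    lam = sensitivity_with_feedback(f)
    print(f"f = {f:+.1f}: lambda = {lam:.2f} K/(W/m^2), "
          f"dT_x2 = {F_2XCO2 * lam:.2f} K")
```

Even a moderate net feedback factor therefore moves ΔTx2 by a factor of a few, which is why pinning down the individual feedbacks matters so much.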
The standard way to obtain the climate sensitivity is to carry out a computer simulation of the global climate, namely, to use a global circulation model (GCM). Specifically, the global climate can be simulated under two conditions and compared. For example, one can simulate the global climate under some baseline conditions and then simulate the climate when some additional radiative forcing is present (e.g., with a doubled atmospheric content of CO2). The results of the two simulations can then be used to study the effects of the applied radiative forcing.
There is, however, one HUGE drawback, because of which GCMs are not suited for predicting future change in the global temperature: the sensitivity obtained by running different GCMs can vary by more than a factor of 3 between climate models!
The problem with clouds is a real Achilles’ heel for GCMs. The reason is that cloud physics takes place on relatively small spatial and temporal scales (kilometers and minutes) and thus cannot be resolved by GCMs. This implies that clouds in GCMs are parameterized and dealt with empirically, that is, with recipes for how their average characteristics depend on the local temperature and water vapor content. Different recipes give different cloud-cover feedbacks and consequently different overall climate sensitivities.
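The consequence of this parameterization freedom can be seen with a toy calculation: holding the other feedbacks fixed and varying only an assumed cloud-feedback term over a plausible-looking range spreads ΔTx2 by roughly a factor of 3 (all numbers here are illustrative assumptions, not fitted to any actual GCM):

```python
# Toy model: vary only the cloud feedback term in lambda = lambda_0 / (1 - f).
LAMBDA_0 = 0.30   # no-feedback sensitivity, K/(W/m^2)
F_2XCO2 = 3.8     # forcing of doubled CO2, W/m^2
F_OTHER = 0.40    # assumed sum of non-cloud feedbacks (water vapor, ice, ...)

def dt_x2(f_cloud):
    """Warming for doubled CO2 given an assumed cloud feedback factor."""
    return F_2XCO2 * LAMBDA_0 / (1.0 - (F_OTHER + f_cloud))

# Two different "cloud recipes", one net-cooling and one net-warming:
low, high = dt_x2(-0.20), dt_x2(0.35)
print(f"dT_x2 spread: {low:.1f} K to {high:.1f} K")  # roughly the 1.5-4.5 K range
```

A modest disagreement about clouds is thus enough, on its own, to reproduce the entire quoted uncertainty range.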
The bottom line: GCMs cannot be used to predict future global warming, and this will remain the case until we better understand the different effects of clouds and learn how to quantify them.
- Earth’s climate sensitivity is not expected to be that of a “black body” because of different feedbacks known to exist in the climate system.
- Although global circulation models are excellent tools for studying some questions, they are very bad at predicting the global climate sensitivity because the cloud feedback is essentially unknown. It is the main reason why the sensitivity predicted this way carries an uncertainty of a factor of 3!
- Climate sensitivity can be estimated empirically. A relatively low value (one which corresponds to a net cancellation of the feedbacks) is obtained.
- Empirical Climate sensitivities obtained on different time scales are significantly more consistent with each other if the Cosmic Ray flux / Climate link is included. This is yet another indication that this link is real.
There is much more to read in the original article, where the math gets more complicated and more to the point. Please click through to fully understand this problem.