There’s another comment by Andy Lacis at Climate Etc., and just like the original it’s deeply under-appreciated by the residents. Indeed it would have been unappreciated by me too, because I don’t read her posts, much less wade into the comments, unless someone draws my attention to them.
Before we go on to AL’s wise words, let’s read some very silly ones: “Climate and weather model share the same underlying mathematical dynamic. So models are undoubtedly chaotic” – this is the kind of stuff that JC chooses to highlight at the top of her posts. You’d be better off with my Oh dear, oh dear, oh dear: chaos, weather and climate confuses denialists and the links therein; I remain very fond of Butterflies: notes for a post.
But, without more ado, here is Andy Lacis:
What is it that determines the terrestrial climate and how it changes?
Needless to say, the terrestrial climate is the result of complex interactions between the ocean, atmosphere, and biosphere via atmospheric fluid dynamics, thermodynamics, biogeochemistry, orbital geometry, and radiative transfer – all processes being driven ultimately by the incident solar energy.
All of these physical processes are modeled explicitly in time-stepping fashion in current climate GCMs, using a typical spatial resolution of about 1 to 5 degrees in lat-lon, 20 to 50 layers of vertical resolution, and 10 min to 1 hr time resolution. This generates a great deal of time-evolving change in the model-generated wind, temperature, cloud, and humidity fields, accumulated typically in the form of monthly-mean maps of these fields – thus constituting the model-generated climate, which can be directly compared to similar quantities obtained from global satellite observations as a direct test of climate model worthiness.
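To make that structure concrete, here is a toy sketch of such a time-stepping loop with monthly-mean accumulation (purely illustrative: the grid sizes and field names are invented, and the dynamics update is an empty placeholder, not anything from an actual GCM):

```python
# Toy sketch of a time-stepping loop that accumulates monthly-mean fields.
import numpy as np

nlat, nlon, nlev = 46, 72, 20     # roughly a 4 x 5 degree grid with 20 layers (illustrative)
dt_hours = 1                      # 1-hour time step
fields = {name: np.zeros((nlev, nlat, nlon))
          for name in ("wind_u", "temperature", "humidity", "cloud")}

def step_dynamics(fields, dt_hours):
    """Placeholder for the dynamics/thermodynamics/radiation update of one time step."""
    return fields                 # a real GCM would update every prognostic field here

monthly_sum = {k: np.zeros_like(v) for k, v in fields.items()}
steps_per_month = 30 * 24 // dt_hours
for step in range(steps_per_month):
    fields = step_dynamics(fields, dt_hours)
    for k in fields:
        monthly_sum[k] += fields[k]

monthly_mean = {k: v / steps_per_month for k, v in monthly_sum.items()}
# these monthly-mean maps are what get compared against satellite observations
```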
Although a climate modeling simulation may typically begin from an initial reference model atmosphere (similar to initiating an initial value weather forecast calculation), the essence of a climate modeling simulation is that of a typical boundary value problem in physics, i.e., the initial starting value does not really matter. For equilibrium sensitivity evaluations, the objective is to reach the equilibrium point toward which the model is being forced, independent of the initial conditions. Sometimes model runs are initiated at different points in time to generate ensemble averages, averaging out natural variability effects (which will have different phases for differently initiated runs). Climate models are also used to simulate transient climate change, which then resembles a hybrid: an initial value weather-type model run, but with changing boundary value forcings.
The input solar energy to the climate system has been accurately measured over several decades. Its annual-mean value is 1360.8 W/m2, with an 11-year sunspot cycle variability of about 1 W/m2 (Kopp and Lean, 2011). This puts the global-mean incident solar energy at 340.2 W/m2. However, what actually defines the SW forcing (for a 0.3 global albedo) is the amount of solar energy that is absorbed by the climate system. This, for the sake of this discussion, we will take to be 240 W/m2.
The actual value could be 239 W/m2, 242 W/m2, or 235 W/m2. The precise value does not matter that much because the climate system’s response is “smoothly continuous” to this SW forcing. While that may be a postulate in need of proof, suffice it here to say that the climate system does not respond like the Mandelbrot fractal set where a small parameter shift in some particular direction might encounter multiple singularity-type responses. The 240 W/m2 is a round number, and is consistent with the accuracy limitations of the ERBE measured value. With SW = 242 W/m2, the climate system would be slightly warmer than with 240 W/m2 (and slightly cooler if SW were 235 W/m2).
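As a quick back-of-envelope check of these numbers, using only the measured irradiance and the 0.3 albedo quoted above:

```python
# Rough check of the global-mean incident and absorbed solar energy.
S0 = 1360.8              # measured total solar irradiance, W/m2 (Kopp and Lean, 2011)
incident = S0 / 4        # spread over the whole sphere: 340.2 W/m2
albedo = 0.3             # global albedo
absorbed = incident * (1 - albedo)
print(incident, absorbed)   # 340.2, ~238.1 W/m2 -- hence the round 240 W/m2 used above
```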
The Earth is never in precise SW-LW energy balance equilibrium, but it is always striving to get there. Current climate models exhibit some of this real-world behavior. When models are run for thousands of years with fixed external forcing, they exhibit a natural variability over a broad range of time scales relative to some reference point that can be identified as the global energy balance point of equilibrium. Such behavior is not found in simple 1-D models, which can be iterated to energy balance equilibrium to however many decimals are required.
In energy balance equilibrium, the thermal energy emitted to space by Earth would be LW = 240 W/m2. If the Earth’s atmosphere were absent, or totally transparent (but still with the 0.3 global albedo), the surface temperature of the Earth would warm in response to the 240 W/m2 SW forcing until it reached a temperature of about 255 K (with LW = 240 W/m2), at which point Earth would be in SW-LW energy balance equilibrium. But the actual global-mean surface temperature of Earth is about 288 K, and the thermal radiation emitted upward by the ground is 390 W/m2. This global-mean surface temperature difference of 33 K, and the corresponding LW flux difference of 150 W/m2 between the ground surface and the top of the atmosphere, is a measure of the terrestrial greenhouse effect.
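These temperatures and fluxes can be checked with the Stefan-Boltzmann law, F = σT⁴ (a rough sketch, assuming the standard value of the constant and treating the emitting surfaces as black bodies):

```python
# Check the 255 K / 288 K / 390 W/m2 / 150 W/m2 numbers with F = sigma * T**4.
sigma = 5.67e-8                       # Stefan-Boltzmann constant, W m-2 K-4
T_eff = (240.0 / sigma) ** 0.25       # ~255 K: black-body temperature emitting 240 W/m2
F_surface = sigma * 288.0 ** 4        # ~390 W/m2: emission from a 288 K surface
greenhouse = F_surface - 240.0        # ~150 W/m2: the greenhouse effect as defined above
print(T_eff, F_surface, greenhouse)
```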
Note that the global SW-LW energy balance at the top of the atmosphere, and the 150 W/m2 greenhouse effect, are described and established completely by radiative means. There is absolutely ZERO convective energy going out to space. Likewise, there is ZERO convective energy represented in the 150 W/m2 greenhouse number. Thus it is radiative transfer modeling that completely describes the SW and LW fluxes as well as the 150 W/m2 strength of the terrestrial greenhouse effect.
So, if radiation accounts for all that, where then do the atmospheric dynamics effects come in? Atmospheric dynamics effects are key to establishing the atmospheric absorber-temperature structure that is used by the radiation model to calculate the SW-LW fluxes and the greenhouse effect. If there were no atmospheric dynamics to establish the (convective/advective) atmospheric temperature profile, radiative energy equilibrium could be calculated for the existing atmospheric absorber distribution. But the global-mean greenhouse effect for such a radiative equilibrium atmosphere would be about 66 K instead of the present radiative/convective value of 33 K. This clearly demonstrates why accurate rendering of both the radiative and dynamic climate system processes is so essential.
The key point of all this is to note that the dynamic processes of the climate system are many orders of magnitude slower than the radiative processes. Thus the radiative calculations can be performed on an effectively static temperature-absorber structure without any loss of generality, totally independent of whatever the atmospheric dynamics may be doing.
Assuming for a moment that the radiative calculations can be performed with 100% accuracy, the GCM calculated response to a radiative forcing (say, doubled CO2) should then be representative of the Earth’s climate system response (to the extent that the atmospheric dynamics of the GCM simulation can produce a GCM-generated climate that closely resembles that of the Earth).
Given the “smoothly continuous” response of the climate system, whether SW = 242 W/m2, or 235 W/m2, or there happen to be small to moderate differences in the GCM-generated cloud fraction, cloud heights, or water vapor distribution, relative to Earth’s current climate distributions, the calculated response to doubled CO2 should then closely resemble that of the Earth’s climate system response.
The radiative part of the climate system is a much easier thing to model than the dynamic processes – so much so that it is not preposterous to think in terms of 100% accuracy for computing the radiative heating and cooling effects for a specified temperature-absorber distribution. To this end, Mie scattering theory is an exact theory for calculating radiation scattering by spherical cloud droplets, and similarly, line-by-line calculations using the comprehensive HITRAN absorption line database provide the means for calculating gaseous absorption by atmospheric gases with a great deal of precision and accuracy. While line-by-line calculations are numerically too intensive to be included in GCM radiation models, the correlated k-distribution treatment of gaseous absorption can closely approach the line-by-line accuracy, as illustrated in my 2013 Tellus B paper http://pubs.giss.nasa.gov/abs/la06400p.html
Clearly, most of the climate modeling uncertainties reside in our inability to model the atmospheric and ocean dynamical processes with sufficient accuracy. Those aspects of modeling climate change that depend for the most part on radiative processes are going to be far more certain than those that are associated more directly with atmospheric dynamics, and especially ocean dynamics.
As a result, the radiative effects arising from the different climate forcings, their effect on the strength of the terrestrial greenhouse effect, and the attribution of the relative strengths of climate forcings and feedbacks, are aspects of global climate change that are mostly radiative in nature. Accordingly, these quantities have a significant robustness that stems from very basic physics with little dependence on arbitrary assumptions or parameterizations.
Regional climate changes, on the other hand, are very dependent on the horizontal energy transports by dynamical processes which must necessarily include significant parameterizations to account for the unresolved sub-grid eddy transport contributions. For the longer time-scale variability, current ocean models are only barely able to simulate some El Nino-type variability, with no skill for decadal-scale variability. But note that this form of natural variability consists primarily of oscillations about a zero reference point, and thus does not produce a bias to the steadily increasing global warming component. Also, the radiative effects listed above become more robust in the form of global averages because the horizontal energy transports must by definition average to zero globally, thus averaging out any regional differences associated with differences in regional climate change.
The one really big advantage in modeling radiative process effects over dynamic processes is the feasibility of attribution. Although the modeling of radiative transfer effects is straightforward and simple in concept, it is not so simple as to be performed on the proverbial back of an envelope – a capable computer is required.
As described in Table 2 of my 2013 Tellus B paper, attribution analysis was performed on this nominal 150 W/m2 measure of the atmospheric greenhouse effect. Where does the 150 W/m2 actually come from? It is not simply the flux fraction that gets absorbed by the atmosphere – that is an oversimplified and erroneous assumption – rather, this 150 W/m2 is a combination of layer-by-layer absorption and emission that occurs throughout the atmosphere. If all absorbers are removed from the atmosphere, the greenhouse effect goes to zero. If all contributors are in the atmosphere, it is 150 W/m2. We show explicitly what happens to the LW flux difference when the absorbers are inserted into the atmosphere one-by-one, or removed one-by-one.
Those results are summarized in Table 2 of the Tellus B paper. They show that of the total terrestrial greenhouse effect, water vapor accounts for about 50% of the effect; clouds contribute 25%; CO2 accounts for about 20%; and the other minor greenhouse gases like CH4, N2O, O3, and CFCs account for the remaining 5%. Now we apply a little physical reasoning. Water vapor and clouds are FEEDBACK effects – meaning they can’t stay in the atmosphere under their own power; they condense and precipitate out; their equilibrium concentration in the atmosphere is strongly limited by the Clausius-Clapeyron relation. (See Section 3 of my Tellus B paper; water vapor and clouds are fast-acting feedbacks; if perturbed, they return to their equilibrium distribution in only a couple of weeks.)
CO2 and minor greenhouse gases are all non-condensing at current climate temperatures – meaning, once you stick them into the atmosphere, they are not going to condense and precipitate out; they are going to stay in the atmosphere and perform their radiative effects essentially forever, or until atmospheric chemistry finally does them in. These non-condensing gases constitute the radiative FORCINGS of the climate system.
The definition of climate sensitivity is f = (forcing+feedback)/forcing. What this means is that the climate sensitivity derived just from the current climate temperature-absorber structure of the atmosphere is: f = (0.25 + 0.75)/0.25, or f = 4. Given the Hansen et al. no-feedback global surface temperature change of 1.2 K for doubled CO2, this analysis gives a “structural” climate feedback sensitivity of 4.8 K for doubled CO2.
A bit too high? But note that this is not a “perturbation” type of feedback sensitivity evaluation, so it is completely missing the negative lapse rate feedback (which is about 1.2 K according to Hansen et al., 1984). This gets us to 3.6 K for doubled CO2. There is still a further small reduction (for which I don’t have a precise value at this time) that is needed to account for the fact that when all of the non-condensing greenhouse gases are removed, water vapor doesn’t actually go all the way to zero, being supported at about a 10% value relative to current climate by the Clausius-Clapeyron relation.
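Spelling out that arithmetic (a sketch using the Table 2 fractions and the Hansen et al. numbers quoted above; the further Clausius-Clapeyron correction is left out because no precise value is given):

```python
# Back-of-envelope version of the sensitivity argument above.
forcing_fraction  = 0.20 + 0.05   # CO2 + minor non-condensing gases (Table 2)
feedback_fraction = 0.50 + 0.25   # water vapor + clouds (Table 2)
f = (forcing_fraction + feedback_fraction) / forcing_fraction   # = 4
dT_no_feedback = 1.2              # K for doubled CO2 (Hansen et al., 1984)
structural = f * dT_no_feedback   # 4.8 K "structural" sensitivity
lapse_rate_feedback = 1.2         # K, negative lapse rate feedback (Hansen et al., 1984)
print(structural - lapse_rate_feedback)   # 3.6 K, before the further small correction
```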
The net result of these adjustments is that a climate feedback sensitivity of about 3 K for doubled CO2 is obtained just from the current climate atmospheric structure, which is in good agreement with paleo-climate reconstructions and direct climate GCM modeling results. This implies that the 1 to 2 K climate sensitivity inferred for doubled CO2 in some studies is not going to be self-consistent with the current climate temperature-absorber distribution.
These deductions, based on the radiative transfer analysis performed on the temperature-absorber structure of the atmosphere, are fairly robust and self-consistent. Their principal certainty/uncertainty is directly constrained by how well the GCM-generated atmospheric structure resembles the real world, keeping in mind the “smooth continuity” that is expected for the climate system response for both the real world and climate GCMs.
It is also clear from this analysis that atmospheric CO2 (being the principal non-condensing gas in the atmosphere) does indeed perform as the LW climate control knob. That is clearly demonstrated in the rather complicated Figure 13 of my 2013 Tellus B paper, where the equilibrium response of the climate system is evaluated for different concentrations of atmospheric CO2 ranging from 1/8x (snowball Earth) to 256x (uninhabitable hot-house).
What stands out in Figure 13 is that it is the exponential nature of the Clausius-Clapeyron relation’s dependence on temperature that makes water vapor, driven by atmospheric CO2, a very formidable cause-and-effect combination that could take the terrestrial climate to extremes that we would rather not think about. The cloud feedback effect does not appear to be a major player, since the cloud SW albedo effect is largely counteracted by the cloud LW greenhouse effect.
Humans have had the means at hand to self-destruct for decades. Fortunately, they have refrained from dropping H-bombs to quell the pesky brushfires that frequently erupt. Now, by burning all of the available carbon resources in the coming decades, humans would appear to have another option available to achieve their self-destruction.
Comment by me
On the “initial value problem” vs “boundary value problem” issue: there’s a question of terminology, and of discipline. Technically, GCMs (weather or climate) are both integrated forwards from an initial state, and in that sense are IVPs. But as said, the climate of the GCM doesn’t actually depend on the initial state[3], and so it’s natural to think of it as a boundary value problem, with the various forcings as the boundaries.
That was a sensible comment. More amusing is this from JC’s: Michael Larkin: “Would some kind soul please explain, in layman’s terms, what “initial value” and “boundary value” problems are?” curryja: Hi Michael, try these links
http://en.wikipedia.org/wiki/Initial_value_problem http://en.wikipedia.org/wiki/Boundary_value_problem… Michael Larkin: “Thank you, Dr. Curry. I’ve checked those out, but found them a bit inscrutable because they immediately leap in with talk of differential equations…”. Ah, you get quality commentators at JC’s.
3. Well, that’s certainly true for atmosphere-only GCMs. For atmosphere-ocean GCMs it’s clearly not true over the 100-year timescale, as shown by the “cold start” problem; but it’s true-ish.