Basic Radiation Calculations
The foundation of any calculation of the greenhouse effect was a description
of how radiation and heat move through a slice of the atmosphere. At first
this foundation was so shaky that nobody could trust the results. With
the coming of digital computers and better data, scientists gradually
worked through the intricate technical problems. A rough idea was available
by the mid 1960s, and by the late 1970s, the calculations looked solid
for idealized cases. Much remained to be done to account for all
the important real-world factors, especially the physics of clouds. (This
genre of one-dimensional and two-dimensional models lay between the rudimentary
qualitative models covered in the essays on Simple
Models of Climate and the full three-dimensional General
Circulation Models of the Atmosphere. See these for developments after
ca. 1980.) Warning: this is the most technical of all the essays.
"No branch of atmospheric physics is more
difficult than that dealing with radiation. This is not because we do
not know the laws of radiation, but because of the difficulty of applying
them to gases." G.C. Simpson(1)
Simple numerical models helped scientists feel out the basic physics
of climate. The first steps were "energy budget" (or "energy balance")
models of only one dimension. You pretended that the atmosphere was
the same everywhere around the planet, and looked at how things changed
with altitude alone. That meant calculating the flow of heat and radiation
up and down through a vertical column that rose from the ground to
the top of the atmosphere. You started with the radiation, tracking
light and heat rays layer by layer as gas molecules scattered or absorbed
them. This was the problem of "radiative transfer," an elegant and
difficult branch of theoretical physics.
You could avoid these difficulties (while encountering different problems) by taking another
approach. You ignored the differences with height, and used the average absorption and
scattering of radiation in the column. The result could be a "zero-dimensional" calculation for the
Earth as a whole, or a calculation that varied in other dimensions, for example latitude.
Such studies had begun in the 19th century,
starting with crude calculations for the energy balance of the whole
planet, as if it were a rock hanging in front of a fire. A more sophisticated
pioneer was Samuel P. Langley, who in the summer of 1881 climbed Mount
Wilson in California, measuring the fall of temperature as the air
got thinner. He inferred that without any air at all, the Earth's
temperature would be lower still, a direct demonstration of
the greenhouse effect. Langley followed up with calculations indicating
that if the atmosphere did not absorb particular kinds of radiation,
the ground-level temperature would drop well below freezing.(2) Subsequent workers crafted increasingly refined calculations.
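The zero-dimensional balance that such work refined can be written out in a few lines. The sketch below is a minimal illustration using modern textbook values (not Langley's own numbers): it balances absorbed sunlight against blackbody emission, and shows that without an infrared-absorbing atmosphere the surface would indeed sit well below freezing.

```python
# A minimal sketch of the "rock in front of a fire" energy balance,
# using modern textbook values (not Langley's own numbers).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W m^-2
ALBEDO = 0.30      # fraction of sunlight reflected straight back to space

# Sunlight absorbed, averaged over the sphere: the planet intercepts
# pi*R^2 of sunlight but radiates from its whole 4*pi*R^2 surface.
absorbed = S0 * (1.0 - ALBEDO) / 4.0

# Temperature of a bare rock that radiates exactly what it absorbs.
T_bare = (absorbed / SIGMA) ** 0.25
print(f"Bare-rock temperature: {T_bare:.0f} K")  # ~255 K, well below freezing

# Crudest greenhouse correction: one atmospheric layer that absorbs all
# the infrared from below forces the surface to radiate twice as much.
T_surface = T_bare * 2.0 ** 0.25
print(f"One-layer greenhouse surface: {T_surface:.0f} K")  # ~303 K
```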
<=Simple models
In 1896 Svante Arrhenius went a step farther, grinding out a
numerical computation of the radiation transfer for atmospheres with differing amounts of carbon
dioxide gas (CO2). He did the mathematics not for just one globally averaged
column but for a set of columns, each representing the average for a zone of latitude. This
two-dimensional or "zonal" model cost Arrhenius a vast amount of arithmetical labor, indeed far more than the data could justify. The data on absorption of radiation (from Langley) were sketchy, and
Arrhenius's theory left out some essential factors. On such a shaky foundation, no computation
could give more than a crude hint of how changes in the amount of a gas might
affect climate.
<=>Simple models
The main challenge was to calculate the radiation going in and
out of the planet as a whole. That would tell you the most basic physical
input to the climate system: the planet's radiation and heat balance.
This was such a tough task that all by itself it became a minor field
of research, tackled by scientist after scientist with limited success.
Through the first half of the 20th century, workers refined the one-dimensional
and two-dimensional calculations. To figure the Earth's radiation
budget they needed to fix in detail how sunlight heated each layer
of the atmosphere, how this energy moved among the layers or down
to warm the surface, and how the heat energy that was radiated back
up from the surface escaped into space. Different workers introduced a variety of equations and mathematical techniques to deal with these problems, all primitive.(3*)
|
A landmark was work
by George Simpson. He was the first to recognize that it was necessary
to take into account, in detail, how water vapor absorbed or transmitted
radiation in different parts of the spectrum. Moving from a one-dimensional
model into two dimensions, Simpson also calculated how the winds carry
energy from the sun-warmed tropics to the poles, not only as the heat
in the air itself but also as heat energy locked up in water vapor.(4*)
Other scientists found that if they took into account how air movements
conveyed heat up and down, even a crude one-dimensional model would
give fairly realistic figures for the variation of temperature with
height in the atmosphere. E.O. Hulburt worked out a pioneering example
of such a "radiative-convective" model in 1931. His one-dimensional
calculations supported Arrhenius's rough estimate that doubling or
halving the amount of CO2 in the atmosphere would
raise or lower the Earth's surface temperature several degrees. However,
hardly anyone noticed Hulburt's rudimentary model.(5*) Most scientists continued to doubt the
hypothesis that adding or subtracting CO2 from
the atmosphere could affect the climate. They believed that laboratory
measurements at the turn of the century had proved that the CO2
and water vapor in the atmosphere already blocked infrared radiation
so thoroughly that adding more gas would make no difference.
=>CO2 greenhouse
<=CO2 greenhouse
In 1938, when G.S. Callendar
attempted to revive the theory of carbon dioxide warming, he offered his own simple
one-dimensional calculation (he apparently didn't know about Hulburt's work). Dividing the
atmosphere into twelve layers, Callendar calculated how much heat radiation would come
downward from each. With admirable clarity, he explained that if more gas were added to the
atmosphere, so that each layer became a bit more opaque, then the radiation that reached the
surface of the Earth would come from lower layers. Those would be warmer layers, "whilst the
amount from the cold upper layers is still further screened off." So the net effect would bring
more warming of the surface. This was the true physics of the misleadingly named "greenhouse
effect." (Another, equivalent way to explain the effect became popular later. The Earth must
radiate back into space as much total energy as it receives, to stay in equilibrium. Adding gas to
the atmosphere moves the site of this emission to higher levels, which are colder. Cold things
radiate less than warm ones, so the system must warm up until it can radiate enough.) Callendar
figured the effect could bring a degree or so of warming as humans put more gas into the air.
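Callendar's layer argument, in its later "emission level" form, reduces to one line of arithmetic: the height from which the Earth effectively radiates must return to the same temperature to keep the energy books balanced, so lifting that height warms everything beneath it by the lapse rate times the rise. The numbers below are illustrative assumptions, not Callendar's own twelve-layer figures.

```python
# Emission-level estimate of greenhouse warming: a minimal sketch with
# illustrative numbers (not Callendar's own twelve-layer calculation).
LAPSE_RATE = 6.5     # K per km: average fall of temperature with height
T_EMIT = 255.0       # K: effective emission temperature, fixed by the
                     # balance between sunlight in and radiation out
z_emit = 5.0         # km: assumed present height of the emission level
dz = 0.15            # km: assumed rise of the emission level from added CO2

# The raised level starts out colder and radiates less, so the column
# must warm until that level is back at 255 K; the surface, tied to it
# by the lapse rate, warms along with it.
warming = LAPSE_RATE * dz
print(f"Surface warming: {warming:.1f} K")   # about 1 K for a 150 m rise
print(f"New surface temperature: {T_EMIT + LAPSE_RATE * (z_emit + dz):.1f} K")
```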
=>CO2 greenhouse
=>Simple models
The calculation was plain enough, but it was obviously grossly oversimplified, and it failed to
convince anyone. Callendar himself pointed out in 1941 that the way
CO2 absorbed radiation was not so simple as every calculation so far
had assumed. He assembled measurements, made in the 1930s, which showed that at the low
pressures that prevailed in the upper atmosphere, the amount of absorption varied in complex
patterns through the infrared spectrum. Nobody was ready to attempt the vast amount of
calculation needed to work out effects point by point through the spectrum, since the data were
too sketchy to support firm conclusions anyway.(6)
Solid methods for dealing with radiative transfer through a gas
were not worked out until the 1940s. The great astrophysicist Subrahmanyan
Chandrasekhar and others, concerned with the way energy moved through
the interiors and atmospheres of stars, forged a panoply of exquisitely
sophisticated equations and techniques. The problem was so subtle
that Chandrasekhar regarded his monumental work as a mere starting-point.
It was too subtle and complex for meteorologists.(7) They
mostly ignored the astrophysical literature and worked out their own
shortcut methods, equations that they could feed through a computer
to get rough numerical results. What drove the work was a need for
immediate answers to questions about how infrared radiation penetrated
the atmosphere, a subject of urgent interest to the military
for signaling, sniping, reconnaissance, and later for heat-guided missiles.
The calculations could not be pushed far
when people scarcely had experimental data to feed in. There were
almost no reliable numbers on how water vapor, clouds, CO2,
and so forth each absorbed or scattered radiation of various kinds
at various heights in the atmosphere. Laboratories began to gather
good data only in the 1950s, motivated largely by military concerns.(8)
<=External input
Well into the 1960s, important work continued
to be done with the "zero-dimensional" models that ignored how things
varied from place to place and even with height in the atmosphere,
models that calculated the radiation budget for the planet in terms
of its total reflectivity and absorption. Those who struggled to add
in the vertical dimension had to confront the subtleties of radiative
transfer theory and, harder still, they had to figure how other forms
of energy moved up and down: the spin of eddies, heat carried in water
vapor, and so forth. A reviewer warned in 1962 that "the reader may
boggle at the magnitude of the enterprise" of calculating the entire
energy budget for a column of air; but, he added encouragingly,
"machines are at hand."(9)
=>Simple models
Digital computers were indeed being
pressed into service. Some groups were exploring ways to use them to compute the entire
three-dimensional general circulation of the atmosphere. But one-dimensional radiation models
would be the foundation on which any grander model must be constructed: a
three-dimensional atmosphere was just an assembly of a great many one-dimensional vertical
columns, exchanging air with one another. It would be a long time before computers could
handle the millions of calculations that such a huge model required. So people continued to work
on improving the simpler models, now using more extensive electronic computations.
<=External input
=>Models (GCMs)
A pioneer was the physicist Gilbert N. Plass, who had been doing
lengthy calculations of infrared absorption in the atmosphere. He
held an advantage over earlier workers, having not only the use of
digital computers, but also better numbers, from spectroscopic measurements
done by a group of experimenters he was collaborating with at the
Johns Hopkins University. Military agencies supported their work for
its near-term practical applications, but Plass happened to have read
Callendar's papers, and was personally intrigued by the old puzzle
of the ice ages and other climate changes.
Most experts stuck by the old objection to
the greenhouse theory of climate change: in the parts of the
spectrum where infrared absorption took place, the CO2
plus the water vapor that were already in the atmosphere sufficed
to block all the radiation that could be blocked. Therefore a rise
in the level of the gas would not change anything. During the 1940s,
new measurements and an improved theoretical approach to spectra had
raised doubts about this argument. In fact the old measurements made
at sea-level pressure had little to say about the frigid and rarefied
air in the upper reaches of the atmosphere, where most of the infrared
absorption takes place. In the early 1950s precision measurements
at low pressure, backed up by lengthy computations, showed that adding
more CO2 really would change how the atmosphere
absorbed radiation. While the total absorption might not change greatly,
the main site of absorption would shift to higher, thinner layers.
And as Callendar had explained, shifting the "screen" in the atmosphere
higher would mean more radiation going back down to warm the surface.
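The physical key is pressure broadening: molecular collisions smear each absorption line out, so lines that overlap into an opaque smear at sea level stand apart, narrow and unsaturated, in the thin upper air, leaving gaps that added CO2 can still fill in. A minimal sketch of the scaling (the Lorentz line profile is standard physics, but the particular numbers here are illustrative assumptions):

```python
# Why sea-level absorption data misled: collision (pressure) broadening.
# The Lorentz half-width scales roughly linearly with pressure, so a line
# that is broad at the surface is narrow aloft. Illustrative numbers only.
from math import pi

def lorentz(nu, nu0, half_width, strength=1.0):
    """Absorption coefficient at wavenumber nu (cm^-1) for a line at nu0."""
    return strength * half_width / (pi * ((nu - nu0) ** 2 + half_width ** 2))

ALPHA_0 = 0.07   # cm^-1: typical half-width at sea-level pressure (1013 hPa)
for p_hpa in (1013.0, 100.0):
    alpha = ALPHA_0 * p_hpa / 1013.0
    wing = lorentz(0.5, 0.0, alpha)   # absorption 0.5 cm^-1 off line center
    print(f"{p_hpa:6.0f} hPa: half-width {alpha:.4f} cm^-1, wing {wing:.4f}")
# At 100 hPa the far-wing absorption is roughly a tenth of its sea-level
# value: between the narrow lines the air is far from saturated.
```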
<=>CO2 greenhouse
Plass pursued this with a thorough set of one-dimensional computations that
showed that adding or subtracting CO2 could seriously
affect the world's radiation balance.(10) From that point on, nobody could dismiss
the theory with the simple old objections. However, Plass's specific
numerical predictions for climate change made little impression on
his colleagues, for his calculation relied on unrealistic simplifications.
Like Callendar, Plass had ignored a variety of important effects, above all the way a rise of global temperature might cause the atmosphere
to contain more water vapor and more clouds. As one critic warned,
Plass's "chain of reasoning appears to miss so many middle terms that
few meteorologists would follow him with confidence."(11)
<=Government
=>CO2 greenhouse
= Milestone
Fritz Möller attempted to follow up with a better calculation,
and came up with a rise of 1.5°C (roughly 3°F) for doubled
CO2. But when Möller took into account the
increase of absolute humidity with temperature, by holding relative
humidity constant, his calculations showed a massive feedback. A rise
of temperature increased the capacity of the air to hold moisture
(the "saturation vapor pressure"), and the result was an increase
of absolute humidity. More water vapor in the atmosphere redoubled the greenhouse effect, which would raise the temperature still higher, and so on. Möller discovered "almost arbitrary temperature changes." That seemed unrealistic, and he took refuge in a calculation
that a mere 1% increase of cloudiness (or a 3% drop in water vapor
content) would cancel any temperature rise due to a 10% increase in
CO2. He concluded that "the theory that climatic
variations are affected by variations in the CO2
content becomes very questionable." Indeed his method for getting
a global temperature, like Plass's, was fatally flawed.
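The feedback that overwhelmed Möller's model comes straight out of the saturation vapor pressure curve. The sketch below uses the standard Magnus approximation with illustrative numbers, not Möller's own formulation: at fixed relative humidity, each degree of warming adds roughly six to seven percent more water vapor to absorb still more radiation.

```python
# Water vapor feedback at fixed relative humidity: a minimal sketch using
# the standard Magnus approximation (illustrative, not Moller's own scheme).
import math

def e_saturation(t_celsius):
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

T0 = 15.0    # C: rough global mean surface temperature
RH = 0.77    # relative humidity, held constant as in Moller's calculation

for dT in (0.0, 1.0, 2.0):
    vapor = RH * e_saturation(T0 + dT)
    print(f"T = {T0 + dT:4.1f} C: vapor pressure {vapor:5.2f} hPa")
# Each degree adds ~6-7% more vapor; the extra vapor traps more heat,
# which warms further -- the loop that ran away in a surface-balance model.
```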
Yet most research begins
with flawed theories, which prompt people to make better ones. Some
scientists found Möller's calculation fascinating. Was the mathematics
trying to tell us something truly important? It was a disturbing discovery
that a simple calculation (whatever problems it might have in detail)
could produce a catastrophic outcome. Huge climate changes, then,
were at least theoretically conceivable. Moreover, it was now clearer than ever that modelers would have to think deeply about feedbacks, such as changes in humidity and their consequences.(12*)
=>Simple models
= Milestone
Clouds were always the worst problem. Obviously the extent of the
planet's cloud cover might change along with temperature and humidity.
And obviously even the simplest radiation balance calculation required
a number that told how clouds reflect sunlight back into space. The
albedo (amount of reflection) of a layer of stratus clouds had been
measured at 0.78 back in 1919, and for decades this was the only available
figure. Finally around 1950 a new study found that for clouds in general,
an albedo of 0.5 was closer to the mark. When the new figure was plugged
into calculations, the results differed sharply from all the preceding
ones (in particular, the flux of heat carried from the equator to
the poles turned out some 25% greater than earlier estimates).(13*)
Worse, besides the albedo you needed to know the amount and distribution
of cloudiness around the planet, and for a long time people had only
rough guesses. In 1954, two scientists under an Air Force contract
compiled ground observations of cloudiness in each belt of latitude.
Their data were highly approximate and restricted to the Northern
Hemisphere, but there was nothing better until satellite measurements
came along in the 1980s.(14)
And all that only described clouds as currently observed, not even
considering how cloudiness might change if the atmosphere grew warmer.
Getting a proper calculation for the actions of water vapor seemed
all the more important after Möller's discovery that a simple
model with water vapor feedback could show catastrophic instability.
No doubt his model was oversimplified, but what might the real climate
actually do? Partly to answer that question, in the mid 1960s Syukuro
Manabe with collaborators developed the first approximately realistic
model. They began with a one-dimensional vertical slice of atmosphere,
averaged over a zone of latitude or over the entire globe. In this
column of air they modeled important features such as how the altitude
of cloud layers would affect the way each layer of air trapped radiation.
Most important, they included the way convective updrafts of warm,
moisture-laden air carry heat up from the surface.
That was a big step beyond trying to calculate surface temperatures
the way Möller and others since Arrhenius had attempted, just
by considering the energy balance at the surface. Greenhouse gases,
in the first place, block radiation from the Earth's surface from
escaping into space. So energy leaves the surface mainly through convection,
in the rising warmed air, and especially as latent heat-energy in
the water vapor carried up by the air. The energy eventually reaches
thin levels near the top of the atmosphere, and is radiated out into
space from there. If the surface got warmer, convection would carry
more heat up. Möller's model, and all the earlier calculations
back to Arrhenius, had been flawed because they failed to take proper
account of this basic process.(15*)
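Manabe's key addition is often schematized as "convective adjustment": wherever pure radiative equilibrium would make temperature fall off with height faster than convection permits, mix the layers back toward a critical lapse rate. The sketch below is only a schematic of the idea, not Manabe's actual algorithm; averaging the layer pair is a crude stand-in for conserving energy.

```python
# Convective adjustment, schematically (not Manabe's actual algorithm):
# wherever radiative heating makes the lapse rate super-critical,
# mix adjacent layers back to the critical lapse rate.
CRITICAL = 6.5   # K per km: lapse rate maintained by convection

def convective_adjustment(temps, dz_km=1.0):
    """temps[0] is the surface layer; higher indices are higher layers.
    Averaging each unstable pair conserves its mean temperature, a crude
    stand-in for conserving energy in equal-mass layers."""
    temps = list(temps)
    unstable = True
    while unstable:                       # sweep until the column is stable
        unstable = False
        for i in range(len(temps) - 1):
            if (temps[i] - temps[i + 1]) / dz_km > CRITICAL + 1e-9:
                mean = 0.5 * (temps[i] + temps[i + 1])
                temps[i] = mean + 0.5 * CRITICAL * dz_km
                temps[i + 1] = mean - 0.5 * CRITICAL * dz_km
                unstable = True
    return temps

# A column left too steep by radiation alone (K at 0, 1, 2, 3 km):
print(convective_adjustment([300.0, 285.0, 277.0, 270.0]))
```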
In the numbers printed out for Manabe's model in 1964, some of the
general characteristics, although by no means all, looked rather like
the real atmosphere.(16) By 1967, after further improvements
in collaboration with Richard Wetherald, Manabe was ready to see what
might result from raising the level of CO2. The
result was the first somewhat convincing calculation of global greenhouse
effect warming. The movement of heat through convection kept the temperature
from running away to the extremes Möller had seen. Overall, the
new model predicted that if the amount of CO2
doubled, temperature would rise a plausible 2°C.(17*) In the view of many experts, this
widely noted calculation (to be precise: the Manabe-Wetherald one-dimensional
radiative-convective model) gave the first reasonably solid evidence
that greenhouse warming really could happen.
=>CO2 greenhouse
<=>Models (GCMs)
= Milestone
Many gaps remained in
radiation balance models. One of the worst was the failure to include
dust and other aerosols. It was impossible even to guess whether they
warmed or cooled a given latitude zone. That would depend on many
things, such as whether the aerosol was drifting above a bright surface
(like desert or snow) or a dark one. Worse, there were no good data
nor reliable physics calculations on how aerosols affected cloudiness.(18) One attempt to attack the problem came in 1971 when S.
I. Rasool and Stephen Schneider of NASA worked up their own globally
averaged radiation-balance model, with fixed relative humidity, cloudiness,
etc. The pioneering feature of their model was an extended calculation
for dust particles. They found that the way humans were putting aerosols
into the atmosphere could significantly affect the balance of radiation.
The consequences for climate could be serious, maybe a dire cooling, although they could not say for sure. They also calculated
that under some conditions a planet could suffer a "runaway greenhouse"
effect. As increasing warmth evaporated ever more water vapor into
the air, the atmosphere would turn into a furnace like Venus's. Fortunately
our own planet was apparently not at risk.(19)
=>Aerosols
=>Simple models
By the 1970s, thanks partly to such one-dimensional
studies, scientists were starting to see that the climate system was
so rich in feedbacks that a simple set of equations might give not merely an approximate answer but a completely wrong one. The best way forward
would be to use a model of a vertical column through the atmosphere
as the basic building-block for fully three-dimensional models. Nevertheless,
through the 1970s and into the 1980s, a number of people found uses
for less elaborate models.
<=Models (GCMs)
For understanding the greenhouse effect itself, one-dimensional
radiative-convective models remained central. Treating the entire planet as a single averaged column allowed
researchers to include intricate details of radiation and convection processes without needing an
impossible amount of computing time.(20) These models were especially useful for checking the gross effects
of influences that had not been incorporated in the bigger models. As late as 1985, this type of
schematic calculation gave crucial estimates for the greenhouse effect of a variety of industrial
gases (collectively they turned out to be even more important than
CO2).(21)
=>Other gases
Another example was
a 1978 study by James Hansen's NASA group, which used a one-dimensional
model to study the effects on climate of the emissions from volcanic
eruptions. They got a realistic match to the actual changes that had
followed a 1963 explosion. In 1981, the group got additional important
results by investigating various feedback mechanisms while (as usual)
holding parameters like relative humidity and cloudiness fixed at
a given temperature. Taking into account the dust thrown into the
atmosphere by volcanic eruptions plus an estimate of solar activity
variations, they got a good match to modern temperature trends.(22)
=>Simple models
<=>Aerosols
Primitive one-dimensional
models were also valuable, or even crucial, for studies of conditions
far from normal. Various groups used simple sets of equations to get
a rough picture of the basic physics of the atmospheres of other planets
such as Mars and Venus. When they got plausible rough results for
the vastly different conditions of temperature, pressure, and even
chemical composition, that confirmed that the basic equations were
broadly valid. Primitive models could also give an estimate of how
the Earth's own climate system might change if it were massively clouded
by dust from an asteroid strike, or by the smoke from a nuclear war.
<=Venus & Mars
=>World winter
Other scientists worked with zonal energy-balance
models, taking the atmosphere's vertical structure as given while
averaging over zones of latitude. These models could do quick calculations
of surface temperatures from equator to pole. They were useful to get
a feeling for the effects of things like changes in ice albedo, or
changes in the angle of sunlight as the Earth's orbit slowly shifted.
More complex two-dimensional models, varying for example in longitude
as well as latitude, were becoming useful chiefly as pilot projects
and testing-grounds for the far larger three-dimensional "general
circulation models" (GCMs). Even the few scientists who had access
to months of time on the fastest available computers sometimes preferred
not to spend it all on a few gigantic runs. Instead they could do
many runs of a simpler model, varying parameters in order to get an
intuitive grasp of the effects.
=>Simple models
To give one example of many, a group at the Lawrence Livermore Laboratory in California used
a zonal model to track how cloud cover interfered with the heat radiation that escaped from the
Earth. The relationship changed when they doubled the amount of
CO2. They traced the cause of the change to variations in the height
and thickness of clouds at particular latitudes. As one expert pointed out, "it is much more
difficult to infer cause-effect relationships in a GCM."(23) A GCM's output was hundreds of thousands of numbers, a
simulated climate nearly as complicated and inscrutable as the Earth's climate itself.
Simple models also served as testbeds for "parameterizations," the simple equations or tables of numbers that modelers built into
GCMs to represent averages of quantities they lacked the power to
compute for every cubic meter of atmosphere. You could fiddle with
details of physical processes, varying things in run after run (which
would take impossibly long in a full-scale model) to find which details
really mattered. Still, as one group admitted, simple models were
mostly useful to explore mechanisms, and "cannot be relied upon for
quantitative discussion."(24)
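A parameterization can be as modest as one empirical formula standing in for physics the grid cannot resolve. The rule below is hypothetical, not drawn from any particular GCM, but it shows the flavor: cloud cover in a grid box guessed from its average relative humidity, with the critical humidity as the knob a modeler would vary from run to run.

```python
# The flavor of a parameterization: a hypothetical rule (not from any
# particular GCM) relating grid-box cloud fraction to mean relative
# humidity. RH_CRIT is the tunable knob varied in run after run.
RH_CRIT = 0.8   # below this grid-box mean humidity, no cloud forms

def cloud_fraction(rh):
    """Fraction of the grid box covered by cloud, rising smoothly
    from 0 at RH_CRIT to 1 at saturation."""
    if rh <= RH_CRIT:
        return 0.0
    return min(1.0, ((rh - RH_CRIT) / (1.0 - RH_CRIT)) ** 2)

for rh in (0.70, 0.85, 0.95, 1.00):
    print(f"RH = {rh:.2f}: cloud fraction {cloud_fraction(rh):.2f}")
```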
The basic models could still be questioned at the core. Most critical
were the one-dimensional radiative-convective models for energy transfer
through a single column of the atmosphere, which were often taken
over directly for use in GCMs. In 1979, Reginald Newell and Thomas
Dopplick pointed to a weakness in the common GCM prediction that increased
CO2 levels would bring a large greenhouse warming.
Newell and Dopplick noted that the prediction depended crucially on
assumptions about the way a warming atmosphere would contain more
of that other greenhouse gas, water vapor. Suggesting that the popular
climate models might overestimate the temperature rise by an order
of magnitude, the pair cast doubt on whether scientists understood
the greenhouse effect at all.(25)
In 1980 a scientist at the U.S. Water Conservation Laboratory in Arizona, Sherwood Idso, joined
the attack on the models. In articles and letters to several journals, he asserted that he could
determine how sensitive the climate was to additional gases by applying elementary radiation
equations to some basic natural "experiments." One could look at the difference in temperature
between an airless Earth and a planet with an atmosphere, or the difference between Arctic and
tropical regions. Since these differences were only a few tens of degrees, he computed that the
smaller perturbation that came from doubling
CO2 must cause only a negligible change, a tenth of a degree or
so.(26)
Stephen Schneider and other modelers
counterattacked. They showed that Idso, Newell, and Dopplick were misusing the equations; indeed their conclusions were "simply based upon various violations of the first law of
thermodynamics." Refusing to admit error, Idso got into a long technical controversy with
modelers, which on occasion descended into personal attacks.(27) It was the sort of conflict that an outsider might find arcane, almost
trivial. But to a scientist, a doubt raised about whether one was making scientific sense or nonsense touched the deepest feelings of personal value and integrity.
=>Models (GCMs)
=>Public opinion
Most experts remained confident that the radiation models used
as the basis for GCMs were fundamentally sound, so long as they did
not push the models too far. The sets of equations used in different
elementary models were so different from one another, and the methods
were so different from the elaborate GCM computations, that they gave
an almost independent check on one another. Where all of the approaches
agreed, the results were very probably robust; and where they didn't agree, well, everyone would have to go back to their blackboards.(28)
The most important such
comparison of various elementary models and GCMs was conducted for
the U.S. government in 1979 by a panel of the National Academy of
Sciences, chaired by Jule Charney.(29) The panel's report announced that the
simple models agreed quite well with one another and with the GCMs;
simple one-dimensional radiative-convective models, in particular,
showed a temperature increase only about 20% lower than the best GCMs.
That gave a new level of confidence in the predictions, from every
variety of model, that doubled CO2 would bring
significant warming. As a 1984 review explained, the various simple
radiative-convective and energy-balance models all continued to show
remarkably good agreement with one another: doubling CO2
would change temperature within a range of roughly 1.3 to 3.2°C
(that is, 2.3 to 5.8°F). And that was comfortably within the
range calculated by the big general circulation models (with their
wide variety of assumptions about feedbacks and other conditions,
these gave a wider spread of possible temperatures).(30)
=>Models (GCMs)
=>Simple models
Much remained to be done before anyone could be truly confident
in these findings. There was the problem of cloud feedback, in particular,
which the Charney panel had singled out as one of the "weakest links."
Simple models would continue to be helpful for investigating such
parameters. Otherwise the Charney panel's report marked the successful
conclusion of the program of simple radiation calculations. While
they would still provide useful guidance for specialized topics, in
future their main job would be making a foundation for the full apparatus
of the general circulation models.
RELATED:
General Circulation Models of the Atmosphere
1. Simpson (1928), p. 70.
2. As Langley later realized, his estimate went much too
far below freezing, Langley (1884); see also
Langley (1886).
3. The pioneer was W.H. Dines, who gave the first explicit model
including infrared radiation upward and downward from the atmosphere itself, and energy moving up from the Earth's surface into the atmosphere as heat carried by moisture, Dines (1917); Hunt et al. (1986)
gives a review.
4. Simpson began with a gray-body calculation, Simpson (1928); very soon after he reported that this paper
was worthless, for the spectral variation must be taken into account,
Simpson (1928); 2-dimensional model (mapping ten degree
squares of latitude and longitude): Simpson (1929);
a pioneer in pointing to latitudinal transport of heat by atmospheric
eddies was Defant (1921); for other early energy budget climate models
taking latitude into account, not covered here, see Kutzbach
(1996), pp. 354-59.
5. Hulburt (1931); a still better
picture of the vertical temperature structure, in mid-latitudes, was derived by Möller (1935).
6. Callendar (1941);
low-pressure resolution of details was pioneered by Martin and
Baker (1932).
7. Chandrasekhar (1950),
which includes historical notes. Most of this work was first published
in the Astrophysical Journal, a publication that meteorological
papers of the period scarcely ever referenced.
8. For a review at the time, see Goody
and Robinson (1951).
9. Sheppard (1962), p. 93.
10. Plass (1956); see also Plass (1956); Plass (1956); Möller (1957) reviews the state of understanding as of about
1955.
11. Kaplan (1960); see
exchange of letters with Plass, Plass and Kaplan (1961); "chain
of reasoning:" Crowe (1971), p. 486; another critique: Sellers (1965), p. 217.
12. Möller (1963),
quote p. 3877. Möller recognized that his calculation, since it did
not take all feedbacks into account, gave excessive temperatures, p. 3885.
13. Houghton (1954).
Houghton did not discuss whether an important part of the heat flux might be carried by the
oceans.
14. Published only in an Air Force contract report, Telegdas and London (1954).
15. They mostly assumed that the flux of sensible
and latent heat would be fixed. Möller was aware that this was an
oversimplification which needed further work. Arrhenius further had inadequate
data for water vapor absorption, while Callendar and Plass left out the
water vapor feedback altogether. I thank Manabe for clarifying these matters.
16. Manabe and Strickler
(1964); see also Manabe et al. (1965); the 1965 paper was
singled out by National Academy of Sciences (1966), see pp.
65-67 for general discussion of this and other models.
17. "Our model does not have the extreme sensitivity... adduced
by Möller." Manabe and Wetherald (1967), quote p. 241;
the earlier paper, Manabe and Strickler (1964), used a fixed
vertical distribution of absolute humidity, whereas the 1967 work more realistically had moisture
content depend upon temperature by fixing relative humidity, a method adopted by subsequent
modelers. 21st-century modelers recognized that relative humidity tends to remain constant in the
lowest kilometer or so of the atmosphere but follows a more complex evolution in higher levels.
18. The pioneer radiation balance model incorporating aerosols
was Freeman and Liou (1979); for cloudiness data they cite Telegdas and London (1954).
19. Rasool and Schneider
(1971).
20. Ramanathan and Coakley
(1978) gives a good review, see p. 487.
21. Ramanathan et al. (1985).
22. Hansen et al. (1978);
another pioneer radiation balance model incorporating aerosols was Freeman and Liou (1979); Hansen et al.
(1981).
23. Potter et al. (1981); quote:
Ramanathan and Coakley (1978), p. 487.
24. GCMs were "typically as complicated and inscrutable as the
Earth's climate..." simple models "cannot be relied upon," Washington and Meehl (1984), p. 9475.
25. Newell and Dopplick
(1979).
26. Idso (1980); Idso (1987).
27. Schneider et al. (1980), see
pp. 7-8; Ramanathan (1981) (with the aid of W. Washington's
model); National Research Council (1982); Cess and Potter (1984), quote p. 375; Schneider (1984); Webster
(1984); for further references, see Schneider and Londer
(1984); cf. reply, Idso (1987); the controversy is reviewed
by Frederick M. Luther and Robert D. Cess in MacCracken and
Luther (1985), App. B, pp. 321-34; see also Gribbin
(1982), pp. 225-32.
28. Schneider and Dickinson
(1974), p. 489; North et al. (1981), quote
p. 91, see entire articles for review.
29. National Academy of Sciences
(1979).
30. Schlesinger (1984).
© 2003-2004 Spencer Weart & American Institute of Physics