How Global Warming Really Works

You must be reading different charts.

One graph shows an increase in temperature since 1960. The other shows an increase since 1980. It's not really something you can argue about.

...and the last 10 years, as CO2 levels continued to rise?

...and consider the scale. The entire range charted falls within one and one half degrees. The variance at any given data point is often half of that.

Go outside. Sit there. Come back in when you detect the difference between 76.5 and 78 degrees on your skin.

Next, time the rate at which an ice cube melts between the two "extremes".

Next, chart the growth rate of your tomatoes at these extreme variations.

I'll be here when you, too, have joined the legions of "scientists" who have settled this debate once and for all.
 
Still rising, just not as fast.

So now your position has gone from "it isn't changing" to "it's changing, but it's not a big deal." Trysail's chart doesn't show a 1.5°F change; it shows about a 2.7°F change. So that 78 would actually be 79.2.
 
You can post that data until you're blue in the face, Trysail, and it won't change a thing. You are dealing with a religion now, not science, not facts, not rational thought.

The baseline for these cultists is the tail end of the Little Ice Age. Everything that came before is either non-existent or out-and-out lies.

Ishmael
 
Everyone knows that the temperature has been raised from a world-wide average of 78.7° to 77.9°.

Don't you dummies get it? .9 is clearly bigger than .7!

Your degrees must have come off the back of a matchbook cover address...

:mad:
 
You can post that data until you're blue in the face, Trysail, and it won't change a thing. You are dealing with a religion now, not science, not facts, not rational thought.

The baseline for these cultists is the tail end of the Little Ice Age. Everything that came before is either non-existent or out-and-out lies.

Ishmael

"For those with faith, no proof is necessary."

The Song of Bernadette
http://en.wikipedia.org/wiki/The_Song_of_Bernadette_(film)
 

So when talking about Global Climate Change you think it's accurate to cherry-pick one particular place.

Science knows that weather changes. Global Climate Change deals specifically with how human activities are changing the climate. Pointing at (cherry-picked) data from before humans were actually changing the climate shows that you just don't understand what it is.
 
Neither you nor climatology has the foggiest idea in hell why the climate changes.
 
You can post that data until you're blue in the face, Trysail, and it won't change a thing. You are dealing with a religion now, not science, not facts, not rational thought.

The baseline for these cultists is the tail end of the Little Ice Age. Everything that came before is either non-existent or out-and-out lies.

Ishmael

So you should have no problem showing facts that an increase in CO2 makes temperature stay the same.

This will be the point in the conversation where you run away (or start calling me names), because your belief is based on faith, not science.
 
Did you know that at one time Earth's atmosphere was 10-25% CO2 (science is still arguing about the exact number)? That's 100,000 to 250,000 ppm. So how come the earth didn't burn up then? Or during the Devonian? Or during all those other periods of MUCH higher atmospheric CO2 concentration?

Damn cultists.

Ishmael
 
Given the problems facing our country, global warming is a zero on a scale of 1 to 10.
 
Did you know that at one time Earth's atmosphere was 10-25% CO2 (science is still arguing about the exact number)? That's 100,000 to 250,000 ppm. So how come the earth didn't burn up then? Or during the Devonian? Or during all those other periods of MUCH higher atmospheric CO2 concentration?

Damn cultists.

Ishmael

Who said anything about the planet "burning up"?

What was the climate like in the Devonian?
 
We don't need water to live.

Lol... it's funny how they focus on the debt that suddenly appeared during the Obama administration... and how we're leaving it for our children to pay, while at the same time having no problem with leaving them a waterless, barren planet. Priorities, huh?
 
http://wattsupwiththat.com/2014/05/07/the-global-climate-model-clique-feedback-loop/#comment-1630812


By Robert G. Brown, Ph.D.
Department of Physics
Duke University
I exchanged a few emails recently with mathematician Chris Essex, who claimed (I hope I’m translating this correctly) that climate models are doomed to failure because you can’t use finite difference approximations in long-timescale integrations without destroying the underlying physics. Mass and energy don’t get conserved. Then they try to fix the problem with energy “flux adjustments”, which is just a band-aid covering up the problem.

We spent many months trying to run the ARPS cloud-resolving models in climate mode, and they have precisely these problems.

I’ve spent a fair bit of time solving open stochastic differential equations (or, arguably, integrodifferential equations) — things like coupled Langevin equations — back when I did around five or six years of work on a microscopic statistical simulation of quantum optics, as well as LOTS of work numerically solving systems of coupled (de facto partial) differential equations (the system detailed in my Ph.D. dissertation on an exact single-electron band theory). Even before one hits the level of stiff equations — quantum bound states — where tiny errors are magnified due to the fact that eigensolutions are always numerically unstable past the classical turning point, problems with drift and normalization and accumulation of error are commonplace in all of these sorts of systems. And yes, one of the consequences of drift is that conservation laws aren’t satisfied by the numerical solution.
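
To make the drift concrete, here is a minimal Python sketch (my own construction, not Brown's) of a conservation law failing under naive integration: explicit Euler on a frictionless harmonic oscillator, whose energy E = (x^2 + v^2)/2 should be exactly constant, pumps energy into the system every step.

# Toy illustration: explicit Euler on x'' = -x.
# The exact dynamics conserve E = (x**2 + v**2) / 2;
# the numerical trajectory does not -- energy drifts upward.
def euler_energy_drift(steps=100_000, dt=0.001):
    x, v = 1.0, 0.0
    for _ in range(steps):
        x, v = x + dt * v, v - dt * x   # one explicit Euler step
    return 0.5 * (x * x + v * v)        # exact answer would be 0.5

print(euler_energy_drift())   # ~0.55, about 10% high after t = 100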

A common solution for closed systems is to renormalize per time step (or N time steps) to ensure that the solution is projected back to the conserved subspace instead of being allowed to drift away. This prevents the accumulation of error and ensures that the local dynamics remains MOSTLY on the tangent bundle of the conserved hypersurface/subspace, if you like, with only small/second order deviation that is immediately projected away. This is, however, computationally expensive, and it isn’t the way e.g. stiff systems are solved (they use special backwards ODE solvers).
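
A minimal sketch of that renormalization fix, applied to the same toy oscillator (again my construction, not anything from a real GCM): every N steps, rescale the state so that the energy is projected back onto the conserved E = E0 surface.

import math

# Same toy oscillator, with the projection fix described above:
# every `every` steps, rescale (x, v) so the energy returns to E0.
def euler_with_renorm(steps=100_000, dt=0.001, every=100):
    x, v = 1.0, 0.0
    e0 = 0.5 * (x * x + v * v)
    for i in range(steps):
        x, v = x + dt * v, v - dt * x
        if (i + 1) % every == 0:
            scale = math.sqrt(e0 / (0.5 * (x * x + v * v)))
            x, v = x * scale, v * scale   # project back onto E = E0
    return 0.5 * (x * x + v * v)

print(euler_with_renorm())   # 0.5 to rounding error

The invariant is preserved, but the trajectory is still wrong; the phase error accumulates exactly as before, which is why this is a band-aid rather than a cure.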

For open systems doing mass/energy transport, however, there is a real problem — energy ISN’T conserved locally ANYWHERE, and most cells can both gain or lose energy directly out of the whole system. In the case of climate models, in some sense this is “the point” — sunlight can and does warm the entire column from the TOA to the lower bound surface of light transport in the model, outgoing radiation can and does cool the entire column from the TOA to the lower bound surface of light transport for the model, AND each cell can exchange energy not only with nearest neighbors but with neighbors as far away as light can travel from the cell boundaries and remain inside the system laterally and vertically combined. That is, radiation from a ground level atmospheric cell can exchange energy with a cell two levels up (assuming that the model has multiple vertical slabs) and at least one cell over PAST any nearest-neighbor, same level cell. One can model this as a nearest-neighbor interaction/exchange system, but it’s not, and in physics systems with long range interactions often have startlingly different behavior from systems that only have short range interactions. Radiation is a long range coupling and direct source and sink for energy.

The consequence of this is that enforcing energy conservation per se is impossible. All one can do is try to solve a system of cell dynamics that is conservative at the differential level, basically implementing the First Law per cell — the energy crossing the borders of the cell has to balance the change in internal energy of the cell plus the work done by the cell. Errors in the integration of those ODE/PDEs cannot be corrected, they simply accumulate. To the extent that the system has negative feedbacks or dynamical processes that limit growth, this doesn’t mean that the energy will diverge, only that the trajectory of the system will rapidly, irreversibly and uncorrectably diverge from the true trajectory. If the negative feedbacks are not correctly implemented, of course, the system can diverge — the addition of a series of randomly drawn -1, +1 numbers diverges like the square root of the number of steps (a so-called “drunkard’s walk”). In the climate system the issue isn’t so much a divergence as the possibility of bias in the errors. Even if you have some sort of damping/global conservation principle forcing you back to zero, if the selection of +/- 1 steps is NOT random but (say) biased two +1’s for every -1, you will not be driven back to the correct equilibrium energy content. This sort of thing can easily happen in numerical code as a simple artifact of things like rounding rules or truncation rules — setting an integer from a floating point number and then using the integer as if it were the float.
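
A toy contrast of the two cases (my construction): an unbiased ±1 walk wanders like sqrt(N), while a tiny deterministic bias (here, the systematic round-toward-zero error of float-to-int truncation) accumulates linearly in N.

import random

random.seed(1)
N = 100_000

# Unbiased: a +/-1 drunkard's walk; typical excursion ~ sqrt(N).
unbiased = sum(random.choice((-1, 1)) for _ in range(N))

# Biased: truncation always rounds toward zero, so each step's
# error has mean -0.05 and the total grows like -0.05 * N.
biased = sum(int(x * 10) / 10 - x
             for x in (random.uniform(0.0, 1.0) for _ in range(N)))

print(unbiased)   # a few hundred, either sign
print(biased)     # ~ -5000: a systematic, linear drift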

It can also happen as an artifact of the method used to extrapolate when solving differential equations or finite difference equations, or because of using interpolation on an inadequate grid while trying to solve dynamics with significant short term variation. Interpolation basically cuts off extremes, but in a nonlinear model the contribution of the excluded extremes will not be symmetric in the deviation, even when the deviation is normally distributed around the interpolation. As a trivial, but highly relevant example, if one has a transport process that (say) is proportional to T^4, and uses interpolated T’s on an inadequate grid, de facto assuming unbiased noise around the interpolation, one will strictly underestimate the transport.

In fact, if one creates a field that has a fixed mean and a purely normal distribution around the mean, and uses the mean temperature as an estimate of the (say, radiative) transport, one will strictly underestimate the actual transport, because the places where the temperature is warmer than the mean lose energy faster (relative to the mean) than the places where the temperature is cooler do. If you like: T_0^4 < \frac{1}{2}(T_0 + \Delta T)^4 + \frac{1}{2}(T_0 - \Delta T)^4 for any \Delta T \neq 0.
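
A quick numerical check of that inequality (my sketch; the 288 K mean and 10 K spread are arbitrary but plausible numbers): radiating at the mean temperature strictly underestimates the mean of T^4 over a normally distributed field.

import random

random.seed(1)
T0, dT = 288.0, 10.0   # illustrative values only
temps = [random.gauss(T0, dT) for _ in range(100_000)]

mean_T = sum(temps) / len(temps)
mean_T4 = sum(t ** 4 for t in temps) / len(temps)

print(mean_T ** 4 < mean_T4)   # True, always
print(mean_T4 / mean_T ** 4)   # ~1.007 here; for a Gaussian field the
                               # ratio is about 1 + 6 * (dT / T0)**2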

Mass transport is even harder to deal with. Well, not really, but it is more expensive to deal with. In the Earth system, one can probably assume that the total mass of atmosphere plus ocean plus land (including the highly variable humidity/water/ice distribution that can swing all three ways) is constant. Yes, there is a small exchange across the “boundary” from thermal outgassing and infalling meteors and other matter, but it is a very, very small (net) number compared to the total mass in question and probably is irrelevant on less than geological time (in geological time that is not clear!). But GCMs don’t work over geological time, so we can assume that the Earth’s mass is basically conserved.

Back a few paragraphs, you’ll note that one has to implement the first law per cell in the model. This is nontrivial, because cells not only receive energy transport across their boundaries in the form of radiation and conduction at the boundary “surfaces”, but they can receive mass across those boundaries and the mass itself carries energy. Worse, the energy carried by the mass isn’t a simple matter of multiplying its temperature by some specific heat and the mass itself; in the case of pesky old water it also carries latent heat, heat that shows up or disappears from the cell relative to its temperature as water in the cell changes phase. Finally, each cell can do work.

This last thing is a real problem. It is left as an exercise to see that a cell model with fixed boundaries cannot directly/internally compute work, because work is a force through a distance and fixed boundaries do not move. If a cell expands into its neighbors it clearly does work (and all things being equal, cools) but “expanding into neighbors” means that the neighbors get smaller and cell boundaries move. One cannot compute the work done at any single boundary from mass transport alone, because no work is done by the uniform motion of mass through a set of cells — a constant wind, as it were. One cannot compute work done by any SIMPLE rule. One has to basically look at net flux of mass into/out of a constant volume cell and instead of evaluating P\Delta V (work) evaluate V \Delta P and try to infer work from this plus a knowledge of the cell temperature, plus a knowledge of the cell’s heat capacity plus fond hopes concerning the rates of phase transitions occurring inside the cell. Yet without this, the model cannot work because ultimately this is the source of convective forces and the cause of things like the wind and global mass-energy transport. Not to worry, this is what the Navier-Stokes equation is all about:

http://en.wikipedia.org/wiki/Navier–Stokes_equations

In fact, the NS equations do even better. They account for the fact that the motion of the transport is accompanied by the moral equivalent of friction — drag forces that exert shear stress across the fluid, a.k.a. viscosity. They account for the fact that the motion occurs in a gravitational field, so that downward transport of parcels is accompanied by the gain of total energy while upward transport costs total energy (the kind of thing that gives rise to lapse rates). They can be modified to account for “pseudoforces” that appear in a rotating frame, e.g. Coriolis forces, so that mass transported south is deflected antispinward (West) in the northern hemisphere, mass transported upwards is deflected antispinward (West) in both hemispheres, by “forces” that depend in detail on where you are on the globe and in which direction you are moving. However, when solving the system (as this article notes) a statement of mass conservation is necessary, for example the continuity equation:

\frac{\partial \rho}{\partial t} + \vec{\nabla} \cdot (\rho\vec{v}) = 0

Recall, though, that the continuity equation in climate science is not so simple. The ocean evaporates. Rain falls. Ice melts. Water also transports itself nontrivially around the globe according to a separate, coupled NS equation with its own substantial complexity. Not only does this mean that mass conservation is difficult to enforce as a constraint, it means that one has to account for enormous, nonlinear variations in cell transport dynamics according to the state of a substantial fraction of the mass in any given cell. Where by “substantial” I don’t mean that it is ever a particularly large fraction — most of the dry atmosphere is nitrogen and oxygen and argon — but water averages 0.25% of the atmosphere and locally can be as much as 5%!

This is not negligible in any sense of the word when integrating a long, long time series!
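
To see why the flux form of the continuity equation matters numerically, here is a minimal 1-D toy (mine, not a GCM scheme): an upwind, flux-form discretization on a periodic grid conserves total mass to machine precision by construction, because every flux leaves one cell and enters its neighbor, so the sum telescopes.

# 1-D flux-form upwind advection of density rho at constant wind u > 0
# on a periodic domain: d(rho)/dt + d(rho*u)/dx = 0, CFL = u*dt/dx = 0.1.
n, dx, dt, u = 100, 1.0, 0.1, 1.0
rho = [1.0 + 0.5 * (i % 7 == 0) for i in range(n)]   # lumpy initial mass
total0 = sum(rho)

for _ in range(10_000):
    flux = [u * r for r in rho]   # flux through each cell's right face
    rho = [rho[i] + dt / dx * (flux[i - 1] - flux[i]) for i in range(n)]

print(abs(sum(rho) - total0))   # ~1e-12: conserved by construction

The hard part Brown describes is exactly what this toy leaves out: sources and sinks (evaporation, rain, phase changes) that add and remove "fluid" inside cells in consonance with the energy budget.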

So just as was the case when considering energy, one has to consider mass transport across cell boundaries and the forces that drive this transport, where one is not tracking parcels of mass as they move around and interact with other parcels of mass, but rather tracking what enters and leaves fixed volume, fixed location cells. One is better off because one only has to worry about nearest neighbor cells. One is far worse off in that one has to worry not only about dry air, but about the fact that in any given timestep, an entirely non-negligible fraction of the fluid mass in a cell can “appear” or “disappear” as water in the cell changes state, and worse, does so in perfect consonance with the appearance and disappearance of energy in the cell as measured by things like the “temperature” of the atmosphere in the cell.

Maintaining normalization in circumstances like this in a purely numerical computation is enormously difficult. One could in principle do it by integrating the total mass of the system in each timestep and using the result to renormalize it to a constant value, but this is safe to do only if the system is sufficiently detailed that it can be considered closed. That is, the solution has to track in detail the water in inland lakes, the water locked up as ice and snow, the water that evaporates from the surface of the ocean, the total water content of the ocean (cell by cell, all the way to whatever layer you want to consider an “unchanging boundary”). Otherwise errors in your treatment of water as a contribution to total cell mass will bleed over into errors in the total dry atmospheric mass, and the system will drift slowly away into the nonphysical regime as the integration proceeds.

Finally, one has the same issues one had with energy and granularity, only worse. Considerably worse on a lat/long grid on a sphere. At the poles, one can literally step from one cell to the next — in fact at the north pole itself one can put one’s foot down and have it be in dozens of cells at once. Dynamics based on approximately rectilinear cells at the equator, where cells are hundreds of kilometers across and one might be forgiven for neglecting non-nearest neighbor cell coupling, are totally wrong at the poles, where a fire in one cell can heat somebody’s hands when they are standing four cells away. Mass transport dynamics is also skewed — a tiny storm that would be literally invisible at the equator in the middle of a cell might span many cells at the poles, just as “Antarctica” stretches all the way across a rectangular map with linear latitude and longitude, apparently contributing a twelfth or so of the total area of the globe in spite of it being only a modest sized continent that is far less than 1/12 of the land area alone, where land area is only 30% of the globe in the first place. Siberia, Canada, Greenland are all similarly distorted into appearing comparatively huge. One can of course compensate — in a sense — for this distortion by multiplying by a suitable trig factor in the integrals (the Jacobian), but this doesn’t correct the dynamical algorithms themselves!

This I do have direct experience with. In my band theory problem, I had to perform projective integrals over the surfaces of spheres. Ordinarily, a 2D integral with controllable error would be simple — just use two 1D adaptive quadratures, or without too much effort, write a 2D rectilinear adaptive quadrature routine to use directly. But on a sphere, using a gridding of the spherical polar angles \phi, \theta this does not work. Or rather, it works, but enormously inefficiently and with nearly uncontrollable error estimates. By the time one has a grid that integrates the equator accurately, one has a grid that enormously oversamples the poles. Using a differential adaptive routine can help, but it still doesn’t account for the non-rectangular nature of the cells at the poles and hence one’s error estimates there are still sketchy.
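
The pole problem is easy to put in numbers; a quick illustration (my arithmetic, not Brown's): the area of an equiangular lat/long cell scales with the difference of the sines of its bounding latitudes, so nominally identical 3-degree-square cells differ in physical area by a factor of nearly forty between the equator and the last band at the pole.

import math

# Area of a dlat x dlon cell whose southern edge sits at lat_deg.
def cell_area_km2(lat_deg, dlat=3.0, dlon=3.0, R=6371.0):
    lat1, lat2 = math.radians(lat_deg), math.radians(lat_deg + dlat)
    return R * R * math.radians(dlon) * (math.sin(lat2) - math.sin(lat1))

for lat in (0.0, 45.0, 87.0):
    print(f"{lat:4.0f}N  {cell_area_km2(lat):9.0f} km^2")
# ~111,000 km^2 at the equator vs ~2,900 km^2 at 87N: a ~38x disparity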

Finally, note well my use of the word adaptive. Even when solving simple problems with ordinary quadrature (let alone trying to solve a nonlinear, chaotic, partial differential equation that cannot even be proven to have general solutions) one cannot just say “hey, I’m going to use a fixed grid/step size of x, I’m sure that will be all right”. Errors grow at a rate that directly depends on x. Whether or not the error growth rate for any given problem can be neglected can only rarely be known a priori, and in this problem any such assertion of a priori knowledge would truly be absurd. That is why even ordinary, boring numerical integration routines with controllable error specification do things like:

* Pick a grid size. Do the integral.
* Divide the grid size by some number, say two. Do the integral again, comparing the answers. (Some methods will do the integral a third time, or will avoid factors of two as being too likely to produce a spurious/accidentally accepted result that is still badly wrong.)
* If the two answers agree within the requested tolerance, accept the second (probably more accurate) one.
* If not, throw away the first answer, replace the first by the second, divide the grid size by two again, and repeat until the two answers do agree within the accepted tolerance (see the sketch below).
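
A minimal sketch of that halve-and-compare loop, using the composite trapezoid rule (a toy, not a production integrator):

import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def adaptive(f, a, b, tol=1e-8, n=8, max_n=2 ** 22):
    old = trapezoid(f, a, b, n)
    while n < max_n:
        n *= 2
        new = trapezoid(f, a, b, n)
        if abs(new - old) < tol:
            return new   # the two answers agree: accept the finer one
        old = new        # otherwise halve the grid and try again
    raise RuntimeError("failed to converge to requested tolerance")

print(adaptive(math.sin, 0.0, math.pi))   # ~2.0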

Again, there are more subtle/efficient variants of this — sometimes adapting only over part of the range where the function itself is rapidly varying (e.g. applying the sort of process observed above directly to the subdivisions until they separately converge). All adaptive quadrature routines can, however, be fooled by just the right function into giving precisely the wrong answer, for example a harmonic function with zero integral will give a nonzero result if the initial gridding is some large, even integer multiple of the period, as even two or three divisions by two will of course give you the same nonzero value and the routine will converge without discovering the short period variation.
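
Here is a concrete instance of that failure mode (my construction): f(x) = 1 + cos(64*pi*x) has true integral exactly 1 on [0, 1], but on grids of 8 and 16 points every sample lands where the cosine equals +1, so halving the grid "confirms" the wrong answer and the adaptive() sketch above would happily return 2.

import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = lambda x: 1.0 + math.cos(64.0 * math.pi * x)
print(trapezoid(f, 0.0, 1.0, 8))      # 2.0
print(trapezoid(f, 0.0, 1.0, 16))     # 2.0 -- agreement, yet wrong
print(trapezoid(f, 0.0, 1.0, 4096))   # 1.0, the true value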

Solving differential equations has exactly this problem, only — you guessed it — worse. The accurate knowledge of the integral to be done for each step depends on the accuracy of the integral done in the step before. Or, in many methods, the steps before. Even if the error in that step was well within a very small tolerance, integrating over many steps can — drunkard’s walk fashion — cause the cumulated error to grow, even grow rapidly, outside of the requested tolerance for the ODE solution. If there is any sort of bias in the cumulated error, it can grow much faster than the square root of the number of steps, and even if the differential equations themselves have intrinsic analytic conservation laws built in, the numerical solution will not.

Certain differential systems can still be fairly reliably integrated over quite long time periods with controllable errors by a good, adaptive code. These systems are called (generically) “non-stiff” systems, because they have an internal stability such that the solutions one jumps around between due to small uncontrollable errors in the integration tend to move together so that even as you drift away you drift away slowly, and can usually make the step size small enough to make that drift “slow enough” to achieve a given probable tolerance or absolute error requirement.

Others — yes, the ones that are called “stiff” — cannot. These are systems where neighboring trajectories rapidly (usually exponentially or faster) diverge from one another, even when started with very small perturbations in their initial conditions. In these systems, simply using different integration routines from precisely the same initial condition, or changing a single least-significant digit in the initialization will, over time, lead to completely different solutions.
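
A standard stiff test problem makes the distinction vivid (my sketch): y' = -1000(y - cos t). The true solution hugs cos(t), yet explicit Euler blows up unless dt < 2/1000; the step size is dictated by stability rather than by the accuracy you asked for, which is why stiff systems get the special backwards (implicit) solvers mentioned earlier.

import math

def euler(dt, t_end=1.0):
    y, t = 1.0, 0.0
    while t < t_end:
        y += dt * (-1000.0 * (y - math.cos(t)))   # y' = -1000 (y - cos t)
        t += dt
    return y

print(euler(0.001))   # ~0.54, close to cos(1): stable
print(euler(0.003))   # astronomically large: unstable, the step is too big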

Guess which kind of system the climate is. Not just the climate, but even comparatively simple systems described by the Navier-Stokes equation. One doesn’t even usually describe such systems as being merely stiff, as the methods that will work adequately to integrate stiff systems with a simple exponential divergence will generally not work for them. We call these systems chaotic, and deterministic chaos was discovered in the context of weather prediction. Not only do neighboring solutions diverge, they diverge in ways described by a set of Lyapunov exponents:

http://en.wikipedia.org/wiki/Lyapunov_exponent

Note well that the Maximal Lyapunov Exponent (MLE) is considered to be a direct measure of the predictability of any given chaotic system (where the “phase space compactness” requirement mentioned in the first paragraph is precisely the requirement that e.g. mass-energy be conserved by the dynamics, allowing for the ins and outs of energy in the energy-open system). The differential equations of quantum eigensolutions aren’t chaotic as they diverge “badly” and produce non-normalizable solutions (violating the axioms of quantum theory, which is why only eigensolutions that are normalizable are allowed), they are merely stiff. I’m not certain, but I’m guessing that the MLE for climate dynamics is such that system stability extends only out to pretty much the limits of weather prediction, decades away from any sort of ability to predict climate.
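
The canonical demonstration (my sketch) is the Lorenz '63 system, which came out of weather modelling in the first place: two trajectories started 10^-9 apart separate exponentially, and the slope of log(separation) against time estimates the MLE (about 0.9 per unit time for the classic parameters).

import math

# Forward-Euler steps of the Lorenz '63 system, classic parameters.
def step(s, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a, b = (1.0, 1.0, 20.0), (1.0 + 1e-9, 1.0, 20.0)
for n in range(1, 25_001):
    a, b = step(a), step(b)
    if n % 5_000 == 0:
        print(f"t = {n * 0.001:4.1f}   separation = {math.dist(a, b):.3e}")
# The separation grows by many orders of magnitude and eventually saturates
# at the size of the attractor: prediction is lost regardless of precision.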

Without an adaptive solution, one literally cannot validate any given solution to the system for a given, a priori determined spatiotemporal gridding. One cannot even say if the solution one obtains is like the actual solution. All one can do is solve the differential system lots of times and hope that the result is e.g. normally distributed with respect to the actual solution, or that the actual solution has some reasonable probability of being “like” one of the solutions obtained. One cannot even estimate that probability, because one cannot verify that the distribution of solutions obtained is stationary as one further subdivides the gridding or improves or merely alters the algorithm used to solve the differential system.

The really sad thing is that we know that there are numerous small scale weather phenomena that easily fit inside of an equatorial cell in any of the current gridding schemes and that we know will have a large, local effect on the cell’s dynamics. For example, thunderstorms. Tornadoes. Mere rainfall. Winds. The weather system is not homogeneous on a scale of 3 degree lat/long blocks.

What this in turn means is that we know that the cell dynamics are fundamentally wrong. If we put a thunderstorm into a cell that has one, the storm is far too big. If we don’t put a thunderstorm into a cell that has one, we miss a rapidly varying mass fraction, latent heat exchange, variation of albedo, transport of bulk matter all over the place vertically and horizontally carrying variations in energy density with it. There isn’t the slightest reason to believe that dynamics carried out with thunderstorms that are always a hundred kilometers across or more will in any way have a long term integral that matches the integral of a system with thunderstorms that can be as small as one kilometer across (not a crazy estimate for the lateral size of a summer thunderhead), or that either extreme of assigning thunderstormness plus some interpolatory scheme for “mean effect of a cell thunderstorm” will integrate out to the right result either.

So here’s the conclusion of the rather long article above. In my opinion — which one is obviously free to reject or criticize as you see fit — using a lat/long grid in climate science, as appears to be pretty much universally done, is a critical mistake, one that is preventing a proper, rescalable, dynamically adaptive climate model from being built. There are unbiased, rescalable tessellations of the sphere — triangular ones, or my personal favorite/suggestion, the icosahedral tessellation. There are probably tessellations still undiscovered that can even be rescaled N-fold for some N and still preserve projective cell boundaries (some variation of a triangular tessellation, for example). These tessellations do not treat the poles any different from the equator, and one can (sigh) still use lat/long coordinates to locate the centers, corners and sides of the tessera. Yes, it is a pain in the ass to write a stable, rescalable quadrature routine over the tessera, but one only has to do it once, and that’s the thing federal grant money should be used for — to fund eminently practical applied computational mathematics to facilitate the accurate, rescalable solution to many problems that involve quadrature on hyperspherical surfaces (which is a nontrivial problem, as I’ve worked on it and written (well, stolen and adapted) an algorithm/routine for it myself). It happens all the time in physics, not just in NS equations on the globe in climate science.

Given a solid, trivially rescalable grid, many questions concerning the climate models could directly be addressed, at some considerable expense in computer time. For example, the routine could simply be asked to adaptively rescale until the model converged to some fairly demanding tolerance over as long a time series as possible, and then the features it produces could be compared to real weather features to see if it is getting the dynamics right with the many per-cell approximations being made. Many of those approximations can probably be eliminated, as they exist only because cells right now are absurdly large compared to nearly all known local weather features and only manage large scale, broad transport processes while accumulating errors with an unknown bias in each and every cell due to the approximation of everything else. The MLE of the models could be computed, and used to determine the probable predictivity of the model on various time scales. The dynamically adaptive distributions of final trajectories could be computed and compared to see if even this is converging. And finally, one wouldn’t ever again have to completely rewrite a climate model to make it higher resolution. One would simply have to alter a single parameter in the program input to set the scale size of the cell, and a single parameter to set the scale size of the time step, with a third parameter used to control whether or not to let the program itself determine if these initial settings are adequate or need to be globally or locally subdivided to resolve key dynamics.

I’m guessing that predictivity will still suck, because hey, the climate is chaotic and highly nonlinear. But at least such a program might be able to answer metaquestions like “just how chaotic and nonlinear is that, anyway, when one can freely increase resolution or run the model to some sort of adaptive convergence”. Even if solving the model to some reasonable tolerance proved to be impossibly expensive — as I strongly suspect is the case — we would actually know this and would know not to take climate models claiming to predict fifty years out seriously. Hey, there are problems Man was Not Meant to Solve (so far), like building a starship within the bounds of the current knowledge of the laws of physics, solving NP-complete problems in P time (oh, wait, that could actually be what this problem is), building a direct recursion relation capable of systematically sieving out all of the primes, generating a truly random number using an algorithm, and who knows, maybe long term climate modelling.

rgb
 
Did you know that at one time Earth's atmosphere was 10-25% CO2 (science is still arguing about the exact number)? That's 100,000 to 250,000 ppm. So how come the earth didn't burn up then? Or during the Devonian? Or during all those other periods of MUCH higher atmospheric CO2 concentration?

Damn cultists.

Ishmael

It did burn up. Are you joking? 'Cause you used to be smarter than this, a lot smarter.
 
Brown doesn't say where the energy is going. That doesn't concern you?
Brown says that we can't know where the energy is going. We can't prove that the sun's energy won't simply bounce off into space and leave us cold. So there's no point in observing climate trends.
 
The CO2 prevents it from bouncing back into space.
 
What effect will global warming have on the growth of mushrooms?
 
It did burn up. Are you joking? 'Cause you used to be smarter than this, a lot smarter.

It did? Well then we all must be living in a Matrix on some far-off world, because according to the predictions of the cultists, when compared to the paleo-climate, or even the historical climate, we just shouldn't be here.

Ishmael
 