Peregrinator
Hooded On A Hill
- Joined
- May 27, 2004
- Posts
- 89,482
As long as we're bashing religiosity and the right...
Michael Barone, NRO
Care to comment on the fact that with this post you c&ped an ad hominem attack and signed on to it?
As long as we're bashing religiosity and the right...
Michael Barone, NRO
has anyone said why if Anthro climate change is bullshit scientists keep insisting it's real. what's in it for them?
Care to comment on the fact that with this post you c&ped an ad hominem attack and signed on to it?
It's only ad hominem if it's not true.
The movement has adopted the methods and trappings of religion.
This is an a priori observation to even the most slow among us. You yourself act as devout as any member of any congregation that has tried to convert me over the course of 50+ years to the "truth" of their particular belief system. There are a ton of them out there that you have no problem in denigrating as sheer ignorance, yet somehow, it's not the least bit possible that you too could fall sway to a powerful, seductive and charismatic movement, oh, no, perish that thought for you are a free thinker, a man of enlightenment, of letters and many, many experts for friends who constantly give you good reason for your surety...
It's gone long past Science. Science does not crusade; followers, Hoffer's True Believers, they crusade in the name of Science.
Money, lots of money, and power. That's what it's all about.
Ishmael
Ca$h money, government grants, a cowed and worshipful following ready to Crusade, to do good, to join that noble cause of saving all of humanity.
Government can use the fear to humiliate and enslave us, and once society is stripped of all religious morality, then any morality will do, and soon you have a self-feeding cycle: Government plunders, Science partakes, and everyone is proud of their efforts, for they have set for themselves a morality and a nobility in what they do.
As Perg has said many times, even if they're wrong, what the hell is wrong with trying to clean up the planet? You see even in this an ends-justify-the-means mentality, and usually the best way to produce this mentality is an irrational fear stoked by the infallible, omnipotent and unquestioned expertise of the Priesthood.
That's what Christians say to me all the time, A_J, you heathen Atheist, what could be wrong with believing? There's no downside to being wrong and a huge upside to being right...
"When plunder becomes a way of life for a group of men living together in society, they create for themselves in the course of time a legal system that authorizes it and a moral code that justifies it."
Frédéric Bastiat
...
Loading just the first program opens up another huge can o' worms. The program description reads:
pro cal_cld_gts_tdm,dtr_prefix,outprefix,year1,year2,info=info
; calculates cld anomalies using relationship with dtr anomalies
; reads coefficients from predefined files (*1000)
; reads DTR data from binary output files from quick_interp_tdm2.pro (binfac=1000)
; creates cld anomaly grids at dtr grid resolution
; output can then be used as dummy input to splining program that also
; includes real cloud anomaly data
So, to me this identifies it as the program we cannot use any more because
the coefficients were lost. As it says in the gridding read_me:
Bear in mind that there is no working synthetic method for cloud, because Mark New
lost the coefficients file and never found it again (despite searching on tape
archives at UEA) and never recreated it. This hasn't mattered too much, because
the synthetic cloud grids had not been discarded for 1901-95, and after 1995
sunshine data is used instead of cloud data anyway.
But, (Lord how many times have I used 'however' or 'but' in this file?!!), when
you look in the program you find that the coefficient files are called:
rdbin,a,'/cru/tyn1/f709762/cru_ts_2.0/_constants/_7190/a.25.7190',gridsize=2.5
And, if you do a search over the filesystems, you get:
crua6[/cru/cruts] ls fromdpe1a/data/grid/cru_ts_2.0/_makecld/_constants/_7190/spc2cld/_ann/
a.25.01.7190.glo.Z a.25.05.7190.glo.Z a.25.09.7190.glo.Z a.25.7190.Z
crua6[/cru/cruts] ls fromdpe1a/data/grid/cru_ts_2.0/_makecld/_constants/_7190/spc2cld/_mon/
...
So.. we don't have the coefficients files (just .eps plots of something). But
what are all those monthly files? DON'T KNOW, UNDOCUMENTED. Wherever I look,
there are data files, no info about what they are other than their names. And
that's useless.. take the above example, the filenames in the _mon and _ann
directories are identical, but the contents are not. And the only difference
is that one directory is apparently 'monthly' and the other 'annual' - yet
both contain monthly files.
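The filename mismatch described above can be illustrated with a small helper (hypothetical, and in Python rather than the original IDL): given the coefficient filename a program expects and a directory listing, report which entries match, allowing for a compression suffix such as the `.Z` seen on disk.

```python
# Hypothetical helper, not from the CRU code: check whether a program's
# expected coefficient file exists in a directory listing, with or
# without a trailing compression suffix.
def find_coefficient_file(expected, listing):
    """Return entries in `listing` matching `expected`, optionally
    carrying a compression suffix like '.Z' or '.gz'."""
    suffixes = ("", ".Z", ".gz")
    return [name for name in listing
            if any(name == expected + s for s in suffixes)]

# The _ann listing from the search above: only a compressed annual file
# matches the name the program opens; the .glo monthly files do not.
listing = ["a.25.01.7190.glo.Z", "a.25.05.7190.glo.Z",
           "a.25.09.7190.glo.Z", "a.25.7190.Z"]
print(find_coefficient_file("a.25.7190", listing))  # ['a.25.7190.Z']
```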
...
… I have studied the climate models and I know what they can do. The models solve the equations of fluid dynamics, and they do a very good job of describing the fluid motions of the atmosphere and the oceans. They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields and farms and forests. They do not begin to describe the real world that we live in. The real world is muddy and messy and full of things that we do not yet understand. It is much easier for a scientist to sit in an air-conditioned building and run computer models, than to put on winter clothes and measure what is really happening outside in the swamps and the clouds. That is why the climate model experts end up believing their own models.
The history of humanity is full of people who were absolutely dead-set sure, and completely wrong.
Climate models are not evidence: they are imperfect “simulations” of the climate, not the climate itself. Our global atmosphere is a messy algorithm, with oceans, clouds, rain, water vapor, solar wind, magnetic fields, forests, ice-cover, glaciers, volcanoes, heat from below, and moving dust clouds of soot. It’s just not possible to simulate the real atmosphere without making assumptions, estimates or decisions on which parts to simplify or omit. Since all those things rely on the opinions of the modelers, no matter how well intentioned or educated they are, a model is a glorified opinion.
-Joanne Nova
http://joannenova.com.au/2010/01/is-there-any-evidence/
Oct. 8 (Bloomberg) -- Nassim Nicholas Taleb, author of “The Black Swan,” said investors who lost money in the financial crisis should sue the Swedish Central Bank for awarding the Nobel Prize to economists whose theories he said brought down the global economy.
“I want to make the Nobel accountable,” Taleb said today in an interview in London. “Citizens should sue if they lost their job or business owing to the breakdown in the financial system.”
Taleb said that the Nobel Prize for Economics has conferred legitimacy on risk models that caused investors’ losses and taxpayer-funded bailouts. Sweden’s central bank will announce the winner of this year’s award on Oct. 11.
Taleb singled out the Nobel award to Harry Markowitz, Merton Miller and William Sharpe in 1990 for their work on portfolio theory and asset-pricing models.
“People are using Sharpe theory that vastly underestimates the risks they’re taking and overexposes them to equities,” Taleb said. “I’m not blaming them for coming up with the idea, but I’m blaming the Nobel for giving them legitimacy. No one would have taken Markowitz seriously without the Nobel stamp.”
Markowitz, a professor of finance at the Rady School of Management at the University of California, San Diego, didn’t return a phone call seeking comment. Miller, who was a professor at the University of Chicago, died in 2000 at the age of 77.
“People used the theory and assigned numerical forecasts to the algebra,” said Sharpe, a professor of finance, emeritus, at the Graduate School of Business at Stanford University, in a telephone interview. “But I’m not going to take the blame for the numbers they put in.”
Probability Models
In his 2007 bestseller “The Black Swan: The Impact of the Highly Improbable,” Taleb described how unforeseen events can roil markets. He warned that bankers were relying too much on probability models and disregarding the potential for unexpected catastrophes.
“If no one else sues them, I will,” said Taleb, who declined to say where or on what basis a lawsuit could be brought.
The Nobel prizes in physics, chemistry, medicine, peace and literature were established in the will of Alfred Nobel, the Swedish inventor of dynamite who died in 1896. The first awards were handed out in 1901. The Swedish Central Bank founded the economics award in 1968 in memory of Nobel. Previous winners of that prize include Milton Friedman, Amartya Sen, Paul Krugman, Robert Merton and Myron Scholes.
A former derivatives trader, Taleb is a professor of risk engineering at New York University and advises Universa Investments LP, a Santa Monica, California-based fund that bets on extreme market moves.
While there are uncertainties with climate models, they successfully reproduce the past and have made predictions that have been subsequently confirmed by observations.
Well then, of the few people who do understand what's going on (in your opinion), how many are deniers and skeptics?
I don't make a habit of attempting to convince the unconvinceable. It's usually a mistake to do so.
__________________
There aren't enough hours in the day to explain to someone that a computer model of an already imperfectly understood, immensely complex, multi-variate, NON-LINEAR system is a farce from the git-go.
If you don't understand that, you're already beyond help— you're gullible and a computer model can be manufactured that will get you to believe damn near anything.
If you have not looked at NASA GISS ModelE, if you have no experience with attempting to program computer simulations of highly complex, multi-variate, non-linear systems, you'll never understand what's going on.
that graph, indeed, is compelling.
maybe jenn should add that to her USPS screed.
You are also pretty damned sure that I should not be using it for economics, but there is one problem with that attitude, American Thinker has been a lot closer to reality than Turbo TIMMAH! and friends...
Detailed weather records from 20,000 years ago...
Was that Barney Rubble's job?
I missed that post earlier. Trysail, what other huge swathes of science are primarily computer-modeled or not reproducible? Do you routinely castigate them, too? Cuz it seems to me that could get problematic pretty fast.
He'll just dodge your question, as he did when I asked the same thing a few pages back.
Well, he might, but I think he actually likes me, which might make a small difference.
So we should stop designing aircraft, then? You do know that non-laminar flow over aircraft wings is chaotic, right? Bernoulli's theory notwithstanding?
Congratulations. In a rare demonstration of posting virtuosity, you actually managed to post nonsense within a non sequitur.
Would you check the results of a model with another model? Before you answer, be sure you know what the question is.
A model—whether it is physical, statistical, mathematical, or some combination—is an algorithmic device designed to make predictions about some observable thing. You want today to know the price of tomorrow’s Dow Jones Industrial Index? There are models for that; usually statistical models.
You want today to know whether it will rain in Detroit tomorrow so that you can decide whether to plant your crops in the old lots that used to contain houses? There’s a model for that; a physical-statistical weather model called MOS (model output statistics; see Part II).
Now, how would you, assuming you are not an expert in these matters, check the accuracy of your model? Would you (a) compare the model’s predictions with what actually happened, or (b) produce another model and check the results of the first model against the predictions of the second?
The right answer is (a), of course, but the problem is that there are two ways to interpret “what actually happened.” You probably thought it meant “what happened in the future.” Now, it is the great shame in the field of statistics—both in the dismal way it is taught and the worse way it is practiced by most—that (a) is nearly always interpreted to mean “what happened in the past.”
Nearly all—the exceptions to this are rarer than sober Paul Krugman columns—statistical models, and many physical models, are checked against the data that was used to fit, or create them. Since it is an elementary theorem that any model may be made to fit perfectly—not just closely, perfectly—to any set of historical data, to claim that your model is good because it fits old data well is a hollow boast.
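A toy illustration of that claim (synthetic numbers, not from the article): a sufficiently flexible model can always be made to fit its historical data essentially perfectly, yet that perfect fit says nothing about how it predicts the future.

```python
# Sketch: fit a degree-9 polynomial through 10 noisy "historical" points.
# In-sample the fit is essentially exact; one step into the "future" it
# is far worse. Numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
t_past = np.arange(10.0)                            # historical time points
y_past = 0.5 * t_past + rng.normal(0.0, 1.0, 10)    # noisy observations

# A degree-9 polynomial passes through all 10 points.
coeffs = np.polyfit(t_past, y_past, 9)
in_sample_error = np.max(np.abs(np.polyval(coeffs, t_past) - y_past))

# Extrapolated to t = 10, where the underlying trend is 0.5 * 10 = 5,
# the same "perfect" model misses badly.
future_error = abs(np.polyval(coeffs, 10.0) - 0.5 * 10.0)
print(in_sample_error, future_error)
```

The in-sample error is at machine-precision level; the one-step extrapolation error is orders of magnitude larger, which is exactly why "it fits old data well" is a hollow boast.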
This is the reason for the great overconfidence of experts who build and use models. And don’t think it doesn’t matter, because it does. People in charge of us frequently make decisions and set policy based on these models. We are at the mercy of bad statistics.
Weather and Climate Models
But it’s not all bad. It is to the great glory of meteorological models that they are usually—in practice, I mean—checked against what happened in the future. Weather models have the advantage of a constant stream of model predictions and future observations. Discrepancies between the two are noted quickly and used in tweaking the models so that they perform better in the future.
Anybody who cares to look will discover that the performance of meteorological models has improved dramatically over the last thirty years. Of course, people’s expectations of accuracy have also increased, so that the level of grousing about the weatherman has remained constant. Human nature.
Climate models are in a different category. So far, all they can boast about is how well they fit the data used to build them, which we have just seen is no great shakes. This being true, those who use climate model output should be humble, they should be cautious, even timid about their prognostications. And that’s just what we see in practice, right?
Actually, it’s still worse, because climate modelers—and in their development stages, weather modelers—answer (b) to that question above. They check their models against the output of other models. How could this be?
The Analysis
Climate/weather models take current observations as input and produce forecasts of future observables as output. But these physical models cannot take observations raw, like statistical models can. They must first process those observations so that they fit into the model environment. This assimilation is called an analysis. Analysis is a model itself.
Climate/weather models are run on grid-like structures, but observations come irregularly: we do not have equally spaced observations over the surface of the Earth and through the atmosphere. To operate, the observations have to be placed on the model grid. The analysis, then, is a sort of interpolation that does this. This is not a detriment; it is a necessary step to get these models to run.
Once the analysis is complete, the model is integrated forward in time to produce a forecast. OK so far? Because it’s about to get tricky. At that future point—the time of the forecast—come new observations. Ideally, the climate/weather model’s output would be checked against these actual observations, at only the irregularly spaced sites where they are taken. These observations are the truth, the whole truth, and the only truth.
But that’s not what happens. Instead, these new observations are read into the model in a new analysis cycle. This interpolates these new observations to the model grid. Then the old model integration is checked against this new analysis.
Thus, the model’s accuracy is checked with another model.
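The cycle just described can be sketched in one dimension (all numbers invented for illustration): scattered observations are interpolated onto the model grid to form an analysis, and the forecast is then scored against that analysis grid rather than against the handful of real observation sites.

```python
# Schematic 1-D sketch of forecast-vs-analysis verification.
# The "truth" field is sin(x); observations carry measurement error.
import numpy as np

grid = np.linspace(0.0, 10.0, 11)        # model grid points
forecast = np.sin(grid)                   # model forecast on the grid

obs_x = np.array([1.3, 4.7, 8.2])         # irregular observation sites
obs_y = np.sin(obs_x) + np.array([0.3, -0.2, 0.1])  # obs with error

# "Analysis": interpolate the scattered observations onto the model grid.
analysis = np.interp(grid, obs_x, obs_y)

# Model-vs-analysis verification compares two full, equally sized grids...
err_vs_analysis = np.mean(np.abs(forecast - analysis))

# ...while model-vs-observation verification uses only the three real sites.
forecast_at_obs = np.interp(obs_x, grid, forecast)
err_vs_obs = np.mean(np.abs(forecast_at_obs - obs_y))
print(err_vs_analysis, err_vs_obs)
```

The two scores differ, and only the second ever touches an actual observation; the first compares the model to another model.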
The Analysis (cont.)
Two problems arise when comparing a model’s integration (the forecast) with an analysis of new observations, which are not found when comparing the forecast to the observations themselves. Verifying the model with an analysis, we compare two equally sized “grids”; verifying the model with observations, we compare a tiny number of model grid points with reality.
Now, some kinds of screwiness in the model are also endemic in the analysis: the model and analysis are, after all, built from the same materials. Some screwiness, therefore, will remain hidden, undetectable in the model-analysis verification.
However, the model-analysis verification can reveal certain systematic errors, the knowledge of which can be used to improve the model. But the result is that the model, in its improvement cycle, is pushed towards the analysis. And always remember: the analysis is not reality, but a model of it.
Therefore, if models over time are tuned to analyses, they will reach an accuracy limit which is a function of how accurate the analyses are. In other words, a model might come to predict future analyses wonderfully, but it could still predict real-life observations badly.
Which brings us to the second major problem of model-against-analysis verification. We do not know actually how well the model is performing because it is not being checked against reality. Modelers who rely solely on the analysis model-checking method will be—they are guaranteed to be—overconfident.
The direct output of most climate and weather models is difficult to check against actual observations because models make predictions at orders and orders of magnitude more locations than there are observations. Yet modelers are anxious to check their models at all places, even where there are no observations. They believe that analysis-verification is the only way they can do this.
This is important, so allow me a redundancy: models make predictions at wide swaths of the Earth’s surface where no observations are taken. At a point near Gilligan’s Island, the model says “17 °C”, yet we can never know whether the model was right or wrong. We’ll never be able to check the model’s accuracy at that point.
We can guess accuracy at that point by using an analysis to make a guess of what the actual temperature is. But since model points—in the atmosphere, in the ocean, on the surface—outnumber actual observation locations by so much, our guess of accuracy is bound to be poor.
MOS
Actual observations can be brought into the picture by matching model forecasts to future observations and then building a statistical model between the two. This is called model output statistics, or MOS. The whole model, at all its grid points, is fed into a statistical model: luckily, many of the points in the model will be found to be non-predictive and thus are dropped. Think of it like a regression. The model’s outputs are like the Xs, and the observations are like the Ys, and we statistically model Y as a function of the Xs.
So, when a new model integration comes along, it is fed into a MOS model, and that model is used to make forecasts. Forecasters will also make reference to the physical model integrations, but the MOS will often be the starting point.
Better, MOS predictions are checked against actual observations, and it is by these checks which we know meteorological models are improving. And those checks are also fed back into the model building process, creating another avenue for model improvement. MOS techniques are common for meteorological models, but not yet for climatological models.
Measurement Error
MOS is a good approach to correct gross model biases and inaccuracies. It is also used to give a better indication of how accurate the model—the model+MOS, actually—really is, because it tells us how the model works at actual observation locations.
But MOS verification will still give an overestimate of the accuracy of the model. This is because of measurement error in the observations.
In many cases, nowadays, measurement error of observations is small and unbiased. By “unbiased” I mean, sometimes the errors are too high, sometimes too low, and the high and low errors balance themselves out given enough time. However, measurement error is still significant enough that an analysis must be used to read data into a model; the raw data measured with error will lead to unphysical model solutions (we don’t have space to discuss why).
Measurement error is not harmless. This is especially true for the historical data that feeds climate models, especially proxy-derived data. Proxy-derived data is itself the result of a model relating some proxy (like a tree ring) to a desired observation (like temperature). The modeled—not actual—temperature is fed to an analysis, which in turn models the modeled observations, which in turn is physically modeled. Get it?
Measurement error is a problem in two ways. Historical measurement error can lead to built-in model biases: after all, if you’re using mistaken data to build—or if you like “inform”—a model, that model, while there is a chance it will be flawless, is not likely to be.
Plus, even if we use a MOS-type system for climate models, if we check the MOS against observations measured with error, and we do not account for that measurement error in the final statistics (and nobody does), then we will be too certain of the model’s accuracy in the end.
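One concrete way the contamination shows up (a synthetic simulation, not from the article): the mean squared error computed against noisy observations equals the model's true error plus the measurement-error variance, so unless that variance is accounted for, the verification statistic mismeasures the model's real accuracy.

```python
# Simulation with invented numbers: model error has sd 1.0, measurement
# error has sd 0.5. Scoring against the noisy observations conflates
# the two sources of error.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
truth = rng.normal(15.0, 3.0, n)            # the unobserved real values
model = truth + rng.normal(0.0, 1.0, n)     # model error, sd = 1.0
obs = truth + rng.normal(0.0, 0.5, n)       # measurement error, sd = 0.5

mse_vs_obs = np.mean((model - obs) ** 2)      # approx 1.0**2 + 0.5**2 = 1.25
mse_vs_truth = np.mean((model - truth) ** 2)  # approx 1.0
print(mse_vs_obs, mse_vs_truth)
```

The score against observations is not the model's true error, and any final statistic that treats it as such carries that unacknowledged extra uncertainty.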
In short, the opportunity for over-certainty is everywhere.
1) It's not true. Retract it or admit your hypocrisy.
2) The scientists are not crusading. Well, most of them aren't. The scientists are trying to muddle through the smokescreen set up by assholes like the author of your ad hominem piece. I'm not "devout" about this any more than I'm "devout" about gravity, evolution, genetics, solution chemistry, or any of a million other things that other people figured out before me and told me about.
3) Your ad hominem c&p contains two paragraphs with factual errors. Do you care? Did you notice?
http://reason.com/archives/2011/10/24/obama-fishing-czar-divides-dem
The next battle over President Obama’s job-killing regulations may take place on the Atlantic Coast, where fishermen, and the senators and congressmen who represent them, are voicing mounting frustration at the Obama administration’s “catch-share” rules for the fishing industry.
The Republican senator from Massachusetts, Scott Brown, on Saturday stood with fishermen in Gloucester and called on Mr. Obama to fire the administrator of the National Oceanic and Atmospheric Administration, Jane Lubchenco.
But the frustration at Ms. Lubchenco, who also serves as under secretary of commerce for oceans and atmosphere, extends well beyond Republican, Tea Party-backed senators or libertarians for whom the idea of a federally enforced “share” program sounds like some nightmare out of an Ayn Rand novel.
A surprising and growing number of Democratic elected officials are also expressing annoyance and outright opposition. Sen. Kerry, the Democrat of Massachusetts who was his party’s presidential nominee in 2004, said Friday, “Because of federal regulations limiting fishing in our waters, a lot of our fishermen have been put out of business or pushed to the brink.” Also last week, he sent a stern letter to Ms. Lubchenco, warning her, “tensions between federal regulators and the fishing community have reached a boiling point beyond anything I’ve ever witnessed in my 26 years in the Senate.”
...
The story hasn’t yet hit The New York Times, Politico, or the Drudge Report. But when it does, it won’t be pretty. At the center of the storm is Ms. Lubchenco, whose official biography fits what to the Obama administration’s critics will seem like a familiar pattern. Like President Obama himself and like Mr. Obama’s initial economic adviser, Lawrence Summers, Ms. Lubchenco has an advanced degree from Harvard. Like Mr. Obama and Mr. Summers, Ms. Lubchenco has little private sector experience, but spent a lot of time teaching at a university—in her case, more than 20 years at Oregon State University. When President Obama nominated her to the NOAA job, she was vice chairman of the board of the Environmental Defense Fund, an environmental advocacy group that promotes catch shares, which are kind of like a cap-and-trade emissions scheme transferred to fishery management. When her appointment was announced, EDF’s president, Fred Krupp, praised her by saying, “her depth of understanding of climate change is unmatched.”
Her official biography also notes that she is a recipient of 14 honorary doctoral degrees and of one of the MacArthur Foundation’s “genius” awards.
...
Partly it is by displaying a kind of arrogance towards those not blessed with her genius. She reportedly minimized the job losses under catch-share by describing them as “marginal jobs where people are squeaking by.”