We should pay less attention to "results" produced by computer models
That’s the basic message our friend Warren Meyer (of Coyote Blog and now Forbes) makes in an article. His points are not only well made, but valid. And given how inaccurate the models driving debate and spending have proven to be, we should insist on better data before those decisions are made.
Meyer points out that there are few, if any, CEOs of non-financial firms who would invest a penny based solely on computer models. Yet we have a propensity to place far more confidence in models than they have done anything to earn.
Last week the Council of Economic Advisers (CEA) released its congressionally commissioned study on the effects of the 2009 stimulus. The panel concluded that the stimulus had created as many as 3.6 million jobs, an odd result given that the economy as a whole actually lost something like 1.5 million jobs over the same period. To reach its conclusions, the panel ran a series of complex macroeconomic models to estimate economic growth on the assumption that the stimulus had not been passed. Their results showed employment falling by over 5 million jobs in that hypothetical scenario, an eyebrow-raising result that is impossible to verify against actual observations.
Not only is it impossible to verify, it was issued as a de facto “truth” and the “stimulus” was declared a “success”. And don’t forget the inclusion, now, of one of the world’s best weasel words to pad the results – jobs “saved”. However, the administration goes to great lengths to ignore its previous claim that if the “stimulus” were passed, unemployment wouldn’t rise above 8%. One has to guess, given the results, that the computer model was wrong about that.
Meyer goes on to point out how modeling that can’t predict the complex world of economics is somehow considered the “gold standard” of predictability when it comes to the exponentially more complex climate. So much so that governments everywhere are basing trillions of dollars of taxes (cap-and-trade) on the results of such models in a supposed effort to “save the planet”.
While we have been bombarded with hockey sticks and forlorn polar bears, our focus in climate should really be on the computer models. The primary scientific case for man-made CO2 as the main driver of global temperatures is made in exactly the same way that the stimulus was determined to have created 3.6 million jobs: computer modeling. No one yet has been clever enough to structure a controlled experiment to isolate the effect of rising CO2 levels from other changing variables in the complex global climate. So, just like the CEA did in scoring the stimulus, climate scientists use computer models to run virtual experiments, running the models backward over the last century with varying assumptions for CO2 levels.
This modeling approach yields amazingly circular logic. Like macroeconomic models built by devoted Keynesians, climate models are constructed by academics who passionately believe that a single variable, CO2 concentration, is the dominant driver of the whole complex climate system. When run retrospectively, the models they create unsurprisingly give the result that past temperature increases are mainly attributable to CO2. The problem with these models is that when run forward, as in the case of the Washington Redskins election model, they do a terrible job of predicting the future. None of them, for example, predicted the flattening of global temperatures over the last decade.
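The forward-prediction failure described above has a familiar statistical shape: a sufficiently flexible model tuned to past data can “explain” it almost perfectly and still fall apart when extrapolated. Here is a toy sketch of that effect – not a climate or economic model, and all numbers are synthetic – using a high-degree polynomial fit to a noisy, mostly flat series:

```python
# Toy illustration (not a climate model): a flexible model fitted
# retrospectively can match past data closely yet fail badly when
# run forward. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "history": 50 years of a gentle trend plus noise
years = np.arange(50, dtype=float)
history = 0.01 * years + rng.normal(0, 0.3, size=years.size)

# Rescale years to [-1, 1] so the polynomial fit is well conditioned
x = (years - 24.5) / 24.5

# Retrospective fit: a 12th-degree polynomial "explains" the past well
coeffs = np.polyfit(x, history, deg=12)
fitted = np.polyval(coeffs, x)
in_sample_error = np.mean((fitted - history) ** 2)

# Run the same model forward ten years (extrapolation beyond the fit range)
future_years = np.arange(50, 60, dtype=float)
future_x = (future_years - 24.5) / 24.5
forecast = np.polyval(coeffs, future_x)

# The "truth" simply continues the same gentle trend
future_truth = 0.01 * future_years
out_of_sample_error = np.mean((forecast - future_truth) ** 2)

print(f"in-sample MSE:     {in_sample_error:.4f}")
print(f"out-of-sample MSE: {out_of_sample_error:.4f}")
```

Running this, the backward-looking fit error is small while the forward forecast error is orders of magnitude larger – the model has memorized the noise in the past, not the process generating the future. That is the core of the complaint about models validated only by hindcasting.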
Yet policy has been proposed and written based on results that are unverifiable and questionable at best. That’s insanity. The purported case for relying on those results anyway is that if we wait for real data, it may be too late. But when the real data appears (such as the flattening of global temperatures over the past decade), the modelers and proponents of government action want to ignore it and deny its importance.
This all goes back to two themes I’ve been hammering for quite some time – common sense and scientific skepticism. Both are necessary tools of a rational person. And Meyer nails the point:
Our common sense about government stimulus tells us that the government is highly unlikely to invest money more productively than the private entities from whom the government took the money. Unfortunately, we have allowed this common sense to be trumped by computer models. Once our imperfect understanding of the economy was laundered through computer models and presented with two-decimal precision, smart people somehow lost their skepticism.
We are now facing what is potentially an even more expensive decision: to regulate CO2 based mainly on computer models that claim to be able to separate the effects of trace concentrations of CO2 from a hundred other major climate variables. If your common sense is whispering to you that this seems crazy, listen to it. Otherwise all we get is garbage in, money out.
The “garbage in” should be obvious. Unfortunately, the “money out” is money coming out of your wallet to pay for unproven science and unfounded economic models.