Why the UN Climate Models Are Inherently Unreliable, and Should Be Abandoned in Favor of More Top-Down Approaches

Alan Carlin | May 11, 2017

Those who have examined the Wallace, Christy, and D’Aleo (WCD) 2016 and 2017 reports discussed in recent months on this blog may be understandably confused as to how they relate to the numerous climate models used by the UN IPCC and the USEPA over many years in support of their climate alarmism. The methodologies are entirely different and the conclusions are largely if not totally opposite. The purpose of this post is to explain why, and to reach a judgment as to which approach is scientifically valid. Given the opposite conclusions, at most one of them can be.

The UN selected two parent organizations for their Intergovernmental Panel on Climate Change (IPCC) effort to show how modern civilization was bound to fail because of catastrophic global temperature increases. (Yes, the UN IPCC effort has never been a neutral one.) These parent organizations were the UN World Meteorological Organization (UNWMO), made up of the national meteorological organizations (such as NOAA in the US) and the UN Environment Programme (UNEP).

The UNWMO and its member national weather services in each country have had extensive experience with General Circulation Models (GCMs) because they have long used them for weather forecasting, which is what they do for a living. This may have led to the decision to adapt them to a new purpose, climate forecasting, in response to the UN’s interest in this new problem in the late 20th Century, and they have continued using them for that purpose over several decades to the present day. GCMs may be the best approach to short-term weather forecasting, but they are well known to fail for longer-term weather forecasting (even as little as two weeks out).

They are particularly bad for climate forecasting, because climate involves the long-term behavior of a coupled, non-linear, chaotic system. Mike Jonas has argued that “It is clear that no matter how much more knowledge and effort is put into the climate models, and no matter how many more billions of dollars are poured into them, they can never be used for climate prediction while they retain the same basic structure and methodology.”

In my view the climate GCMs should never have been assumed to be sufficiently reliable for determining climate policy. But that is exactly the opposite of what has been done by the UN, climate alarmist groups, and even the Obama Administration. Even the IPCC itself admits (2001, WG1): “We should recognize that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

So the UN IPCC attempted the gargantuan task of modelling the climate as a whole by aggregating data from a large number of grid cells covering the planet as part of their GCM methodology. Each new variable requires additional data for every grid cell and an understanding of exactly how that variable affects the others, and the results are extremely sensitive to the assumed initial conditions. Jonas argues that in doing this they left out explanatory variables that, according to the results of the WCD reports, are really important in explaining global temperatures.
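
To see why sensitivity to initial conditions matters, consider the following minimal sketch. It is purely illustrative and has nothing to do with any actual GCM code: it integrates the classic Lorenz-63 “toy” system (three coupled non-linear equations originally abstracted from atmospheric convection) from two starting points that differ by one part in a billion, and the two trajectories soon bear no resemblance to each other.

```python
# Illustrative only: the Lorenz-63 toy system, not a GCM.
# It demonstrates the point in the text: in a chaotic system, a tiny change
# in the initial conditions produces a completely different trajectory.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 40.0)
t_eval = np.linspace(*t_span, 4001)

run_a = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0],
                  t_eval=t_eval, rtol=1e-9, atol=1e-12)
run_b = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-9],   # perturb by one part in a billion
                  t_eval=t_eval, rtol=1e-9, atol=1e-12)

# The separation between the two runs grows roughly exponentially until it saturates.
separation = np.linalg.norm(run_a.y - run_b.y, axis=0)
for t in (0, 10, 20, 30, 40):
    i = np.argmin(np.abs(t_eval - t))
    print(f"t = {t:2d}:  separation = {separation[i]:.3e}")
```

This is the behavior the IPCC quotation above describes, and it is why long-range prediction of specific climate states is so problematic for this class of models.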

The IPCC climate GCM effort quickly reached the limits of computer power, available budgets, and knowledge of how many important climate variables work. Because of this, lack of interest, or lack of knowledge, Mike Jonas believes major variables were left out of the GCMs, including oceanic oscillations, volcanic eruptions, and changes in the sun. Even a quick glance at global temperature charts over recent decades shows the great importance of oceanic oscillations, particularly the El Niño-Southern Oscillation (ENSO) with its El Niño and La Niña phases. Yet the GCMs apparently left them out, possibly because the modelers had little idea how they functioned or wanted the world to concentrate on their CO2-centric views.

For reasons explained by Jonas, the UN GCM effort was impossible from the start, has been much more complicated than necessary to answer the key questions, and its conclusions are highly questionable because of the IPCC’s decision to rely on GCM climate models. This will continue to be the case as long as these models are the principal tool used. The IPCC has been using a poor tool and not admitting that the tool has little if anything to contribute to solving the alleged problem.

The WCD Econometric Approach

Jonas recommends a top-down rather than the IPCC’s bottom-up GCM approach, and the WCD econometric approach would appear to fit that description. Rather than trying to understand all aspects of climate in each grid cell in the world (while leaving out major explanatory variables), WCD picked out the variables that seemed most likely to affect the one they cared about (tropical and global temperatures) and used available data sets to determine the impact or contribution of each explanatory variable to it. They did not need to understand the physical interactions between every variable in each grid cell, since all they wanted to know was how much of an effect each relevant explanatory variable had on temperatures. They also used a completely different methodology and discipline, econometrics, a subdiscipline of economics rather than meteorology.

The computer effort required by the WCD approach is vastly smaller, and the importance of each explanatory variable is determined by data, not by the guesswork of fallible and all too often biased humans as in the IPCC GCM efforts. Major explanatory variables are included rather than left out. This approach does not show exactly how each variable operates, only how much of an effect it has on the critical dependent variable (global temperature). One potential complication is that a simultaneous parameter estimation process must be used, rather than just direct least squares on a single temperature equation. That process is somewhat more difficult to carry out, but in the end it was not needed in this particular case because CO2 was shown to have no significant effect on temperature.
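
As a concrete (and purely hypothetical) illustration of the kind of single-equation estimation being described, the sketch below regresses a temperature series on a handful of candidate explanatory variables and reports each coefficient and its statistical significance. It is not the WCD code; the file name and column names are made up, and a real analysis would involve considerably more care (lag structure, data validation, and, where needed, simultaneous-equation methods).

```python
# A minimal sketch of a single-equation "top-down" regression.
# NOT the WCD analysis: the file name and column names are hypothetical,
# and a real study would also need lag selection, unit-root/cointegration
# checks, and simultaneous-equation methods where simultaneity is suspected.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("monthly_climate_data.csv")        # hypothetical input file

y = df["temperature_anomaly"]                        # dependent variable
X = df[["solar_activity", "volcanic_aerosols",
        "enso_index", "co2_ppm"]]                    # candidate explanatory variables
X = sm.add_constant(X)

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())    # coefficient signs, t-statistics, p-values
print(model.params)       # estimated contribution of each variable
```

The point is simply that the relative contribution of each variable is estimated from the data themselves rather than assumed, which is the heart of the top-down approach described above.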

In summary, a much better and far simpler approach for the IPCC (if anything was needed at all) would have been, instead of turning to vastly complicated and ill-suited meteorological models while ignoring many of the obvious variables that might actually determine global temperatures (such as solar and volcanic activity and ENSO oceanic oscillations), to use the available data to determine the extent to which each of these and other variables affects global temperatures, using statistical econometric techniques.

The IPCC models do the opposite: they use current knowledge to try to determine the relationships between various somewhat-understood factors influencing climate while ignoring other variables of some importance, and then fiddle with the remaining unknowns to “tune” the models to fit historical observations but not necessarily the physics. This is nothing more than guesswork (and probably biased guesswork in this case) and is so unreliable as to be worthless despite the tens of billions of dollars spent on this approach. It has brought unexpected riches to the former meteorological modelers (now called climatologists) but has cost taxpayers dearly in terms of paying for the useless research and for the faulty climate policies that are “justified” on the basis of useless and biased GCMs.

So a much more useful approach would have been (if anything, since climate alarmism is basically a non-problem) to turn to a completely different discipline: econometrics, as used in the WCD reports. Unfortunately this may have been precluded by the UN’s original choice of the WMO to pair with the UNEP.

An interesting question is whether the IPCC’s choice of using GCMs may also have been influenced by a guess that this would greatly complicate the task of critics by requiring the use of advanced expertise in many areas to understand the results. It may also have been the view of some climate researchers that this was a golden opportunity to get their climate GCM “play things” funded by taxpayers. Or maybe it was just ignorance on the part of the UNWMO and the IPCC as a whole concerning the usefulness of climate GCMs.

What the WCD Reports and IPCC Models Have Concluded

And what do the WCD reports done to date (on a pro bono basis) conclude? That the IPCC GCM conclusions are wrong on the major issue of what effect CO2 has on temperatures. The IPCC models have been used to argue that CO2 is a virtual “control knob” for global temperatures; the WCD reports conclude that changes in CO2 have no statistically significant effect on temperatures. Global warming over the range of data used can be explained by natural factors when they are included as explanatory variables.

So which to believe? The IPCC modelers undertook a hopeless research effort and appear to have left out what turned out to be critical variables in their reports. WCD undertook a doable effort focused on the much needed answers to a few critical questions. The answer is very clear to me.

My Conclusions

I suggest a complete moratorium on climate alarmist-inspired spending until more detailed comparisons can be made and examined by all concerned. I personally believe we know enough to decide now to abandon the use of all GCM climate modeling and all efforts to reduce CO2 emissions.

Tens of trillions of dollars have been wasted, in part because of the initial UN decision as to which disciplines should be involved in the research and the decision that the IPCC models were a sufficiently reliable foundation for climate policy. And all this should have been known before any work was ever started.

And as long as these misguided UN efforts are continued the world will probably continue to waste about $1.5 trillion per year for almost no benefits on the greatest scam in world history. The US should no longer participate in or fund the hopeless IPCC GCM effort.

The Strange Absence of Science in the Paris “Treaty” Discussion

Alan Carlin | May 5, 2017

The climate topic du jour is whether President Trump should abandon the Paris non-treaty “treaty.” Most of the discussion seems to revolve around esoteric legal issues concerning what the US is and is not allowed to do as a result of President Obama’s agreement to the “treaty” without Senate consent. I find most of the arguments propounded by climate skeptics on this topic to be sound and perceptive. What I find odd is that I have yet to see any discussion of what climate change science might say about the wisdom of continued US “participation” in the Paris “treaty.” In other words, is there any reason why the US should want to abide by the “treaty” at all?

The purpose of the “treaty” is to provide a framework for developed countries to reduce their emissions of greenhouse gases, primarily by reducing their use of fossil fuels. The reason people use fossil fuels is that for most uses they are the most efficient way to supply the energy humans need, beyond their own physical energy, to improve their health and welfare. The climate alarmists have exploited the public’s understandable lack of knowledge concerning climate science to argue that the developed countries (but usually not the less developed countries) should give up some or preferably all fossil fuel use in order to avoid alleged catastrophic anthropogenic global warming (CAGW). Although they have never proven that changes in CO2 emissions will even change global temperatures, that has not prevented them from urging or coercing others to spend other people’s money on their unproven, and now almost certainly false, claims.

The Most Important Issue: Would It Accomplish Anything if “Successful?”

After the dismal failure of their Kyoto Treaty to achieve this end, the alarmists have tried a second approach, called the Paris accord or “treaty,” and flouted the US Constitution by claiming that the “treaty” is not really a treaty. Whether all this is worthwhile ultimately hinges on whether there are sufficient benefits from reducing fossil fuel use to make it worthwhile to give up the many uses humans have found for fossil fuels. If there are not, humans should not agree to give up any of those uses, or waste time and resources on efforts to bring this about. That includes non-treaty “treaties.”

As explained previously, the best current science shows that there are no significant reductions in global temperatures that would result from reducing fossil fuel use, let alone CAGW. And there are much more efficient and effective ways to reduce real pollution from fossil fuel use. Climate alarmism “science” is simply what Richard Feynman called “cargo cult” science. It is long past time to abandon it as well as “treaties” trying to implement it.

Has Science Lost its Way?

By Michael Guillen, Ph.D.

Science’s reproducibility crisis.

For any study to have legitimacy, it must be replicated, yet only half of medical studies celebrated in newspapers hold water under serious follow-up scrutiny — and about two-thirds of the “sexiest” cutting-edge reports, including the discovery of new genes linked to obesity or mental illness, are later “disconfirmed.”

Though erring is a key part of the scientific process, this level of failure slows scientific progress, wastes time and resources, and costs taxpayers in excess of $28 billion a year, writes NPR science correspondent Richard Harris.

The single greatest threat to science right now comes from within its own ranks. Last year Nature, the prestigious international science journal, published a study revealing that “More than 70% of researchers have tried and failed to reproduce another scientist’s experiments, and more than half have failed to reproduce their own experiments.”

The inability to confirm research that was published in highly respected, peer-reviewed journals suggests something is very wrong with how science is being done.

The crisis afflicts even science’s most revered ‘facts,’ as cancer researchers C. G. Begley and Lee Ellis discovered. Over an entire decade they put fifty-three published “landmark” studies to the test; they succeeded in replicating only six – that’s an 11% success rate.

A major culprit, they discovered, is that many researchers cherry-picked the results of their experiments – subconsciously or intentionally – to give the appearance of success, thereby increasing their chances of being published.

“They presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set,” report Begley and Ellis, adding this shocking truth: “There are no guidelines that require all data sets to be reported in a paper; often, original data are removed during the peer review and publication process.”

Another apparent culprit is that – and it’s going to surprise most of you – too many scientists are actually never taught the scientific method. As graduate students, they take oodles of courses in their chosen specialty; but their thesis advisors never sit them down and indoctrinate them on best practices. Consequently, remarks University of Wisconsin-Madison biologist Judith Kimble: “They will go off and make it worse.”

This observation seems borne out by the Nature study, whose respondents said the three top weaknesses behind science’s reproducibility crisis are: 1) selective reporting, 2) pressure to publish, and 3) low statistical power or poor analysis. In other words, scientists need to improve on practicing what they preach, which is: 1) a respect for facts – all of them, not just the ones they like, 2) integrity, and 3) a sound scientific method.
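
The third weakness on that list, low statistical power, is easy to illustrate. The short sketch below uses generic textbook numbers (not figures from the Nature survey) to compute the probability that a two-sample t-test will detect a medium-sized real effect at various sample sizes.

```python
# Illustration of "low statistical power": the chance of detecting a real,
# medium-sized effect with a two-sample t-test at various sample sizes.
# Generic textbook values; nothing here is taken from the Nature survey.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
effect_size = 0.5          # a "medium" standardized effect (Cohen's d)
for n in (10, 20, 50, 100):
    power = analysis.power(effect_size=effect_size, nobs1=n, alpha=0.05)
    print(f"n = {n:3d} per group -> power = {power:.2f}")

# With only 10-20 subjects per group, power is well below the conventional
# 0.8 target, so many true effects are missed and the studies that do clear
# the significance bar tend to overstate the effects they report.
```

Underpowered studies miss real effects and, when they do reach significance, tend to exaggerate them, which feeds directly into the replication failures described above.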

The attendees of the so-called March for Science made a lot of noise about wanting more money and respect from the public and government – what group wouldn’t want that? But nary a whisper was heard from them or the media about science’s urgent reproducibility crisis. Leaving unspoken this elephant-sized question: If we aren’t able to trust the published results of science, then what right does it have to demand more money and respect, before making noticeable strides toward better reproducibility?

Michael Guillen, Ph.D., former Science Editor for ABC News, taught physics at Harvard. His novel, “The Null Prophecy,” debuts July 10.

14,000 Abandoned Wind Turbines Litter the United States

by The Elephant’s Child

July 7, 2013, 7:19 am


The towering symbols of a fading religion: over 14,000 wind turbines, abandoned, rusting, slowly decaying. When it is time to clean up after a failed idea, no green environmentalists are to be found. Wind was free and natural, and harnessing Earth’s bounty for the benefit of all mankind sounded like a good idea. But wind turbines, like solar panels, break down, and they produce less energy over their working lives than the energy it took to make them. The wind does not blow all the time, or even most of the time, and when it is not blowing the turbines require full-time backup from conventional power plants.

Without government subsidy, they are unaffordable. With governments facing financial troubles, the subsidies are unaffordable. It was a nice dream, a very expensive dream, but it didn’t work.

California had the “big three” of wind farm locations — Altamont Pass, Tehachapi, and San Gorgonio — considered the world’s best wind sites. California’s wind farms, almost 80% of the world’s wind generation capacity, ceased to generate even more quickly than the Kamaoa Wind Farm in Hawaii. There are five other abandoned wind farms in Hawaii. Once they are abandoned, getting the turbines removed is a major problem. They are unsightly, they are huge, and that is a lot of material to get rid of.

Unfortunately, the same areas that are good for siting wind farms are natural passes for migrating birds. Altamont’s turbines have been shut down four months out of every year for migrating birds after environmentalists filed suit. According to the Golden Gate Audubon Society, 75-110 Golden Eagles, 380 Burrowing Owls, 300 Red-Tailed Hawks and 333 American Kestrels are killed by the turbines every year. An Alameda County Community Development Agency study points to 10,000 annual bird deaths from Altamont wind turbines. The Audubon Society makes up numbers like the EPA, but there’s a reason why they call them bird Cuisinarts.

Palm Springs has enacted an ordinance requiring their removal from San Gorgonio Pass, but unless something else changes, abandoned turbines will remain rotting eyesores, or the taxpayers who have already paid through the nose for overpriced energy and crony-capitalist tax scams will have to foot the bill for their removal.

President Obama’s offshore wind farms will be far more expensive than those sited in California’s ideal wind locations. Salt water is far more damaging than sun and rain, and offshore turbines don’t last as long. But nice tax scams for his crony-capitalist backers will work well as long as he can blame it all on saving the planet.

MARCHES FOR SCIENCE VS. ACTUAL SCIENCE


Over the weekend, various ill-informed leftists marched around the world in support, ostensibly, of the Earth’s climate. As usual, ignorance was plentiful while knowledge of anything relevant to climate science was invisible.

If you want to learn something about climate science, as opposed to political propaganda, go here to read an important, just-released paper by Dr. James P. Wallace III, Dr. John R. Christy and Joseph S. D’Aleo, which has been endorsed by a number of other prominent climate scientists.

The paper is titled “On the Existence of a ‘Tropical Hot Spot’ & The Validity of EPA’s CO2 Endangerment Finding.” As you likely know, the EPA’s outrageous finding that emissions of carbon dioxide, which is necessary for essentially all life on earth, endanger public health or welfare was the basis for the Obama administration’s war on affordable energy.

Like any legitimate scientific paper, it is hard to summarize. I will try, but you really should read the whole thing.

The models on which global warming alarmism is based all critically hypothesize a “tropical hot spot” which is the alleged “signature” of human-caused warming. In fact, however, no such tropical hot spot exists:

Adjusting for just the Natural Factor impacts, NOT ONE of the Nine (9) Tropical temperature time series analyzed above was consistent with the EPA’s [Tropical Hot Spot] Hypothesis.

That is, adjusting for just the Natural Factor Impacts over their entire history; all nine of tropical temperature data analyzed above have non-statistically significant trend slopes—which invalidates the THS theory. Moreover, CO2 did not even come close to having a statistically significant impact on a single one of these temperature data sets. From an econometric structural analysis standpoint, the generic model worked extremely well in all 9 cases.
***
These analysis results would appear to leave very, very little doubt but that EPA’s claim of a Tropical Hot Spot, caused by rising atmospheric CO2 levels, simply does not exist in the real world. Also critically important, this analysis failed to find that the steadily rising Atmospheric CO2 Concentrations have had a statistically significant impact on any of the 14 temperature time series that were analyzed.

Thus, the analysis results invalidate each of the Three Lines of Evidence in its CO2 Endangerment Finding. Once EPA’s THS assumption is invalidated, it is obvious why the climate models they claim can be relied upon, are also invalid.

It is remarkable that anyone would argue for the superiority of a half-baked theory, as described in a model, over empirical observation. Certainly no competent scientist would do so. Yet that is what is happening in the global warming debate. As we have documented many times, leftists, knowing they are losing the argument, have resorted to altering surface temperature records, over which they have jurisdiction, to conform to their theory. This is, in my opinion, the worst scandal in the history of science.

If the principal natural factors–solar, volcanic and ENSO (El Niño–Southern Oscillation) activity–are taken out of the equation, there has been no net global warming in recent years:

[Chart from the paper: global temperature data adjusted for natural factors, showing no remaining warming trend]

The conclusion, based on empirical evidence:

The above analysis of Global Balloon & Satellite atmospheric temperature as well as Contiguous U.S. and Hadley Global Average Surface Temperature data turned up no statistical support for suggesting that CO2, even taken together with all other omitted variables, is the cause of the positive trend in the reported U.S. and Global temperature data.

In fact, it seems very clear that the Global Warming that has occurred over the period 1959 to date can be quite easily explained by Natural Factor impacts alone. Given the number of independent entities and differing instrumentation used in gathering the temperature data analyzed herein, it seems highly unlikely that these findings are in error.

I have tried to excerpt understandable paragraphs, but there is plenty of raw science in the article, e.g. (footnotes omitted):

One final question remains that has not yet been explicitly dealt with herein. It is, can the existence of the CO2 equation really be confirmed so that simultaneous equation parameter estimation techniques must be utilized to confirm CO2’s statistically significant impact on temperature? In the Preface, the authors referred to a specific paper for a proof. Below very significant additional proof is provided.

With CO2 determined to be not statistically significant in the structural analysis of the 13 temperature data sets as summarized in Section XXIII immediately above, the equation system described in the Preface can be seen to be recursive which permits parameter estimation of the CO2 equation in the system by ordinary or direct least squares.

An explicit form of the CO2 equation referred to in the Preface is:

[1]   $(\Delta C - c_{\mathrm{fossil}})_t = a + b\,T_t + c\,\mathrm{CO}_{2,\,t-1}$

Where

$(\Delta C - c_{\mathrm{fossil}})_t$ is the efflux of net non-fossil-fuel CO2 emissions from the oceans and land into the atmosphere, and $c_{\mathrm{fossil}}$ is CO2 emissions from fossil fuel consumption.

$T_t$ is UAH Tropical TLT Ocean temperature. The expected sign is positive.

$\mathrm{CO}_{2,\,t-1}$ on the right-hand side is a proxy for land use. The expected sign is negative, because as CO2 levels rise, other things equal, the CO2 absorption of the flora increases.

As shown in Table XXIV-1, applying ordinary least squares to this equation yields a high Adjusted R square (0.64). The coefficients have the correct signs and are statistically significant at the 95% confidence level.
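
For readers who want to see what such a “direct least squares” fit looks like in practice, here is a minimal sketch. It is not the authors’ code; the file and column names are hypothetical stand-ins for the series the excerpt describes, and the numbers it produces would depend entirely on the data supplied.

```python
# Illustrative sketch of estimating equation [1] by ordinary (direct) least
# squares, as the excerpt describes. NOT the authors' code; file and column
# names are hypothetical stand-ins for the series they describe.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("co2_budget_monthly.csv")   # hypothetical input file

# Left-hand side: net non-fossil-fuel CO2 efflux, (delta C - c_fossil)_t
df["net_natural_efflux"] = df["delta_co2"] - df["fossil_emissions"]
df["co2_lag1"] = df["co2_ppm"].shift(1)      # CO2_{t-1}, the land-use proxy

model = smf.ols("net_natural_efflux ~ tropical_ocean_tlt + co2_lag1",
                data=df).fit()
print(model.rsquared_adj)   # the excerpt reports an adjusted R^2 of about 0.64
print(model.params)         # expected signs: positive on temperature, negative on lagged CO2
print(model.pvalues)
```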

The science is fascinating, but you don’t have to be a scientist to understand why global warming hysteria is wrong. Here are the indisputable, basic facts:

* The earth’s climate has been changing for millions of years. We are currently living in a geologic era characterized by ice ages. I like to point out that 15,000 years ago–the blink of an eye–the place where I live was buried under ice somewhere between a half mile and a mile thick. Scientists have theories, but nothing approaching knowledge about why wild swings in the earth’s climate have occurred over the last million years. One thing we know for sure is that it had nothing to do with mankind’s emission of carbon dioxide.

* We are living in a relatively cool era. Since the end of the last Ice Age, the earth has been warmer than it is now most of the time–most experts say, about 90% of the time. So if temperatures rise a little, it is hardly a surprise.

* A reasonable (although debatable) scientific argument based on energy transfer can be made that a doubling of CO2 would raise the earth’s average temperature by about 1 degree centigrade (a back-of-envelope version of that calculation is sketched after this list). Everyone agrees this would be a good thing.

* To generate scary headlines, alarmists speculate that various positive feedbacks would increase that possible 1 degree temperature gain to somewhere between 3 and 6 degrees. These feedback theories are speculative at best. Really, we know they are false, since higher temperatures over the past 500,000 years have not caused any sort of runaway temperature increase.

* Global warming alarmism is based solely on models, not on observation. But we know the models are wrong. They predict far greater warming than has been observed over recent decades. A model that has been proved wrong is worthless. It can’t be resuscitated by after-the-fact selective, politically-motivated tweaking.
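
The roughly 1 degree figure in the third bullet comes from a standard back-of-envelope, no-feedback calculation. The sketch below uses the commonly cited simplified forcing expression and Planck-response value; these are textbook approximations added here for illustration, not numbers taken from this post or from the WCD reports.

```latex
% Back-of-envelope, no-feedback estimate (standard approximations, illustrative only).
% Simplified CO2 forcing expression: \Delta F \approx 5.35 \,\ln(C/C_0)\ \mathrm{W\,m^{-2}}
\Delta F_{2\times\mathrm{CO_2}} \approx 5.35 \ln 2 \approx 3.7\ \mathrm{W\,m^{-2}}
% Divide by the no-feedback (Planck) response, roughly 3.2\text{--}3.3\ \mathrm{W\,m^{-2}\,K^{-1}}:
\Delta T_{\text{no feedback}} \approx \frac{3.7\ \mathrm{W\,m^{-2}}}{3.2\ \mathrm{W\,m^{-2}\,K^{-1}}} \approx 1.1\text{--}1.2\ \mathrm{K}
```

The 3 to 6 degree figures in the following bullet arise only when assumed positive feedbacks are applied to multiply this no-feedback estimate.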

That, really, is all you need to know.