Rush Limbaugh is right about Climate Change

No doubt you’ve heard Rush Limbaugh’s occasional rants about the issue of man-made climate change. It’s a hoax, he says, complaining that the science isn’t based on actual data but on computer models. Well, I’d like to address that point. In my view, climate scientists can’t help but employ computer models because the phenomenon they’re attempting to simulate doesn’t exist in reality. In other words, these models are the scientific equivalent of digitally generated Unicorns.

Given the dire warnings we hear about overheating the Earth with greenhouse gases, we ought to step back a moment and ask, what is heat? Is it some kind of entity, like a pile of leaves on the lawn? No, heat is more like a verb than a noun; it is an action that occurs due to a temperature difference. Heat is that which heats, i.e., raises a material body’s temperature. Broadly speaking, this action is accomplished by vibration. In conductive heat transfer, quickly vibrating molecules excite slower neighbors, thus raising their temperature. By the same token, as molecules vibrate faster they emit more light and at higher frequencies, and this too – radiative heat transfer – is a means of heating something that’s radiating less vigorously.

It shouldn’t need saying that heat always transfers from more to less. That is, a conductive donor cannot make the recipient’s temperature higher than its own, nor can a radiative donor make the recipient radiate more than the donor is supplying. Otherwise, this would constitute the creation of thermal energy ex nihilo, violating the law of energy conservation. This law basically states that you can never squeeze more energy out of less; you can only break even.

Being imaginary, however, a Unicorn is exempt from the conservation law, as shown by the National Center for Atmospheric Research diagram below.

Global energy flows

Here, please note, the Earth’s surface and atmosphere together are seen to gain a total of 239 watts per square meter from the Sun (the rest is reflected) and ultimately to release that 239 to outer space. Yet here the Earth’s surface is also seen to discharge 396 watts per square meter due to a shroud of “Greenhouse Gases” constantly blasting a radiant power of 333 downward. (NASA’s version of events is much the same.)

But how can you gain 239 and lose 239 and still have 333 left over? Ask a Unicorn.

Bear in mind that the term “watts per square meter” denotes a rate of energy outlay, one watt being equal to one joule per second, which is not a pile of leaves. In this sense, wattage is akin to rates like miles per hour or gallons per minute. So if you still can’t see the Unicorn hiding in the NCAR diagram, perhaps this visual aid will help.

Energy Budget Pipeline

The National Center’s energy model depicts a similarly unfeasible rate of flow. As much radiant energy (light) continuously exits to space as continuously enters… while some kind of insulating/radiating gas layer is continuously cranking out radiant energy equal to 139% of what the Sun supplies, the conservation law be damned.
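Since wattage is a rate, the figures in such diagrams convert to accumulated energy the same way speed converts to distance. A toy calculation, using only the 239 figure quoted above (the unit conversion itself is standard physics, not part of the diagram):

```python
# A watt is a rate: 1 W = 1 joule per second.
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

solar_flux = 239  # W/m^2 absorbed, per the NCAR figure quoted above

# Energy = power x time, just as distance = speed x time.
joules_per_m2_per_day = solar_flux * SECONDS_PER_DAY
kwh_per_m2_per_day = joules_per_m2_per_day / 3.6e6  # 1 kWh = 3.6 million joules

print(f"{solar_flux} W/m^2 sustained for a day delivers "
      f"{joules_per_m2_per_day:,} J/m^2 (~{kwh_per_m2_per_day:.1f} kWh/m^2).")
```

Nothing here depends on climate data; it simply illustrates that watts per square meter, like miles per hour, only become a quantity once multiplied by time.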

What you might call a Proof Of Absurdity lies in the simple fact that no one has been able to put this reputed radiative mechanism to use. For consider: if less than 1% of our leaky atmosphere’s gases can generate radiant energy equal to 139% of what the Sun provides, a far greater surplus could be generated simply by replacing an open volume of gas with sealed panels and thick insulation. Why, the resultant oven could theoretically roast a turkey with a double-A battery!

But such a radiative enhancement mechanism has never been replicated and never can be. It’s physically impossible.

This is not to say, however, that scientists haven’t tried to prove that so-called “greenhouse gases” reduce radiant heat loss by — umm, radiating. On the premise that CO2 and other gases have a way of trapping heat, a team from Lawrence Berkeley Laboratory exhaustively tested the insulative impact of several infrared-absorbing gases as fillers for double-pane windows. Please read the report yourself, but I will cut to the chase: They found that infrared emission from these gases “adversely affects the window performance” and, in particular, that “the infrared radiation properties of CO2 is unnoticeable.” My emphasis.

Bottom line, although misnamed “greenhouse gases” absorb and emit thermal radiation, they cannot make the Earth’s surface radiate more energy than the Sun supplies. That the Earth’s surface temperature is higher than assumptions have led certain people to predict is attributable to their faulty assumptions, not to a few trace gases.

Rush Limbaugh is right. His instincts are sound. There’s many a fanciful Unicorn lurking inside this “settled science,” which is why its forecasts keep failing.

Alan Siddons


Biography – Alan Siddons

A former radiochemist for Yankee Atomic Electric Company and similar sites in the United States, Siddons found himself out of work at one point and in 2007 decided to learn a bit about the Greenhouse Effect so that he could speak somewhat knowledgeably if ever the subject came up in conversation. Little did he know at the time what a nightmare journey he was embarking on.


Misuse of the scientific method has led to peer review failures with significant implications

By Joseph D’Aleo, CCM, AMS Fellow

THE SCIENTIFIC METHOD

The scientific method is an iterative process. It starts with a theory or hypothesis. The data needed to test it, and all possible factors involved, are identified and gathered. The data are processed and the results rigorously tested. The data and methods are made available for independent replication. Reviewers of the proposed theory must have the requisite skills in the topic and in the proper statistical analysis of the data to judge its validity. If the theory passes the tests and replication efforts, a conclusion is drawn and it may be written up for publication. If it fails the tests, the hypothesis must be rethought or modified.
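The iterative loop just described can be sketched schematically. This is only an illustration; the function names and the toy example are hypothetical, not drawn from any real scientific workflow:

```python
def scientific_method(hypothesis, gather_data, test, replicate, revise):
    """Schematic of the iterative loop described above."""
    while True:
        data = gather_data(hypothesis)        # identify and collect the relevant data
        passed = test(hypothesis, data)       # rigorous statistical testing
        confirmed = passed and replicate(hypothesis, data)  # independent replication
        if confirmed:
            return hypothesis                 # ready to write up for publication
        hypothesis = revise(hypothesis, data) # rethink or modify, then try again

# Toy demonstration: the "hypothesis" is just a number we revise until it tests true.
result = scientific_method(
    hypothesis=1,
    gather_data=lambda h: h * 2,
    test=lambda h, d: h >= 3,
    replicate=lambda h, d: True,
    revise=lambda h, d: h + 1,
)
print(result)  # 3
```

The key feature, per the text, is that failure feeds back into revising the hypothesis, never into revising the data.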


It should be noted that a refutation of a previously accepted theory, even one that has been published and widely accepted, can follow the same route to review and publication, as Albert Einstein observed.

The peer review process is failing due to political and economic pressures that have altered the scientific method to virtually ensure a politically correct or economically fruitful theory can never fail.

When the tests fail, instead of rethinking the theory or including other factors, there is an alarming tendency to modify input data to more closely fit the theory or models.


Too often, the authors and reviewers do not have a proper understanding of all the factors involved, or the mathematical skills needed to properly evaluate the results. And even if they do, the input data and methods are generally not made available to the reviewers for replication. And in many cases, forecasts are made for decades or even centuries into the future, so true validation is not possible, a luxury those of us who must forecast in shorter time frames (days to seasons) do not enjoy.

Too often, the reviewers who serve as final gatekeepers are not fully capable of this kind of rigorous review; they are also often biased, speeding politically correct or economically beneficial work to publication while blocking, or at least ‘slow walking,’ work that challenges the so-called consensus science or their own often ideologically driven beliefs.

As Dr. Michael Crichton wrote: “Let’s be clear: the work of science has nothing whatever to do with consensus. Consensus is the business of politics. In science, consensus is irrelevant. What is relevant is reproducible results. The greatest scientists in history are great precisely because they broke with the consensus (Galileo, Newton, Einstein, etc.).”

SCIENTIFIC METHOD FAILURE IN THE CLIMATE SCIENCES

So when greenhouse climate models fail, the modelers don’t revisit the theory; instead they try to find the right data to fit the model. All data today are adjusted with models, with the goal of addressing data errors, changes in location or instrumentation, changing station distribution, or filling in for missing data and station closures. Once this adjustment process starts, there is always the risk of mining the data for the desired results.

With the climate models, there has been an increasingly large divergence from balloon, satellite, and surface reanalysis data sets over the last 20 years. The one model that tracks the observed temperatures is a Russian model with roughly half the greenhouse forcing and improved ocean modeling.


John Christy (2017) has shown that models without greenhouse warming agreed perfectly with atmospheric (tropical) observations.

If scientists abided by the scientific method, this kind of refutation would spark an effort to revisit the theory, but that is too politically incorrect. This kind of ideologically, politically, or economically driven thinking is pervasive across the sciences (atmospheric and medical).

EVIDENCE THAT TRADITIONAL PEER REVIEW IS FAILING

There is increasing proof that the traditional journal peer review process is broken. This is true in both the medical and scientific areas. See examples here and here.

This story in Forbes by Henry Miller says “A number of empirical studies show that 80-90% of the claims coming from supposedly scientific studies in major journals fail to replicate”.

Another recent paper in Nature showed that 70% of the papers in medical journals contained studies that could not be replicated, many not even by the original authors. See also this example of one such falsified report, which the author worries is part of an epidemic of agenda-driven science by press release and falsification that has reached crisis proportions.

Other reports show an alarming number of papers having to be retracted. Springer is retracting 107 papers from one journal after discovering they had been accepted with fake peer reviews (here).

Result-oriented corruption of peer review in climate science was proven by the Climategate emails.

In the journals, there are a small set of gatekeepers that block anything that goes against the editorial biases of the journals. Conversely, these journals and their reviewers do not provide a thorough due diligence review of those that they tend to agree with ideologically. They are engaged in orthodoxy enforcement.

Indeed, Henry Miller wrote: “Another worrisome trend is the increasing publication of the results of flawed ‘advocacy research’ that is actually designed to give a false result that provides propaganda value for activists and can be cited long after the findings have been discredited.”

A prime example of this is the hideously flawed but endlessly repeated “97% of climate scientists” paper by Cook and Lewandowsky. EPA’s own Inspector General found that EPA’s Endangerment Finding was never properly reviewed, yet it is the basis of all the EPA GHG regulations that have imposed hundreds of billions of dollars in costs on the U.S. economy.

The scientific method requires that the data used be made available and that the work be capable of replication. This should be required by all journals (in virtually all cases, as shown above, it is not). Peer review has become pal review, with gatekeepers who prevent alternate, unbiased data analyses and presentations while rushing new papers that support their ideology or view of the science.

Our team chose to apply the same research report procedures used in industry: assemble the most qualified authors with the skills required to compile the data and rigorously perform the correct analysis. They draft a report and share the draft with a team of experts, chosen for their expertise in the field, who provide feedback. In our research reports, we identify the reviewers, who have lent their names to the conclusion, and provide full access to the data for others to work with and either refute or replicate, along with instructions on the analytical methods used.

Almost no journals require that and their failure and rejection numbers speak for themselves.

Wegman suggested that one of the common failures in climate papers is the lack of necessary statistical expertise. For our research reports we assembled highly qualified data experts, econometricians/statisticians, and meteorologists/climatologists to draft the research project and perform the rigorous statistical/econometric analyses, then submitted their work to the best qualified scientists/econometricians for review. Attempts to discredit this report are now, of course, being made, because it raises critically important questions about the quality and trustworthiness of the global surface temperature data sets.

The facts and statistical reasoning of this paper cannot be refuted merely by carping peer review. Instead, demonstration of a factual or mathematical error is required.

The scientific method must be reestablished. Bad science, bad medicine, and bad economics lead to bad policies. Bad policies hurt good people.

The totalitarianism of the environmentalists


Late last year, I gave a talk about human progress to an audience of college students in Ottawa, Canada. I went through the usual multitude of indicators – rising life expectancy, literacy and per capita incomes; declining infant mortality, malnutrition and cancer death rates – to show that the world was becoming a much better place for an ever growing share of its population.

It seemed to me that the audience was genuinely delighted to hear some good news for a change. I had won them over to the cause of rational optimism. And then someone in the audience asked about climate change and I blew it.

While acknowledging that the available data suggests a “lukewarming” trend in global temperatures, I cautioned against excessive alarmism. Available resources, I said, should be spent on adaptation to climate change, not on preventing changes in global temperature – a task that I, along with many others, consider to be both ruinously expensive and, largely, futile. The audience was at first shocked – I reckon they considered me a rational and data-savvy academic up to that point – and then became angry and, during a breakout session, hostile. I even noticed one of the students scratching out five, the highest mark a speaker could get on an evaluation form, and replacing it with one. I suppose I should be glad he did not mark me down to zero.

My Ottawa audience was in no way exceptional. Very often, when speaking to audiences in Europe and North America about the improving state of the world, people acknowledge the positive trends, but worry that, as Matt Ridley puts it, “this happy interlude [in human history will come] to a terrible end.” Of course, apocalyptic writings are as old as humanity itself. The Bible, for example, contains the story of the Great Flood, in which God “destroyed all living things which were on the face of the ground: both man and cattle, creeping thing and bird of the air”.

The Akkadian Epic of Gilgamesh similarly contains a myth of angry gods flooding the Earth, while an apocalyptic deluge plays a prominent part in the Hindu Dharmasastra. And then there is Al Gore. In his 2006 film An Inconvenient Truth, Gore warns that “if Greenland broke up and melted, or if half of Greenland and half of West Antarctica broke up and melted, this is what would happen to the sea level in Florida”, before an animation shows much of the state underwater. Gore also shows animations of San Francisco, Holland, Beijing, Shanghai, Calcutta and Manhattan drowning. “But this is what would happen to Manhattan, they can measure this precisely,” Gore says as he shows much of the city underwater.

It is possible, I suppose, that our eschatological obsessions are innate. The latest research suggests that our species, Homo sapiens sapiens, is 300,000 years old. For most of our existence, life was, to quote Thomas Hobbes, “solitary, poor, nasty, brutish and short.” Our life expectancy was between 25 and 30 years, and our incomes were stuck at a subsistence level for millennia. Conversely, our experience with relative abundance is, at most, two centuries old. That amounts to 0.07 per cent of our time on Earth. Is it any wonder that we are prone to pessimism?

That said, I wonder how many global warming enthusiasts have thought through the full implications of their (in my view overblown) fears of a looming apocalypse. If it is true that global warming threatens the very survival of life on Earth, then all other considerations must, by necessity, be secondary to preventing global warming from happening.

That includes, first and foremost, the reproductive rights of women. Some global warming fearmongers have been good enough to acknowledge as much. Bill Nye, a progressive TV personality, wondered if we should “have policies that penalise people for having extra kids.”

Then there is travel and nutrition. Is it really so difficult to imagine a future in which each of us is issued a carbon credit at the start of each year, limiting what kind of food we eat (locally grown potatoes will be fine, but Alaskan salmon will be verboten) and how far we can travel (visiting our in-laws in Ohio once a year will be permitted, but not Paris)? In fact, it is almost impossible to imagine a single aspect of human existence that would be free from government interference, all in the name of saving the environment.

These ideas might sound nutty, but they are slowly gaining ground. Just last week, a study came out estimating the environmental benefits of “having one fewer child (an average for developed countries of 58.6 tonnes CO2-equivalent (tCO2e) emission reductions per year), living car-free (2.4 tCO2e saved per year), avoiding air travel (1.6 tCO2e saved per roundtrip transatlantic flight) and eating a plant-based diet (0.8 tCO2e saved per year).”

Antarctic Ice Breakaway Further Overheats Climate Hysteria

In 2015, pieces of thawing ice scatter along the beachshore at Punta Hanna, Livingston Island, in the Antarctic. (Natacha Pisarenko/AP)

By Larry Bell
Monday, 24 Jul 2017 10:19 AM

As accurately reported, a huge 2,240-square-mile region of a floating ice shelf, nearly the size of the state of Delaware, recently split off the Western Antarctic Peninsula. The new iceberg accounts for about 12 percent of the total shelf, called “Larsen C.”

It should be no surprise that this event is being cited by many in the mainstream media as more clear evidence that human smokestacks and SUVs are overheating the planet and raising sea levels. Stoking this fossil-fueled fire and brimstone, they can blame President Trump for failing to jump aboard the sunbeam- and windmill-powered Paris Climate Accord hybrid train to salvation.

At the risk of ruining a really good and scary campfire story, there are some important reasons for less concern and guilt.

For starters, consider that the entire West Antarctic ice sheet, which truly has been experiencing modest warming, contains less than 10 percent of the continent’s total ice mass.

The other 90 percent has been getting colder, with no decline in polar ice extent since satellite recordings first began in 1979.

Researchers from the University of Maryland, NASA Goddard Space Flight Center and the engineering firm Sigma Space Corporation reported satellite data showing that between 2003 and 2008, the continent of Antarctica gained ice mass.

As summarized by Jay Zwally, a NASA glaciologist and lead author of the study, “We’re essentially in agreement with other studies that show an increase in ice discharge in other parts of the continent,” adding that “Our main disagreement is for East Antarctica and the interior of West Antarctica; there we see an ice gain that exceeds the losses in other areas.”

Overall, the West Antarctic ice sheet has been melting at roughly its current rate for thousands of years, a condition that will likely continue until either it entirely disappears or the next Ice Age intervenes to let global warming alarmists relax. As their domiciles and rose gardens become buried under miles of ice, those worrisome sea levels will likely drop about 400 feet again, just as they did during the last Ice Age.

Meanwhile, the latest iceberg won’t contribute to sea level rise, which has been occurring without acceleration at a rate of about 7 inches per century for the past several hundred years, since long before the Industrial Revolution. Just like a melting ice cube in a glass of water (or whatever), it won’t change the level at all.
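The ice-cube analogy is Archimedes’ principle: a floating body displaces its own mass of water, which is exactly the volume its meltwater later occupies. A minimal numerical sketch, assuming standard approximate densities for glacial ice and fresh water (seawater is slightly denser, which alters the result only marginally):

```python
RHO_ICE = 917.0     # kg/m^3, typical density of glacial ice
RHO_WATER = 1000.0  # kg/m^3, fresh water

ice_volume = 1.0                 # m^3 of floating ice
ice_mass = RHO_ICE * ice_volume  # kg

# Archimedes: while floating, the ice displaces its own mass of water.
displaced_volume = ice_mass / RHO_WATER  # m^3 of water pushed aside now

# After melting, the same mass occupies this volume as water:
melt_volume = ice_mass / RHO_WATER       # m^3

print(displaced_volume == melt_volume)   # True: the water level is unchanged
```

The cancellation is exact in fresh water and very nearly so in the ocean; either way, ice that was already floating does not raise the level when it melts.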

Can the calving be confidently attributed to climate change (global warming) at all? As quoted in The Wall Street Journal, Kelly Brunt, a glaciologist at the University of Maryland and the NASA Goddard Space Flight Center, doesn’t believe so. She said that while the calving event was “way outside the average size,” it lacked telltale signs such as melt ponds.

Dr. Brunt went on to observe that while the collapse of an entire ice shelf could contribute more problematically to sea level rise, since shelves help prevent ice from the continent’s interior from creeping toward the edge and into the sea, this shouldn’t be a major concern in this instance. Since the glaciers blocked by Larsen C, which originate from a mountain range, are relatively small, the potential effects of an entire collapse would be “nothing to lose sleep over.”

Major West Antarctic ice sheet calving is nothing new. A similar event occurred at neighboring Larsen B in 2002. From a scientific perspective this can be expected to continue for reasons none of us, not even by bicycling our pets to the vet or electric carpooling, can control.

There is strong evidence indicating that the West Antarctic ice sheet isn’t melting due to warming surface temperatures, but rather because of natural heating from below.

In 2012, experts from the University of Aberdeen and the British Antarctic Survey discovered a huge, one-mile-deep rift valley about the size of the Grand Canyon beneath the ice in West Antarctica. Since this previously hidden, ice-filled basin connects directly with the warmer ocean, they think it might constitute a major cause of much of the melting in this region.

It might also be worth mentioning that a chain of active volcanoes has recently been discovered under the West Antarctic ice sheet as well. While eruptions are believed unlikely to penetrate the overlying ice, which is in places more than a mile thick, researchers conclude that they could generate enough meltwater to significantly influence ice stream flow.

So what does all of this really mean? It depends upon who we ask. Al Gore and the mainstream media will continue to tell us that it means more fossil fuel regulations and wind power subsidies are urgently needed to prevent catastrophic coastal flooding. Others will say that those who spread such nonsense are already in way over their heads.

Larry Bell is an endowed professor of space architecture at the University of Houston where he founded the Sasakawa International Center for Space Architecture (SICSA) and the graduate program in space architecture. He is the author of “Scared Witless: Prophets and Profits of Climate Doom”(2015) and “Climate of Corruption: Politics and Power Behind the Global Warming Hoax” (2012).

The Greatest Scientific Fraud Of All Time — Part XVI

Fifteen posts into this series — and I certainly hope that you have read all of them — perhaps there are still a few of you out there who continue to believe that this whole global average surface temperature (GAST), “hottest year ever,” “record warming” thing can’t really be completely fraudulent.  I mean, these claims are put out by government bureaucrats, highly paid “experts” in their designated field of temperature measurement.  It’s really complicated stuff to figure out a “global average surface temperature” from hundreds of scattered thermometers, some of which get moved, get read at different times of the day, have cities grow up around them, whatever.  Somebody’s got to make the appropriate adjustments.  Surely, they are trying their best to get the most accurate answer they can with a challenging task.  Could it really be that they are systematically lying to the people of America and the world?

The designated field for my own career was civil litigation, and in that field lawyers regularly call upon ordinary members of the public (aka jurors) to draw the inference of whether fraud has occurred.  Lawyers claiming that a defendant has committed fraud normally proceed by presenting to the jury a few glaring facts about what the defendant has done.  “Here is what he said”; and “here is the truth.”  The defendant then gets the chance to explain.  The jurors apply their ordinary judgment and experience to the facts presented.

So, consider yourself a member of my jury.  The defendants (NASA and NOAA) have been accused of arbitrarily adjusting the temperatures of the past downward in order to make fraudulent claims of “hottest year ever” for the recent years.  You decide!  I’ll give you a couple of data points that have come to my attention just today.

James Freeman is the guy who has taken over the Wall Street Journal’s “Best of the Web” column since James Taranto moved on to another gig at the paper earlier this year.  Here is his column for yesterday.  (You probably can’t get the whole thing without subscribing, but I’ll give you his critical links.)  Freeman first quotes the New York Times, March 29, 1988, which in turn quotes James Hansen, then head of the part of NASA that does the GAST calculations:

One of the scientists, Dr. James E. Hansen of the National Aeronautics and Space Administration’s Institute for Space Studies in Manhattan, said he used the 30-year period 1950-1980, when the average global temperature was 59 degrees Fahrenheit, as a base to determine temperature variations.

So 59 deg F was the “average global temperature” for the 30-year period 1950-1980.  Could that have been a typo?  Here is the Times again, June 24, 1988:

Dr. Hansen, who records temperatures from readings at monitoring stations around the world, had previously reported that four of the hottest years on record occurred in the 1980’s. Compared with a 30-year base period from 1950 to 1980, when the global temperature averaged 59 degrees Fahrenheit, the temperature was one-third of a degree higher last year. 

OK, definitely not a typo.  Freeman also has multiple other quotes from the Times, citing both NASA and “a British group” (presumably Hadley CRU) for the same 59 deg F global average mean for the period 1950-80.  So let’s then compare that figure to the official NOAA January 18, 2017 “record” global warming press release:  “2016 marks three consecutive years of record warmth for the globe”:

2016 began with a bang. For eight consecutive months, January to August, the globe experienced record warm heat.  With this as a catalyst, the 2016 globally averaged surface temperature ended as the highest since record keeping began in 1880. . . .

And kindly tell us, what was the global average temperature that constituted this important “record warm heat”?

The average temperature across global land and ocean surfaces in 2016 was 58.69 degrees F . . . .

OK, over to you to decide.  Was the claimed “record warm heat” real, or was it an artifact of downward adjustments of earlier temperatures?  If you think it might help (it won’t), here is a link to NASA’s lengthy bafflegab explanation of its adjustments.  It’s way too long to copy into this post, and provides literally no useful information as to what they are doing, or why they think it’s OK.
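The juxtaposition set up above reduces to a single subtraction; both figures are as quoted in the text:

```python
baseline_1950_1980 = 59.0  # deg F, "average global temperature" per the 1988 NYT reports
record_2016 = 58.69        # deg F, per NOAA's January 2017 press release

gap = baseline_1950_1980 - record_2016
print(f"The 2016 'record' figure is {gap:.2f} deg F below "
      f"the 1950-1980 average as reported in 1988.")
```

The arithmetic only restates the comparison the two press accounts invite; reconciling the figures would require knowing exactly how each global average was constructed.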

Do you still think it might be possible that they are playing straight with you? My friend Joe D’Aleo (he’s one of the co-authors of the paper that was the subject of Part XV of this series) sent me this morning a write-up he had done about the temperature adjustments at one of the most prominent sites in the country, the one at Belvedere Castle in Central Park in Manhattan. There are lots of charts and graphs at the link for your edification. The temperature measuring site has been at the very same location near the exact middle of the park since 1920. That location is about 0.2 miles from the west edge of the park and 0.3 miles from the east edge, so relatively speaking it is highly immune to the local land use changes that affect many other stations. Yes, the city has grown some in that century, but the periphery of the park was already rather built up in 1920, and in any event the Central Park West boundary is almost a quarter-mile away at the nearest point.

This paper is another real eye-opener. You should read the whole thing (it’s only 7 pages long). The Central Park site is one for which the National Weather Service (part of NOAA) makes completely original, raw data available. D’Aleo compares that raw data with adjusted data for the same site from NOAA’s so-called “HCN Version 1” set, for two months each year (July and January), covering the century from 1909 to 2008. Essentially all of the temperatures for Central Park in the HCN Version 1 set are adjusted down, and dramatically so; but the adjustments are not uniform. From roughly 1950 to 1999, the downward adjustments for both months hold at a flat 6 deg F, an astoundingly huge amount, especially given that the recently declared “record” temperature for 2016 beat the previous “record” by all of 0.07 deg C (which would be 0.126 deg F). Then, when 1999 comes, the downward adjustments start to decrease rapidly each year, until by 2008 the downward adjustment is only about 2 deg F. Result: whereas the raw data show no material upward or downward trend of any kind over the whole century under examination, the adjusted data show a dramatic upward slope in temperatures post-2000, all of which is in the adjustments rather than the raw data. D’Aleo:

[T]he adjustment [for July] was a significant one (a cooling exceeding 6 degrees from the mid 1950s to the mid 1990s.) Then inexplicably the adjustment diminished to less than 2 degrees.  The result is [that] a trendless curve for the past 50 years became one with an accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same and there has not been a population decline but a spurt in the city’s population [since 1990]. 
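The mismatch in scale described here is easy to quantify. Note that converting a temperature difference from Celsius to Fahrenheit uses only the 9/5 factor (no +32 offset); the numbers below are as given in the text:

```python
record_margin_c = 0.07                     # deg C, 2016's margin over the prior "record"
record_margin_f = record_margin_c * 9 / 5  # a difference converts by the 9/5 factor alone

adjustment_f = 6.0  # deg F, the flat Central Park downward adjustment cited above

print(f"Record margin: {record_margin_f:.3f} deg F")
print(f"The cited adjustment is roughly {adjustment_f / record_margin_f:.0f} "
      f"times that margin.")
```

This comparison sets one station’s adjustment against a global-average margin, so it is illustrative of scale only, not a like-for-like statistic.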

Since NOAA and NASA will not provide a remotely satisfactory explanation of what they are doing with the adjustments, various independent researchers have tried to reverse-engineer the results to figure out what assumptions are implied.  One such effort was made by Steve McIntyre of the climateaudit.org website, and D’Aleo discusses that effort at the link.  McIntyre gathered from correspondence with NOAA that their algorithm was making an “urbanization” adjustment based on the growing population of the urbanized area surrounding the particular site.  Based on the adjusted temperatures reported at Central Park and the known population of New York City in the first half of the twentieth century, McIntyre then extrapolated to calculate the implied population of New York City for the recent years of the adjusted record.  He came up with an implied population of about 17 million for 1975-95, then suddenly plunging to barely 1 million in 2005.  Well, I guess that’s not how they do it!  Any other guesses out there?

By the way, in case you have the idea that you might be able to dig into this and figure out what they are doing, I would point out that by the time you have completed any analysis they will undoubtedly have adjusted their data yet again and will declare your work inapplicable because that’s “not how we do it any more.”  As the Wall Street Journal’s Holman Jenkins noted in November 2015:

By the count of researcher Marcia Wyatt in a widely circulated presentation, the U.S. government’s published temperature data for the years 1880 to 2010 has been tinkered with 16 times in the past three years.

I’m just wondering if you still think there’s anything honest about this.

Global Warming Derangement Syndrome: Please Make It Stop

By Kerry Jackson

In the 2000s, there was Bush Derangement Syndrome, but it faded after Barack Obama was elected. Then came Trump Derangement Syndrome after it turned out that it wasn’t Hillary Clinton’s turn after all. It, too, will fade after Donald Trump is either voted out of office or serves two terms.

Yet with us always and forever, it seems, is the Global Warming Derangement Syndrome.

Just as Democrats and journalists, typically Democrats with a media pipeline, have lost their minds over the Trump election and have vilified him as a sprite from Hades — often claiming things that are simply untrue and repeatedly declaring him to be mentally ill just because they disagree with his policies or found something he’s said or tweeted that violates their ever-flexible sensibilities — they’ve gone around the glacier over climate change.

It seems a day can’t go by without at least one mainstream media outlet reporting that Old Testament-esque disasters have already begun, or covering the rant of an elected official who is yammering on about how the end is near unless big policy changes are immediately enacted. Consider the reaction to Trump’s announcement that he’s pulling the U.S. out of the Paris climate accord. Contact with reality was severed.

Well, actually it’s been severed for some time. It’s the media and alarmists’ distance from reality that has grown. How else to explain how the alarmists, with a supportive media, could rip Trump for backing out of a deal they said was insufficient to start with?

Yet they did, even though James Hansen, the global warming alarmist in chief, said when the Paris accord was agreed to that it was “a fraud really, a fake.”

But this is only a small portion of the derangement that has produced a rising ocean of fake news.

For years we’ve been bombarded with claims that we had only so many months or years to do something about climate change, only to have those deadlines pass without incident; that every ice shelf that has naturally broken off from a landmass or glacier that’s receded is a sign of imminent human-caused disaster; that heavy storms are indisputable evidence that man is cooking his planet with carbon dioxide emissions; that our capitalism-driven advancements are going to eventually cause famine, war, and economic and civilizational doom.

The alarmists’ screeching is incessant, their lectures grating and without restraint, their hypocrisy as fetid as the wrong side of a sewage treatment plant. And of course their fanaticism is so rigid they cannot acknowledge anything that challenges their narrative.

Such as a report issued last month that says the global average surface temperature (GAST) data that are used to frighten and force everyone to surrender to the leftist-progressive agenda “are not a valid representation of reality.”

The report says that “it is impossible to conclude from the three published GAST data sets that recent years have been the warmest ever — despite current claims of record setting warming.”

This isn’t a news release from an oil company trying to pass a public relations effort off as science. It is authentic work that has been, in the words of a Zero Hedge blogger, “peer reviewed by administrators, scientists and researchers from the U.S. Environmental Protection Agency (EPA), the Massachusetts Institute of Technology (M.I.T.), and several of America’s leading universities.”

But it will be tossed into the “Ignore” baskets in the mainstream media’s newsrooms, just as these “80 graphs from 58 new (2017) papers invalidate claims of unprecedented global-scale modern warming” will also be trashed.

Because none fit the narrative. Because none will help the Democrat-media industrial complex “Dump Trump.” Because all challenge the “scientific consensus” and therefore the power and status that the alarmists have seized through their campaign of fear and intimidation. Because this is just the deranged way the political left and its media wing operate.

 

The Hiatus: One Message for Politicians, Another for Scientists

Dr David Whitehouse, GWPF Science Editor

Politicians are usually seen as fair game for criticism, especially if they talk about the inconvenient details of climate change. If only they would stick to the simplicities and repeat the mantra that climate change is real and happening and we are entirely to blame. Woe betide any politician who delves into the detail. Usually we like our politicians to get down amongst the minutiae of government, but not when it comes to climate change.

This is what happened when the Environmental Protection Agency Administrator Scott Pruitt discussed the global temperature hiatus of the past 20 years. In written comments to the U.S. Senate about his confirmation hearing on the 18th of January he wrote, “over the past two decades satellite data indicates there has been a leveling off of warming.”

Despite the vigorous debate about the hiatus in the peer-reviewed literature this was seen by some as such an incorrect statement that a response had to be made, and fast.

Ben Santer of the Lawrence Livermore National Laboratory was quick off the mark, putting together a paper for the journal Nature Scientific Reports. It looked at satellite measurements of the temperature of the atmosphere close to the ground from when such data first became available in 1979. It concluded: “Satellite temperature measurements do not support the recent claim of a ‘leveling off of warming’ over the past two decades.” Tropospheric warming trends over recent 20-year periods, the authors concluded, are always significantly larger (at the 10% level or better) than model estimates of 20-year trends arising from natural internal variability.

Ben Santer on the Seth Meyers show.

The Nature Scientific Reports paper was submitted on 6th March, accepted on the 4th of April and published on the 24th May. But as that paper, with its simple message that Pruitt was wrong, was being written, another paper on the same topic, also involving Santer, was already in the works. It had been submitted three months before, on the 23rd of December the previous year.

It was eventually published in Nature Geoscience on 19th June having been accepted on the 22nd of May. It comes to an entirely different conclusion about the hiatus. “We find that in the last two decades of the twentieth century, differences between modelled and observed tropospheric temperature trends are broadly consistent with internal variability. Over most of the early twenty-first century, however, model tropospheric warming is substantially larger than observed; warming rate differences are generally outside the range of trends arising from internal variability…We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations.”

In other words the climate models have failed. They did not predict and they cannot explain the hiatus. To reach this conclusion the Nature Geoscience paper analysed trends in the satellite data over 10, 12, 14, 16 and 18 years because the researchers said that they are typical record lengths used for the study of the ‘warming slowdown’ in the early 21st century. Note they did not analyse trends over 20 years directly. Thus the first Santer et al paper analysed the past 20 years and concluded there was no hiatus, while his second paper concluded there was a hiatus of up to 18 years, the maximum period that paper studied.

The authors realised the problem of the two papers’ seemingly conflicting results. To avoid any confusion they issued a helpful Q&A document saying the results were not contradictory but complementary. It must be said that the two methods they used are only very slightly different. One would expect them to give the same result. But that does not matter. If, as the authors say, the results are complementary, why was the result that disagreed with Pruitt used with no qualification or hint that a similar technique showed the opposite?

On the 22nd February Ben Santer appeared on the Seth Meyers chat show saying these are strange and unusual times, something that with hindsight is laced with irony. He was introduced as being from the Lawrence Livermore National Laboratory but stated he was talking as a private citizen about research he had done and published on behalf of the Program for Climate Model Diagnosis and Intercomparison at the Lawrence Livermore National Laboratory.

U.S. Senator Ted Cruz on the Seth Meyers show in March 2015.

Santer aimed his sights at a statement made by U.S. Senator Ted Cruz on the same show two years earlier:

“Many of the alarmists on global warming, they got a problem because the science doesn’t back them up. And in particular, satellite data demonstrates for the last 17 years, there’s been zero warming. None whatsoever. “

Santer challenged Senator Cruz in direct contradiction of his own paper, which he had submitted but which had not yet been published:

“Listen to what he (Cruz) said. Satellite data. So satellite measurements of atmospheric temperature show no significant warming over the last 17 years, and we tested it. We looked at all of the satellite data in the world, from all groups, and wanted to see, was he right or not? And he was wrong. Even if you focus on a small segment of the now 38-year satellite temperature record – the last 17 years – was demonstrably wrong.”

Santer concluded:

“So the bizarre thing is, Senator Cruz is a lawyer. He’s got to look at all of the evidence when he’s trying a case, when he’s involved in a case, not just one tiny segment of the evidence.”

Oh the irony.

THE CRISIS OF INTEGRITY-DEFICIENT SCIENCE

JULY 11, 2017

The epidemic of agenda-driven science by press release and falsification has reached crisis proportions.

In just the past week: Duke University admitted that its researchers had falsified or fabricated data that were used to get $113 million in EPA grants – and advance the agency’s air pollution and “environmental justice” programs. A New England Journal of Medicine (NEJM) article and editorial claimed the same pollutants kill people – but blatantly ignored multiple studies demonstrating that there is no significant, evidence-based relationship between fine particulates and human illness or mortality.

In an even more outrageous case, the American Association for the Advancement of Science’s journal Science published an article whose authors violated multiple guidelines for scientific integrity. The article claimed two years of field studies in three countries show exposure to neonicotinoid pesticides reduces the ability of honeybees and wild bees to survive winters and establish new populations and hives the following year. Not only did the authors’ own data contradict that assertion – they kept extensive data out of their analysis and incorporated only what supported their (predetermined?) conclusions.

Some 90% of these innovative neonic pesticides are applied as seed coatings, so that crops absorb the chemicals into their tissue and farmers can target only pests that feed on the crops. Neonics largely eliminate the need to spray with old-line chemicals like pyrethroids that clearly do harm bees.  But neonics have nevertheless been at the center of debate over their possible effects on bees, as well as ideological opposition in some quarters to agricultural use of neonics – or any manmade pesticides.

Laboratory studies had mixed results and were criticized for overdosing bees with far more neonics than they would ever encounter in the real world, predictably affecting their behavior and often killing them. Multiple field studies – in actual farmers’ fields – have consistently shown no adverse effects on honeybees at the colony level from realistic exposures to neonics. In fact, bees thrive in and around neonic-treated corn and canola crops in the United States, Canada, Europe, Australia and elsewhere.

So how did the Center for Ecology and Hydrology (CEH) field studies led by Dr. Ben Woodcock reach such radically different conclusions? After all, the researchers set up 33 sites in fields in Germany, Hungary, and England, each one with groups of honeybee or wild bee colonies in or next to oilseed rape (canola) crops. Each group involved one test field treated with fungicides, a neonic, and a pyrethroid; one field treated with a different neonic and fungicides; and one “control” group beside a field treated only with fungicides. They then conducted multiple data analyses throughout the 2-year trial period.

Their report and Science article supposedly presented all the results of their exhaustive research. They did not. The authors fudged the data, and the “peer reviewers” and AAAS journal editors failed to spot the massive flaws. Other reviewers (here, here, and here) quickly found the gross errors, lack of transparency, and misrepresentations – but not before the article and press releases had gone out far and wide.

Thankfully, and ironically, the Woodcock-CEH study was funded by Syngenta and Bayer, two companies that make neonics. That meant the companies received the complete study and all 1,000 pages of data – not just the portions carefully selected by the article authors. Otherwise, all that inconvenient research information would probably still be hidden from view – and the truth would never have come out.

Most glaring, as dramatically presented in a chart that’s included in each of the reviews just cited, there were far more data sets than suggested by the Science article. In fact, there were 258 separate honeybee statistical data analyses. Of the 258, a solid 238 found no effects on bees from neonics! Seven found beneficial effects from neonics! Just nine found harmful impacts, and four had insufficient data.

Not one group of test colonies in Germany displayed harmful effects, but five benefited from neonics. Five in Hungary showed harm, but the Nosema gut fungus was prevalent in Hungarian beehives during the study period; it could have affected bee foraging behavior and caused colony losses. But Woodcock and CEH failed to mention the problem or reflect it in their analyses. Instead, they blamed neonics.

In England, four test colony groups were negatively affected by neonics, while two benefited, and the rest showed no effects. But numerous English hives were infested with Varroa mites, which suck on bee blood and carry numerous pathogens that they transmit to bees and colonies. Along with poor beekeeping and mite control practices, Varroa could have been the reason a number of UK test colonies died out during the study – but CEH blamed neonics.

(Incredibly, even though CEH’s control hives in England were far from any possible neonic exposure, they had horrendous overwinter bee losses: 58%, compared to the UK national average of 14.5% that year, while overwinter colony losses for CEH hives were 67% to 79% near their neonic-treated fields.)

In sum, fully 95% of all the hives studied by CEH demonstrated no effects or benefited from neonic exposure – but the Science magazine authors chose to ignore them, and focus on nine hives (3% of the total) which displayed harmful impacts that they attributed to neonicotinoids.

Almost as amazing, CEH analyses found that nearly 95% of the time pollen and nectar in hives showed no measurable neonic residues. Even samples taken directly from neonic-treated crops did not have residues – demonstrating that bees in the CEH trials were likely never even exposed to neonics.

How then could CEH researchers and authors come to the conclusions they did? How could they ignore the 245 out of 258 honeybee statistical data analyses that demonstrated no effects or beneficial effects from neonics? How could they focus on the nine analyses (3.4%) that showed negative effects – a number that could just as easily have been due to random consequences or their margin of error?

The sheer number of “no effect” results (92%) is consistent with what a dozen other field studies have found: that foraging on neonicotinoid-treated crops has no effect on honeybees. Why was this ignored?
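The percentages quoted in the last few paragraphs can be recomputed from the raw counts given earlier (238 no effect, 7 beneficial, 9 harmful, 4 insufficient, out of 258 analyses). A short Python check using only those counts; note that the 9/258 share comes out nearer 3.5% than the 3.4% quoted:

```python
# Shares of the 258 honeybee statistical analyses, from the counts
# reported in the text above.

total = 258
no_effect, beneficial, harmful, insufficient = 238, 7, 9, 4
assert no_effect + beneficial + harmful + insufficient == total

def pct(n):
    return 100.0 * n / total

no_effect_pct = pct(no_effect)                           # ~92.2%
no_effect_or_benefit_pct = pct(no_effect + beneficial)   # ~95.0%
harmful_pct = pct(harmful)                               # ~3.5%

print(f"No effect: {no_effect_pct:.1f}%")
print(f"No effect or benefit: {no_effect_or_benefit_pct:.1f}%")
print(f"Harmful: {harmful_pct:.1f}%")
```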

Also relevant is the fact that CEH honeybee colonies near neonic-treated fields recovered from any adverse effects of their exposure to neonics before going into their winter clusters. As “super organisms,” honeybee colonies are able to metabolize many pesticides and detoxify themselves. This raises doubts about whether any different overwintering results between test colonies and controls can properly be ascribed to neonics. Woodcock, et al. should have discussed this, but failed to do so.

Finally, as The Mad Virologist pointed out, if neonics have negative impacts on bees, the effects should have been consistent across multiple locations and seed treatments. They were not. In fact, the number of bee larval cells during crop flowering periods for one neonic increased in response to seed treatments in Germany, but declined in Hungary and had no change in England. For another neonic, the response was neutral (no change) in all three countries. Something other than neonics clearly seems to be involved.

The honest, accurate conclusion would have been that exposure to neonics probably had little or no effect on the honeybees or wild bees that CEH studied. The Washington Post got that right; Science did not.

U.S. law defines “falsification” as (among other things) “changing or omitting data or results, such that the research is not accurately represented in the research record.” Woodcock and CEH clearly did that. Then the AAAS and Science failed to do basic fact-checking before publishing the article; the media parroted the press releases; and anti-pesticide factions rushed to say “the science is settled” against neonics.

The AAAS and Science need to retract the Woodcock article, apologize for misleading the nation, and publish an article that fully, fairly, and accurately represents what the CEH research and other field studies actually documented. They should ban Woodcock and his coauthors from publishing future articles in Science and issue press releases explaining all these actions. The NEJM should take similar actions.

Meanwhile, Duke should be prosecuted, fined, and compelled to return the fraudulently obtained funds.

Failure to do so would mean falsification and fraud have replaced integrity at the highest levels of once-respected American institutions of scientific investigation, learning and advancement.

Comments on the New RSS Lower Tropospheric Temperature Dataset

July 6th, 2017 by Roy W. Spencer, Ph. D.

It was inevitable that the new RSS mid-tropospheric (MT) temperature dataset, which showed more warming than the previous version, would be followed with a new lower-tropospheric (LT) dataset. (Carl Mears has posted a useful FAQ on the new dataset, how it differs from the old, and why they made adjustments).

Before I go into the details, let’s keep all of this in perspective. Our globally-averaged trend is now about +0.12 C/decade, while the new RSS trend has increased to about +0.17 C/decade.

Note these trends are still well below the average climate model trend for LT, which is +0.27 C/decade.

These are the important numbers; the original Carbon Brief article headline (“Major correction to satellite data shows 140% faster warming since 1998”) is seriously misleading, because the warming in the RSS LT data post-1998 was near-zero anyway (140% more than a very small number is still a very small number).
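To make that point concrete: the article does not state the pre-correction post-1998 trend, so the base value in this sketch is a hypothetical placeholder, chosen only to illustrate that a 140% increase on a near-zero number stays small:

```python
# A "140% faster" trend on a small base is still a small trend.
# base_trend is hypothetical, for illustration only; the model
# average of 0.27 C/decade is the figure quoted in the text.
base_trend = 0.05                       # deg C/decade (hypothetical)
faster_trend = base_trend * (1 + 1.40)  # "140% faster"
model_avg = 0.27                        # average climate model LT trend

print(f"Hypothetical corrected trend: {faster_trend:.2f} C/decade")
print(f"Model average: {model_avg:.2f} C/decade")
```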

Since RSS’s new MT dataset showed more warming than the old, it made sense that the new LT dataset would show more warming, too. Both depend on the same instrument channel (MSU channel 2 and AMSU channel 5), and to the extent that the new diurnal drift corrections RSS came up with caused more warming in MT, the adjustments should be even larger in LT, since the diurnal cycle becomes stronger as you approach the surface (at least over land).

Background on Diurnal Drift Adjustments

None of the satellites carrying the MSU and AMSU instruments (except Aqua, Metop-A and Metop-B) has onboard propulsion, so their orbits decay over the years due to very weak atmospheric drag. The satellites slowly fall, and their orbits are then no longer sun-synchronous (same local observation time every day) as intended. Some of the NOAA satellites were purposely injected into orbits that would drift one way in local observation time before orbit decay took over and made them drift in the other direction; this provided several years with essentially no net drift in the local observation time.

Since there is a day-night temperature cycle (even in the deep-troposphere the satellite measures) the drift of the satellite local observation time causes a spurious drift in observed temperature over the years (the diurnal cycle becomes “aliased” into the long-term temperature trends). The spurious temperature drift varies seasonally, latitudinally, and regionally (depending upon terrain altitude, available surface moisture, and vegetation).

Because climate models are known to not represent the diurnal cycle to the accuracy needed for satellite adjustments, we decided long ago to measure the drift empirically, by comparing drifting satellites with concurrently operating non-drifting (or nearly non-drifting) satellites. Our Version 6 paper discusses the details.

RSS instead decided to use climate model estimates of the diurnal cycle, and in RSS Version 4 are now making empirical corrections to those model-based diurnal cycles. (Generally speaking, we think it is useful for different groups to use different methods.)

Diurnal Drift Effects in the RSS Dataset

We have long known that there were differences in the resulting diurnal drift adjustments in the RSS versus our UAH dataset. We believed that the corrections in the older RSS Version 3.3 datasets were “overdone”, generating more warming than UAH prior to 2002 but less than UAH after 2002 (some satellites drift one way in the diurnal cycle, other satellites drift in the opposite direction). This is why the skeptical community liked to follow the RSS dataset more than ours, since UAH showed at least some warming post-1997, while RSS showed essentially no warming (the “pause”).

The new RSS V4 adjustment alters the V3.3 adjustment, and now warms the post-2002 period, but does not diminish the extra warming in the pre-2002 period. Hence the entire V4 time series shows more warming than before.

Examination of a geographic distribution of their trends shows some elevation effects, e.g. around the Andes in S. America (You have to click on the image to see V4 compared to V3.3…the static view below might be V3.3 if you don’t click it).

[Figure: RSS TLT gridpoint trend maps, V3.3 vs. V4]

We also discovered this and, as discussed in our V6 paper, attributed it to errors in the oxygen absorption theory used to match the MSU channel 2 weighting function with the AMSU channel 5 weighting function, which are at somewhat different altitudes when viewing at the same Earth incidence angle (AMSU5 has more surface influence than MSU2). Using existing radiative transfer theory alone to adjust AMSU5 to match MSU2 (as RSS does) leads to AMSU5 still being too close to the surface. This affects the diurnal drift adjustment, and especially the transition between MSU and AMSU in the 1999-2004 period. The mis-match also can cause dry areas to have too much warming in the AMSU era, and in general will cause land areas to warm spuriously faster than ocean areas.

Here are our UAH LT gridpoint trends (sorry for the different map projection):

[Figure: UAH LT gridpoint trend map]

In general, it is difficult for us to follow the chain of diurnal corrections in the new RSS paper. Using a climate model to make the diurnal drift adjustments, but then adjusting those adjustments with empirical satellite data feels somewhat convoluted to us.

Final Comments

Besides the differences in diurnal drift adjustments, the other major difference affecting trends is the treatment of the NOAA-14 MSU, the last in the MSU series. There is clear drift in the difference between the new NOAA-15 AMSU and the old NOAA-14 MSU, with NOAA-14 warming relative to NOAA-15. We assume that NOAA-14 is to blame, and remove its trend difference with NOAA-15 (we only use it through 2001) and also adjust NOAA-14 to match NOAA-12 (early in the NOAA-14 record). RSS does not assume one satellite is better than the other, and uses NOAA-14 all the way through 2004, by which point it shows a large trend difference with NOAA-15 AMSU. We believe this is a large component of the overall trend difference between UAH and RSS, but we aren’t sure just how much compared to the diurnal drift adjustment differences.

It should be kept in mind that the new UAH V6 dataset for LT uses three channels, while RSS still uses multiple view angles from one channel (a technique we originally developed, and RSS followed). As a result, our new LT weighting function is a little higher in the atmosphere, with considerably more weight in the upper troposphere and slightly more weight in the lower stratosphere. Based upon radiosonde temperature trend profiles, we found the net effect on the difference between the two LT weighting functions on temperature trends to be very small, probably 0.01 C/decade or less.

We have a paper in peer review with extensive satellite dataset comparisons to many balloon datasets and reanalyses. These comparisons show that RSS diverges from the balloon data and reanalyses, and from UAH, showing more warming than the other datasets between 1990 and 2002 – a key period with two older MSU sensors, both of which showed signs of spurious warming not yet addressed by RSS. I suspect the next chapter in this saga is that the remaining radiosonde datasets that still do not show substantial warming will be the next to be “adjusted” upward.

The bottom line is that we still trust our methodology. But no satellite dataset is perfect; there are uncertainties in all of the adjustments, as well as legitimate differences of opinion regarding how they should be handled.

Also, as mentioned at the outset, both RSS and UAH lower tropospheric trends are considerably below the average trends from the climate models.

And that is the most important point to be made.

Monumental, unsustainable environmental impacts

Replacing fossil fuels with renewable energy would inflict major land, wildlife, resource damage

Paul Driessen

Demands that the world replace fossil fuels with wind, solar and biofuel energy – to prevent supposed catastrophes caused by manmade global warming and climate change – ignore three fundamental flaws.

1) In the Real World outside the realm of computer models, the unprecedented warming and disasters are simply not happening: not with temperatures, rising seas, extreme weather or other alleged problems.

2) The process of convicting oil, gas, coal and carbon dioxide emissions of climate cataclysms has been unscientific and disingenuous. It ignores fluctuations in solar energy, cosmic rays, oceanic currents and multiple other powerful natural forces that have controlled Earth’s climate since the dawn of time, dwarfing any role played by CO2. It ignores the enormous benefits of carbon-based energy that created and still powers the modern world, and continues to lift billions out of poverty, disease and early death.

It assigns only costs to carbon dioxide emissions, and ignores how rising atmospheric levels of this plant-fertilizing molecule are reducing deserts and improving forests, grasslands, drought resistance, crop yields and human nutrition. It also ignores the huge costs inflicted by anti-carbon restrictions that drive up energy prices, kill jobs, and fall hardest on poor, minority and blue-collar families in industrialized nations – and perpetuate poverty, misery, disease, malnutrition and early death in developing countries.

3) Renewable energy proponents pay little or no attention to the land and raw material requirements, and associated environmental impacts, of wind, solar and biofuel programs on scales required to meet mankind’s current and growing energy needs, especially as poor countries improve their living standards.

We properly insist on multiple detailed studies of every oil, gas, coal, pipeline, refinery, power plant and other fossil fuel project. Until recently, however, even the most absurd catastrophic climate change claims behind renewable energy programs, mandates and subsidies could not be questioned.

Just as bad, climate campaigners, government agencies and courts have never examined the land use, raw material, energy, water, wildlife, human health and other impacts of supposed wind, solar, biofuel and battery alternatives to fossil fuels – or of the transmission lines and other systems needed to carry electricity and liquid and gaseous renewable fuels thousands of miles to cities, towns and farms.

It is essential that we conduct rigorous studies now, before pushing further ahead. The Environmental Protection Agency, Department of Energy and Interior Department should do so immediately. States, other nations, private sector companies, think tanks and NGOs can and should do their own analyses. The studies can blithely assume these expensive, intermittent, weather-dependent alternatives can actually replace fossil fuels. But they need to assess the environmental impacts of doing so.

Renewable energy companies, industries and advocates are notorious for hiding, minimizing, obfuscating or misrepresenting their environmental and human health impacts. They demand and receive exemptions from health and endangered species laws that apply to other industries. They make promises they cannot keep about being able to safely replace fossil fuels that now provide over 80% of US and global energy.

A few articles have noted some of the serious environmental, toxic/radioactive waste, human health and child labor issues inherent in mining rare earth and cobalt/lithium deposits. However, we now need quantitative studies – detailed, rigorous, honest, transparent, cradle-to-grave, peer-reviewed analyses.

The back-of-the-envelope calculations that follow provide a template. I cannot vouch for any of them. But our governments need to conduct full-blown studies forthwith – before they commit us to spending tens of trillions of dollars on renewable energy schemes, mandates and subsidies that could blanket continents with wind turbines, solar panels, biofuel crops and battery arrays; destroy habitats and wildlife; kill jobs, impoverish families and bankrupt economies; impair our livelihoods, living standards and liberties; and put our lives under the control of unelected, unaccountable state, federal and international rulers – without having a clue whether these supposed alternatives are remotely economical or sustainable.

Ethanol derived from corn grown on 40,000,000 acres now provides the equivalent of 10% of US gasoline – and requires billions of gallons of water, and enormous quantities of fertilizer and energy. What would it take to replace 100% of US gasoline? To replace the entire world’s motor fuels?
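Taking the article's figures at face value and assuming purely linear scaling (which ignores land quality, yield differences, and water or fertilizer constraints), the 100% replacement question works out as:

```python
# Linear scaling of the ethanol figures quoted above:
# 40,000,000 acres of corn ~= 10% of US gasoline.
# This is a rough sketch under a simple proportionality assumption.
acres_for_10_pct = 40_000_000
acres_for_100_pct = acres_for_10_pct * 10

print(f"{acres_for_100_pct:,} acres")  # 400,000,000 acres
```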

Solar panels on Nevada’s Nellis Air Force Base generate 15 megawatts of electricity perhaps 30% of the year from 140 acres. Arizona’s Palo Verde nuclear power plant generates 900 times more electricity, from less land, some 95% of the year. Generating Palo Verde’s output via Nellis technology would require land area ten times larger than Washington, DC – and would still provide electricity unpredictably only 30% of the time. Now run those solar numbers for the 3.5 billion megawatt-hours generated nationwide in 2016.
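The invited calculation can be sketched as follows, using only the Nellis figures quoted above (15 MW, roughly 30% capacity factor, 140 acres). This is a back-of-the-envelope estimate, not a vetted result:

```python
# Scale the quoted Nellis Air Force Base solar figures to the
# ~3.5 billion MWh the US generated in 2016. All inputs are taken
# from the article; the capacity factor is treated as a flat 30%.

HOURS_PER_YEAR = 8766  # average year, including leap years

nellis_mw = 15.0
capacity_factor = 0.30
nellis_acres = 140.0

nellis_mwh_per_year = nellis_mw * HOURS_PER_YEAR * capacity_factor
us_generation_mwh = 3.5e9

installs_needed = us_generation_mwh / nellis_mwh_per_year
acres_needed = installs_needed * nellis_acres

print(f"One Nellis array: ~{nellis_mwh_per_year:,.0f} MWh/year")
print(f"Arrays needed: ~{installs_needed:,.0f}")
print(f"Land needed: ~{acres_needed / 1e6:.1f} million acres")  # ~12.4 million
```

On these assumptions, matching 2016 US generation with Nellis-style arrays would take on the order of 12 million acres of panels, available only when the sun shines.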

Modern coal or gas-fired power plants use less than 300 acres to generate 600 megawatts 95% of the time. Indiana’s 600-MW Fowler Ridge wind farm covers 50,000 acres and generates electricity about 30% of the year. Calculate the turbine and acreage requirements for 3.5 billion MWh of wind electricity.
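The calculation the paragraph invites can be sketched the same way. Assumptions (hedged): a 30% capacity factor, land use scaling linearly with capacity, and 1.5-MW turbines (the rating the article uses elsewhere; Fowler Ridge's actual turbine mix is not given here):

```python
# Fowler Ridge-style wind scaled to total 2016 US generation.
HOURS_PER_YEAR = 8760

farm_mw = 600           # nameplate capacity, per the article
farm_acres = 50_000     # land area, per the article
capacity_factor = 0.30  # assumption: "30% of the year" read as a capacity factor

farm_annual_mwh = farm_mw * HOURS_PER_YEAR * capacity_factor  # ~1.58 million MWh/yr

us_2016_mwh = 3.5e9
farms_needed = us_2016_mwh / farm_annual_mwh          # ~2,220 Fowler Ridges
acres_needed = farms_needed * farm_acres
turbines_needed = farms_needed * (farm_mw / 1.5)      # assumed 1.5-MW units

print(f"~{acres_needed / 1e6:,.0f} million acres, ~{turbines_needed:,.0f} turbines")
```

On these assumptions the answer is on the order of 111 million acres and nearly 900,000 turbines – before adding transmission lines or backup generation.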

Delving more deeply, generating 20% of US electricity with wind power would require up to 185,000 1.5-MW turbines, 19,000 miles of new transmission lines, 18 million acres, and 245 million tons of concrete, steel, copper, fiberglass and rare earths – plus fossil-fuel back-up generators for the 75-80% of the year that winds nationwide are barely blowing and the turbines are not producing electricity.
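The 185,000-turbine figure can be cross-checked against the 1.5-MW rating given in the same sentence. A rough consistency check, assuming roughly a 30% capacity factor and the 3.5 billion MWh annual total cited earlier:

```python
# Sanity check: how many 1.5-MW turbines does 20% of US electricity imply?
HOURS_PER_YEAR = 8760

target_mwh = 0.20 * 3.5e9                        # 20% of 2016 US generation
mwh_per_turbine = 1.5 * HOURS_PER_YEAR * 0.30    # ~3,942 MWh per turbine-year (assumed 30% CF)

turbines = target_mwh / mwh_per_turbine
print(f"~{turbines:,.0f} turbines")
```

That comes out to roughly 178,000 turbines – the same order of magnitude as the "up to 185,000" figure, the gap being consistent with a somewhat lower assumed capacity factor.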

Energy analyst David Wells has calculated that replacing 160,000 terawatt-hours of total global energy consumption with wind would require 183,400,000 turbines needing roughly: 461,000,000,000 tons of steel for the towers; 460,000,000,000 tons of steel and concrete for the foundations; 59,000,000,000 tons of copper, steel and alloys for the turbines; 738,000,000 tons of neodymium for turbine magnets; 14,700,000,000 tons of steel and complex composite materials for the nacelles; 11,000,000,000 tons of complex petroleum-based composites for the rotors; and massive quantities of other raw materials – all of which must be mined, processed, manufactured into finished products and shipped around the world.

Assuming 25 acres per turbine, the turbines would require 4,585,000,000 acres (1,855,500,000 hectares) – 1.3 times the land area of North America! Wells adds: Shipping just the iron ore to build the turbines would require nearly 3 million voyages in huge ships that would consume 13 billion tons of bunker fuel (heavy oil) in the process. And converting that ore to iron and steel would require 473 billion tons of coking coal, demanding another 1.2 million sea voyages, consuming another 6 billion tons of bunker fuel.

For sustainability disciples: Does Earth have enough of these raw materials for this transformation?

It gets worse. These numbers do not include the ultra-long transmission lines required to carry electricity from windy locations to distant cities. Moreover, Irina Slav notes, wind turbines, solar panels and solar thermal installations cannot produce high enough heat to melt silica, iron or other metals, and certainly cannot generate the required power on a reliable enough basis to operate smelters and factories.

Wind turbines (and solar panels) last just 20 years or so (less in salt water environments) – while coal, gas and nuclear power plants last 35-50 years and require far less land and raw materials. That means we would have to tear down, haul away and replace “renewable” generators at least twice as often; dispose of or recycle their component parts (and toxic or radioactive wastes); and mine, process and ship more ores.

Finally, their intermittent electricity output means they couldn’t guarantee you could boil an egg, run an assembly line, surf the internet or complete a heart transplant when you need to. So we store their output in massive battery arrays, you say. OK. Let’s calculate the land, energy and raw materials for that. While we’re at it, let’s add in the requirements for building and recharging 100% electric vehicle fleets.

Then there are the bird and bat deaths, wildlife losses from destroying habitats, and human health impacts from wind turbine noise and flicker. These also need to be examined – fully and honestly – along with the effects of skyrocketing renewable energy prices on every aspect of this transition and our lives.

But for honest, evenhanded EPA and other scientists, modelers and regulators previously engaged in alarmist, biased climate chaos studies, these analyses will provide some job security. Let’s get started.

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.
