The Greatest Scientific Fraud Of All Time — Part XVI

Fifteen posts into this series — and I certainly hope that you have read all of them — perhaps there are still a few of you out there who continue to believe that this whole global average surface temperature (GAST), “hottest year ever,” “record warming” thing can’t really be completely fraudulent.  I mean, these claims are put out by government bureaucrats, highly paid “experts” in their designated field of temperature measurement.  It’s really complicated stuff to figure out a “global average surface temperature” from hundreds of scattered thermometers, some of which get moved, get read at different times of the day, have cities grow up around them, whatever.  Somebody’s got to make the appropriate adjustments.  Surely, they are trying their best to get the most accurate answer they can with a challenging task.  Could it really be that they are systematically lying to the people of America and the world?

The designated field for my own career was civil litigation, and in that field lawyers regularly call upon ordinary members of the public (aka jurors) to draw the inference of whether fraud has occurred.  Lawyers claiming that a defendant has committed fraud normally proceed by presenting to the jury a few glaring facts about what the defendant has done.  “Here is what he said”; and “here is the truth.”  The defendant then gets the chance to explain.  The jurors apply their ordinary judgment and experience to the facts presented.

So, consider yourself a member of my jury.  The defendants (NASA and NOAA) have been accused of arbitrarily adjusting the temperatures of the past downward in order to make fraudulent claims of “hottest year ever” for the recent years.  You decide!  I’ll give you a couple of data points that have come to my attention just today.

James Freeman is the guy who has taken over the Wall Street Journal’s “Best of the Web” column since James Taranto moved on to another gig at the paper earlier this year.  Here is his column for yesterday.  (You probably can’t get the whole thing without subscribing, but I’ll give you his critical links.)  Freeman first quotes the New York Times, March 29, 1988, which in turn quotes James Hansen, then head of the part of NASA that does the GAST calculations:

One of the scientists, Dr. James E. Hansen of the National Aeronautics and Space Administration’s Institute for Space Studies in Manhattan, said he used the 30-year period 1950-1980, when the average global temperature was 59 degrees Fahrenheit, as a base to determine temperature variations.

So 59 deg F was the “average global temperature” for the 30-year period 1950-1980.  Could that have been a typo?  Here is the Times again, June 24, 1988:

Dr. Hansen, who records temperatures from readings at monitoring stations around the world, had previously reported that four of the hottest years on record occurred in the 1980’s. Compared with a 30-year base period from 1950 to 1980, when the global temperature averaged 59 degrees Fahrenheit, the temperature was one-third of a degree higher last year. 

OK, definitely not a typo.  Freeman also has multiple other quotes from the Times, citing both NASA and “a British group” (presumably Hadley CRU) for the same 59 deg F global average temperature for the period 1950-80.  So let’s then compare that figure to the official NOAA January 18, 2017 “record” global warming press release:  “2016 marks three consecutive years of record warmth for the globe”:

2016 began with a bang. For eight consecutive months, January to August, the globe experienced record warm heat.  With this as a catalyst, the 2016 globally averaged surface temperature ended as the highest since record keeping began in 1880. . . .

And kindly tell us, what was the global average temperature that constituted this important “record warm heat”?

The average temperature across global land and ocean surfaces in 2016 was 58.69 degrees F . . . .

OK, over to you to decide.  Was the claimed “record warm heat” real, or was it an artifact of downward adjustments of earlier temperatures?  If you think it might help (it won’t), here is a link to NASA’s lengthy bafflegab explanation of its adjustments.  It’s way too long to copy into this post, and provides literally no useful information as to what they are doing, or why they think it’s OK.

Do you still think it might be possible that they are playing straight with you?  My friend Joe D’Aleo (he’s one of the co-authors of the paper that was the subject of Part XV of this series) sent me this morning a write-up he had done about the temperature adjustments at one of the most prominent sites in the country, the one at Belvedere Castle in Central Park in Manhattan.  There are lots of charts and graphs at the link for your edification.  The temperature measuring site has been at the very same location near the exact middle of the park since 1920.  That location is about 0.2 mi from the West edge of the park, and 0.3 mi from the East edge, so relatively speaking it is highly immune to local land use changes that affect many other stations.  Yes, the City has grown some in that century, but the periphery of the park was already rather built up in 1920, and in any event the nearest park boundary, along Central Park West, is still almost a quarter-mile away.

This paper is another real eye-opener.  You should read the whole thing (it’s only 7 pages long).  The Central Park site is one for which the National Weather Service (part of NOAA) makes completely original, raw data available.  D’Aleo does a comparison between that completely raw data and adjusted data for the same site from NOAA’s so-called “HCN Version 1” set, for two months each year (July and January), covering the century from 1909 to 2008.  Essentially all of the temperatures for Central Park in the HCN Version 1 set are adjusted down, and dramatically so; but the adjustments are not uniform.  From approximately 1950 to 1999, the downward adjustments for both months are approximately a flat 6 deg F — an astoundingly huge amount, especially given that the recently declared “record” temperature for 2016 beat the previous “record” by all of 0.07 deg C (which would be 0.126 deg F).  Then, starting in 1999, the downward adjustments decrease rapidly each year, until by 2008 the downward adjustment is only about 2 deg F.  Result:  whereas the raw data show no material upward or downward trend of any kind over the whole century under examination, the adjusted data show a dramatic upward slope in temperatures post-2000, all of which is in the adjustments rather than the raw data.  D’Aleo:

[T]he adjustment [for July] was a significant one (a cooling exceeding 6 degrees from the mid 1950s to the mid 1990s.) Then inexplicably the adjustment diminished to less than 2 degrees.  The result is [that] a trendless curve for the past 50 years became one with an accelerated warming in the past 20 years. It is not clear what changes in the metropolitan area occurred in the last 20 years to warrant a major adjustment to the adjustment. The park has remained the same and there has not been a population decline but a spurt in the city’s population [since 1990]. 
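The arithmetic behind that observation is easy to reproduce.  Here is a minimal sketch in Python, using made-up numbers patterned on D’Aleo’s description (a flat downward adjustment of about 6 deg F that shrinks to about 2 deg F over the final decade) rather than NOAA’s actual algorithm or the actual Central Park data, showing how such an adjustment schedule imposes a warming trend on an otherwise trendless raw series:

```python
# Illustrative only: synthetic numbers patterned on D'Aleo's description of the
# Central Park July adjustments (roughly -6 deg F through the late 1990s,
# shrinking to about -2 deg F by 2008).  Not NOAA's actual algorithm or data.
import numpy as np

years = np.arange(1950, 2009)
rng = np.random.default_rng(0)

# Trendless "raw" July mean temperatures around 77 deg F (hypothetical values)
raw = 77.0 + rng.normal(0.0, 1.0, size=years.size)

# Downward adjustment: flat -6 deg F through 1999, then ramping to -2 deg F by 2008
adjustment = np.where(years < 1999, -6.0,
                      -6.0 + 4.0 * (years - 1999) / (2008 - 1999))
adjusted = raw + adjustment

def trend_per_decade(y, t):
    """Least-squares linear trend in deg F per decade."""
    slope_per_year = np.polyfit(t, y, 1)[0]
    return 10.0 * slope_per_year

print(f"Raw trend:      {trend_per_decade(raw, years):+.2f} deg F/decade")
print(f"Adjusted trend: {trend_per_decade(adjusted, years):+.2f} deg F/decade")
```

With these illustrative inputs the raw trend is essentially zero, while the adjusted series picks up a warming trend of a few tenths of a degree per decade, all of it contributed by the changing adjustment.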

Since NOAA and NASA will not provide a remotely satisfactory explanation of what they are doing with the adjustments, various independent researchers have tried to reverse-engineer the results to figure out what assumptions are implied.  One such effort was made by Steve McIntyre of the climateaudit.org website, and D’Aleo discusses that effort at the link.  McIntyre gathered from correspondence with NOAA that their algorithm was making an “urbanization” adjustment based on the growing population of the urbanized area surrounding the particular site.  Based on the adjusted temperatures reported at Central Park and the known population of New York City in the first half of the twentieth century, McIntyre then extrapolated to calculate the implied population of New York City for the recent years of the adjusted record.  He came up with an implied population of about 17 million for 1975-95, suddenly plunging to barely 1 million by 2005.  Well, I guess that’s not how they do it!  Any other guesses out there?

By the way, in case you have the idea that you might be able to dig into this and figure out what they are doing, I would point out that by the time you have completed any analysis they will undoubtedly have adjusted their data yet again and will declare your work inapplicable because that’s “not how we do it any more.”  As the Wall Street Journal’s Holman Jenkins noted in November 2015:

By the count of researcher Marcia Wyatt in a widely circulated presentation, the U.S. government’s published temperature data for the years 1880 to 2010 has been tinkered with 16 times in the past three years.

I’m just wondering if you still think there’s anything honest about this.

Global Warming Derangement Syndrome: Please Make It Stop

By Kerry Jackson

In the 2000s, there was Bush Derangement Syndrome, but it faded after Barack Obama was elected. Then came Trump Derangement Syndrome after it turned out that it wasn’t Hillary Clinton’s turn after all. It, too, will fade after Donald Trump is either voted out of office or serves two terms.

Yet with us always and forever, it seems, is the Global Warming Derangement Syndrome.

Just as Democrats and journalists, typically Democrats with a media pipeline, have lost their minds over the Trump election and have vilified him as a sprite from Hades — often claiming things that are simply untrue and repeatedly declaring him to be mentally ill just because they disagree with his policies or found something he’s said or tweeted that violates their ever-flexible sensibilities — they’ve gone around the glacier over climate change.

It seems a day can’t go by without at least one mainstream media outlet reporting that Old Testament-esque disasters have already begun, or covering the rant of an elected official who is yammering on about how the end is near unless big policy changes are immediately enacted. Consider the reaction to Trump’s announcement that he’s pulling the U.S. out of the Paris climate accord. Contact with reality was severed.

Well, actually it’s been severed for some time. It’s the media’s and the alarmists’ distance from reality that has grown. How else to explain how the alarmists, with a supportive media, could rip Trump for backing out of a deal they said was insufficient to start with?

Yet they did, even though James Hansen, the global warming alarmist in chief, said when the Paris accord was agreed to that it was “a fraud really, a fake.”

But this is only a small portion of the derangement that has produced a rising ocean of fake news.

For years we’ve been bombarded with claims that we had only so many months or years to do something about climate change, only to have those deadlines pass without incident; that every ice shelf that has naturally broken off from a landmass or glacier that’s receded is a sign of imminent human-caused disaster; that heavy storms are indisputable evidence that man is cooking his planet with carbon dioxide emissions; that our capitalism-driven advancements are going to eventually cause famine, war, and economic and civilizational doom.

The alarmists’ screeching is incessant, their lectures grating and without restraint, their hypocrisy as fetid as the wrong side of a sewage treatment plant. And of course their fanaticism is so rigid they cannot acknowledge anything that challenges their narrative.

Such as a report issued last month that says the global average surface temperature (GAST) data that are used to frighten and force everyone to surrender to the leftist-progressive agenda “are not a valid representation of reality.”

The report says that “it is impossible to conclude from the three published GAST data sets that recent years have been the warmest ever — despite current claims of record setting warming.”

This isn’t a news release from an oil company trying to pass a public relations effort off as science. It is authentic work that has been, in the words of a Zero Hedge blogger, “peer reviewed by administrators, scientists and researchers from the U.S. Environmental Protection Agency (EPA), the Massachusetts Institute of Technology (M.I.T.), and several of America’s leading universities.”

But it will be tossed into the “Ignore” baskets in the mainstream media’s newsrooms, just as these “80 graphs from 58 new (2017) papers invalidate claims of unprecedented global-scale modern warming” will also be trashed.

Because none fit the narrative. Because none will help the Democrat-media industrial complex “Dump Trump.” Because all challenge the “scientific consensus” and therefore the power and status that the alarmists have seized through their campaign of fear and intimidation. Because this is just the deranged way the political left and its media wing operate.

 

The Hiatus: One Message for Politicians, Another for Scientists

Dr David Whitehouse, GWPF Science Editor

Politicians are usually seen as fair game for criticism, especially if they talk about the inconvenient details of climate change. If only they would stick to the simplicities and repeat the mantra that climate change is real and happening and we are entirely to blame. Woe betide any politician who delves into the detail. Usually we like our politicians to get down amongst the minutiae of government, but not when it comes to climate change.

This is what happened when the Environmental Protection Agency Administrator Scott Pruitt discussed the global temperature hiatus of the past 20 years. In written comments to the U.S. Senate about his confirmation hearing on the 18th of January he wrote, “over the past two decades satellite data indicates there has been a leveling off of warming.”

Despite the vigorous debate about the hiatus in the peer-reviewed literature this was seen by some as such an incorrect statement that a response had to be made, and fast.

Ben Santer of the Lawrence Livermore National Laboratory was quick off the mark, putting together a paper for the journal Nature Scientific Reports. It looked at satellite measurements of the temperature of the atmosphere close to the ground from when such data first became available in 1979. It concluded: “Satellite temperature measurements do not support the recent claim of a ‘leveling off of warming’ over the past two decades.” Tropospheric warming trends over recent 20-year periods, the authors concluded, are always significantly larger (at the 10% level or better) than model estimates of 20-year trends arising from natural internal variability.

Ben Santer on the Seth Meyers Show.

The Nature Scientific Reports paper was submitted on 6th March, accepted on 4th April and published on 24th May. But as that paper, with its simple message that Pruitt was wrong, was being written, another paper on the same topic, also involving Santer, was already in the works. It had been submitted three months earlier, on 23rd December of the previous year.

It was eventually published in Nature Geoscience on 19th June having been accepted on the 22nd of May. It comes to an entirely different conclusion about the hiatus. “We find that in the last two decades of the twentieth century, differences between modelled and observed tropospheric temperature trends are broadly consistent with internal variability. Over most of the early twenty-first century, however, model tropospheric warming is substantially larger than observed; warming rate differences are generally outside the range of trends arising from internal variability…We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations.”

In other words, the climate models have failed. They did not predict and they cannot explain the hiatus. To reach this conclusion the Nature Geoscience paper analysed trends in the satellite data over 10, 12, 14, 16 and 18 years, because the researchers said those are typical record lengths used for the study of the ‘warming slowdown’ in the early 21st century. Note they did not analyse trends over 20 years directly. Thus the first Santer et al. paper analysed the past 20 years and concluded there was no hiatus, while his second paper concluded there was a hiatus of up to 18 years, the maximum period that paper studied.
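For readers who want to see what is actually being computed, here is a rough sketch of the trailing-trend calculation at issue, in Python. It is not the code or data from either Santer et al. paper; the placeholder series below would be replaced by real RSS or UAH lower-troposphere anomalies:

```python
# Sketch of the kind of trailing-trend calculation at issue (not the code or data
# used in either Santer et al. paper).  `anomalies` stands in for a monthly
# satellite temperature anomaly series, e.g. loaded from the RSS or UAH files.
import numpy as np

def trailing_trend(anomalies, months):
    """OLS trend (deg C per decade) over the last `months` values of the series."""
    y = np.asarray(anomalies[-months:], dtype=float)
    t = np.arange(months) / 12.0          # time in years
    slope_per_year = np.polyfit(t, y, 1)[0]
    return 10.0 * slope_per_year

# Placeholder series: 40 years of fake monthly anomalies.  Substitute real TLT
# anomalies to reproduce the kind of numbers debated above.
rng = np.random.default_rng(1)
anomalies = np.cumsum(rng.normal(0.0, 0.05, size=480))

for years in (10, 12, 14, 16, 18, 20):
    print(f"{years:2d}-year trend: {trailing_trend(anomalies, 12 * years):+.3f} C/decade")
```

Running it at 10, 12, 14, 16, 18 and 20 years is precisely the point at issue: the answer you get, and whether it looks like a hiatus, depends heavily on the window you choose.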

The authors realised the problem of the two papers’ seemingly conflicting results. To avoid any confusion they issued a helpful Q&A document saying the results were not contradictory but complementary. It must be said that the two methods they used are only very slightly different. One would expect them to give the same result. But that does not matter. If, as the authors say, the results are complementary, why was the result that disagreed with Pruitt used with no qualification or hint that a similar technique showed the opposite?

On the 22nd February Ben Santer appeared on the Seth Meyers chat show, saying these are strange and unusual times, something that with hindsight is laced with irony. He was introduced as being from the Lawrence Livermore National Laboratory but stated he was talking as a private citizen about research he had done and published on behalf of the Program for Climate Model Diagnosis and Intercomparison at the Lawrence Livermore National Laboratory.

U.S. Senator Ted Cruz on the Seth Meyers Show in March 2015.

Santer aimed his sights at a statement made by U.S. Senator Ted Cruz on the same show two years earlier:

“Many of the alarmists on global warming, they got a problem because the science doesn’t back them up. And in particular, satellite data demonstrates for the last 17 years, there’s been zero warming. None whatsoever. “

Santer challenged Senator Cruz in direct contradiction of his own paper, which he had already submitted but which had not yet been published:

“Listen to what he (Cruz) said. Satellite data. So satellite measurements of atmospheric temperature show no significant warming over the last 17 years, and we tested it. We looked at all of the satellite data in the world, from all groups, and wanted to see, was he right or not? And he was wrong. Even if you focus on a small segment of the now 38-year satellite temperature record – the last 17 years – was demonstrably wrong.”

Santer concluded:

“So the bizarre thing is, Senator Cruz is a lawyer. He’s got to look at all of the evidence when he’s trying a case, when he’s involved in a case, not just one tiny segment of the evidence.”

Oh the irony.

THE CRISIS OF INTEGRITY-DEFICIENT SCIENCE

JULY 11, 2017

The epidemic of agenda-driven science by press release and falsification has reached crisis proportions.

In just the past week: Duke University admitted that its researchers had falsified or fabricated data that were used to get $113 million in EPA grants – and advance the agency’s air pollution and “environmental justice” programs. A New England Journal of Medicine (NEJM) article and editorial claimed the same pollutants kill people – but blatantly ignored multiple studies demonstrating that there is no significant, evidence-based relationship between fine particulates and human illness or mortality.

In an even more outrageous case, the American Association for the Advancement of Science’s journal Science published an article whose authors violated multiple guidelines for scientific integrity. The article claimed two years of field studies in three countries show exposure to neonicotinoid pesticides reduces the ability of honeybees and wild bees to survive winters and establish new populations and hives the following year. Not only did the authors’ own data contradict that assertion – they kept extensive data out of their analysis and incorporated only what supported their (predetermined?) conclusions.

Some 90% of these innovative neonic pesticides are applied as seed coatings, so that crops absorb the chemicals into their tissue and farmers can target only pests that feed on the crops. Neonics largely eliminate the need to spray with old-line chemicals like pyrethroids that clearly do harm bees.  But neonics have nevertheless been at the center of debate over their possible effects on bees, as well as ideological opposition in some quarters to agricultural use of neonics – or any manmade pesticides.

Laboratory studies had mixed results and were criticized for overdosing bees with far more neonics than they would ever encounter in the real world, predictably affecting their behavior and often killing them. Multiple field studies – in actual farmers’ fields – have consistently shown no adverse effects on honeybees at the colony level from realistic exposures to neonics. In fact, bees thrive in and around neonic-treated corn and canola crops in the United States, Canada, Europe, Australia and elsewhere.

So how did the Dr. Ben Woodcock et al. / Center for Ecology and Hydrology (CEH) field studies reach such radically different conclusions? After all, the researchers set up 33 sites in fields in Germany, Hungary, and England, each one with groups of honeybee or wild bee colonies in or next to oilseed rape (canola) crops. Each group involved one test field treated with fungicides, a neonic, and a pyrethroid; one field treated with a different neonic and fungicides; and one “control” group beside a field treated only with fungicides. They then conducted multiple data analyses throughout the 2-year trial period.

Their report and Science article supposedly presented all the results of their exhaustive research. They did not. The authors fudged the data, and the “peer reviewers” and AAAS journal editors failed to spot the massive flaws. Other reviewers (here, here, and here) quickly found the gross errors, lack of transparency, and misrepresentations – but not before the article and press releases had gone out far and wide.

Thankfully, and ironically, the Woodcock-CEH study was funded by Syngenta and Bayer, two companies that make neonics. That meant the companies received the complete study and all 1,000 pages of data – not just the portions carefully selected by the article authors. Otherwise, all that inconvenient research information would probably still be hidden from view – and the truth would never have come out.

Most glaring, as dramatically presented in a chart that’s included in each of the reviews just cited, there were far more data sets than suggested by the Science article. In fact, there were 258 separate honeybee statistical data analyses. Of the 258, a solid 238 found no effects on bees from neonics! Seven found beneficial effects from neonics! Just nine found harmful impacts, and four had insufficient data.
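As a quick consistency check on those counts (the author’s figures, not the underlying CEH data), the shares work out as follows:

```python
# Consistency check on the counts quoted above (author's figures, not CEH's raw data).
counts = {"no effect": 238, "beneficial": 7, "harmful": 9, "insufficient data": 4}
total = sum(counts.values())                      # 258 analyses
for label, n in counts.items():
    print(f"{label:>17}: {n:3d}  ({100 * n / total:.1f}%)")
print(f"no effect or beneficial: {100 * (238 + 7) / total:.1f}%")
```

That is where the 92%, 95% and roughly 3% figures used below come from.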

Not one group of test colonies in Germany displayed harmful effects, but five benefited from neonics. Five in Hungary showed harm, but the nosema gut fungus was prevalent in Hungarian beehives during the study period; it could have affected bee foraging behavior and caused colony losses. But Woodcock and CEH failed to mention the problem or reflect it in their analyses. Instead, they blamed neonics.

In England, four test colony groups were negatively affected by neonics, while two benefited, and the rest showed no effects. But numerous English hives were infested with Varroa mites, which suck on bee blood and carry numerous pathogens that they transmit to bees and colonies. Along with poor beekeeping and mite control practices, Varroa could have been the reason a number of UK test colonies died out during the study – but CEH blamed neonics.

(Incredibly, even though CEH’s control hives in England were far from any possible neonic exposure, they had horrendous overwinter bee losses: 58%, compared to the UK national average of 14.5% that year, while overwinter colony losses for CEH hives were 67% to 79% near their neonic-treated fields.)

In sum, fully 95% of all the hives studied by CEH demonstrated no effects or benefited from neonic exposure – but the Science magazine authors chose to ignore them, and focus on nine hives (3% of the total) which displayed harmful impacts that they attributed to neonicotinoids.

Almost as amazing, CEH analyses found that nearly 95% of the time pollen and nectar in hives showed no measurable neonic residues. Even samples taken directly from neonic-treated crops did not have residues – demonstrating that bees in the CEH trials were likely never even exposed to neonics.

How then could CEH researchers and authors come to the conclusions they did? How could they ignore the 245 out of 258 honeybee statistical data analyses that demonstrated no effects or beneficial effects from neonics? How could they focus on the nine analyses (3.4%) that showed negative effects – a number that could just as easily have been due to random consequences or their margin of error?

The sheer number of “no effect” results (92%) is consistent with what a dozen other field studies have found: that foraging on neonicotinoid-treated crops has no effect on honeybees. Why was this ignored?

Also relevant is the fact that CEH honeybee colonies near neonic-treated fields recovered from any adverse effects of their exposure to neonics before going into their winter clusters. As “super organisms,” honeybee colonies are able to metabolize many pesticides and detoxify themselves. This raises doubts about whether any different overwintering results between test colonies and controls can properly be ascribed to neonics. Woodcock, et al. should have discussed this, but failed to do so.

Finally, as The Mad Virologist pointed out, if neonics have negative impacts on bees, the effects should have been consistent across multiple locations and seed treatments. They were not. In fact, the number of bee larval cells during crop flowering periods for one neonic increased in response to seed treatments in Germany, but declined in Hungary and had no change in England. For another neonic, the response was neutral (no change) in all three countries. Something other than neonics clearly seems to be involved.

The honest, accurate conclusion would have been that exposure to neonics probably had little or no effect on the honeybees or wild bees that CEH studied. The Washington Post got that right; Science did not.

U.S. law defines “falsification” as (among other things) “changing or omitting data or results, such that the research is not accurately represented in the research record.” Woodcock and CEH clearly did that. Then the AAAS and Science failed to do basic fact-checking before publishing the article; the media parroted the press releases; and anti-pesticide factions rushed to say “the science is settled” against neonics.

The AAAS and Science need to retract the Woodcock article, apologize for misleading the nation, and publish an article that fully, fairly, and accurately represents what the CEH research and other field studies actually documented. They should ban Woodcock and his coauthors from publishing future articles in Science and issue press releases explaining all these actions. The NEJM should take similar actions.

Meanwhile, Duke should be prosecuted, fined, and compelled to return the fraudulently obtained funds.

Failure to do so would mean falsification and fraud have replaced integrity at the highest levels of once-respected American institutions of scientific investigation, learning and advancement.

Comments on the New RSS Lower Tropospheric Temperature Dataset

July 6th, 2017 by Roy W. Spencer, Ph. D.

It was inevitable that the new RSS mid-tropospheric (MT) temperature dataset, which showed more warming than the previous version, would be followed with a new lower-tropospheric (LT) dataset. (Carl Mears has posted a useful FAQ on the new dataset, how it differs from the old, and why they made adjustments).

Before I go into the details, let’s keep all of this in perspective. Our globally-averaged trend is now about +0.12 C/decade, while the new RSS trend has increased to about +0.17 C/decade.

Note these trends are still well below the average climate model trend for LT, which is +0.27 C/decade.

These are the important numbers; the original Carbon Brief article headline (“Major correction to satellite data shows 140% faster warming since 1998”) is seriously misleading, because the warming in the RSS LT data post-1998 was near-zero anyway (140% more than a very small number is still a very small number).
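To make the point concrete, here is the arithmetic with an invented baseline (the actual pre-correction post-1998 RSS trend is not quoted here; the 140% figure is from the Carbon Brief headline):

```python
# Illustrative arithmetic only: the baseline trend is a made-up small number,
# not the actual RSS value; "140% faster" means 2.4 times the old rate.
old_trend = 0.05                 # deg C per decade, hypothetical post-1998 trend
new_trend = old_trend * 2.4
print(f"{old_trend:.2f} C/decade -> {new_trend:.2f} C/decade")  # still a small number
```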

Since RSS’s new MT dataset showed more warming than the old, it made sense that the new LT dataset would show more warming, too. Both depend on the same instrument channel (MSU channel 2 and AMSU channel 5), and to the extent that the new diurnal drift corrections RSS came up with caused more warming in MT, the adjustments should be even larger in LT, since the diurnal cycle becomes stronger as you approach the surface (at least over land).

Background on Diurnal Drift Adjustments

None of the satellites carrying the MSU and AMSU instruments (except Aqua, Metop-A and Metop-B) have onboard propulsion, and so their orbits decay over the years due to very weak atmospheric drag. The satellites slowly fall, and their orbits are then no longer sun-synchronous (same local observation time every day) as intended. Some of the NOAA satellites were purposely injected into orbits that would drift one way in local observation time before orbit decay took over and made them drift in the other direction; this provided several years with essentially no net drift in the local observation time.

Since there is a day-night temperature cycle (even in the deep-troposphere the satellite measures) the drift of the satellite local observation time causes a spurious drift in observed temperature over the years (the diurnal cycle becomes “aliased” into the long-term temperature trends). The spurious temperature drift varies seasonally, latitudinally, and regionally (depending upon terrain altitude, available surface moisture, and vegetation).
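A toy simulation makes the aliasing mechanism clear. This is not the UAH or RSS correction code, just an illustration with an assumed diurnal cycle and an assumed drift in local observation time:

```python
# Toy illustration of diurnal-cycle aliasing (not the UAH or RSS correction code).
# A satellite samples a fixed diurnal temperature cycle once per day; if its local
# observation time drifts over the years, a spurious trend appears even though the
# underlying climate is unchanging.
import numpy as np

years = 15
days = np.arange(365 * years)
diurnal_amp = 1.0                                  # deg C half-amplitude of day-night cycle

# Local observation time drifts linearly from 2 pm toward 6 pm over the record
obs_hour = 14.0 + 4.0 * days / days[-1]

# Temperature at observation time: constant climate + diurnal cycle peaking mid-afternoon
temp = diurnal_amp * np.cos(2 * np.pi * (obs_hour - 15.0) / 24.0)

t_years = days / 365.0
spurious_trend = 10.0 * np.polyfit(t_years, temp, 1)[0]
print(f"Spurious trend from drift alone: {spurious_trend:+.2f} C/decade")
```

Even though the simulated climate is perfectly constant, the drifting observation time alone produces a trend of a tenth of a degree or more per decade with these assumed numbers; the sign and size depend on which way the satellite drifts and how strong the local diurnal cycle is, which is why the correction matters so much.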

Because climate models are known to not represent the diurnal cycle to the accuracy needed for satellite adjustments, we decided long ago to measure the drift empirically, by comparing drifting satellites with concurrently operating non-drifting (or nearly non-drifting) satellites. Our Version 6 paper discusses the details.

RSS instead decided to use climate model estimates of the diurnal cycle, and in RSS Version 4 are now making empirical corrections to those model-based diurnal cycles. (Generally speaking, we think it is useful for different groups to use different methods.)

Diurnal Drift Effects in the RSS Dataset

We have long known that there were differences in the resulting diurnal drift adjustments in the RSS versus our UAH dataset. We believed that the corrections in the older RSS Version 3.3 datasets were “overdone”, generating more warming than UAH prior to 2002 but less than UAH after 2002 (some satellites drift one way in the diurnal cycle, other satellites drift in the opposite direction). This is why the skeptical community liked to follow the RSS dataset more than ours, since UAH showed at least some warming post-1997, while RSS showed essentially no warming (the “pause”).

The new RSS V4 adjustment alters the V3.3 adjustment, and now warms the post-2002 period, but does not diminish the extra warming in the pre-2002 period. Hence the entire V4 time series shows more warming than before.

Examination of a geographic distribution of their trends shows some elevation effects, e.g. around the Andes in S. America (compare the V4 and V3.3 trend maps below).

[Figure: RSS TLT gridpoint trend maps, Version 4 vs. Version 3.3]

We also discovered this and, as discussed in our V6 paper, attributed it to errors in the oxygen absorption theory used to match the MSU channel 2 weighting function with the AMSU channel 5 weighting function, which are at somewhat different altitudes when viewing at the same Earth incidence angle (AMSU5 has more surface influence than MSU2). Using existing radiative transfer theory alone to adjust AMSU5 to match MSU2 (as RSS does) leads to AMSU5 still being too close to the surface. This affects the diurnal drift adjustment, and especially the transition between MSU and AMSU in the 1999-2004 period. The mis-match also can cause dry areas to have too much warming in the AMSU era, and in general will cause land areas to warm spuriously faster than ocean areas.

Here are our UAH LT gridpoint trends (sorry for the different map projection):

[Figure: UAH LT gridpoint trends]

In general, it is difficult for us to follow the chain of diurnal corrections in the new RSS paper. Using a climate model to make the diurnal drift adjustments, but then adjusting those adjustments with empirical satellite data feels somewhat convoluted to us.

Final Comments

Besides the differences in diurnal drift adjustments, the other major difference affecting trends is the treatment of the NOAA-14 MSU, last in the MSU series. There is clear drift in the difference between the new NOAA-15 AMSU and the old NOAA-14 MSU, with NOAA-14 warming relative to NOAA-15. We assume that NOAA-14 is to blame, and remove its trend difference with NOAA-15 (we only use it through 2001) and also adjust NOAA-14 to match NOAA-12 (early in the NOAA-14 record). RSS does not assume one satellite is better than the other, and uses NOAA-14 all the way through 2004, by which point it shows a large trend difference with NOAA-15 AMSU. We believe this is a large component of the overall trend difference between UAH and RSS, but we aren’t sure just how much compared to the diurnal drift adjustment differences.

It should be kept in mind that the new UAH V6 dataset for LT uses three channels, while RSS still uses multiple view angles from one channel (a technique we originally developed, and RSS followed). As a result, our new LT weighting function is a little higher in the atmosphere, with considerably more weight in the upper troposphere and slightly more weight in the lower stratosphere. Based upon radiosonde temperature trend profiles, we found the net effect on the difference between the two LT weighting functions on temperature trends to be very small, probably 0.01 C/decade or less.

We have a paper in peer review with extensive satellite dataset comparisons to many balloon datasets and reanalyses. These comparisons show that RSS diverges from the balloon data, the reanalyses, and UAH, showing more warming than the other datasets between 1990 and 2002 – a key period covered by two older MSU sensors, both of which showed signs of spurious warming not yet addressed by RSS. I suspect the next chapter in this saga is that the remaining radiosonde datasets that still do not show substantial warming will be the next to be “adjusted” upward.

The bottom line is that we still trust our methodology. But no satellite dataset is perfect; there are uncertainties in all of the adjustments, as well as legitimate differences of opinion regarding how they should be handled.

Also, as mentioned at the outset, both RSS and UAH lower tropospheric trends are considerably below the average trends from the climate models.

And that is the most important point to be made.

Monumental, unsustainable environmental impacts

Replacing fossil fuels with renewable energy would inflict major land, wildlife, resource damage

Paul Driessen

Demands that the world replace fossil fuels with wind, solar and biofuel energy – to prevent supposed catastrophes caused by manmade global warming and climate change – ignore three fundamental flaws.

1) In the Real World outside the realm of computer models, the unprecedented warming and disasters are simply not happening: not with temperatures, rising seas, extreme weather or other alleged problems.

2) The process of convicting oil, gas, coal and carbon dioxide emissions of climate cataclysms has been unscientific and disingenuous. It ignores fluctuations in solar energy, cosmic rays, oceanic currents and multiple other powerful natural forces that have controlled Earth’s climate since the dawn of time, dwarfing any role played by CO2. It ignores the enormous benefits of carbon-based energy that created and still powers the modern world, and continues to lift billions out of poverty, disease and early death.

It assigns only costs to carbon dioxide emissions, and ignores how rising atmospheric levels of this plant-fertilizing molecule are reducing deserts and improving forests, grasslands, drought resistance, crop yields and human nutrition. It also ignores the huge costs inflicted by anti-carbon restrictions that drive up energy prices, kill jobs, and fall hardest on poor, minority and blue-collar families in industrialized nations – and perpetuate poverty, misery, disease, malnutrition and early death in developing countries.

3) Renewable energy proponents pay little or no attention to the land and raw material requirements, and associated environmental impacts, of wind, solar and biofuel programs on scales required to meet mankind’s current and growing energy needs, especially as poor countries improve their living standards.

We properly insist on multiple detailed studies of every oil, gas, coal, pipeline, refinery, power plant and other fossil fuel project. Until recently, however, even the most absurd catastrophic climate change claims behind renewable energy programs, mandates and subsidies could not be questioned.

Just as bad, climate campaigners, government agencies and courts have never examined the land use, raw material, energy, water, wildlife, human health and other impacts of supposed wind, solar, biofuel and battery alternatives to fossil fuels – or of the transmission lines and other systems needed to carry electricity and liquid and gaseous renewable fuels thousands of miles to cities, towns and farms.

It is essential that we conduct rigorous studies now, before pushing further ahead. The Environmental Protection Agency, Department of Energy and Interior Department should do so immediately. States, other nations, private sector companies, think tanks and NGOs can and should do their own analyses. The studies can blithely assume these expensive, intermittent, weather-dependent alternatives can actually replace fossil fuels. But they need to assess the environmental impacts of doing so.

Renewable energy companies, industries and advocates are notorious for hiding, minimizing, obfuscating or misrepresenting their environmental and human health impacts. They demand and receive exemptions from health and endangered species laws that apply to other industries. They make promises they cannot keep about being able to safely replace fossil fuels that now provide over 80% of US and global energy.

A few articles have noted some of the serious environmental, toxic/radioactive waste, human health and child labor issues inherent in mining rare earth and cobalt/lithium deposits. However, we now need quantitative studies – detailed, rigorous, honest, transparent, cradle-to-grave, peer-reviewed analyses.

The back-of-the-envelope calculations that follow provide a template. I cannot vouch for any of them. But our governments need to conduct full-blown studies forthwith – before they commit us to spending tens of trillions of dollars on renewable energy schemes, mandates and subsidies that could blanket continents with wind turbines, solar panels, biofuel crops and battery arrays; destroy habitats and wildlife; kill jobs, impoverish families and bankrupt economies; impair our livelihoods, living standards and liberties; and put our lives under the control of unelected, unaccountable state, federal and international rulers – without having a clue whether these supposed alternatives are remotely economical or sustainable.

Ethanol derived from corn grown on 40,000,000 acres now provides the equivalent of 10% of US gasoline – and requires billions of gallons of water, and enormous quantities of fertilizer and energy. What would it take to replace 100% of US gasoline? To replace the entire world’s motor fuels?

Solar panels on Nevada’s Nellis Air Force Base generate 15 megawatts of electricity perhaps 30% of the year from 140 acres. Arizona’s Palo Verde nuclear power plant generates 900 times more electricity, from less land, some 95% of the year. Generating Palo Verde’s output via Nellis technology would require land area ten times larger than Washington, DC – and would still provide electricity unpredictably only 30% of the time. Now run those solar numbers for the 3.5 billion megawatt-hours generated nationwide in 2016.

Modern coal or gas-fired power plants use less than 300 acres to generate 600 megawatts 95% of the time. Indiana’s 600-MW Fowler Ridge wind farm covers 50,000 acres and generates electricity about 30% of the year. Calculate the turbine and acreage requirements for 3.5 billion MWH of wind electricity.
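Here is one way to set up that template, a back-of-the-envelope sketch in Python using only the round numbers quoted in the last few paragraphs (the author’s figures, taken at face value; a real study would use audited capacity factors, acreage and generation data):

```python
# Back-of-the-envelope scaling using the figures quoted in the preceding paragraphs.
# These are the author's round numbers, taken at face value, not audited data.

# Ethanol: 40 million acres of corn ~ 10% of US gasoline
ethanol_acres_for_all_gasoline = 40_000_000 * (100 / 10)

# Wind: Fowler Ridge, 600 MW nameplate, ~30% capacity factor, 50,000 acres
fowler_mwh_per_year = 600 * 0.30 * 8760            # ~1.6 million MWh/yr
us_generation_mwh = 3.5e9                           # 2016 figure quoted above
farms_needed = us_generation_mwh / fowler_mwh_per_year
wind_acres = farms_needed * 50_000
turbines = farms_needed * (600 / 1.5)               # assuming 1.5-MW turbines, as below

print(f"Corn acres to replace all US gasoline: {ethanol_acres_for_all_gasoline:,.0f}")
print(f"Fowler Ridge equivalents for 3.5 billion MWh: {farms_needed:,.0f}")
print(f"Wind acreage: {wind_acres:,.0f} acres; turbines: {turbines:,.0f}")
```

Even this crude scaling lands in the same territory as the numbers cited below: on the order of a hundred million acres and close to a million 1.5-MW turbines to supply 2016-level US electricity from wind alone, before any storage or transmission is counted.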

Delving more deeply, generating 20% of US electricity with wind power would require up to 185,000 1.5-MW turbines, 19,000 miles of new transmission lines, 18 million acres, and 245 million tons of concrete, steel, copper, fiberglass and rare earths – plus fossil-fuel back-up generators for the 75-80% of the year that winds nationwide are barely blowing and the turbines are not producing electricity.

Energy analyst David Wells has calculated that replacing 160,000 terawatt-hours of total global energy consumption with wind would require 183,400,000 turbines needing roughly: 461,000,000,000 tons of steel for the towers; 460,000,000,000 tons of steel and concrete for the foundations; 59,000,000,000 tons of copper, steel and alloys for the turbines; 738,000,000 tons of neodymium for turbine magnets; 14,700,000,000 tons of steel and complex composite materials for the nacelles; 11,000,000,000 tons of complex petroleum-based composites for the rotors; and massive quantities of other raw materials – all of which must be mined, processed, manufactured into finished products and shipped around the world.

Assuming 25 acres per turbine, the turbines would require 4,585,000,000 acres (1,855,500,000 hectares) – 1.3 times the land area of North America! Wells adds: Shipping just the iron ore to build the turbines would require nearly 3 million voyages in huge ships that would consume 13 billion tons of bunker fuel (heavy oil) in the process. And converting that ore to iron and steel would require 473 billion tons of coking coal, demanding another 1.2 million sea voyages, consuming another 6 billion tons of bunker fuel.

For sustainability disciples: Does Earth have enough of these raw materials for this transformation?

It gets worse. These numbers do not include the ultra-long transmission lines required to carry electricity from windy locations to distant cities. Moreover, Irina Slav notes, wind turbines, solar panels and solar thermal installations cannot produce high enough heat to melt silica, iron or other metals, and certainly cannot generate the required power on a reliable enough basis to operate smelters and factories.

Wind turbines (and solar panels) last just 20 years or so (less in salt water environments) – while coal, gas and nuclear power plants last 35-50 years and require far less land and raw materials. That means we would have to tear down, haul away and replace far more “renewable” generators twice as often; dispose of or recycle their component parts (and toxic or radioactive wastes); and mine, process and ship more ores.

Finally, their intermittent electricity output means they couldn’t guarantee you could boil an egg, run an assembly line, surf the internet or complete a heart transplant when you need to. So we store their output in massive battery arrays, you say. OK. Let’s calculate the land, energy and raw materials for that. While we’re at it, let’s add in the requirements for building and recharging 100% electric vehicle fleets.

Then there are the bird and bat deaths, wildlife losses from destroying habitats, and human health impacts from wind turbine noise and flicker. These also need to be examined – fully and honestly – along with the effects of skyrocketing renewable energy prices on every aspect of this transition and our lives.

But for honest, evenhanded EPA and other scientists, modelers and regulators previously engaged in alarmist, biased climate chaos studies, these analyses will provide some job security. Let’s get started.

Paul Driessen is senior policy analyst for the Committee For A Constructive Tomorrow (www.CFACT.org) and author of Eco-Imperialism: Green power – Black death.


The Santer Clause

By John McLean

When the IPCC’s in a hole and doesn’t have a paper to cite, who’s it gonna call?

(All together) BEN SANTER!

Santer, Wigley and others, including several IPCC authors, fixed it for the 1995 report with a “miracle” last-minute paper that claimed to have solid evidence of the human influence on climate. The paper had been submitted and not even reached the stage of review when it was included in the IPCC report. At the instigation of the IPCC Working Group I head, John Houghton, the whole pivotal chapter was revised to accommodate it. And all this happened after the second expert review but before government representatives got together to decide what should be said.

About 18 months later the paper was finally published, citing the IPCC report that cited it, and was laughed off the stage. Never mind. It had served its purpose of manipulating opinion about manmade warming and convincing the newly formed UNFCCC that it didn’t need its own subsidiary organization to fiddle science to support the UNFCCC’s claims; the IPCC was perfectly capable of doing that.

Roll forward about 20 years. The IPCC’s 2013 report showed (text box 9.2) that climate models were rubbish at predicting average global temperatures, with 111 of 114 climate model runs predicting, for 1998 to 2012, greater warming than the HadCRUT4 temperature data indicated, a trend that was in fact statistically indistinguishable from zero.

What 5AR didn’t make clear was that climate models are run with and without greenhouse gases, and the IPCC blames the difference in the two sets of output on manmade warming. (It’s a completely specious argument unless it can be proven that climate models are 100% accurate when it comes to algorithmically including every climate forcing, which of course they are not. The comparison study in fact shows nothing more than the sensitivity of the models to the inclusion of greenhouse gases.)

With climate models poor at making predictions, it also follows that they are poor at estimating the influence of greenhouse gases on climate. If the public becomes aware of this then the ground is cut from beneath the UNFCCC’s claims, which means the Paris Climate Agreement will be seen as the farce it really is and all that rearrangement of the global economy to suit UN socialists won’t take place.

There is simply no way that IPCC 6AR can be allowed to continue to cast doubt on climate models, because it might mean the end of both the IPCC and UNFCCC, not to mention the incomes and reputations of so-called climate science experts taking a sharp nose-dive.

So who’s the IPCC gonna call? Ben Santer!

This time around the paper has been published so that it complies with rules set down after the 1995 fiasco and can be cited. Being published of course doesn’t mean that it’s any good.

One of its key sentences is: “None of our findings call into question the reality of long-term warming of Earth’s troposphere and surface, or cast doubt on prevailing estimates of the amount of warming we can expect from future increases in (greenhouse gas) concentrations.”

I’m going to call this the Santer Clause because the last half of it is about as real as Santa Claus.

Even the first half is interesting, because anyone can shift the goal posts and start the trend in whatever year supports their argument. Select the year carefully and you’ll find that temperatures have risen since then; select another year and they’re flat; select another and temperatures have fallen.

The other important sentence in the Santer et al. paper is: “We conclude that model overestimation of tropospheric warming in the early twenty-first century is partly due to systematic deficiencies in some of the post-2000 external forcings used in the model simulations.” So it’s not climate models that are wrong, it’s the data put into them; in other words, it’s the weather.

Talk about climate denial.

There’s no concession that a more plausible explanation is that climate models are nonsense, as IPCC 5AR showed, and that for the 1980s and 1990s the output of the models looked approximately correct because greenhouse gases were exaggerated while the real drivers of climate, the natural forces and internal variability, were underplayed.

The frequency of El Nino events has slowed since the late 1990s and the dominance of such events over La Nina events has weakened, meaning that warming and cooling episodes are tending to balance and that temperature trends remain flat.

The gap between what the models predict and what the data shows would be smaller if the algorithms in the models were corrected. Of course that’s unlikely to happen because the whole notion of significant manmade warming would implode and the IPCC and UNFCCC disappear. The IPCC will now cite this Santer fantasy to try to ensure that doesn’t happen.

It’s a sobering thought that if the implosion doesn’t happen now and the disconnect between the belief and the reality continues to increase, then it’s probably only a matter of time before countries start fudging temperature data to make it show warming that isn’t happening. They have millions or even billions of dollars at stake if the myth collapses, and surely it’s too big a carrot to give up without a fight.

When the reputation of climate science ends up in the gutter as a result of all the nonsense let’s just hope it’s not Ben Santer who’s called to fix it.

See also here.