Yale study confirms Democrats as champions of climate alarmism propaganda politics

Guest essay by Larry Hamlin

The Los Angeles Times’ latest climate alarmist campaign article clearly reflects the limited effectiveness of efforts by propagandists to foist scientifically unsupported alarmism schemes upon the public.

The article addresses a survey by the Yale Program on Climate Change Communication, which found that only about a third of Americans broach the subject of climate change in discussions.

The article notes:

“Barely more than a third of Americans broach the subject often or even occasionally, according to a recent survey by researchers at the Yale Program on Climate Change Communication.”

The Times article focuses on the need for alarmists to push the purely politically hyped “climate consensus” opinion upon the public, clearly demonstrating that those championing the climate alarmist propaganda campaign need to stay clear of the actual scientific data issues that have so badly undermined the contrived politics of climate alarmism.

This emphasis on pushing the flawed opinion politics of “climate consensus” versus actual climate science data is reflected in the article as follows:

“The more we talk about global warming, the more we might move the needle on public opinion, the Yale team reported Monday in the Proceedings of the National Academy of Sciences.

The researchers found that simply increasing the frequency of climate-related discussions shifted people’s perceptions of the scientific consensus around human-caused warming as well as their own attitudes on the matter.”

The article presents the clear distortion and deception used by climate alarmist propagandists who make the completely misleading and erroneous claim that climate change is driven by man-made actions. Additionally, the article notes how Democrats lead the climate alarmist propaganda campaign, as reflected in the following passage:

“In general, you tend to think that people around you share the beliefs that you have. So the most accurate folks were liberal Democrats. They were off by just 6 percentage points, guessing 63% instead of 69%. That’s likely because liberal Democrats know a lot of other Democrats, so they correctly believe that a lot of people around them believe climate change is happening.”

Additionally, the Times article further cements its deception by highlighting the phony “97% of climate scientists agree” baloney as follows:

“Studies show that 97% of climate scientists have concluded that human-caused global warming is happening.”

The Times article then caps its climate alarmist distortion, deception and anti-science hype by noting the following claims:

“It’s almost comical how often weather is used for small talk. But that’s a good entry point. For instance, you could mention that there are temperature records being broken all over the world. Weather is also a good way to not touch on the buzzwords for potentially skeptical audiences.

Another approach is to weave in climate change if you’re already talking about another issue, like extreme weather or natural disasters. There’s a way to ease into it by saying something like, “Did you know that a warming climate will make hurricanes worse?”

The article conceals the fact that these weather and hurricane claims are unsupported by the UN IPCC, as presented in the WUWT article shown below, and by Dr. Judith Curry’s conclusion, also shown below, regarding the lack of scientific evidence supporting alarmist claims of increased hurricane activity.

[Images: the referenced WUWT article excerpt and Dr. Judith Curry’s conclusion on hurricane activity.]

The L. A. Times climate alarmist propaganda campaign pushing its anti-science alarmism is largely built upon a litany of concealed flaws in its contrived alarmism positions; just a few examples of these flawed positions are noted in the items below.

The Times conceals the fact that actual NOAA-measured coastal sea level rise data show no sea level rise acceleration occurring. More than 30 years ago, at the media-hyped, politically contrived Congressional hearings of 1988, climate alarmists falsely claimed that accelerating rates of sea level rise would occur.

The Times conceals the fact that the U.S. has reduced its CO2 emissions since its peak levels in 2007 and leads the world’s nations in that achievement.

The Times conceals the fact that the world’s developing nations totally dominate both present global CO2 emissions and the future increases in these emissions.

The Times conceals the fact that both present and future U.S. CO2 emissions are irrelevant to present global emission levels as well as to future global CO2 emission increases.

The Times conceals the fact that global temperatures through 2019 have not increased since the El Niño-driven high of 1998 more than 20 years ago, with the El Niño-driven 2016 high temperature statistically consistent with the 1998 high temperature.

The Times conceals the fact that the Paris Climate Agreement is a politically contrived scheme which has no impact whatsoever on the world’s developing nations that dominate and control global emissions.

The Times conceals the fact that the emission reduction commitments contained in the Paris Agreement have an insignificant impact on global temperatures even using highly exaggerated global temperature climate models to evaluate these outcomes.

The Times conceals the fact that climate models are grossly flawed, incapable of representing global climate outcomes, completely inadequate for purposes of establishing global climate policy and inaccurately characterized by alarmist politicians and media as being “proven climate science”.

The Times conceals the fact that increased use of fossil fuels by the world’s developing nations is ongoing and inevitable, and that these nations are committed to the future use of fossil fuels for achieving both their energy and economic growth.

The Times conceals the fact that renewable energy is costly, unreliable, grossly distorts energy market prices, requires significant fossil fuel power backup and despite trillions of dollars in subsidies worldwide provides only a few percent of total global energy consumption.

The Times conceals the fact that California’s government is solely responsible for the state’s wildfire debacle because of its decades long failure to implement forest management policies and actions that maintained healthy forest conditions.

The Times conceals the fact that California government attempted to falsely blame nebulous “climate change” as being responsible for creating the state’s wildfire debacle in a politically driven scheme to hide its gross mismanagement of the state’s forest lands.

The Times conceals the fact that EU nations are backing away from making any commitments to zero-emissions program nonsense and that many other climate alarmist political schemes are collapsing worldwide.

The Times conceals the fact that its idiotic claims of “fighting climate change” made when pushing politically motivated, costly and globally irrelevant programs like 100% renewable energy are meaningless in the real world, with such programs pursued solely for purposes of achieving increased governmental political power.

The anti-science climate alarmist propaganda campaign being conducted by the Democrats and the L. A. Times will no doubt continue with even more intensity as the coming political season marches forward. The internet becomes more important than ever, as this vehicle of open expression and discussion of viewpoints cannot be controlled by either the incredibly biased, politically driven mainstream media or the massive climate alarmist political propaganda machine behind the Democratic Party.

Bombshell Claim: Scientists Find “Man-made Climate Change Doesn’t Exist In Practice”

A new scientific study could bust wide open deeply flawed fundamental assumptions underlying controversial climate legislation and initiatives such as the Green New Deal, namely, the degree to which ‘climate change’ is driven by natural phenomena vs. man-made issues measured as carbon footprint. Scientists in Finland found “practically no anthropogenic [man-made] climate change” after a series of studies. 

“During the last hundred years the temperature increased about 0.1°C because of carbon dioxide. The human contribution was about 0.01°C”, the Finnish researchers bluntly state in one among a series of papers.

This has been corroborated by a team at Kobe University in Japan, which has furthered the Finnish researchers’ theory: “New evidence suggests that high-energy particles from space known as galactic cosmic rays affect the Earth’s climate by increasing cloud cover, causing an ‘umbrella effect’,” the just-published study has found, a summary of which has been released via Science Daily. The findings are hugely significant given that this ‘umbrella effect’ — an entirely natural occurrence — could be the prime driver of climate warming, and not man-made factors.

Clouds over Los Angeles, via AFP/Getty

The scientists involved in the study are most concerned with the fact that current climate models driving the political side of the debate, most notably the Intergovernmental Panel on Climate Change’s (IPCC) climate sensitivity scale, fail to incorporate this crucial and potentially central variable of increased cloud cover.

“The Intergovernmental Panel on Climate Change (IPCC) has discussed the impact of cloud cover on climate in their evaluations, but this phenomenon has never been considered in climate predictions due to the insufficient physical understanding of it,” comments Professor Hyodo in Science Daily. “This study provides an opportunity to rethink the impact of clouds on climate. When galactic cosmic rays increase, so do low clouds, and when cosmic rays decrease clouds do as well, so climate warming may be caused by an opposite-umbrella effect.”

In their related paper, aptly titled, “No experimental evidence for the significant anthropogenic [man-made] climate change”, the Finnish scientists find that low cloud cover “practically” controls global temperatures but that “only a small part” of the increased carbon dioxide concentration is anthropogenic, or caused by human activity.

The following is a key bombshell section in one of the studies conducted by Finland’s Turku University team:

We have proven that the GCM-models used in IPCC report AR5 cannot compute correctly the natural component included in the observed global temperature. The reason is that the models fail to derive the influences of low cloud cover fraction on the global temperature. A too small natural component results in a too large portion for the contribution of the greenhouse gases like carbon dioxide. That is why IPCC represents the climate sensitivity more than one order of magnitude larger than our sensitivity 0.24°C. Because the anthropogenic portion in the increased CO2 is less than 10 %, we have practically no anthropogenic climate change. The low clouds control mainly the global temperature.
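For the record, the quoted ratios hang together arithmetically. Here is a minimal back-of-envelope check, assuming the IPCC’s commonly cited central sensitivity of roughly 3°C per CO2 doubling (a figure the excerpt itself does not state):

```python
# Back-of-envelope check of the ratios quoted from the Turku paper.
# ASSUMPTION: an IPCC central sensitivity of ~3.0 C per CO2 doubling;
# the excerpt only says "more than one order of magnitude larger".
ipcc_sensitivity_c = 3.0     # assumed IPCC central estimate
paper_sensitivity_c = 0.24   # sensitivity claimed in the quoted paper

print(f"Sensitivity ratio: {ipcc_sensitivity_c / paper_sensitivity_c:.1f}x")
# -> 12.5x, i.e. "more than one order of magnitude", as the quote says.

# The paper's 0.01 C human contribution follows from its own two numbers:
co2_warming_c = 0.1            # paper: century warming attributed to CO2
anthropogenic_fraction = 0.10  # paper: anthropogenic share of the CO2 rise (<10%)
print(f"Implied human contribution: {co2_warming_c * anthropogenic_fraction:.2f} C")
# -> 0.01 C
```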

This raises urgent questions and central contradictions regarding current models which politicians and environmental groups across the globe are using to push radical economic changes on their countries’ populations.

Image source: NASA

Conclusions from both the Japanese and Finnish studies strongly suggest, for example, that Rep. Alexandria Ocasio-Cortez’s “drastic measures to cut carbon emissions”, which would ultimately require radical legislative changes to “remake the U.S. economy”, would not only potentially bankrupt everyone but simply wouldn’t even work, at least according to the new Finnish research team’s findings.

To put AOC’s “drastic measures” in perspective — measures based entirely on the fundamental assumption of the monumental and disastrous impact of human activity on the climate — consider the following conclusions from the Finnish studies:

“During the last hundred years the temperature increased about 0.1°C because of carbon dioxide. The human contribution was about 0.01°C.”

Which leads the scientists to state further:

“Because the anthropogenic portion in the increased carbon dioxide is less than 10 percent, we have practically no anthropogenic climate change,” the researchers concluded.

And the team in Japan has called for a total reevaluation of current climate models, which remain dangerously flawed for dismissing a crucial variable:

This study provides an opportunity to rethink the impact of clouds on climate. When galactic cosmic rays increase, so do low clouds, and when cosmic rays decrease clouds do as well, so climate warming may be caused by an opposite-umbrella effect. The umbrella effect caused by galactic cosmic rays is important when thinking about current global warming as well as the warm period of the medieval era.

Failure to account for this results in the following, according to one of the studies in the series: “The IPCC climate sensitivity is about one order of magnitude too high, because a strong negative feedback of the clouds is missing in climate models.”

Image source: AFP/Getty 

“If we pay attention to the fact that only a small part of the increased CO2 concentration is anthropogenic, we have to recognize that the anthropogenic climate change does not exist in practice,” the researchers conclude.

Though we doubt that the ideologues currently pushing to radically remake the American economy — through what ends up being a $93 trillion proposal (according to one study), including AOC’s call for a whopping 70% top tax rate — will carefully inquire into the bombshell scientific confirmation presented in the new research, we at least hope the US scientific community takes heed before it’s too late, in the cause of accurate and authentic science that could stave off an irreparable economic disaster that would no doubt ripple across the globe, adding to both human and environmental misery.

And “too late”, that is, not for some mythical imminent or near-future “global warming Armageddon” as claimed by the currently in-vogue, highly politicized “science” of activists and congress members alike.

Inconvenient Energy Realities

JULY 1, 2019

The math behind “The New Energy Economy: An Exercise in Magical Thinking”

A week doesn’t pass without a mayor, governor, policymaker or pundit joining the rush to demand, or predict, an energy future that is entirely based on wind/solar and batteries, freed from the “burden” of the hydrocarbons that have fueled societies for centuries. Regardless of one’s opinion about whether, or why, an energy “transformation” is called for, the physics and economics of energy combined with scale realities make it clear that there is no possibility of anything resembling a radically “new energy economy” in the foreseeable future. Bill Gates has said that when it comes to understanding energy realities “we need to bring math to the problem.”

He’s right. So, in my recent Manhattan Institute report, “The New Energy Economy: An Exercise in Magical Thinking,” I did just that.

Herein, then, is a summary of some of the bottom-line realities from the underlying math. (See the full report for explanations, documentation and citations.)

Realities About the Scale of Energy Demand

1. Hydrocarbons supply over 80% of world energy: If all that were in the form of oil, the barrels would line up from Washington, D.C., to Los Angeles, and that entire line would grow by the height of the Washington Monument every week.

2. The small two percentage-point decline in the hydrocarbon share of world energy use entailed over $2 trillion in cumulative global spending on alternatives over the past two decades; solar and wind today supply less than 2% of global energy.

3. When the world’s four billion poor people increase energy use to just one-third of Europe’s per capita level, global demand rises by an amount equal to twice America’s total consumption.

4. A 100x growth in the number of electric vehicles to 400 million on the roads by 2040 would displace 5% of global oil demand.

5. Renewable energy would have to expand 90-fold to replace global hydrocarbons in two decades. It took a half-century for global petroleum production to expand “only” 10-fold.

6. Replacing U.S. hydrocarbon-based electric generation over the next 30 years would require a construction program building out the grid at a rate 14-fold greater than any time in history.

7. Eliminating hydrocarbons to make U.S. electricity (impossible soon, infeasible for decades) would leave untouched 70% of U.S. hydrocarbons use—America uses 16% of world energy.

8. Efficiency increases energy demand by making products & services cheaper: since 1990, global energy efficiency improved 33%, the economy grew 80% and global energy use is up 40%.

9. Efficiency increases energy demand: Since 1995, aviation fuel use/passenger-mile is down 70%, air traffic rose more than 10-fold, and global aviation fuel use rose over 50%.

10. Efficiency increases energy demand: since 1995, energy used per byte is down about 10,000-fold, but global data traffic rose about a million-fold; global electricity used for computing soared.

11. Since 1995, total world energy use rose by 50%, an amount equal to adding two entire United States’ worth of demand.

12. For security and reliability, an average of two months of national demand for hydrocarbons is in storage at any time. Today, barely two hours of national electricity demand can be stored in all utility-scale batteries plus all the batteries in one million electric cars in America.

13. Batteries produced annually by the Tesla Gigafactory (world’s biggest battery factory) can store three minutes worth of annual U.S. electric demand.

14. To make enough batteries to store two days’ worth of U.S. electricity demand would require 1,000 years of production by the Gigafactory (world’s biggest battery factory); a back-of-envelope check of this and item 13 appears after item 16.

15. Every $1 billion in aircraft produced leads to some $5 billion in aviation fuel consumed over two decades to operate them. Global spending on new jets is more than $50 billion a year—and rising.

16. Every $1 billion spent on datacenters leads to $7 billion in electricity consumed over two decades. Global spending on datacenters is more than $100 billion a year—and rising.
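Items 13 and 14 can be verified on the back of an envelope. The sketch below assumes round figures of roughly 4,000 TWh for annual U.S. electricity demand and roughly 23 GWh for annual Gigafactory cell output; neither number appears in the list itself:

```python
# Rough check of items 13 and 14: Gigafactory output vs. U.S. electricity demand.
# ASSUMPTIONS (not stated in the list): ~4,000 TWh/year of U.S. electricity
# demand and ~23 GWh/year of battery cells from the Gigafactory.
US_DEMAND_TWH_PER_YEAR = 4_000
GIGAFACTORY_GWH_PER_YEAR = 23

us_demand_gwh_per_minute = US_DEMAND_TWH_PER_YEAR * 1_000 / (365 * 24 * 60)
minutes_stored = GIGAFACTORY_GWH_PER_YEAR / us_demand_gwh_per_minute
print(f"One year of Gigafactory output stores ~{minutes_stored:.0f} minutes of U.S. demand")

two_days_gwh = US_DEMAND_TWH_PER_YEAR * 1_000 * 2 / 365
years_of_production = two_days_gwh / GIGAFACTORY_GWH_PER_YEAR
print(f"Two days of U.S. demand needs ~{years_of_production:.0f} years of production")
```

With these inputs the script prints roughly 3 minutes and roughly 950 years, matching the rounded claims in the list.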

Realities About Energy Economics

17. Over a 30-year period, $1 million worth of utility-scale solar or wind produces 40 million and 55 million kWh respectively; $1 million worth of a shale well produces enough natural gas to generate 300 million kWh over 30 years.

18. It costs about the same to build one shale well or two wind turbines: the latter, combined, produce 0.7 barrels of oil (equivalent energy) per hour; the shale rig averages 10 barrels of oil per hour.

19. It costs less than $0.50 to store a barrel of oil, or its equivalent in natural gas, but it costs $200 to store the equivalent energy of a barrel of oil in batteries.

20. Cost models for wind and solar assume, respectively, 41% and 29% capacity factors (i.e., how often they produce electricity). Real-world data reveal as much as 10 percentage points less for both. That translates into $3 million less energy produced than assumed over the 20-year life of a 2-MW, $3 million wind turbine; a sketch reproducing this arithmetic appears after item 25.

21. In order to compensate for episodic wind/solar output, U.S. utilities are using oil- and gas-burning reciprocating engines (big cruise-ship-like diesels); three times as many have been added to the grid since 2000 as in the 50 years prior to that.

22. Wind-farm capacity factors have been improving at about 0.7% per year; this small gain comes mainly from reducing the number of turbines per acre, leading to a 50% increase in average land used to produce a wind-kilowatt-hour.

23. Over 90% of America’s electricity, and 99% of the power used in transportation, comes from sources that can easily supply energy to the economy any time the market demands it.

24. Wind and solar machines produce energy an average of 25%–30% of the time, and only when nature permits. Conventional power plants can operate nearly continuously and are available when needed.

25. The shale revolution collapsed the prices of natural gas & coal, the two fuels that produce 70% of U.S. electricity. But electric rates haven’t gone down, rising instead 20% since 2008. Direct and indirect subsidies for solar and wind consumed those savings.
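Item 20’s arithmetic can be reproduced directly. The only number the item does not supply is the value of the missing electricity; the sketch assumes roughly $0.085 per kWh:

```python
# Rough check of item 20: output lost when a 2-MW wind turbine runs at a
# capacity factor ten points below the modeled value over a 20-year life.
# ASSUMPTION (not stated in the item): electricity valued at ~$0.085/kWh.
TURBINE_KW = 2_000
HOURS_PER_YEAR = 8_760
LIFE_YEARS = 20
CF_ASSUMED, CF_ACTUAL = 0.41, 0.31
PRICE_PER_KWH = 0.085  # assumed average value of the electricity

missing_kwh = TURBINE_KW * HOURS_PER_YEAR * LIFE_YEARS * (CF_ASSUMED - CF_ACTUAL)
print(f"Missing output: {missing_kwh / 1e6:.0f} million kWh")
print(f"Missing value:  ${missing_kwh * PRICE_PER_KWH / 1e6:.1f} million")
# -> ~35 million kWh, worth ~$3 million: the figure quoted in item 20.
```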

Energy Physics… Inconvenient Realities

26. Politicians and pundits like to invoke “moonshot” language. But transforming the energy economy is not like putting a few people on the moon a few times. It is like putting all of humanity on the moon—permanently.

27. The common cliché: an energy tech disruption will echo the digital tech disruption. But information-producing machines and energy-producing machines involve profoundly different physics; the cliché is sillier than comparing apples to bowling balls.

28. If solar power scaled like computer-tech, a single postage-stamp-size solar array would power the Empire State Building. That only happens in comic books.

29. If batteries scaled like digital tech, a battery the size of a book, costing three cents, could power a jetliner to Asia. That only happens in comic books.

30. If combustion engines scaled like computers, a car engine would shrink to the size of an ant and produce a thousand-fold more horsepower; actual ant-sized engines produce 100,000 times less power.

31. No digital-like 10x gains exist for solar tech. Physics limit for solar cells (the Shockley-Queisser limit) is a max conversion of about 33% of photons into electrons; commercial cells today are at 26%.

32. No digital-like 10x gains exist for wind tech. Physics limit for wind turbines (the Betz limit) is a max capture of 60% of energy in moving air; commercial turbines achieve 45%.

33. No digital-like 10x gains exist for batteries: maximum theoretical energy in a pound of oil is 1,500% greater than max theoretical energy in the best pound of battery chemicals.

34. About 60 pounds of batteries are needed to store the energy equivalent of one pound of hydrocarbons.

35. At least 100 pounds of materials are mined, moved and processed for every pound of battery fabricated.

36. Storing the energy equivalent of one barrel of oil, which weighs 300 pounds, requires 20,000 pounds of Tesla batteries ($200,000 worth); a rough reconstruction of this arithmetic appears after item 41.

37. Carrying the energy equivalent of the aviation fuel used by an aircraft flying to Asia would require $60 million worth of Tesla-type batteries weighing five times more than that aircraft.

38. It takes the energy-equivalent of 100 barrels of oil to fabricate a quantity of batteries that can store the energy equivalent of a single barrel of oil.

39. A battery-centric grid and car world means mining gigatons more of the earth to access lithium, copper, nickel, graphite, rare earths, cobalt, etc.—and using millions of tons of oil and coal both in mining and to fabricate metals and concrete.

40. China dominates global battery production with its grid 70% coal-fueled: EVs using Chinese batteries will create more carbon dioxide than is saved by replacing oil-burning engines.

41. One would no more use helicopters for regular trans-Atlantic travel—doable with elaborately expensive logistics—than employ a nuclear reactor to power a train or photovoltaic systems to power a nation.
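Items 34 and 36 can be reconstructed from three round numbers, none of which appear in the list: roughly 1,700 kWh of thermal energy in a barrel of oil, pack-level lithium storage of roughly 85 Wh per pound, and pack costs of roughly $120 per kWh. All three are assumptions for illustration:

```python
# Rough check of items 34 and 36: battery weight and cost per barrel of oil.
# ASSUMPTIONS (not stated in the items): ~1,700 kWh thermal per barrel,
# ~85 Wh/lb at the pack level, ~$120/kWh pack cost.
BARREL_KWH = 1_700
PACK_WH_PER_LB = 85
PACK_COST_PER_KWH = 120

pounds_of_battery = BARREL_KWH * 1_000 / PACK_WH_PER_LB
print(f"Battery weight per barrel-equivalent: ~{pounds_of_battery:,.0f} lb")
# -> ~20,000 lb, as in item 36.

print(f"Battery cost per barrel-equivalent: ~${BARREL_KWH * PACK_COST_PER_KWH:,.0f}")
# -> ~$204,000, close to the $200,000 in item 36.

# Item 34: hydrocarbons hold ~5,700 Wh of thermal energy per pound.
print(f"Battery lb per hydrocarbon lb: ~{5_700 / PACK_WH_PER_LB:.0f}")
# -> ~67, in the neighborhood of item 34's "about 60 pounds".
```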

Mark P. Mills is a senior fellow at the Manhattan Institute, a McCormick School of Engineering Faculty Fellow at Northwestern University, and author of Work in the Age of Robots, published by Encounter Books.

Inconvenient Energy Realities

Mark P. Mills

EXECUTIVE SUMMARY

A movement has been growing for decades to replace hydrocarbons, which collectively supply 84% of the world’s energy. It began with the fear that we were running out of oil. That fear has since migrated to the belief that, because of climate change and other environmental concerns, society can no longer tolerate burning oil, natural gas, and coal—all of which have turned out to be abundant.

So far, wind, solar, and batteries—the favored alternatives to hydrocarbons—provide about 2% of the world’s energy and 3% of America’s. Nonetheless, a bold new claim has gained popularity: that we’re on the cusp of a tech-driven energy revolution that not only can, but inevitably will, rapidly replace all hydrocarbons.

This “new energy economy” rests on the belief—a centerpiece of the Green New Deal and other similar proposals both here and in Europe—that the technologies of wind and solar power and battery storage are undergoing the kind of disruption experienced in computing and communications, dramatically lowering costs and increasing efficiency. But this core analogy glosses over profound differences, grounded in physics, between systems that produce energy and those that produce information.

In the world of people, cars, planes, and factories, increases in consumption, speed, or carrying capacity cause hardware to expand, not shrink. The energy needed to move a ton of people, heat a ton of steel or silicon, or grow a ton of food is determined by properties of nature whose boundaries are set by laws of gravity, inertia, friction, mass, and thermodynamics—not clever software.

This paper highlights the physics of energy to illustrate why there is no possibility that the world is undergoing—or can undergo—a near-term transition to a “new energy economy.”

Among the reasons:

  • Scientists have yet to discover, and entrepreneurs have yet to invent, anything as remarkable as hydrocarbons in terms of the combination of low cost, high energy density, stability, safety, and portability. In practical terms, this means that spending $1 million on utility-scale wind turbines or solar panels will each, over 30 years of operation, produce about 50 million kilowatt-hours (kWh)—while an equivalent $1 million spent on a shale rig produces enough natural gas over 30 years to generate over 300 million kWh.
  • Solar technologies have improved greatly and will continue to become cheaper and more efficient. But the era of 10-fold gains is over. The physics boundary for silicon photovoltaic (PV) cells, the Shockley-Queisser Limit, is a maximum conversion of 34% of photons into electrons; the best commercial PV technology today exceeds 26%.
  • Wind power technology has also improved greatly, but here, too, no 10-fold gains are left. The physics boundary for a wind turbine, the Betz Limit, is a maximum capture of 60% of kinetic energy in moving air; commercial turbines today exceed 40%.
  • The annual output of Tesla’s Gigafactory, the world’s largest battery factory, could store three minutes’ worth of annual U.S. electricity demand. It would require 1,000 years of production to make enough batteries for two days’ worth of U.S. electricity demand. Meanwhile, 50–100 pounds of materials are mined, moved, and processed for every pound of battery produced.

Introduction

A growing chorus of voices is exhorting the public, as well as government policymakers, to embrace the necessity—indeed, the inevitability—of society’s transition to a “new energy economy.” (See Peak Hydrocarbons Just Around the Corner.) Advocates claim that rapid technological changes are becoming so disruptive and renewable energy is becoming so cheap, so fast, that there is no economic risk in accelerating the move to—or even mandating—a post-hydrocarbon world that no longer needs to use much, if any, oil, natural gas, or coal.

Central to that worldview is the proposition that the energy sector is undergoing the same kind of technology disruptions that Silicon Valley tech has brought to so many other markets. Indeed, “old economy” energy companies are a poor choice for investors, according to proponents of the new energy economy, because the assets of hydrocarbon companies will soon become worthless, or “stranded.”[1] Betting on hydrocarbon companies today is like betting on Sears instead of Amazon a decade ago.

Peak Hydrocarbons Just Around the Corner
“ [Clean tech is] a perfect example of a 10x exponential process which will wipe fossil fuels off the market in about a decade.” —Tony Seba, Stanford economist

“ Until now, observers mostly paid attention to the likely effectiveness of climate policies, but not to the ongoing and effectively irreversible technological [energy] transition.” — Jean-François Mercure, Cambridge University

“ [By] 2030, the cost [of solar] could be so near to zero it will effectively be free.” — Sam Arie, UBS research analyst

“ The world is experiencing a global energy transformation driven by technological change and new policy priorities.” — European Union, Mission Possible report for the G20

“ Global shift to clean energy is under way, but much more needs to be done.”  — Letter to G7 Summit by 288 of the world’s largest investors

“ A carbon tax should increase every year until emissions reductions goals are met [which] … will encourage [carbon-free] technological innovation and large-scale infrastructure development.” — Baker-Shultz Plan, signed by economists, Nobelists, Fed Reserve chairs, etc.

“ Green technologies, like batteries and solar and wind power, are improving far faster than many realize … [It’s] the biggest reshuffling of the economy since the Industrial Revolution.” — Jeremy Grantham, investor, billionaire

“ Smartphone substitution seemed no more imminent in the early 2000s than large-scale energy substitution seems today.” — International Monetary Fund

Source: Tony Seba, “Clean Disruption” (video), Stanford University, 2017; Jean-François Mercure quoted in Steve Hanley, “Carbon Bubble About to Burst, Leaving Trillions in Stranded Assets Behind, Claims New Research,” Clean Technica, June 5, 2018; Sam Arie, “Renewables Are Primed to Enter the Global Energy Race,” Financial Times, Aug. 13, 2018; OECD, “Mission Possible,” Energy Transitions Commission, November 2018; Steve Hanley, “Ahead of G7 Meeting, Investors Urge an End to Coal Power & Fossil Fuel Subsidies,” Clean Technica, June 5, 2018; “Economists’ Statement on Carbon Dividends”; “Investing Prophet Jeremy Grantham Takes Aim at Climate Change,” Bloomberg, Jan. 17, 2019; Wall Street Journal, Jan. 16, 2019 (Baker-Shultz plan); International Monetary Fund, “Riding the Energy Transition: Oil Beyond 2040,” May 2017

“Mission Possible,” a 2018 report by an international Energy Transitions Commission, crystallized this growing body of opinion on both sides of the Atlantic.[2] To “decarbonize” energy use, the report calls for the world to engage in three “complementary” actions: aggressively deploy renewables or so-called clean tech, improve energy efficiency, and limit energy demand.

This prescription should sound familiar, as it is identical to a nearly universal energy-policy consensus that coalesced following the 1973–74 Arab oil embargo that shocked the world. But while the past half-century’s energy policies were animated by fears of resource depletion, the fear now is that burning the world’s abundant hydrocarbons releases dangerous amounts of carbon dioxide into the atmosphere.

To be sure, history shows that grand energy transitions are possible. The key question today is whether the world is on the cusp of another.

The short answer is no. There are two core flaws with the thesis that the world can soon abandon hydrocarbons. The first: physics realities do not allow energy domains to undergo the kind of revolutionary change experienced on the digital frontiers. The second: no fundamentally new energy technology has been discovered or invented in nearly a century—certainly, nothing analogous to the invention of the transistor or the Internet.

Before these flaws are explained, it is best to understand the contours of today’s hydrocarbon-based energy economy and why replacing it would be a monumental, if not an impossible, undertaking.

Moonshot Policies and the Challenge of Scale

The universe is awash in energy. For humanity, the challenge has always been to deliver energy in a useful way that is both tolerable and available when it is needed, not when nature or luck offers it. Whether it be wind or water on the surface, sunlight from above, or hydrocarbons buried deep in the earth, converting an energy source into useful power always requires capital-intensive hardware.

Considering the world’s population and the size of modern economies, scale matters. In physics, when attempting to change any system, one has to deal with inertia and various forces of resistance; it’s far harder to turn or stop a Boeing than it is a bumblebee. In a social system, it’s far more difficult to change the direction of a country than it is a local community.

Today’s reality: hydrocarbons—oil, natural gas, and coal—supply 84% of global energy, a share that has decreased only modestly from 87% two decades ago (Figure 1).[3] Over those two decades, total world energy use rose by 50%, an amount equal to adding two entire United States’ worth of demand.[4]

The small percentage-point decline in the hydrocarbon share of world energy use required over $2 trillion in cumulative global spending on alternatives over that period.[5] Popular visuals of fields festooned with windmills and rooftops laden with solar cells don’t change the fact that these two energy sources today provide less than 2% of the global energy supply and 3% of the U.S. energy supply.

The scale challenge for any energy resource transformation begins with a description. Today, the world’s economies require an annual production of 35 billion barrels of petroleum, plus the energy equivalent of another 30 billion barrels of oil from natural gas, plus the energy equivalent of yet another 28 billion barrels of oil from coal. In visual terms: if all that fuel were in the form of oil, the barrels would form a line from Washington, D.C., to Los Angeles, and that entire line would increase in height by one Washington Monument every week.
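That visualization survives a rough order-of-magnitude check. The sketch below assumes a standard drum roughly 0.61 m wide and 0.88 m tall, roughly 3,700 km from Washington to Los Angeles, and a 169 m Monument; none of these dimensions appear in the text:

```python
# Order-of-magnitude check of the barrels visualization above.
# ASSUMPTIONS (not in the text): drum ~0.61 m wide and ~0.88 m tall,
# D.C. to Los Angeles ~3,700 km, Washington Monument ~169 m tall.
BARRELS_PER_YEAR = (35 + 30 + 28) * 1e9  # oil + gas-equivalent + coal-equivalent
DC_TO_LA_M = 3.7e6
BARREL_W_M, BARREL_H_M = 0.61, 0.88
MONUMENT_M = 169

barrels_per_row = DC_TO_LA_M / BARREL_W_M
rows_per_monument = MONUMENT_M / BARREL_H_M
print(f"Barrels in one Monument-height slab: {barrels_per_row * rows_per_monument:.1e}")
print(f"Barrels produced per week:           {BARRELS_PER_YEAR / 52:.1e}")
# Both come out between 1e9 and 2e9, so the image is the right order of magnitude.
```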

To completely replace hydrocarbons over the next 20 years, global renewable energy production would have to increase by at least 90-fold.[6] For context: it took a half-century for global oil and gas production to expand by 10-fold.[7] It is a fantasy to think, costs aside, that any new form of energy infrastructure could now expand nine times more than that in under half the time.

If the initial goal were more modest—say, to replace hydrocarbons only in the U.S. and only those used in electricity generation—the project would require an industrial effort greater than a World War II–level of mobilization.[8] A transition to 100% non-hydrocarbon electricity by 2050 would require a U.S. grid construction program 14-fold bigger than the grid build-out rate that has taken place over the past half-century.[9] Then, to finish the transformation, this Promethean effort would need to be more than doubled to tackle nonelectric sectors, where 70% of U.S. hydrocarbons are consumed. And all that would affect a mere 16% of world energy use, America’s share.

This daunting challenge elicits a common response: “If we can put a man on the moon, surely we can [fill in the blank with any aspirational goal].” But transforming the energy economy is not like putting a few people on the moon a few times. It is like putting all of humanity on the moon—permanently.

The Physics-Driven Cost Realities of Wind and Solar

The technologies that frame the new energy economy vision distill to just three things: windmills, solar panels, and batteries.[10] While batteries don’t produce energy, they are crucial for ensuring that episodic wind and solar power is available for use in homes, businesses, and transportation.

Yet windmills and solar power are themselves not “new” sources of energy. The modern wind turbine appeared 50 years ago and was made possible by new materials, especially hydrocarbon-based fiberglass. The first commercially viable solar tech also dates back a half-century, as does the invention of the lithium battery (by an Exxon researcher).[11]

Over the decades, all three technologies have greatly improved and become roughly 10-fold cheaper.[12] Subsidies aside, that fact explains why, in recent decades, the use of wind/solar has expanded so much from a base of essentially zero.

Nonetheless, wind, solar, and battery tech will continue to become better, within limits. Those limits matter a great deal—about which, more later—because of the overwhelming demand for power in the modern world and the realities of energy sources on offer from Mother Nature.

With today’s technology, $1 million worth of utility-scale solar panels will produce about 40 million kilowatt-hours (kWh) over a 30-year operating period (Figure 2). A similar metric is true for wind: $1 million worth of a modern wind turbine produces 55 million kWh over the same 30 years.[13] Meanwhile, $1 million worth of hardware for a shale rig will produce enough natural gas over 30 years to generate over 300 million kWh.[14] That constitutes about 600% more electricity for the same capital spent on primary energy-producing hardware.[15]

The fundamental differences between these energy resources can also be illustrated in terms of individual equipment. For the cost to drill a single shale well, one can build two 500-foot-high, 2-megawatt (MW) wind turbines. Those two wind turbines produce a combined output averaging, over the years, the energy equivalent of 0.7 barrels of oil per hour. The same money spent on a single shale rig produces 10 barrels of oil per hour, or its energy equivalent in natural gas, averaged over the decades.[16]
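Both comparisons can be roughed out with two assumed constants that the text does not state: a ~30% average wind capacity factor and ~1,700 kWh of thermal energy per barrel of oil (the report rounds the resulting capital ratio up to “about 600%”):

```python
# Rough reproduction of the $1M-for-$1M comparison and the barrels-per-hour figure.
# ASSUMPTIONS (not in the text): ~30% wind capacity factor and ~1,700 kWh
# of thermal energy per barrel of oil.
SOLAR_KWH, WIND_KWH, SHALE_KWH = 40e6, 55e6, 300e6  # per $1M over 30 years

avg_renewable_kwh = (SOLAR_KWH + WIND_KWH) / 2
print(f"Shale multiple: {SHALE_KWH / avg_renewable_kwh:.1f}x the electricity per dollar")
# -> ~6.3x, which the report rounds to "about 600% more".

# Two 2-MW turbines, averaged over the years, in oil-equivalent barrels/hour.
combined_kw = 2 * 2_000
capacity_factor = 0.30
kwh_per_hour = combined_kw * capacity_factor
print(f"Two turbines: ~{kwh_per_hour / 1_700:.1f} barrels of oil equivalent per hour")
# -> ~0.7 barrels/hour, matching the text.
```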

The huge disparity in output arises from the inherent differences in energy densities that are features of nature immune to public aspiration or government subsidy. The high energy density of the physical chemistry of hydrocarbons is unique and well understood, as is the science underlying the low energy density inherent in surface sunlight, wind volumes, and velocity.[17]

Regardless of what governments dictate that utilities pay for that output, the quantity of energy produced is determined by how much sunlight or wind is available over any period of time and the physics of the conversion efficiencies of photovoltaic cells or wind turbines.

These kinds of comparisons between wind, solar, and natural gas illustrate the starting point in making a raw energy resource useful. But for any form of energy to become a primary source of power, additional technology is required. For gas, one necessarily spends money on a turbo-generator to convert the fuel into grid electricity. For wind/solar, spending is required for some form of storage to convert episodic electricity into utility-grade, 24/7 power.

The high cost of ensuring energy availability

Availability is the single most critical feature of any energy infrastructure, followed by price, followed by the eternal search for decreasing costs without affecting availability. Until the modern energy era, economic and social progress had been hobbled by the episodic nature of energy availability. That’s why, so far, more than 90% of America’s electricity, and 99% of the power used in transportation, comes from sources that can easily supply energy any time on demand.[18]

In our data-centric, increasingly electrified, society, always-available power is vital. But, as with all things, physics constrains the technologies and the costs for supplying availability.[19] For hydrocarbon-based systems, availability is dominated by the cost of equipment that can convert fuel-to-power continuously for at least 8,000 hours a year, for decades. Meanwhile, it’s inherently easy to store the associated fuel to meet expected or unexpected surges in demand, or delivery failures in the supply chain caused by weather or accidents.

It costs less than $1 a barrel to store oil or natural gas (in oil-energy equivalent terms) for a couple of months.[20] Storing coal is even cheaper. Thus, unsurprisingly, the U.S., on average, has about one to two months’ worth of national demand in storage for each kind of hydrocarbon at any given time.[21]

Meanwhile, with batteries, it costs roughly $200 to store the energy equivalent to one barrel of oil.[22] Thus, instead of months, barely two hours of national electricity demand can be stored in the combined total of all the utility-scale batteries on the grid plus all the batteries in the 1 million electric cars that exist today in America.[23]

For wind/solar, the features that dominate cost of availability are inverted, compared with hydrocarbons. While solar arrays and wind turbines do wear out and require maintenance as well, the physics and thus additional costs of that wear-and-tear are less challenging than with combustion turbines. But the complex and comparatively unstable electrochemistry of batteries makes for an inherently more expensive and less efficient way to store energy and ensure its availability.

Since hydrocarbons are so easily stored, idle conventional power plants can be dispatched—ramped up and down—to follow cyclical demand for electricity. Wind turbines and solar arrays cannot be dispatched when there’s no wind or sun. As a matter of geophysics, both wind-powered and sunlight-energized machines produce energy, averaged over a year, about 25%–30% of the time, often less.[24] Conventional power plants, however, have very high “availability,” in the 80%–95% range, and often higher.[25]

A wind/solar grid would need to be sized both to meet peak demand and to have enough extra capacity beyond peak needs in order to produce and store additional electricity when sun and wind are available. This means, on average, that a pure wind/solar system would necessarily have to be about threefold the capacity of a hydrocarbon grid: i.e., one needs to build 3 kW of wind/solar equipment for every 1 kW of combustion equipment eliminated. That directly translates into a threefold cost disadvantage, even if the per-kW costs were all the same.[26]
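The threefold figure is just the reciprocal of the capacity factor. A one-line sketch, using the ~30% figure from the surrounding text:

```python
# Nameplate wind/solar needed to match, on average, 1 kW of dispatchable
# generation, using the ~30% average capacity factor cited in the text.
capacity_factor = 0.30
print(f"~{1 / capacity_factor:.1f} kW of wind/solar per 1 kW of combustion replaced")
# -> ~3.3 kW, before adding the storage and surplus capacity needed to
# cover the hours when output falls below average.
```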

Even this necessary extra capacity would not suffice. Meteorological and operating data show that average monthly wind and solar electricity output can drop as much as twofold during each source’s respective “low” season.[27]

The myth of grid parity

How do these capacity and cost disadvantages square with claims that wind and solar are already at or near “grid parity” with conventional sources of electricity? The U.S. Energy Information Administration (EIA) and other similar analyses report a “levelized cost of energy” (LCOE) for all types of electric power technologies. In the EIA’s LCOE calculations, electricity from a wind turbine or solar array is calculated as 36% and 46%, respectively, more expensive than from a natural-gas turbine—i.e., approaching parity.[28] But in a critical and rarely noted caveat, EIA states: “The LCOE values for dispatchable and non-dispatchable technologies are listed separately in the tables because comparing them must be done carefully”[29] (emphasis added). Put differently, the LCOE calculations do not take into account the array of real, if hidden, costs needed to operate a reliable 24/7 and 365-day-per-year energy infrastructure—or, in particular, a grid that used only wind/solar.

The LCOE considers the hardware in isolation while ignoring real-world system costs essential to supply 24/7 power. Equally misleading, an LCOE calculation, despite its illusion of precision, relies on a variety of assumptions and guesses subject to dispute, if not bias.

For example, an LCOE assumes that the future cost of competing fuels—notably, natural gas—will rise significantly. But that means that the LCOE is more of a forecast than a calculation. This is important because a “levelized cost” uses such a forecast to calculate a purported average cost over a long period. The assumption that gas prices will go up is at variance with the fact that they have decreased over the past decade and the evidence that low prices are the new normal for the foreseeable future.[30] Adjusting the LCOE calculation to reflect a future where gas prices don’t rise radically increases the LCOE cost advantage of natural gas over wind/solar.

An LCOE incorporates an even more subjective feature, called the “discount rate,” which is a way of comparing the value of money today versus the future. A low discount rate has the effect of tilting an outcome to make it more appealing to spend precious capital today to solve a future (theoretical) problem. Advocates of using low discount rates are essentially assuming slow economic growth.[31]

A high discount rate effectively assumes that a future society will be far richer than today (not to mention have better technology).[32] Economist William Nordhaus’s work in this field, wherein he advocates using a high discount rate, earned him a 2018 Nobel Prize.

An LCOE also requires an assumption about average multi-decade capacity factors, the share of time the equipment actually operates (i.e., the real, not theoretical, amount of time the sun shines and wind blows). EIA assumes, for example, 41% and 29% capacity factors, respectively, for wind and solar. But data collected from operating wind and solar farms reveal actual median capacity factors of 33% and 22%.[33] The difference between assuming a 40% but experiencing a 30% capacity factor means that, over the 20-year life of a 2-MW wind turbine, $3 million of energy production assumed in the financial models won’t exist—and that’s for a turbine with an initial capital cost of about $3 million.
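To make the assumption-sensitivity concrete, here is a minimal LCOE calculator. Every input below (a $3 million, 2-MW turbine, $60,000 a year in maintenance, a 5% discount rate, a 20-year life) is an illustrative placeholder, not an EIA figure:

```python
# A minimal LCOE sketch: discounted lifetime costs divided by discounted
# lifetime energy. Inputs are illustrative placeholders, not EIA's.
def lcoe(capex, annual_om, fuel_per_kwh, capacity_kw, capacity_factor,
         years, discount_rate):
    costs = capex
    kwh_discounted = 0.0
    annual_kwh = capacity_kw * 8_760 * capacity_factor
    for year in range(1, years + 1):
        d = (1 + discount_rate) ** year
        costs += (annual_om + fuel_per_kwh * annual_kwh) / d
        kwh_discounted += annual_kwh / d
    return costs / kwh_discounted

# The same hypothetical 2-MW, $3M turbine at the assumed vs. observed
# capacity factors discussed above:
for cf in (0.41, 0.33):
    print(f"CF {cf:.0%}: ${lcoe(3e6, 60_000, 0.0, 2_000, cf, 20, 0.05):.3f}/kWh")
```

Moving only the capacity factor from the modeled 41% to the observed 33% raises the computed “levelized cost” by roughly a quarter; shifting the discount rate or the assumed fuel-price forecast moves the answer just as easily.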

U.S. wind-farm capacity factors have been getting better but at a slow rate of about 0.7% per year over the past two decades.[34] Notably, this gain was achieved mainly by reducing the number of turbines per acre trying to scavenge moving air—resulting in average land used per unit of wind energy increasing by some 50%.

LCOE calculations do reasonably include costs for such things as taxes, the cost of borrowing, and maintenance. But here, too, mathematical outcomes give the appearance of precision while hiding assumptions. For example, assumptions about maintenance costs and performance of wind turbines over the long term may be overly optimistic. Data from the U.K., which is further down the wind-favored path than the U.S., point to far faster degradation (less electricity per turbine) than originally forecast.[35]

To address at least one issue with using LCOE as a tool, the International Energy Agency (IEA) recently proposed the idea of a “value-adjusted” LCOE, or VALCOE, to include the elements of flexibility and incorporate the economic implications of dispatchability. IEA calculations using a VALCOE method showed coal power, for example, to be far cheaper than solar, with a cost penalty widening as a grid’s share of solar generation rises.[36]

One would expect that, long before a grid is 100% wind/solar, the kinds of real costs outlined above should already be visible. As it happens, regardless of putative LCOEs, we do have evidence of the economic impact that arises from increasing the use of wind and solar energy.

The Hidden Costs of a “Green” Grid

Subsidies, tax preferences, and mandates can hide real-world costs, but when enough of them accumulate, the effect should be visible in overall system costs. And it is. In Europe, the data show that the higher the share of wind/solar, the higher the average cost of grid electricity (Figure 3).

Germany and Britain, well down the “new energy” path, have seen average electricity rates rise 60%–110% over the past two decades.[37] The same pattern—more wind/solar and higher electricity bills—is visible in Australia and Canada.[38]

Since the share of wind power, on a per-capita basis, in the U.S. is still at only a small fraction of that in most of Europe, the cost impacts on American ratepayers are less dramatic and less visible. Nonetheless, average U.S. residential electric costs have risen some 20% over the past 15 years.[39] That should not have been the case. Average electric rates should have gone down, not up.

Here’s why: coal and natural gas together supplied about 70% of electricity over that 15-year period.[40] The price of fuel accounts for about 60%–70% of the cost to produce electricity when using hydrocarbons.[41] Thus, about half the average cost of America’s electricity depends on coal and gas prices. The price of both those fuels has gone down by over 50% over that 15-year period. Utility costs, specifically, to purchase gas and coal are down some 25% over the past decade alone. In other words, cost savings from the shale-gas revolution have significantly insulated consumers, so far, from even higher rate increases.
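Stringing the rounded figures above together shows the size of the decline that never materialized:

```python
# Arithmetic behind "rates should have gone down," using the rounded
# figures given in the text.
hydrocarbon_share_of_generation = 0.70  # coal + gas share of U.S. electricity
fuel_share_of_hydrocarbon_cost = 0.65   # midpoint of the 60%-70% range
fuel_price_change = -0.50               # coal and gas prices down over 50%

fuel_dependent_share = hydrocarbon_share_of_generation * fuel_share_of_hydrocarbon_cost
print(f"Fuel-dependent share of average cost: ~{fuel_dependent_share:.0%}")
print(f"Implied change in average rates:      ~{fuel_dependent_share * fuel_price_change:.0%}")
# -> roughly half of the cost base, implying a ~23% rate decline, all else
# equal; instead, average rates rose ~20%.
```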

The increased use of wind/solar imposes a variety of hidden, physics-based costs that are rarely acknowledged in utility or government accounting. For example, when large quantities of power are rapidly, repeatedly, and unpredictably cycled up and down, the challenge and costs associated with “balancing” a grid (i.e., keeping it from failing) are greatly increased. OECD analysts estimate that at least some of those “invisible” costs imposed on the grid add 20%–50% to the cost of grid kilowatt-hours.[42]

Furthermore, flipping the role of the grid’s existing power plants from primary to backup for wind/solar leads to other real but unallocated costs that emerge from physical realities. Increased cycling of conventional power plants increases wear-and-tear and maintenance costs. It also reduces the utilization of those expensive assets, which means that capital costs are spread out over fewer kWh produced—thereby arithmetically increasing the cost of each of those kilowatt-hours.[43]

Then, if the share of episodic power becomes significant, the potential rises for complete system blackouts. That has happened twice in the state of South Australia, which derives over 40% of its electricity from wind, after the wind died down unexpectedly (with some customers out for days in some areas).[44]

After the total system outage in South Australia in 2016, Tesla, with much media fanfare, installed the world’s single largest lithium battery “farm” on that grid in 2017.[45] For context, to keep South Australia lit for one half-day of no wind would require 80 such “world’s biggest” Tesla battery farms, and that’s on a grid that serves just 2.5 million people.

Engineers have other ways to achieve reliability: using old-fashioned giant diesel-engine generators as backup (engines essentially the same as those that propel cruise ships or that are used to back up data centers). Without fanfare, because of rising use of wind, U.S. utilities have been installing grid-scale engines at a furious pace. The grid now has over $4 billion in utility-scale, engine-driven generators (enough for about 100 cruise ships), with lots more to come. Most burn natural gas, though a lot of them are oil-fired. Three times as many such big reciprocating engines have been added to America’s grid over the past two decades as over the half-century prior to that.[46]

All these costs are real and are not allocated to wind or solar generators. But electricity consumers pay them. A way to understand what’s going on: managing grids with hidden costs imposed on non-favored players would be like levying fees on car drivers for the highway wear-and-tear caused by heavy trucks while simultaneously subsidizing the cost of fueling those trucks.

The issue with wind and solar power comes down to a simple point: they are impractical on a national scale as a major or primary source of electricity. As with any technology, pushing the boundaries of practical utilization is possible but usually not sensible or cost-effective. Helicopters offer an instructive analogy.

The development of a practical helicopter in the 1950s (four decades after its invention) inspired widespread hyperbole about that technology revolutionizing personal transportation. Today, the manufacture and use of helicopters is a multibillion-dollar niche industry providing useful and often-vital services. But one would no more use helicopters for regular Atlantic travel—though doable with elaborate logistics—than employ a nuclear reactor to power a train or photovoltaic systems to power a country.

Batteries Cannot Save the Grid or the Planet

Batteries are a central feature of new energy economy aspirations. It would indeed revolutionize the world to find a technology that could store electricity as effectively and cheaply as, say, oil in a barrel, or natural gas in an underground cavern.[47] Such electricity-storage hardware would render it unnecessary even to build domestic power plants. One could imagine an OKEC (Organization of Kilowatt-Hour Exporting Countries) that shipped barrels of electrons around the world from nations where the cost to fill those “barrels” was lowest; solar arrays in the Sahara, coal mines in Mongolia (out of reach of Western regulators), or the great rivers of Brazil.

But in the universe that we live in, the cost to store energy in grid-scale batteries is, as earlier noted, about 200-fold more than the cost to store natural gas to generate electricity when it’s needed.[48] That’s why we store, at any given time, months’ worth of national energy supply in the form of natural gas or oil.

Battery storage is quite another matter. Consider Tesla, the world’s best-known battery maker: $200,000 worth of Tesla batteries, which collectively weigh over 20,000 pounds, are needed to store the energy equivalent of one barrel of oil.[49] A barrel of oil, meanwhile, weighs 300 pounds and can be stored in a $20 tank. Those are the realities of today’s lithium batteries. Even a 200% improvement in underlying battery economics and technology won’t close such a gap.

Nonetheless, policymakers in America and Europe enthusiastically embrace programs and subsidies to vastly expand the production and use of batteries at grid scale.[50] Astonishing quantities of batteries will be needed to keep country-level grids energized—and the level of mining required for the underlying raw materials would be epic. For the U.S., at least, given where the materials are mined and where batteries are made, imports would increase radically. Perspective on each of these realities follows.

How many batteries would it take to light the nation?

A grid based entirely on wind and solar necessitates going beyond preparation for the normal daily variability of wind and sun; it also means preparing for the frequency and duration of periods when there would be far less wind and sunlight combined, and for periods when there would be none of either. While uncommon, such a combined event—daytime continental cloud cover with no significant wind anywhere, or nighttime with no wind—has occurred more than a dozen times over the past century—effectively, once every decade. On these occasions, a combined wind/solar grid would be able to produce only a tiny fraction of the nation’s electricity needs. There have also been frequent one-hour periods when 90% of the national electric supply would have disappeared.[51]

So how many batteries would be needed to store, say, not two months’ but two days’ worth of the nation’s electricity? The $5 billion Tesla “Gigafactory” in Nevada is currently the world’s biggest battery manufacturing facility.[52] Its total annual production could store three minutes’ worth of annual U.S. electricity demand. Thus, in order to fabricate a quantity of batteries to store two days’ worth of U.S. electricity demand would require 1,000 years of Gigafactory production.

Wind/solar advocates propose to minimize battery usage with enormously long transmission lines on the observation that it is always windy or sunny somewhere. While theoretically feasible (though not always true, even at country-level geographies), the length of transmission needed to reach somewhere “always” sunny/windy also entails substantial reliability and security challenges. (And long-distance transport of energy by wire is twice as expensive as by pipeline.)[53]

Building massive quantities of batteries would have epic implications for mining

A key rationale for the pursuit of a new energy economy is to reduce environmental externalities from the use of hydrocarbons. While the focus these days is mainly on the putative long-term effects of carbon dioxide, all forms of energy production entail various unregulated externalities inherent in extracting, moving, and processing minerals and materials.

Radically increasing battery production will dramatically affect mining, as well as the energy used to access, process, and move minerals and the energy needed for the battery fabrication process itself. About 60 pounds of batteries are needed to store the energy equivalent to that in one pound of hydrocarbons. Meanwhile, 50–100 pounds of various materials are mined, moved, and processed for one pound of battery produced.[54] Such underlying realities translate into enormous quantities of minerals—such as lithium, copper, nickel, graphite, rare earths, and cobalt—that would need to be extracted from the earth to fabricate batteries for grids and cars.[55] A battery-centric future means a world mining gigatons more materials.[56] And this says nothing about the gigatons of materials needed to fabricate wind turbines and solar arrays, too.[57]
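
Chaining the two ratios in the paragraph above gives a sense of the material handling required per unit of storage; a rough sketch:

```python
# Pounds of raw material mined and moved per pound of hydrocarbon-equivalent storage.
battery_lb_per_hydrocarbon_lb = 60    # from the text
ore_lb_per_battery_lb = (50, 100)     # range from the text

low = ore_lb_per_battery_lb[0] * battery_lb_per_hydrocarbon_lb
high = ore_lb_per_battery_lb[1] * battery_lb_per_hydrocarbon_lb
print(f"{low:,} to {high:,} lb of materials per lb of oil-equivalent storage")
# 3,000 to 6,000 lb
```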

Even without a new energy economy, the mining required to make batteries will soon dominate the production of many minerals. Lithium battery production today already accounts for about 40% and 25%, respectively, of all lithium and cobalt mining.[58] In an all-battery future, global mining would have to expand by more than 200% for copper, by at least 500% for minerals like lithium, graphite, and rare earths, and far more than that for cobalt.[59]

Then there are the hydrocarbons and electricity needed to undertake all the mining activities and to fabricate the batteries themselves. In rough terms, it requires the energy equivalent of about 100 barrels of oil to fabricate a quantity of batteries that can store a single barrel of oil-equivalent energy.[60]

Given the regulatory hostility to mining in the United States, a battery-centric energy future virtually guarantees more mining elsewhere and rising import dependencies for America. Most of the relevant mines in the world are in Chile, Argentina, Australia, Russia, the Congo, and China. Notably, the Democratic Republic of Congo produces 70% of global cobalt, and China refines 40% of that output for the world.[61]

China already dominates global battery manufacturing and is on track to supply nearly two-thirds of all production by 2020.[62] The relevance for the new energy economy vision: 70% of China’s grid is fueled by coal today and will still be at 50% in 2040.[63] This means that, over the life span of the batteries, there would be more carbon-dioxide emissions associated with manufacturing them than would be offset by using those batteries to, say, replace internal combustion engines.[64]

Transforming personal transportation from hydrocarbon-burning to battery-propelled vehicles is another central pillar of the new energy economy. Electric vehicles (EVs) are expected not only to replace petroleum on the roads but to serve as backup storage for the electric grid as well.[65]

Lithium batteries have finally enabled EVs to become reasonably practical. Tesla, which now sells more cars in the top price category in America than does Mercedes-Benz, has inspired a rush of the world’s manufacturers to produce appealing battery-powered vehicles.[66] This has emboldened bureaucratic aspirations for outright bans on the sale of internal combustion engines, notably in Germany, France, Britain, and, unsurprisingly, California.

Such a ban is not easy to imagine. Optimists forecast that the number of EVs in the world will rise from today’s nearly 4 million to 400 million in two decades.[67] A world with 400 million EVs by 2040 would decrease global oil demand by barely 6%. This sounds counterintuitive, but the numbers are straightforward. There are about 1 billion automobiles today, and they use about 30% of the world’s oil.[68] (Heavy trucks, aviation, petrochemicals, heat, etc. use the rest.) By 2040, there would be an estimated 2 billion cars in the world. Four hundred million EVs would amount to 20% of all the cars on the road—which would thus replace about 6% of petroleum demand.
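
The 6% figure is simple multiplication; a sketch using the text's round numbers:

```python
# Oil displaced by 400 million EVs in a 2-billion-car world.
evs = 400e6
cars_2040 = 2e9
cars_share_of_world_oil = 0.30    # automobiles use ~30% of the world's oil

ev_fleet_share = evs / cars_2040                       # 20% of all cars
oil_displaced = ev_fleet_share * cars_share_of_world_oil
print(f"EV fleet share: {ev_fleet_share:.0%}; oil displaced: {oil_displaced:.0%}")
# EV fleet share: 20%; oil displaced: 6%
```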

In any event, batteries don’t represent a revolution in personal mobility equivalent to, say, going from the horse-and-buggy to the car—an analogy that has been invoked.[69] Driving an EV is more analogous to changing what horses are fed and importing the new fodder.

Moore’s Law Misapplied

Faced with all the realities outlined above regarding green technologies, new energy economy enthusiasts nevertheless believe that true breakthroughs are yet to come and are even inevitable. That’s because, so it is claimed, energy tech will follow the same trajectory as that seen in recent decades with computing and communications. The world will yet see the equivalent of an Amazon or “Apple of clean energy.”[70]

This idea is seductive because of the astounding advances in silicon technologies that so few forecasters anticipated decades ago. It is an idea that renders moot any caution that wind/solar/batteries are too expensive today—such caution is seen as foolish and shortsighted, analogous to asserting, circa 1980, that the average citizen would never be able to afford a computer. Or to saying, in 1984 (the year the world’s first cell phone was released), that billions of people would one day own one, when that first phone cost $9,000 in today’s dollars and was a two-pound “brick” with a 30-minute talk time.

Today’s smartphones are not only far cheaper; they are far more powerful than a room-size IBM mainframe from 30 years ago. That transformation arose from engineers inexorably shrinking the size and energy appetite of transistors, and consequently increasing their number per chip roughly twofold every two years—the “Moore’s Law” trend, named for Intel cofounder Gordon Moore.

The compound effect of that kind of progress has indeed caused a revolution. Over the past 60 years, Moore’s Law has seen the efficiency of how logic engines use energy improve by over a billionfold.[71] But a similar transformation in how energy is produced or stored isn’t just unlikely; it can’t happen with the physics we know today.
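
The "billionfold" claim is just compounding; a one-line check of the canonical two-year doubling over six decades:

```python
# Doubling every two years for 60 years compounds to over a billionfold.
doublings = 60 / 2
print(f"2^{doublings:.0f} = {2**doublings:,.0f}")   # 2^30 ≈ 1.07 billion
```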

In the world of people, cars, planes, and large-scale industrial systems, increasing speed or carrying capacity causes hardware to expand, not shrink. The energy needed to move a ton of people, heat a ton of steel or silicon, or grow a ton of food is determined by properties of nature whose boundaries are set by laws of gravity, inertia, friction, mass, and thermodynamics.

If combustion engines, for example, could achieve the kind of scaling efficiency that computers have since 1971—the year the first widely used integrated circuit was introduced by Intel—a car engine would generate a thousandfold more horsepower and shrink to the size of an ant.[72] With such an engine, a car could actually fly, very fast.

If photovoltaics scaled by Moore’s Law, a single postage-stamp-size solar array would power the Empire State Building. If batteries scaled by Moore’s Law, a battery the size of a book, costing three cents, could power an A380 to Asia.

But only in the world of comic books does the physics of propulsion or energy production work like that. In our universe, power scales the other way.

An ant-size engine—which has been built—produces roughly 100,000 times less power than a Prius. An ant-size solar PV array (also feasible) produces a thousandfold less energy than an ant’s biological muscles. The energy equivalent of the aviation fuel actually used by an aircraft flying to Asia would take $60 million worth of Tesla-type batteries weighing five times more than that aircraft.[73]

The challenge in storing and processing information using the smallest possible amount of energy is distinct from the challenge of producing energy, or of moving or reshaping physical objects. The two domains entail different laws of physics.

The world of logic is rooted in simply knowing and storing the fact of the binary state of a switch—i.e., whether it is on or off. Logic engines don’t produce physical action but are designed to manipulate the idea of the numbers zero and one. Unlike engines that carry people, logic engines can use software to do things such as compress information through clever mathematics and thus reduce energy use. No comparable compression options exist in the world of humans and hardware.

Of course, wind turbines, solar cells, and batteries will continue to improve significantly in cost and performance; so will drilling rigs and combustion turbines (a subject taken up next). And, of course, Silicon Valley information technology will bring important, even dramatic, efficiency gains in the production and management of energy and physical goods (a prospect also taken up below). But the outcomes won’t be as miraculous as the invention of the integrated circuit, or the discovery of petroleum or nuclear fission.

Sliding Down the Renewable Asymptote

Forecasts for a continual rapid decline in costs for wind/solar/batteries are inspired by the gains that those technologies have already experienced. The first two decades of commercialization, after the 1980s, saw a 10-fold reduction in costs. But the path for improvements now follows what mathematicians call an asymptote; or, put in economic terms, improvements are subject to a law of diminishing returns where every incremental gain yields less progress than in the past (Figure 4).

This is a normal phenomenon in all physical systems. Throughout history, engineers have achieved big gains in the early years of a technology’s development, whether wind or gas turbines, steam or sailing ships, internal combustion or photovoltaic cells. Over time, engineers approach nature’s limits. Bragging rights for gains in efficiency—or in speed, or in equivalent metrics such as energy density (energy per unit of weight or volume)—then shrink from double-digit percentages to fractional percentage changes. Whether it’s solar, wind tech, or aircraft turbines, gains in performance are now all measured in single-digit percentages. Such progress is economically meaningful but not revolutionary.

The physics-constrained limits of energy systems are unequivocal. Solar arrays can’t convert more photons than those that arrive from the sun. Wind turbines can’t extract more energy than exists in the kinetic flows of moving air. Batteries are bound by the physical chemistry of the molecules chosen. Similarly, no matter how much better jet engines become, an A380 will never fly to the moon. An oil-burning engine can’t produce more energy than what is contained in the physical chemistry of hydrocarbons.

Combustion engines have what’s called a Carnot Efficiency Limit, which is anchored in the temperature of combustion and the energy available in the fuel. The limits are long established and well understood. In theory, at a high enough temperature, 80% of the chemical energy that exists in the fuel can be turned into power.[74] Using today’s high-temperature materials, the best hydrocarbon engines convert about 50%–60% to power. There’s still room to improve but nothing like the 10-fold to nearly hundredfold revolutionary advances achieved in the first couple of decades after their invention. Wind/solar technologies are now at the same point on that asymptotic technology curve.
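
The Carnot bound itself is a one-line formula, η = 1 − T_cold/T_hot; a sketch showing why the quoted 80% theoretical figure implies a very high combustion temperature (the 300 K ambient and 1,500 K hot-side values here are illustrative assumptions, not figures from the text):

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum fraction of heat convertible to work between two temperatures."""
    return 1.0 - t_cold_k / t_hot_k

# Illustrative: an 80% limit requires roughly a 1,500 K hot side against 300 K ambient.
print(f"{carnot_efficiency(1500.0, 300.0):.0%}")   # 80%
```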

For wind, the boundary is called the Betz Limit, which dictates how much of the kinetic energy in air a blade can capture; that limit is about 60%.[75] Capturing all the kinetic energy would mean, by definition, no air movement and thus nothing to capture. There needs to be wind for the turbine to turn. Modern turbines already exceed 45% conversion.[76] That leaves some real gains to be made but, as with combustion engines, nothing revolutionary.[77] Another 10-fold improvement is not possible.

For silicon photovoltaic (PV) cells, the physics boundary is called the Shockley-Queisser Limit: a maximum of about 33% of incoming photons are converted into electrons. State-of-the-art commercial PVs achieve just over 26% conversion efficiency—in other words, near the boundary. While researchers keep unearthing new non-silicon options that offer tantalizing performance improvements, all have similar physics boundaries, and none is remotely close to manufacturability at all—never mind at low costs.[78] There are no 10-fold gains left.[79]
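
Putting this and the preceding paragraph's numbers side by side shows how little headroom remains; a sketch using only the conversion figures quoted above:

```python
# Remaining headroom to the physics limits quoted in the text.
achieved_vs_limit = {
    "wind (Betz limit)": (0.45, 0.60),
    "silicon PV (Shockley-Queisser limit)": (0.26, 0.33),
}
for name, (achieved, limit) in achieved_vs_limit.items():
    print(f"{name}: at most {limit / achieved:.2f}x improvement remains")
# wind: ~1.33x; silicon PV: ~1.27x -- nothing remotely like 10-fold
```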

Future advances in wind turbine and solar economics now center on incremental engineering improvements: economies of scale from making turbines enormous (taller than the Washington Monument) and from similarly massive, square-mile utility-scale solar arrays. For both technologies, the underlying key components—concrete, steel, and fiberglass for wind; silicon, copper, and glass for solar—are already in mass production and well down asymptotic cost curves in their own domains.

While there are no surprising gains in economies of scale available in the supply chain, that doesn’t mean that costs are immune to improvements. In fact, all manufacturing processes experience continual improvements in production efficiency as volumes rise. This experience curve is called Wright’s Law. (That “law” was first documented in 1936, as it related then to the challenge of manufacturing aircraft at costs that markets could tolerate. Analogously, while aviation took off and created a big, worldwide transportation industry, it didn’t eliminate automobiles, or the need for ships.) Experience leading to lower incremental costs is to be expected; but, again, that’s not the kind of revolutionary improvement that could make a new energy economy even remotely plausible.
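
Wright's Law is usually stated as a fixed percentage drop in unit cost for every doubling of cumulative production. A minimal sketch; the 20% learning rate and $100 first-unit cost are hypothetical values for illustration, not figures from the text:

```python
import math

def wrights_law_cost(first_unit_cost: float, cumulative_units: float,
                     learning_rate: float) -> float:
    """Unit cost after cumulative production, per Wright's Law.

    Each doubling of cumulative volume cuts unit cost by `learning_rate`.
    """
    exponent = math.log(1.0 - learning_rate) / math.log(2.0)
    return first_unit_cost * cumulative_units ** exponent

# Hypothetical: $100 first unit, 20% cost decline per doubling of volume.
for n in (1, 10, 100, 1000):
    print(f"unit {n:>4}: ${wrights_law_cost(100.0, n, 0.20):.2f}")
# Costs keep falling, but each further order of magnitude of volume buys less.
```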

As for modern batteries, there are still promising options for significant improvements in their underlying physical chemistry. New non-lithium materials in research labs offer as much as a 200% and even 300% gain in inherent performance.[80] Such gains nevertheless don’t constitute the kinds of 10-fold or hundredfold advances in the early days of combustion chemistry.[81] Prospective improvements will still leave batteries miles away from the real competition: petroleum.

There are no subsidies and no engineering from Silicon Valley or elsewhere that can close the physics-centric gap in energy densities between batteries and oil (Figure 5). The energy stored per pound is the critical metric for vehicles and, especially, aircraft. The maximum potential energy contained in oil molecules is about 1,500% greater, pound for pound, than the maximum in lithium chemistry.[82] That’s why aircraft and rockets are powered by hydrocarbons. And that’s why a 20% improvement in oil propulsion (eminently feasible) is more valuable than a 200% improvement in batteries (still difficult).
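
The density gap can be sketched with representative numbers; the ~12,000 Wh/kg for hydrocarbons and ~750 Wh/kg theoretical ceiling for lithium chemistry below are assumed round figures, chosen to be consistent with the text's "1,500% greater" claim:

```python
# Gravimetric energy density gap, using assumed round figures.
oil_wh_per_kg = 12_000        # approximate chemical energy in hydrocarbons
lithium_max_wh_per_kg = 750   # assumed theoretical ceiling for lithium chemistry

ratio = oil_wh_per_kg / lithium_max_wh_per_kg
print(f"Oil stores ~{ratio:.0f}x more energy per kg ({ratio - 1:.0%} greater)")
# ~16x, i.e., ~1,500% greater
```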

Finally, when it comes to limits, it is relevant to note that the technologies that unlocked shale oil and gas are still in the early days of engineering development, unlike the older technologies of wind, solar, and batteries. Tenfold gains are still possible in terms of how much energy can be extracted by a rig from shale rock before approaching physics limits.[83] That fact helps explain why shale oil and gas have added 2,000% more to U.S. energy production over the past decade than have wind and solar combined.[84]

Digitalization Won’t Uberize the Energy Sector

Digital tools are already improving and can further improve all manner of efficiencies across entire swaths of the economy, and it is reasonable to expect that software will yet bring significant improvements in both the underlying efficiency of wind/solar/battery machines and in the efficiency of how such machines are integrated into infrastructures. Silicon logic has improved, for example, the control and thus the fuel efficiency of combustion engines, and it is doing the same for wind turbines. Similarly, software epitomized by Uber has shown that optimizing the efficiency of using expensive transportation assets lowers costs. Uberizing all manner of capital assets is inevitable.

Uberizing the electric grid without hydrocarbons is another matter entirely.

The peak demand problem that software can’t fix

In the energy world, one of the most vexing problems is in optimally matching electricity supply and demand (Figure 6). Here the data show that society and the electricity-consuming services that people like are generating a growing gap between peaks and valleys of demand. The net effect for a hydrocarbon-free grid will be to increase the need for batteries to meet those peaks.

All this has relevance for encouraging EVs. In terms of managing the inconvenient cyclical nature of demand, shifting transportation fuel use from oil to the grid will make peak management far more challenging. People tend to refuel when it’s convenient; that’s easy to accommodate with oil, given the ease of storage. EV refueling will exacerbate the already-episodic nature of grid demand.

To ameliorate this problem, one proposal is to encourage or even require off-peak EV fueling.[85] The jury is out on just how popular that will be or whether it will even be tolerated.

Although kilowatt-hours and cars—key targets in the new energy economy prescriptions—constitute only 60% of the energy economy, global demand for both is centuries away from saturation. Green enthusiasts make extravagant claims about the effect of Uber-like options and self-driving cars. However, the data show that the economic efficiencies from Uberizing have so far increased the use of cars and peak urban congestion.[86] Similarly, many analysts now see autonomous vehicles amplifying, not dampening, that effect.[87]

That’s because people, and thus markets, are focused on economic efficiency and not on energy efficiency. The former can be associated with reducing energy use; but it is also, and more often, associated with increased energy demand. Cars use more energy per mile than a horse, but the former offers enormous gains in economic efficiency. Computers, similarly, use far more energy than pencil-and-paper.

Uberizing improves energy efficiencies but increases demand

Every energy conversion in our universe entails built-in inefficiencies—converting heat to propulsion, carbohydrates to motion, photons to electrons, electrons to data, and so forth. All entail a certain energy cost, or waste, that can be reduced but never eliminated. But, in no small irony, history shows—as economists have often noted—that improvements in efficiency lead to increased, not decreased, energy consumption.

If at the dawn of the modern era, affordable steam engines had remained as inefficient as those first invented, they would never have proliferated, nor would the attendant economic gains and the associated rise in coal demand have happened. We see the same thing with modern combustion engines. Today’s aircraft, for example, are three times as energy-efficient as the first commercial passenger jets in the 1950s.[88] That didn’t reduce fuel use but propelled air traffic to soar and, with it, a fourfold rise in jet fuel burned.[89]
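
The aviation example implies the rebound directly: if fuel burned per seat-mile fell threefold while total fuel burned rose fourfold, traffic itself grew roughly twelvefold. A sketch of that arithmetic:

```python
# Efficiency rebound in aviation, from the text's two multipliers.
efficiency_gain = 3.0    # today's aircraft vs. 1950s jets, per seat-mile
fuel_burn_growth = 4.0   # growth in total jet fuel burned

traffic_growth = efficiency_gain * fuel_burn_growth
print(f"Implied growth in air traffic: ~{traffic_growth:.0f}x")   # ~12x
```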

Similarly, it was the astounding gains in computing’s energy efficiency that drove the meteoric rise in data traffic on the Internet—which resulted in far more energy used by computing. Global computing and communications, all told, now consumes the energy equivalent of 3 billion barrels of oil per year, more energy than global aviation.[90]

The purpose of improving efficiency in the real world, as opposed to the policy world, is to reduce the cost of enjoying the benefits from an energy-consuming engine or machine. So long as people and businesses want more of the benefits, declining cost leads to increased demand that, on average, outstrips any “savings” from the efficiency gains. Figure 7 shows how this efficiency effect has played out for computing and air travel.[91]

Of course, the growth in demand for a specific product or service can subside in a (wealthy) society when limits are hit: the amount of food a person can eat, the miles per day an individual is willing to drive, the number of refrigerators or lightbulbs per household, etc. But a world of 8 billion people is a long way from reaching any such limits.

The macro picture of the relationship between efficiency and world energy demand is clear (Figure 8). Technology has continually improved society’s energy efficiency. But far from ending global energy growth, efficiency has enabled it. The improvements in cost and efficiency brought about through digital technologies will accelerate, not end, that trend.

Energy Revolutions Are Still Beyond the Horizon

When the world’s poorest 4 billion people increase their energy use to just 15% of the per-capita level of developed economies, global energy consumption will rise by the equivalent of adding an entire United States’ worth of demand.[92] In the face of such projections, there are proposals that governments should constrain demand, and even ban certain energy-consuming behaviors. One academic article proposed that the “sale of energy-hungry versions of a device or an application could be forbidden on the market, and the limitations could become gradually stricter from year to year, to stimulate energy-saving product lines.”[93] Others have offered proposals to “reduce dependency on energy” by restricting the sizes of infrastructures or requiring the use of mass transit or car pools.[94]
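
A rough sanity check of that projection, with hedged round numbers (the per-capita and U.S.-total figures are my assumptions, not from the text: roughly 170 GJ per person per year in developed economies and about 105 EJ of total U.S. primary energy use):

```python
# Sanity check: 4 billion people reaching 15% of developed per-capita energy use.
developed_per_capita_gj = 170     # assumed round figure, GJ/person/year
people = 4e9
target_fraction = 0.15

added_demand_ej = people * target_fraction * developed_per_capita_gj / 1e9  # GJ -> EJ
us_total_ej = 105                 # assumed: ~100 quads of U.S. primary energy

print(f"Added demand: ~{added_demand_ej:.0f} EJ vs. U.S. total: ~{us_total_ej} EJ")
# ~102 EJ: roughly one United States' worth of demand
```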

The issue here is not only that poorer people will inevitably want to—and will be able to—live more like wealthier people but that new inventions continually create new demands for energy. The invention of the aircraft means that every $1 billion in new jets produced leads to some $5 billion in aviation fuel consumed over two decades to operate them. Similarly, every $1 billion in data centers built will consume $7 billion in electricity over the same period.[95] The world is buying both at the rate of about $100 billion a year.[96]
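
At the quoted purchase rate, those operating-energy multipliers compound quickly; a sketch using only the text's figures:

```python
# Two-decade energy bills committed by each year's purchases (text's figures).
annual_purchases_usd = 100e9   # ~$100B/yr of jets, and ~$100B/yr of data centers
jet_fuel_multiplier = 5        # $5 of fuel per $1 of new jets, over two decades
dc_power_multiplier = 7        # $7 of electricity per $1 of new data centers

fuel = annual_purchases_usd * jet_fuel_multiplier / 1e9
power = annual_purchases_usd * dc_power_multiplier / 1e9
print(f"Each year of jet purchases commits ~${fuel:,.0f}B of fuel;")
print(f"each year of data-center builds commits ~${power:,.0f}B of electricity.")
```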

The inexorable march of technology progress for things that use energy creates the seductive idea that something radically new is also inevitable in ways to produce energy. But sometimes, the old or established technology is the optimal solution and nearly immune to disruption. We still use stone, bricks, and concrete, all of which date to antiquity. We do so because they’re optimal, not “old.” So are the wheel, water pipes, electric wires … the list is long. Hydrocarbons are, so far, optimal ways to power most of what society needs and wants.

More than a decade ago, Google focused its vaunted engineering talent on a project called “RE<C,” seeking to develop renewable energy cheaper than coal. After the project was canceled in 2014, Google’s lead engineers wrote: “Incremental improvements to existing [energy] technologies aren’t enough; we need something truly disruptive… We don’t have the answers.”[97] Those engineers rediscovered the kinds of physics and scale realities highlighted in this paper.

An energy revolution will come only from the pursuit of basic sciences. Or, as Bill Gates has phrased it, the challenge calls for scientific “miracles.”[98] These will emerge from basic research, not from subsidies for yesterday’s technologies. The Internet didn’t emerge from subsidizing the dial-up phone, or the transistor from subsidizing vacuum tubes, or the automobile from subsidizing railroads.

However, 95% of private-sector R&D spending and the majority of government R&D is directed at “development” and not basic research.[99] If policymakers want a revolution in energy tech, the single most important action would be to radically refocus and expand support for basic scientific research.

Hydrocarbons—oil, natural gas, and coal—are the world’s principal energy resource today and will continue to be so in the foreseeable future. Wind turbines, solar arrays, and batteries, meanwhile, constitute a small source of energy, and physics dictates that they will remain so. Meanwhile, there is simply no possibility that the world is undergoing—or can undergo—a near-term transition to a “new energy economy.”

30 Year Anniversary of the UN 1989 “10 years to save the world” Climate Warning

Global warming was not reversed by the year 2000 – yet we are still here.

U.N. Predicts Disaster if Global Warming Not Checked
PETER JAMES SPIELMANN June 30, 1989

UNITED NATIONS (AP) — A senior U.N. environmental official says entire nations could be wiped off the face of the Earth by rising sea levels if the global warming trend is not reversed by the year 2000.

Coastal flooding and crop failures would create an exodus of “eco-refugees,” threatening political chaos, said Noel Brown, director of the New York office of the U.N. Environment Program, or UNEP.

He said governments have a 10-year window of opportunity to solve the greenhouse effect before it goes beyond human control.

As the warming melts polar icecaps, ocean levels will rise by up to three feet, enough to cover the Maldives and other flat island nations, Brown told The Associated Press in an interview on Wednesday.

Coastal regions will be inundated; one-sixth of Bangladesh could be flooded, displacing a fourth of its 90 million people. A fifth of Egypt’s arable land in the Nile Delta would be flooded, cutting off its food supply, according to a joint UNEP and U.S. Environmental Protection Agency study.

“Ecological refugees will become a major concern, and what’s worse is you may find that people can move to drier ground, but the soils and the natural resources may not support life. Africa doesn’t have to worry about land, but would you want to live in the Sahara?” he said.

Read more: https://www.apnews.com/bd45c372caf118ec99964ea547880cd0

Link to a PDF copy of the AP article, in case the original is “disappeared”.

What other great examples of failed climate warnings can you remember?

Why do perfectly intelligent people believe in Climate Change?

Andy Edmonds, PhD

The operative word in this somewhat leading question is “believe”. Global warming is a belief system with all the characteristics of a religion.

Look at how non-believers are vilified. Look at how we are told we must make sacrifices to the god of global warming; to sacrifice little old ladies through their unaffordable heating bills; to sacrifice our countryside to windmills and solar farms; to sacrifice our cars and our mobility.

How did we get to this point in our supposedly rational age?

The answer, I believe, lies in the history of ideas themselves. To grossly, but unavoidably, oversimplify things, the argument is as follows:

One of the major purposes of religion is to explain the seemingly random events that occur to us as individuals and societies.

The demise of Christianity in the West has led to, or was caused by, a rise in scientific rationalism. The latter has given us better explanations for the events in the world, and held out the promise, as espoused by Laplace in 1813, of total predictability, of understanding everything scientifically.

Thus, one all-powerful God has replaced another.

The success of scientific rationalism has spawned political and social ideas that ape the promise of predictability and explainability that Science promises in the physical world.

Unfortunately, since the start of the 20th century, various chinks in the scientific rationalists’ armour have appeared. Mathematical research, encouraged by the growth of computing, has shown conclusively that parts of the world, and the things in it, are not predictable or computable without infinite computing resources.

The God of Scientific rationalism is not all-powerful.

Climate, it turns out, is one of the things, like weather, that cannot in practice be predicted long-term. Other examples are social, financial and political phenomena.

It would take many thousands of words to justify the last claim, but it is true, and a fact well known to researchers in Complexity.

A God is, of course, either all powerful or a false God. There can be no half measures, and a Scientific area of study that cannot make predictions has no value.

To believe in climate change is to believe in the God of Scientific rationalism. To embrace the unfortunately complex idea that parts of our world are ultimately and forever unpredictable is to give up on this God.

Most people, while capable of deep thought at times, live life with a set of heuristics that they use to simplify things. The idea that science is completely trustworthy, and capable of predicting anything, is one of these heuristics. The idea that this might not be true is an exceedingly tough thing to sell.

Although there is some element of conspiracy in the global warming scare (climate scientists have wilfully ignored the evidence that their predictions are worthless), global warming has been just very convenient for those with a religious predisposition and with a fear of Chaos. Unfortunately, these people include politicians and princes. People who are already predisposed to an ideology, to simple explanations for complex things, find global warming very seductive.

So, while the scientific basis for belief in climate change crumbles, the religious argument increases in strength.

Because letting go of these ideas will cause so much psychological distress to the holders, they will hang on to them ’til the last minute. At the time of writing, the UK’s outgoing prime minister has just committed the UK to zero carbon by 2050, without a vote or any democratic oversight, and at a cost of trillions of pounds.

This was a religious act. Other religious acts in the past have given us the great cathedrals, the Parthenon and Angkor Wat. On the other hand, some have given us the inquisition and jihad.

This decision definitely falls in the latter category.

I’m sure, dear readers, that many of you will profoundly disagree with the preceding ideas. If so, all that I ask is that you question yourselves as to why. Are my arguments rubbish? Or is it the uncomfortable feeling of having cherished ideas challenged? If it’s the latter, you’ve got religion, my friend.

Andy Edmonds gained his PhD in the analysis of time series for Chaos in 1996. Although not a climate scientist, he is an expert in the mathematics and techniques underlying the modelling of phenomena such as climate.

Do any of these “climate change” eggheads realize how stupid they sound?

MrEricksonrules

“The sky is falling, the sky is falling!”

“Climate change!  Global warming!  The ice is melting!  The oceans are rising!”


Although this is a recurring occurrence with these alarmist propagandists, most recently I’m referring to a couple of articles that I came across.

The first article is by Christopher Carbone of Fox News, and the headline states, “Mysterious freshwater reservoir found hidden beneath the ocean!”

My first thought is, “okay, this sounds pretty interesting,” but the more I think about it, the less surprised I am by the discovery.

But they’ve piqued my interest…, so let’s proceed.

My next thought is, “Aren’t most things in life and our planet “mysterious?”

I would think “mysterious” is a word that scientists would not be too fond of, however, as it seems to imply something not very scientific, but more supernatural, more beyond our understanding.

The truth is that there is a heck of a lot more that scientists don’t understand than they do understand.

Carbone continues, “Scientists discover world’s largest freshwater aquifer underneath the ocean floor.”

“Surveying the sub-seafloor off the eastern coast of the United States, researchers at Columbia University uncovered what appears to be the world’s largest freshwater aquifer. Believed to hold at least 670 cubic miles of fresh water, the discovery could usher in similar discoveries for other regions throughout the world.”


“The surprising discovery, from a new survey of the sub-seafloor off the northeast U.S. coast by researchers from Columbia University, appears to be the largest formation of this type anywhere in the world — stretching from Massachusetts to New Jersey and extending continuously out about 50 miles to the edge of the continental shelf.”

“Researchers said that if it was discovered on the surface it would create a lake covering some 15,000 square miles.”

That would be about half the size of Lake Superior, or about two-thirds the size of Lake Michigan.

“We knew there was fresh water down there in isolated places, but we did not know the extent or geometry,” lead author Chloe Gustafson, a Ph.D. candidate at Columbia University’s Lamont-Doherty Earth Observatory, said in a press statement.

Okay…, this is all very well and good…, but I would have to question whether “we” knew this “fresh water” was down there, or if “we” only suspected it.  I don’t ever recall hearing anything about this type of thing before.

But here’s the kicker that justifies the use of the term “egghead.”

“Scientists also said that if the water was to ever be processed for consumption, it would need to be desalinated.”

Wait…, what?

Desalinated?

You “scientists” do understand that if the water would need to be “desalinated,” THEN IT’S NOT FRESH WATER!  IT’S SALT WATER!

I’m sorry, but am I missing something?

“The study was originally published in the journal ‘Scientific Reports.’”

And none of the other “scientists” felt it necessary to point out that referring to salt water as fresh water kind of changes the whole concept of the report?

Brilliant.


Next we have an article by Karl Mathiesen for “The Guardian” website that asks, “Why is Antarctic sea ice at record levels despite global warming?”

Good question!

How dare this ice act in a way that contradicts all of our “climate change” claims!

“While Arctic sea ice continues to decline, Antarctic levels are confounding the world’s most trusted climate models with record highs for the third year running.”


So the Earth is “confounding” “the world’s most trusted climate models” with its ice growth? And for the third year in a row?

This sure doesn’t jibe with the “climate change propaganda” I’ve been hearing over the past couple of years.

How about you?

And doesn’t it make sense that while the Arctic ice levels are in decline, the Antarctic ice levels are increasing?

You know…, I bet if you looked back in history, at times when the Antarctic ice levels were in decline, the Arctic ice levels were on the rise.

Just a guess.

Nothing scientific, but…, hey…, at least their claims and my claims would have that in common!

Mine would just make more sense, that’s all!


“Antarctic ice floes extended further than ever recorded this southern winter, confounding the world’s most-trusted climate models.”

“Ice floes extended further than EVER recorded!”

“Ever” is a long time.

‘“It’s not expected,’ says Professor John Turner, a climate expert at the British Antarctic Survey. ‘The world’s best 50 models were run and 95% of them have Antarctic sea ice decreasing over the past 30 years.’”

Like Gomer used to say, “Surprise, surprise, surprise.”

If those are your “50 best models,” and they are all pathetically wrong, what are you basing your claims on and why should anyone listen to anything you have to say?

Just sayin’.

“But Dr. Claire Parkinson, a senior scientist at Nasa’s Goddard Space Flight Centre, says increasing Antarctic ice does not contradict the general warming trend, ‘Not every location on the Earth is having the same responses to climate changes. The fact that ice in one part of the world is doing one thing and in another part ice is doing another is not surprising. The Earth is large and as the climate changes it is normal to see different things going on,’ says Parkinson.”


Wow.  You are wise, Dr. Claire.  I’m pretty sure that most 5th graders could have made those deductions.

And basically what you’re saying is that no matter what happens with the Earth’s climate, we can twist it around to support our claims of global warming.

The “climate” changes all of the time, and we’ll give you that.  It’s been changing since the beginning of time, and all by itself, with no help from humans.


“In a video made by Eco Audit reader and journalist Fraser Johnston, Dr. Guy Williams, a sea ice scientist at the Tasmanian Institute for Marine and Antarctic Studies, says that even though it had fooled climate models the increasing sea ice was well understood by scientists.”

‘“In some ways it’s a bit counterintuitive for people trying to understand how global warming is affecting our polar regions, but in fact it’s actually completely in line with how climate scientists expect Antarctica and the Southern Ocean to respond. Particularly in respect to increased winds and increased melt water,’ said Williams.”

Okay…, so these ice occurrences are “well understood” and “completely in line with how climate scientists expect Antarctica and the Southern Ocean to respond,” yet earlier, Professor John Turner was quoted as saying these results were “not expected.”

So what is it?  Was this ice situation expected by you “scientists” or not?

It kind of sucks when reality doesn’t line up with your propaganda, doesn’t it, docs?

I get the feeling that the next “climate change” study that we get to read about will begin with the words, “Once upon a time…”



NOTE:  If you’re not already “following” me and you liked my blog(s) today, please “click” on the comment icon just to the right of the date at the bottom of this article.  From there you can let me know you “like” my blog, leave a comment or click the “Follow” button which will keep you up to date on all of my latest posts.

Thank you, MrEricksonRules.

MIT Doctorate Climate Scientist Slams GW Claims: Based On “Untrustworthy, Falsified Data”…”No Scientific Value”!

In a newly released Kindle book that is set to peeve established climate science, an MIT doctorate climate researcher blasts alarmist claims of a warming planet and illustrates how temperature data are untrustworthy and far too scant to draw sound conclusions.

By Kirye and Pierre Gosselin

Dr. Kiminori Itoh just brought to our attention a recently released Japanese climate-skeptical Kindle book, titled Kikou Kagakusha no Kokuhaku: Chikyuu Ondanka wa Mikenshou no Kasetsu, authored by Dr. Mototaka Nakamura, a scientist who received his doctorate from MIT.

The book’s title translates into English as: “A climate scientist’s confession – Global warming theory is an unproven hypothesis“.

Climate scientist Dr. Mototaka Nakamura’s recent book blasts global warming data as “untrustworthy”, “falsified”.  Image: http://iprc.soest.hawaii.edu/people/nakamura.php

In his book, Dr. Nakamura explains why the data foundation underpinning global warming science is “untrustworthy” and cannot be relied on.

“Not backed by demonstrable data”

He writes that although many people, including a lot of climate researchers, believe it is a confirmed fact that global surface mean temperatures have been rising since the Industrial Revolution, that claim is “not backed by demonstrable data”. He points out:

“Global mean temperatures before 1980 are based on untrustworthy data. Before full planet surface observation by satellite began in 1980, only a small part of the Earth had been observed for temperatures with only a certain amount of accuracy and frequency. Across the globe, only North America and Western Europe have trustworthy temperature data dating back to the 19th century.”

Prestigious career

Dr. Nakamura received a Doctorate of Science from the Massachusetts Institute of Technology (MIT), and for nearly 25 years specialized in abnormal weather and climate change at prestigious institutions that included MIT, Georgia Institute of Technology, NASA, Jet Propulsion Laboratory, California Institute of Technology, JAMSTEC and Duke University.

Failed climate models

Nakamura’s book demolishes “the lie of critical global warming due to increasing carbon dioxide”, exposes the great uncertainty of “global warming in the past 100 years” and points out the glaring failure of climate models.

Only 5% of Earth’s surface adequately measured over past 100 years

According to Dr. Nakamura, the temperature data are woefully lacking and do not allow in any way the drawing of any useful conclusions.

Presently the book is available in Japanese only. What follows are translated/paraphrased excerpts.

For example, Dr. Nakamura illustrates how scant the global temperature data really are, and writes that over the last 100 years “only 5 percent of the Earth’s area is able to show the mean surface temperature with any certain degree of confidence.”

Ocean data extremely scant…

Then there’s the desolate amount of data from the massive oceans. Later Dr. Nakamura describes how the precision of the observed mean temperature from the ocean surface, which accounts for roughly 75% of the Earth’s surface, is questionable to an extreme.

He writes, “The pre-1980 temperature data from the sea and water are very scant” and that the methodology used for recording them totally lacks adequacy.

To top it off: “The climate datasets used for the sea surface water temperature data have added various adjustments to the raw data.”

1 station per 10,000 sq km almost meaningless

Dr. Nakamura also describes how the number of surface stations used globally cannot provide any real accurate temperature picture. He writes: “Experts cannot just decide that 10,000 sq km per station is representative of temperature.”

Later he explains: “If you accept the Earth surface mean temperature’s warming since the Industrial Revolution as the truth, it means you agree with the idea that the Earth surface mean temperature rise can be determined by a biased tiny region on the globe. It is nonsense. Looking at the regions with long term temperature data, you can see that some regions warmed, and some other regions cooled.”

Nakamura’s harsh judgement: “No scientific value”

Finally, Nakamura blasts the ongoing data adjustments: “Furthermore, more recently, experts have added new adjustments which have the helpful effect of making the Earth seem to continue warming”. The talented Japanese scientist deems this “data falsification”.

He concludes:

“Therefore, the global surface mean temperature change data no longer have any scientific value and are nothing except a propaganda tool to the public.”

The Wild Winter of 2018/19 in North America

The snow cover in North America ranked first or second in 50+ years of records from September to November, and still ranked 4th in February.



It fell back in winter but remained above the trendline; 2009/10 was the record high.


Snow lingers in the West, with as much as 4783% of normal snowpack in the central Rockies. Note the below-normal snow-water equivalent (SWE) in the Cascades and northern Rockies this June.


As of two days ago, according to SnowBrains, skiers were still skiing on 100″ of snow at Mammoth Mountain – 292% of normal.


More snow will fall as the solstice passes.


COLORADO’S STATEWIDE SNOWPACK HOLDING STRONG AT 1301% OF NORMAL — GRAND SOLAR MINIMUM

JUNE 20, 2019 CAP ALLON

Following an historically snowy winter and spring, Colorado’s snowpack currently stands at 1301% of normal, with highs peaking at 3328% in the San Juan Mountains, according to Natural Resources Conservation Service data.

As of Wednesday, Colorado’s snow-water equivalent was 5.2 inches. The median for June 19, based on 30 years of record-keeping, is just 0.2 inches.

“By June 19, the snow is usually all melted out, at least from where we measure,” said Peter Goble, a climatologist with the Colorado Climate Center. But as of Wednesday, a staggering 18% of this season’s snow has yet to melt.

This past winter brought one of the best snowpacks to Colorado in over 40 years. Then, the coldest May since 1995 (solar minimum of cycle 22) set the conditions for the pack to linger.
Furthermore, all that spring runoff means Colorado’s reservoirs are now expected to completely refill — the Lemon and Vallecito reservoirs, for example, are already at about 90% capacity.

“It looks like many reservoirs in the state will fill,” Goble said. “Even some of the ones that were critically low over the winter following the drought in 2018.”

AGW Alarmists will have to pick a different cherry to cry about.

In addition, winter still hasn’t completely released its icy grip on Southwest Colorado — the NWS is calling for more anomalous cold and even snow in the high country through Saturday night.

—————–

The Finnish Meteorological Institute shows the total snow mass for the Northern Hemisphere was well above the 1982–2012 average in late winter and early spring.



SNOW LINGERS IN THE WISCONSIN CITIES OF EAU CLAIRE AND APPLETON, EVEN WITH SUMMER JUST DAYS AWAY

JUNE 19, 2019 CAP ALLON

This year’s record-breaking winter dumped almost 100 inches of snow on Wisconsin’s Eau Claire and Appleton, and for parts of those cities, that snow is proving particularly stubborn, refusing to melt even with the start of summer now just days away.


Eau Claire’s Galloway Street is one of those places. The street was a dumping ground for all the snow that accumulated roadside throughout the winter. Persistent cold in the city has meant the pile has stuck around a lot longer than normal:

WEAU 13 News@WEAU13News

This year’s record-breaking winter dumped almost 100 inches of snow on Eau Claire. For one part of the city, the snow still isn’t gone. https://www.weau.com/content/news/For-parts-Eau-Claire-there-are-still-snow-piles-waiting-to-melt-511425652.html

6:37 PM – Jun 17, 2019


Snow remains in parts of Eau Claire mid-June


The story is the same for towns and cities across much of Wisconsin, including Appleton, located a three-hour drive east of Eau Claire.

The Appleton Post Crescent reports that the same parking lot held powder into June last year as well, but that was thought to be due to the 20-plus inches of snow that fell during a record-setting mid-April blizzard.

The city doesn’t have the same excuse this year.

It’s the anomalous cold keeping the snowpack in place this time around, which correlates neatly with the sun entering its next Grand Solar Minimum cycle. The cold times are returning. Even NASA agrees:



Remember 2015 in Boston, where, after 110 inches of snow (most of it falling in five frigid weeks at the end of winter), snow piles remained well into summer.

Rutgers/NOAA shows the spring snow above the trendline for the 3rd straight year. Note that NOAA advised on their web site, around 2000, not to compare spring and summer numbers after 2000 with prior years, as their methods changed towards a more automated, albedo-based approach, and any snowpack beneath conifers, or in the open and not refreshed with new snow, may be missed. Strong blocking in the spring of 2010 (after record snow in the Mid-Atlantic) made the snow disappear very early from eastern Canada and the northeast US mountains; a warm 2012 and 2016’s strong El Nino were also lean years.

Crop Failure Year Looms – Ice Age Pattern?

Written by Musings from the Chiefio


There has been a shift in the weather toward the Little Ice Age pattern, with big storms, late heavy rains, flooding, and even snow into the start of Summer / late Spring at higher elevations.

Not just in the USA, but all over. Europe, China, Russia, Australia, New Zealand, South America. This has resulted in lots of crop losses, very late planting (or even not planting), and price rises.

Some long time ago I did a story about hay. Most city folks don’t think much about hay, but it is what gets your grazing farm animals through the winter and onto fresh spring pasture. Horses and cows are called “hay burners” for a reason. While lately we have gone to a more exotic collection of feeds, including DDG (distillers dried grains) from making all that ethanol for diluting gasoline, hay is still an essential.

We entered this year with low hay inventory due to low rainfall in the northern hay producing regions in prior years. Then this year has been so wet that transporting and harvesting hay have been problematic. Finally, a very late start to spring pasture growth means feeding hay longer… when none can be had… That’s a problem.

So hay prices have shot way up. Folks who feed hay to cattle are selling the animals to meat packers early (so buy and freeze some beef now…) while folks who have horses are paying any price to keep them fed and bedded (much hay is needed to keep barns functioning even if not eaten).

It is highly likely there will be a big spike in meat prices after the effects work through the system. Now add in that China is having a terrible time with African swine fever and is trying to buy up replacement pork / pigs from all over the world (so pork prices will not be low any time soon), and that chickens need “chicken feed” that is largely corn and soybeans (both late to plant, so likely a low yield), and you can see where this is going.

Here’s where we were 3 months ago. It has not gotten better.

https://www.agriculture.com/news/crops/hay-shortage-grows-prices-nearly-double

(All CAPS theirs)

HAY SHORTAGE GROWS, PRICES NEARLY DOUBLE
HAY BUYERS NEED WINTER TO GET OVER SOON.
By Mike McGinnis
4/18/2018

DES MOINES, Iowa — As winter storms continue to pound the upper Midwest, cow/calf and feedlot operators are running out of hay to feed their animals.

With snow stunting the growth of spring pastures, the depth of the hay shortage that started in the drought-stricken fall of 2017 has been exacerbated.

Usually cattle farmers can kick animals out to pasture May 1, but that will not be the case this year.

So, the need for hay is extending further into spring than normal.

Paul McGill, owner of Rock Valley, Iowa, Hay Auction Co., sells hay to buyers in Iowa and Minnesota. “We need winter to get over soon,” McGill says.

Well the rains and flooding have continued across the Midwest. Some parts have dried enough they can likely start to pasture the animals, but a lot of the land is still flooded or so soggy a cow will get stuck. In others, it may be dry but a couple of months of growth has been lost. It isn’t over even when the water dries off.

HAY PRICES SURGE
Of course, this a supply/demand story right now for the hay market.

Large round bales of hay are selling for $75 to $90 per ton higher than a year ago, McGill says.

Specifically, alfalfa-grade hay bales are priced between $140 and $165 per ton, while grass, midquality hay bales are selling for $125 to $150 per ton.

This week’s blizzard cut McGill’s northwest Iowa auction company’s sales of hay that it does have to offer.

“On Monday, we moved only 14 semi-loads of large bales vs. 92 semi-loads a week ago. Since the first of the year, we have seen sales below average,” McGill says.

There is some hay around, it is just hard to get to it, he says.

What’s the government got to say?

https://www.ams.usda.gov/mnreports/sf_gr315.txt

Randomly taking the first report, from South Dakota (which ought not to be flooded by the Mississippi…): it looks like they have started getting hay loads, with 30 loads, double two weeks ago and much more than last year. Alfalfa is now $165 / ton for new crop when it was $140 – $165 just a couple of months ago. The mid-grade “good” quality “grass” is at $145 when it had been $125 – $150. That alfalfa in “fair” condition is available at $115 +/- so somebody will be eating crummy hay…

I’ve bolded a couple of places where they say some kinds of cheaper hay are now “scarce”. One of them says “scare” and I’ve got to wonder if that’s a typo or a slip of the emotional state… Also note these prices are FOB SD so trucking it to Iowa is going to cost… (Iowa gets a lot of hay from South Dakota)

SF_GR315
Sioux Falls, SD Tues June 11, 2019 USDA-SD Dept of Ag Market News

Corsica, SD Hay and Straw Auction for Monday, June 10, 2019

Receipts: 30 Loads Two Weeks Ago: 15 Loads Last Year: 13 Loads

All prices dollars per ton FOB Corsica, SD.

One load Small Squares equals approximately 5 tons; Large Squares and
Large Rounds range from 10-25 tons per load.

Alfalfa: Premium: Large Rounds, 1 load 162.50 (New Crop). Good:
Small Squares, 1 load $5.10/bale (New Crop); Large Rounds, 3 loads
122.50-127.50 (1 load 127.50 New Crop 10-15% Moisture). Fair: Large
Rounds, 8 loads 112.50-117.50. Utility: Large Rounds, 1 load 102.50.

Grass: Good: Large Rounds, 1 load 145.00. Fair: Large Rounds, 8
loads 110.00-127.50. Utility: Large Rounds, 3 loads 97.50-102.50.

Straw: Scare.

Millet Hay: Large Rounds, 1 load 87.50-90.00.

Corn Stalks: Scarce.

      Alfalfa guidelines (domestic livestock use and not more than 10 pct 
grass)
Quality       ADF      NDF       RFV       TDN-100 pct   TDN-90 pct   CP
Supreme       <27      <34      >185         >62          >55.9       >22
Premium      27-29    34-36    170-185    60.5-62        54.5-55.9  20-22
Good         29-32    36-40    150-170      58-60        52.5-54.5  18-20
Fair         32-35    40-44    130-150      56-58        50.5-52.5  16-18
Utility       >35      >44      <130         <56          <50.5       <16

RFV calculated using the WI/MN formula. TDN calculated using the
western formula. Quantitative factors are approximate and many factors
can affect feeding value. Values based on 100 pct dry matter.

Quantitative factors are approximate, and many factors can affect
feeding value. Values based on 100 pct dry matter. End usage may
influence hay price or value more than testing results.

   Grass Hay guidelines
Quality       Crude Protein Percent 
Premium            Over 13
Good                  9-13
Fair                   5-9
Utility            Under 5

Source: USDA-SD Dept of Ag Market News Service, Sioux Falls, SD
605-372-8350
http://www.ams.usda.gov/mnreports/SF_GR315.txt
http://www.ams.usda.gov/lpsmarketnewspage

0837c rmk

Then Ice Age Farmer has a couple of videos on corn, soybean, fruit and more. He can be a bit prone to “talking things up” but has a good collection of sources. And yes, I do think you ought to have some kind of food storage system (covered in depth in prior articles here: https://chiefio.wordpress.com/category/emergency-preparation-and-risks/ on “Dry Canning” and Food Storage in jars).

I do not think we’ll see much more than a meat price hike and some expensive farm feed. Since about 1/3 of corn goes into making gasoline that’s not as effective (hey, I’ve measured my mpg loss…), we can make up for a 1/3 loss of the corn crop by just putting the “ethanol mandate” on hold for a while. Premium gas will likely take a price hike, as a lot of ethanol is used to blend it for higher octane (with a cheaper gasoline base). Still, if folks are stupid, and our lawgivers are, they will do nothing but talk and fools will moan about “Global Warming”… so better to take care of it yourself.

I do believe in growing some percentage of your own food, if at all possible. I’ll have an update on my first hydroponic bed later. In just 2 weeks I have one lettuce transplant about ready for the first harvest! Others about 2 weeks behind it.

Here’s a couple of his videos:

Flooding in the USA and grain:

Fruit in China (& more):
