What Percent Of Global Surface Temperature Data Is “Estimated”?

Over at Watts Up With That?, guest writer John Goetz tells us (via Public Secrets):

As the debate over whether or not this year will be the hottest year ever burns on, it is worth revisiting a large part of the data used to make this determination: GHCN v3.

The charts in this post use the dataset downloaded at approximately 2:00 PM on 9/23/2015 from the GHCN FTP Site.
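
For anyone who wants to follow along at home, the v3 files are plain fixed-width text. Here is a minimal Python sketch of a reader, assuming the column layout documented in the GHCN-M v3 README; the exact filename varies by download, so treat it as a placeholder.

```python
# A minimal sketch of reading a GHCN-M v3 ".dat" file. Layout assumed from
# the GHCN-M v3 README: 11-char station ID, 4-char year, 4-char element,
# then 12 blocks of VALUE(5)/DMFLAG(1)/QCFLAG(1)/DSFLAG(1). Values are in
# hundredths of a degree C; -9999 means missing. Verify against the README
# shipped with your download before relying on this.

def parse_ghcn_v3(path):
    """Yield (station_id, year, month, temp_c, qcflag) tuples."""
    with open(path) as f:
        for line in f:
            station_id = line[0:11]
            year = int(line[11:15])
            element = line[15:19]
            if element != "TAVG":
                continue
            for month in range(12):
                base = 19 + month * 8
                value = int(line[base:base + 5])
                qcflag = line[base + 6]
                if value == -9999:
                    continue  # missing month
                yield station_id, year, month + 1, value / 100.0, qcflag

# Example: count usable monthly values in the unadjusted (qcu) file.
# The filename below is a placeholder; the version string varies by download.
if __name__ == "__main__":
    n = sum(1 for _ in parse_ghcn_v3("ghcnm.tavg.v3.qcu.dat"))
    print(f"{n} non-missing monthly TAVG values")
```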

The monthly GHCN v3 temperature record used by GISS undergoes an adjustment process after quality-control checks are complete. The adjustments are described at a high level here.

The adjustments are somewhat controversial, because they take presumably raw and accurate data, run it through one or more mathematical models, and produce an estimate of what the temperature might have been under a given set of conditions. For example, the time-of-observation adjustment (TOB) takes a raw reading at, say, 7 AM and produces an estimate of what the temperature might have been at midnight. The skill of that model is nearly impossible to determine on a monthly basis, but it is unlikely to consistently produce a result accurate to the 1/100th of a degree stored in the record.
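
To get a feel for why observation time matters at all, here is a toy simulation, not GHCN's actual TOB model and with entirely synthetic numbers, of a min/max thermometer reset at midnight versus at 7 AM:

```python
# A toy simulation of time-of-observation bias; this is NOT GHCN's actual
# TOB model, and every number here is synthetic. A min/max thermometer is
# reset once a day; the value at the reset instant is shared by the two
# adjacent observation days, so a cold 7 AM reading can be counted as the
# minimum of two days, while a midnight reset rarely lands on an extreme.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(30 * 24)  # one synthetic 30-day month, hourly readings
# Diurnal cycle (warmest ~2 PM) plus day-to-day weather noise.
temp = (10 + 6 * np.cos(2 * np.pi * (hours % 24 - 14) / 24)
        + np.repeat(rng.normal(0, 3, 30), 24))

def monthly_mean(reset_hour):
    """Mean of (Tmin + Tmax) / 2 over observation days ending at reset_hour."""
    means = []
    for d in range(1, 30):  # skip the first partial window
        lo, hi = (d - 1) * 24 + reset_hour, d * 24 + reset_hour
        window = temp[(hours >= lo) & (hours <= hi)]  # endpoints shared
        means.append((window.min() + window.max()) / 2)
    return float(np.mean(means))

print(f"midnight reset: {monthly_mean(0):6.2f} C")
print(f"7 AM reset:     {monthly_mean(7):6.2f} C")
print(f"difference:     {monthly_mean(7) - monthly_mean(0):+6.2f} C")
```

The sign and size of the difference depend on the weather in any given month, which is the point: converting a 7 AM record into a midnight-equivalent one is a modeled estimate, not a measurement.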

A simple case in point: the Berlin-Tempel station (GHCN ID 61710384000) began reporting temperatures in January 1701 and continues to report them today. Through December 1705 it was the only station in the GHCN record reporting temperatures. Forty-eight of the sixty possible months in that period reported an unflagged (passed quality-control checks) raw average temperature; the remaining 12 months reported no temperature. The adjustment models revised every one of those 48 months downward by exactly 0.14 C. In January 1706 a second station, De Bilt (GHCN ID 63306260000), was added to the network. For the next 37 years it reported a valid temperature every month, and in most of those months it was the only GHCN station reporting a temperature. Each of those months was revised downward by exactly 0.03 C.

Is it possible that the models skillfully estimated the “correct” temperature at those two stations over the course of forty-plus years using just two constants? Anything is possible, but it is highly unlikely.
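
A check like the one Goetz describes is easy to script. The sketch below, assuming the parse_ghcn_v3 helper above has been saved as ghcn_parse.py and using placeholder filenames, diffs the adjusted and unadjusted series for a single station and tallies the distinct offsets:

```python
# A sketch of the check described above: difference the adjusted (qca) and
# unadjusted (qcu) series for one station, month by month, and tally the
# distinct offsets. Assumes the parse_ghcn_v3 helper from the earlier sketch
# is saved as ghcn_parse.py; the .dat filenames are placeholders, since the
# version string varies by download.
from collections import Counter

from ghcn_parse import parse_ghcn_v3

STATION = "61710384000"  # Berlin-Tempel

def station_series(path):
    """Map (year, month) -> temperature for one station."""
    return {(y, m): t for sid, y, m, t, _ in parse_ghcn_v3(path)
            if sid == STATION}

raw = station_series("ghcnm.tavg.v3.qcu.dat")
adj = station_series("ghcnm.tavg.v3.qca.dat")

# Offset the adjustment models applied, for months present in both files.
offsets = Counter(round(adj[k] - raw[k], 2) for k in raw.keys() & adj.keys())
for offset, count in offsets.most_common():
    print(f"{offset:+.2f} C applied to {count} months")
```

If Goetz's description is right, a run over the early Berlin-Tempel record should show a single offset line: -0.14 C applied to 48 months.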

Let’s not forget that 95% of the models failed to predict the Pause. Most models tend to be wrong, yet members of the Cult of Climastrology still use them to predict doom in 2050 and 2100.

Mr. Goetz provides lots of charts and data, leading to the conclusion:

Overall, from 1880 to the present, approximately 66% of the adjusted GHCN temperature record consists of estimated values produced by adjustment models, while 34% consists of raw values retained from direct measurements. The rural split is 60% estimated, 40% retained. The non-rural split is 68% estimated, 32% retained. Total non-rural measurements outpace rural measurements by a factor of three.
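
One plausible way to reproduce that headline split, though Goetz's exact accounting may differ, is to call an adjusted value "retained" when it exactly matches the unadjusted value for the same station and month, and "estimated" otherwise:

```python
# A sketch of reproducing the headline split. One plausible accounting
# (Goetz's exact method may differ): an adjusted (qca) value is "retained"
# if it exactly equals the unadjusted (qcu) value for the same station and
# month, and "estimated" otherwise. Assumes parse_ghcn_v3 from the first
# sketch is saved as ghcn_parse.py; filenames are placeholders.
from ghcn_parse import parse_ghcn_v3

def load(path):
    """Map (station, year, month) -> temperature for every record."""
    return {(sid, y, m): t for sid, y, m, t, _ in parse_ghcn_v3(path)}

raw = load("ghcnm.tavg.v3.qcu.dat")
adj = load("ghcnm.tavg.v3.qca.dat")

retained = sum(1 for k, t in adj.items() if raw.get(k) == t)
print(f"retained:  {retained / len(adj):.1%}")
print(f"estimated: {(len(adj) - retained) / len(adj):.1%}")
```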

That’s right: 66% is estimated. We shouldn’t forget that not everywhere has actual measuring stations. Goodly chunks of Siberia, the Arctic, Antarctica, Greenland, and Africa have none. The number of ground stations has declined, especially rural ones, which give a much truer reading than stations in suburban/urban areas that are susceptible to UHI/land-use effects. Of course, massive adjustments are applied to the rural ones, mostly upward. Hence the importance of the satellite data. And what do the satellites show? A pause/slight uptick over the last 12-26 years. RSS shows an almost 19-year pause.

Warming itself is not proof of anything other than the Earth being in a Holocene warm period. Creating estimated values, ones that seemingly run much higher than reality, is a way of generating alarm that this is all Mankind’s fault. Junk science.

Crossed at Pirate’s Cove. Follow me on Twitter @WilliamTeach.
