Measurement instruments are not evenly distributed throughout the world.
There are areas that are very well provided with both land-based and marine measurement stations, such as the UK and the US.
Other areas are well supplied with land-based stations but have no readings at sea (East Asia and the Mediterranean coast).
Alaska is well provided with marine sensors but has very few land-based sensors.
Lastly, huge areas (the Indian Ocean, Australia, the North Pole, the North Atlantic and Canada) have very few sensors, on land or at sea.
Furthermore, the NOAA is using fewer and fewer stations to establish a global temperature profile,
citing technological advances and the difficulty of accessing data from old stations as justification.
Let us make a rough analysis of how well stations are distributed.
Let us say that the information provided by a sensor is representative of weather conditions in the surrounding 100 km².
The Earth has a total surface area of approximately 500 million km²;
this means that a reliable global analysis would require at least five million sensors,
which is more than 1,600 times the 3,000 stations being used at the moment.
And that is simply for the calculation of surface temperatures.
This distribution would have to be repeated at every layer of the atmosphere and every depth of the seas.
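As a sanity check, this back-of-the-envelope arithmetic can be reproduced in a few lines of Python. The figures used (500 million km² of surface, 100 km² of coverage per sensor, 3,000 stations) are simply the rough assumptions stated above, not authoritative values.

```python
# Back-of-the-envelope check of the sensor-density argument above.
# All figures are the rough assumptions stated in the text.

EARTH_SURFACE_KM2 = 500e6       # approximate surface area of the Earth
COVERAGE_PER_SENSOR_KM2 = 100   # assumed area each sensor can represent
CURRENT_STATIONS = 3_000        # approximate number of stations in use

sensors_needed = EARTH_SURFACE_KM2 / COVERAGE_PER_SENSOR_KM2
shortfall_factor = sensors_needed / CURRENT_STATIONS

print(f"Sensors needed:   {sensors_needed:,.0f}")     # -> 5,000,000
print(f"Shortfall factor: {shortfall_factor:,.0f}x")  # -> about 1,667x
```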
This simple calculation clearly demonstrates that there are not enough stations to model the surface temperature of the globe,
and satellites cannot replace surface stations.
The reduction in the number of sensors being used is fundamentally unsound:
temperature varies from one place to another and from one hour to the next, and this natural variability can be tracked only by a very dense network of sensors.
Average annual temperatures are given on the NOAA site, in climate information sheets on the ‘Climate Monitoring’ page.
The annual figures published by the NOAA are mostly data in the form of ‘temperature anomalies’ (this is explained later, in section D, ‘Methodology: thinking in terms of temperature anomalies’).
A temperature anomaly is the difference between the average temperature for the year in question and a long-term average (from 1880 to 2000), which serves as the baseline. According to NASA and the NOAA, these data are more appropriate
for calculating averages over space and time because they are representative over much larger areas and longer periods than absolute temperatures (the explanation provided by the NOAA is given later).
However, these data are not very clear for the reader because these annual anomalies are calculated in relation to a ‘sliding’ baseline which changes every year.
For example, the anomaly given for 2005 is in relation to the average between 1880 and 2004, the anomaly for 2006 is in relation to the average between 1880 and 2005, and so on.
Worse still, data are sometimes referenced in relation to the period 1961-1990. Although using a baseline to establish long-term comparisons might initially seem to be a good idea, it loses all meaning if the baseline itself is variable.
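To make these two notions concrete, the short sketch below first computes anomalies against a fixed 1880 to 2000 baseline, and then against a ‘sliding’ baseline that grows by one year each year. The temperature series is purely synthetic (a small artificial trend) and the names are ours, not NOAA's; it is only meant to illustrate why figures referenced to a moving baseline are not directly comparable.

```python
# Illustration of a temperature anomaly and of the sliding-baseline problem.
# The yearly temperatures are synthetic placeholder values, not NOAA data.

# Hypothetical global mean temperature for each year, in degrees Celsius.
temps = {year: 14.0 + 0.005 * (year - 1880) for year in range(1880, 2016)}

def anomaly(year, baseline_start=1880, baseline_end=2000):
    """Temperature for `year` minus the mean over the baseline period."""
    baseline = [temps[y] for y in range(baseline_start, baseline_end + 1)]
    return temps[year] - sum(baseline) / len(baseline)

# Fixed baseline (1880-2000): every year's anomaly shares the same reference.
print(anomaly(2005), anomaly(2006))

# Sliding baseline: the anomaly for year N is referenced to 1880..N-1,
# so each year is measured against a different reference value.
print(anomaly(2005, baseline_end=2004), anomaly(2006, baseline_end=2005))
```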
It is fascinating to see that, on such a heavily debated subject, nowhere on the American Government site is there any mention of a simple, global figure: for year N, the average temperature was such-and-such.
This in itself is enough to set off alarm bells for any mildly curious scientist.
The data on global annual averages are very difficult to obtain, even for recent periods,
because of the varied formats of NOAA information sheets.