One of the fundamentals of the consensus approach to climate change is that increasing temperature should lead to increasing water vapour and cloudiness. One type of data that is measured but not readily available is ‘hours of bright sun’. It was initially measured by the Campbell-Stokes sunshine recorder, developed in the middle of the 19th century, which uses a glass sphere to focus sunlight onto a specially prepared card that shows a ‘burn’ mark when the sun is shining. More recently, sunshine has been measured by electronic radiation sensors. Hours of bright sun vary inversely with cloudiness.

Data on hours of bright sun, among other parameters, are posted on the website of the Hungarian Met Office for four climate stations for the period 1910 to 2000.

The following chart shows data for the four sites for the whole of the period for which they all have hours-of-sun data. There appears to be a break in the measurement method around 1970: before then there is greater variability and less consistency between the stations; afterwards the data are more consistent. (Note that we have presented the full data set even though part of it is not consistent with the later data and could legitimately be excluded.) From 1970 to 2000 the data show a rising trend of 0.018 hours per year. This is equivalent to an increase of 0.54 hours over the 30-year period relative to an average of 5.3 hours of sun per day.

Another similar data set is available from the Australian Bureau of Meteorology website. Here the data are expressed as daily solar exposure in MJ/m². In this case what is measured is solar radiation, which also has an inverse relationship with cloud cover. We chose the 5 stations because they have data and are in different parts of Australia. The trend line shows a steady climb over the 21 years with data, again indicating a reduction in cloudiness. (To create the trend line we replaced the 2005 figures, which are missing for all stations, with the average of the preceding and following years.) The trend is equivalent to an increase of 0.45 MJ/m² over the 21 years relative to an average of 17.9 MJ/m².

The final data set we consider is from the UK and was downloaded from the Met Office website. From the stations on the site we extracted monthly hours-of-sun data for 10 stations with more-or-less complete records from 1931 to 2011. The next chart shows the data for all 10 stations and their average. There is some sign of a trend in the data, with a minimum around 1970/80.

As this graph is a bit confusing we have plotted the average of the 10 stations on a separate graph, together with a 20-year moving trend line. A moving trend line calculates the slope of the trailing 20-year trend for every year from the 20th year with data up to the end of the data set. It shows that in the 20 years up to 1966 the trend was -0.037 hours per year, while in the 20 years up to 2000 it was +0.041 hours per year. Given that hours of sun vary inversely with cloudiness, this indicates that cloudiness not only decreases with increasing temperature but also increases with decreasing temperature.
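The moving trend line described above is just the least-squares slope over each trailing 20-year window. A minimal sketch is below; the series used here is illustrative (a fall to about 1970 followed by a rise, as in the chart), not the actual station records:

```python
import numpy as np

def moving_trend(years, values, window=20):
    """Least-squares slope over each trailing `window`-year period,
    keyed by the final year of the window."""
    trends = {}
    for i in range(window - 1, len(years)):
        x = years[i - window + 1 : i + 1]
        y = values[i - window + 1 : i + 1]
        trends[int(years[i])] = np.polyfit(x, y, 1)[0]
    return trends

# Illustrative series: hours of sun falling to about 1970, then rising.
years = np.arange(1947, 2001)
hours = np.where(years <= 1970,
                 5.5 - 0.037 * (years - 1947),
                 4.6 + 0.041 * (years - 1970))
trends = moving_trend(years, hours)
```

With this synthetic input the 20-year trend ending in 1966 comes out at -0.037 hours per year and the trend ending in 2000 at +0.041, matching the figures quoted.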

These charts use disparate data sets, but they all show one thing: temperature and cloudiness are closely related.


We have earlier commented on trends in temperature (When is a trend not a trend?). Here we look quite simply at the 30-year trend lines from three different temperature data sets and a 23-model ensemble hind-cast/projection. The trends were calculated using the LINEST function in Excel. The year shown in the chart is the final year of each 30-year regression period. As can be seen, the 30-year trend is positive for most of the last hundred or so years. It reached a peak in 2004 or 2005, depending on the data series, and has since declined slightly. Whilst the three data sets, not surprisingly, show very similar trends, the modelled trends are quite different.

Since 1883 there has been a peak or a trough in the trend line roughly every 30 years. Your guess as to what will happen in the future is as good as mine.

On 25 March 2012, the above paragraph was changed, with '30' replacing '25'. The following was also added.

In the next graph we look at the 30-year trend for 7 models. The choice of models was based on those listed in Table 6 of the IPCC “General Guidelines on the Use of Scenario Data for Climate Impact and Adaptation Assessment”, Version 2, June 2007. Where a model has more than one published simulation, its results were averaged before being included in the graph. The simulation results come from the Climate Explorer site.

Elsewhere we have shown the results of the same seven models. Presenting the differences as 30-year trends demonstrates how widely the simulations of the models diverge.


The Georgia Institute of Technology has recently released details of a study of the relationship between Arctic Sea Ice and Northern Hemisphere Snow Cover. The press release summarises it as “The researchers analyzed observational data collected between 1979 and 2010 and found that a decrease in autumn Arctic sea ice of 1 million square kilometers -- the size of the surface area of Egypt -- corresponded to significantly above-normal winter snow cover in large parts of the northern United States, northwestern and central Europe, and northern and central China.”

This goes some way to explaining the fact that whereas Arctic sea ice has been tending to decrease (at 53,000 km²/year), snow coverage has declined more slowly (at 22,000 km²/year). The importance of snow and ice lies in their role in the albedo feedback mechanism. Snow reflects almost all the incoming energy; water and land (at least the northern boreal forests, where most snow falls) reflect about 10%. So, other things being equal, the influence of snow and that of ice are equivalent. But, and it’s a big but, other things are not equal. Both sea ice and snow cover vary seasonally. The following chart shows average monthly values for the period 1978 to 2011. It shows that in winter snow covers a much larger area than sea ice.

However, the albedo effect applies only when the sun is above the horizon. The next chart shows the areas adjusted for solar angle. In this case we have assumed a latitude of 80 °N for sea ice and 70 °N for snow, and multiplied the areas in the first chart by the sine of the sun angle at midday on the 15th of each month (calculated using an online tool). This presents a very different picture and suggests that the influences of snow and ice are comparable, with snow being perhaps predominant.
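The adjustment can be sketched as follows, assuming the stated latitudes and the 15th of each month. The declination formula here is a standard sinusoidal approximation, not taken from the tool the author used:

```python
import math

def midday_elevation(lat_deg, day_of_year):
    """Approximate solar elevation at solar noon, in degrees.
    Uses a standard sinusoidal approximation for the declination."""
    decl = 23.44 * math.sin(math.radians(360.0 / 365.0 * (day_of_year - 81)))
    return 90.0 - lat_deg + decl

def adjusted_area(area, lat_deg, day_of_year):
    """Area weighted by the sine of the midday sun angle; zero in polar night."""
    elev = midday_elevation(lat_deg, day_of_year)
    return area * math.sin(math.radians(elev)) if elev > 0 else 0.0
```

For example, sea ice at 80 °N in mid-January contributes nothing (polar night), while an area of snow at 70 °N in mid-June is weighted by roughly 0.69.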

This calculation is subject to a large number of caveats. The sea ice area is based on the NSIDC ‘sea ice extent’, which shows “the total area of ocean covered with at least 15 percent ice”. This is reasonable as a metric, since almost 90% of sea ice sits below the water line and such submerged ice is not reflecting radiation, but extent is still only an approximation of the reflecting area. The use of 70 °N and 80 °N respectively and of the midday sun angle are also only approximations. Ideally one would need to track the areas of ice and snow at different times of the year and at different latitudes, and the energy reflected at different times of day.

The final chart shows the solar-angle-adjusted ice and snow areas, calculated as for the previous chart. It shows that, although the snow area is larger and has reduced less, the albedo-adjusted ice area has shown a steady decline.



The blog has a thread posted by Barry Bickmore related to an article which appeared in the Wall Street Journal (WSJ). The original article was written by a group of eminent scientists with little specific expertise in the science of climate change. To summarise in over-simplistic terms they said that they could accept Anthropogenic Global Warming (AGW) but not Catastrophic Anthropogenic Global Warming (CAGW). The WSJ published a response by a group of equally eminent climate scientists who supported CAGW.

Barry Bickmore’s post looks at some of the claims in more detail than would be possible in a newspaper column. One point the first group of scientists had made was that climate models had not captured the recent stasis in temperatures. Bickmore’s response was that “individual models actually predict that the temperature will go up and down for a few years at a time, but the long-term slope (30 years or more) will be about what those straight lines say.” Below we show the annual temperatures, expressed in degrees Celsius, for 23 models (downloaded from the Climate Explorer site), the maximum, minimum and average of these 23 models, and the temperature from the HadCRU3 data series. As the HadCRU3 series only gives temperature anomalies, we have adjusted it to give the same mean as the models for the period of overlap.

First of all, it can be seen that the chart supports Bickmore’s point. The average of the models follows the general trend of temperature from 1900 to the present, and many of the individual models have periods with little or no increase even after the effect of CO2 kicked in from the mid-1970s onward. The current period may have some similarities to the period 1910 to 1970, when a strong Atlantic Multidecadal Oscillation led to an enhanced temperature increase in the first half of that period and to a period of stasis in the second half. As the period of stasis, albeit with wide variations, lasted for about a quarter of a century, we may have another decade or so of level temperatures during which it is still possible to defend the models. In that period it is likely that the models will improve and will be better able to represent such periods.

What is surprising is the difference between the models. The average temperature of the ‘hottest’ model is 15.4 °C and of the ‘coolest’ is 12.4 °C. This is, if my maths is correct, equivalent to a difference of 15.92 W/m², an order of magnitude larger than typical anthropogenic forcing estimates.
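That conversion from a 3 °C spread to a flux can be checked with the Stefan-Boltzmann law. Treating the surface as a blackbody gives a figure close to the one quoted; the exact number depends on the baseline temperature and emissivity assumed:

```python
# Rough blackbody check of the model spread, using the Stefan-Boltzmann law.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux_difference(t_hot_c, t_cool_c):
    """Difference in blackbody emission between two surface temperatures (in C)."""
    t1, t2 = t_hot_c + 273.15, t_cool_c + 273.15
    return SIGMA * (t1**4 - t2**4)

delta_f = blackbody_flux_difference(15.4, 12.4)  # roughly 16 W/m^2
```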

Although Bickmore doesn’t mention precipitation, we have produced a similar chart for it.

This also shows a large difference between the models but, unlike the temperature simulation, little evidence that the underlying trend has been captured. In this case the ‘wettest’ model has an average precipitation of 1184 mm/year and the ‘driest’ 918 mm/year. This difference is equivalent to 19.0 W/m², again large compared to anthropogenic forcing.
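The stated flux follows from the latent heat needed to evaporate the extra precipitation. Using 2.26 MJ/kg (the latent heat of vaporization at 100 °C) reproduces the 19.0 W/m² figure; the value at surface temperatures, about 2.5 MJ/kg, would give roughly 21 W/m²:

```python
# Latent-heat flux implied by a difference in annual precipitation.
L_V = 2.26e6                       # latent heat of vaporization, J/kg
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def latent_heat_flux(mm_per_year):
    """W/m^2 needed to evaporate the given precipitation; 1 mm over 1 m^2 = 1 kg."""
    return mm_per_year * L_V / SECONDS_PER_YEAR

diff = latent_heat_flux(1184) - latent_heat_flux(918)   # about 19 W/m^2
```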


All scientists are well aware of the mantra “Correlation does not imply causation” and people have identified examples to illustrate this. Two spurious correlations I remember were one between the number of missionaries sent to Jamaica and the number of illegitimate babies born the following year, and another between the number of nesting storks and the number of babies born in some German towns. That said, correlation can indicate possible links; cancer and smoking being a case in point. When a physical explanation for a phenomenon is known then regression can also indicate the relative importance of different factors.

Scafetta’s models are examples of regression models which assume that global temperature can be explained by astronomical and climatic cycles superimposed on an underlying but unexplained trend. With some justification they have been dubbed ‘climastrology’.

To look at this in a bit more detail I’ve been playing around with two simple regression models. The first uses three parameters: sunspots (SS, as a proxy for solar radiation), optical thickness (OT, to represent aerosols from volcanoes and elsewhere) and the Atlantic Multidecadal Oscillation (AMO, as a representative climatic cycle). This is similar to the model Foster and Rahmstorf used to determine the underlying temperature trend from 1979 to 2010, with the AMO replacing the El Niño index. The most questionable step in both cases is the use of natural climatic oscillations as independent variables. The regression parameters were calculated using the HadCRU3 temperature series and the LINEST function in Excel.

The resulting equation was:

Temperature = 0.62 * AMO + 0.00098*SS + 0.092*OT – 0.44

The equation has an r² value of 0.19 and a standard error of estimate of 0.24 °C. One anomaly is that the coefficient for optical thickness is positive, implying that volcanoes would increase temperature! From these statistics you would not expect the agreement to be very good, and it is not.
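For those without Excel, the LINEST fit is an ordinary least-squares regression. A minimal NumPy sketch of an equivalent is below; the inputs in the commented usage line are placeholders, not the actual HadCRU3, AMO, SS and OT series:

```python
import numpy as np

def linest(y, *xs):
    """Ordinary least-squares fit of y against one or more predictors.
    Returns (coefficients, intercept, r_squared), the key outputs of
    Excel's LINEST for the same inputs."""
    X = np.column_stack(xs + (np.ones(len(y)),))   # add an intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_tot = (y - y.mean()) @ (y - y.mean())
    r2 = 1.0 - (resid @ resid) / ss_tot
    return beta[:-1], beta[-1], r2

# Placeholder usage (substitute the real series):
# coefs, intercept, r2 = linest(temperature, amo, ss, ot)
```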

I then tried a four parameter regression, adding CO2. This gave the following equation.

Temperature = 0.49 * AMO + 0.00035 * SS - 0.15 * OT + 0.0082 * CO2 - 2.98

In this case the r² value is 0.89 and the standard error of estimate is 0.091 °C. The coefficients are now in the right direction. For comparison I have also plotted the ensemble of IPCC climate models; in this case the standard error of estimate is 0.14 °C, rather worse than the regression model.

The regression model does as well as the IPCC ensemble in places where the ensemble performs well (the increase from 1970 to 2000) and does better in places where the ensemble is known to underperform (the increase from 1910 to 1945, the slight decline from 1945 to 1970 and the levelling off from 2000 to 2011). It seems likely that the IPCC models will improve dramatically when they are able to simulate climate oscillations.

After I had developed this model I remembered that CO2 is not the only greenhouse gas, so I modified it to use CO2 equivalent (CO2Eqv). The data were taken mainly from the GISS site, updated to 2011. The new model gave the following regression:

Temperature = 0.55 * AMO + 0.000049 * SS -0.35 * OT + 0.0177 * CO2Eqv - 2.98

In this case the r² value is 0.90 and the standard error of estimate is 0.087 °C - a slight improvement. One interesting difference between the models is that in this model the influence of sunspots is considerably reduced and that of optical thickness (aerosols) is increased.

A final point, which may or may not be significant, but I throw it in for fun. The coefficient for CO2Eqv is 0.0177. During that period CO2Eqv increased from 537.4 to 956.8 ppm and accounted for 0.74 °C of warming. The ratio of the increase in CO2Eqv was 1.78. This implies a CO2Eqv sensitivity of 0.89 °C for a CO2Eqv doubling (0.74 * log(2)/log(1.78)), a figure close to most estimates. Given the warning with which I started this piece about not reading too much into correlations, it should be treated with caution.
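The arithmetic of that last step can be checked directly, using only the numbers quoted above and the usual assumption that temperature responds logarithmically to greenhouse-gas concentration:

```python
import math

c_start, c_end = 537.4, 956.8   # CO2Eqv at start and end of period, ppm (from the text)
warming = 0.74                  # warming in C attributed to CO2Eqv (from the text)

ratio = c_end / c_start         # ~1.78
# Assuming a logarithmic response, scale the observed warming from this
# ratio up to a full doubling of CO2Eqv:
sensitivity = warming * math.log(2) / math.log(ratio)   # ~0.89 C per doubling
```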

The modelling using CO2 equivalent was added after the initial post. The final figure with corrected labelling was replaced on 4 March.


In the IPCC Fourth Assessment Report of 2007 many of the graphs use smoothed values of the data series. This is perfectly valid, as it makes it easier to see trends among the year-to-year fluctuations. The methods used are described in ‘Appendix 3.A: Low-Pass Filters and Linear Trends’. For annual data they use a filter with 13 weights: (1/576)[1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1]. An example of this is in Figure 3.8. It is interesting to note that they have used an algorithm which allows smoothing right to the end of the data series.
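The filter itself is straightforward to apply. Here is a minimal sketch; the IPCC's special end-point treatment is not reproduced, so the six points at each end are simply left unsmoothed:

```python
import numpy as np

# The 13 weights quoted above; they sum to 576, so dividing normalises them.
WEIGHTS = np.array([1, 6, 19, 42, 71, 96, 106, 96, 71, 42, 19, 6, 1]) / 576.0

def smooth(series):
    """Apply the 13-point low-pass filter to an annual series.
    The six points at each end are left unsmoothed (the IPCC uses a
    separate boundary treatment to smooth right to the ends)."""
    series = np.asarray(series, dtype=float)
    out = series.copy()
    half = len(WEIGHTS) // 2
    for i in range(half, len(series) - half):
        out[i] = WEIGHTS @ series[i - half : i + half + 1]
    return out
```

Because the weights sum to 1, a constant series passes through unchanged, which is a quick sanity check on any low-pass filter.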