Discussion Posts by www.climatedata.info



Peer Review

Peer review is the process publishers of scientific journals use to ensure that the papers they publish are of an acceptable standard. When they receive a paper for publication, they send it to a small group of people who have already published papers in the same field, the ‘peers’ of the author, for an opinion. If the opinion is favourable they publish the paper; if not they reject it.

Much of the discussion around the leaked emails from the Climatic Research Unit at the University of East Anglia focussed on the peer review process. It appeared that some climate change scientists tried hard to stop sceptical papers from being published or, if they had been published, from appearing in IPCC reports. At the same time, these same climate scientists were citing the absence of critical peer-reviewed papers as evidence that the scientific consensus accepted the concept of global warming.

One of the factors behind this was clearly shown in the Wegman report into the validity of the temperature “hockey stick”, which found that many papers on palaeoclimatology were published by a relatively small group of authors who co-authored papers with each other.

It is clearly not in the interest of Science (with a capital S) that valid criticisms should be suppressed; but nor is it in the interest of a journal’s reputation to publish below-standard papers. Journals should have a clearly stated policy on the standards they expect for publication: awareness and understanding of the work of other experts, a clear statement of the advances or differences relative to other published work, and a description of the data and analytical methods used. Provided a paper meets these criteria, it should be published.

Whilst it is relevant for a journal to identify the areas of research in which it is seeking papers, it is invidious for it to identify what ‘line’ it takes. For example, the British Royal Society has a statement on its website showing that it clearly accepts that climate change is caused by people. This is wrong. It is of course perfectly reasonable for distinguished Fellows of the Royal Society to hold views on climate change, to express those views, and even to write to The Times with the letters FRS after their names. But no matter how many of the Fellows hold a particular view, it should never become the view of the Society itself.

Often the reason that critics use the blogosphere is that their comments relate to (sometimes blatant) flaws in the original paper. Journals are understandably reluctant to publish such criticism, as it reveals flaws in their review process. (For a humorous take on this, see: http://www.scribd.com/doc/18773744/How-to-Publish-a-Scientific-Comment-in-1-2-3-Easy-Steps)

An egregious example was the use of inverted sediment data. A paper (Mann et al., Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia, PNAS, 2008) was published on the assumption that extra lake sediment was indicative of increased temperature, whereas the converse is true. What is more, in this case the amount of sediment had been artificially increased in recent years by nearby road building and agricultural work, so any conclusions drawn from it would have been false. To expect a sceptic to produce a peer-reviewed paper reworking the misleading data is obviously not justified.

A related and equally important aspect of testing the validity of published papers concerns the release of data. In the past a scientist would do an experiment and report the results in a journal. Other scientists would try to replicate the result. If they got the same result, the findings of the initial experiment would be confirmed; if not, they would be rejected. In the case of climate science, ‘the experiment’ consists of collecting, processing and analysing data. The hard work is in the collection and processing of the data; the interesting work is in the analysis. This might, for example, mean travelling around remote parts of Siberia taking cores from trees, carefully preserving the samples and analysing them later in a laboratory to quantify ring growth rates as a proxy for temperature. It is therefore understandable that, after doing all the hard work, scientists would be reluctant to give their data away to all and sundry who could then enjoy analysing them. It is a bit like asking a ‘traditional’ scientist if you can come round and work in the laboratory that was used for the initial experiment.

On the other hand, in most cases the cost of collecting the data came from the public purse, so whilst the original worker might feel they have intellectual ownership of the data, in reality it should be in the public domain.

We believe that data should be made available at the time of publication, but accept that it is reasonable to make a small charge (similar to that made for copies of articles to non-subscribers of a journal), to require a clear statement of why the data are needed, and to require an acknowledgement of the source in any subsequent publication.