Climategate, Global Warming, emails, the 'hockey stick', data manipulation and Copenhagen...
Unless you have been living under a rock for the past ten years, you will have no problem recognizing the term 'Global Warming' and what it implies: that human beings, through their polluting industrialization, have put enough greenhouse gases into the atmosphere to push the normal environmental processes into an upward spiral of increasing temperature.
The trouble is that this may be right, but some of the original science behind the whole theory (and it is a theory; it cannot logically be completely proven until it either happens or does not) is becoming discredited and looking rather 'shonky' (nice Australian slang for completely broken). So what's going on? (Note: the video below is rather soundbite-focused, but indicates the significance of what has happened.)
Lack of core data availability and data 'selection'
It appears that a fair chunk of the founding papers behind the Global Warming theory were produced by a set of climate scientists using data sets which all overlapped in their use of one particular set, namely tree ring growth data from the Yamal Peninsula in Siberia. This data set showed a pronounced and dramatic uptick in temperatures. Until recently, however, other academics were not able to get their hands on this core data to reproduce the results using independent techniques. This lack of genuine peer review has only come to light through one Canadian mathematician's attempt to reproduce the results.
There also appear to be problems with transparency and repeatability in other core data sets that other scientists have often used as the basis for their models. Quoting Judy Curry (full article reference at the bottom):
The HADCRU surface climate dataset and the paleoclimate dataset that has gone into the various “hockeystick” analyses stand out as lacking such transparency. Much of the paleoclimate data and metadata has become available only because of continued public pressure from Steve McIntyre. Datasets that were processed and developed decades ago and that are now regarded as essential elements of the climate data record often contain elements whose raw data or metadata were not preserved (this appears to be the case with HADCRUT).
Basically, it is no longer possible to go back to the core data and reproduce the models independently.
The scandalous Climategate emails
The other 'smoking gun' in this story is the fact that a possible hacker managed to break into the CRU (Climatic Research Unit, University of East Anglia) and download ten years' worth of emails between the core set of academics behind the Global Warming theory (we say possible hacker because, given the way the emails have been selected, it could also indicate an inside job via a whistle-blower). These are currently on display for anybody to read through, and we advise you to have a look as well. Apart from the usual academic banter and competitiveness, there appears to be an underlying bias towards supporting their theories at all costs, rather than objective and clear consideration of all the facts in play.
Also made available were the source code and comments for some of the models they were running, which show what looks like a lot of 'forcing' of data to produce the required results (as in overly selective use of, or undue adjustment to, the data).
A bit of a 'right-wing' video, but focus on the core source code comments.
So why all the fuss?
If this were just a group of climate change researchers working on their own, it would not be much of an issue, but consider the following facts:
CRU takes raw data as measured from various sources, processes and 'cleans' it and makes it available for monthly updated download on their FTP site for other climate change academics to use. So, if CRU is found to be in error, then all these dependent academics must be considered in error as well.
CRU processed data is used to calibrate satellite proxy temperature readings, so if CRU is found to be in error, then all usage of said calibrated satellite proxy temperature data readings will be in error also.
CRU data is one of four key data sets used by the majority of climate change researchers - 2 ground based, 2 satellite based (to which said calibrations have been applied). Since the CRU set itself plus the two calibrated satellite sets account for three of the four, in essence up to 75% of the climate data in use is at risk of being shown to be invalid if CRU is found to be in error.
These 4 data sets have been used by the IPCC (Intergovernmental Panel on Climate Change) as core data in their research and conclusions - so if the CRU are found to be in error, the IPCC conclusions are also likely to be in error.
So the potential is that a large slice of the significant and noteworthy research undertaken by the climate researchers is at risk of being shown to be faulty.
The other problem is how this has affected research into other areas of climate. In essence, the 'success' of this particular branch of climatic research has drawn more and more focus onto it, starving other branches of climatic research of much needed funds. So if there was alternative evidence (one way or another) to be found concerning the climate, it has been effectively silenced. Through this over-focus on one avenue of inquiry, the validity of the whole body of research comes into question.
The other problem with all of this is that such research is not just an academic exercise; it has become the Rosetta stone of a whole political and business movement, which is in the process of getting us all to sign up to carbon trading as the way to solve the man-made climate change problem. There may well be man-made climate change, but given what the CRU data has shown, it would be most unwise to commit the whole planet to this unless the science behind it has been shown to be rigorous and as exact as possible. We at least owe this to all the other people and causes that could have benefited instead; it is a very large opportunity cost indeed if we have got it wrong. It is a bit like giving the SETI institute several billion dollars to spend on preparing for the arrival of ET - you would consider that a 'bad spend'; yet given the above, they currently have a better chance of finding ET than we have of working out what the climate is doing, given the potentially poor science to date.
BTW: This article is being updated on an almost daily basis in the links section at the end; here we note significant events as they occur, to save you having to search through it all.
Update (14th Jan 2010)
It looks like NASA has been caught 'cherry picking' weather stations to create artificially warmed temperature records.
UK press coverage of the parliamentary review. It does not seem to be going at all well for the CRU and Prof Jones; it seems there is a reasonable chance they might be found at fault.
ClimateGate Book now available
Climategate: The Crutape Letters (Volume 1) by Steve Mosher and Thomas W. Fuller is now available and goes into exhaustive detailed analysis of the CRU emails and data files.
The Climategate scandal covered from beginning to end--from 'Hide the Decline' to the current day. Written by two authors who were on the scene--Steven Mosher and Tom Fuller--Climategate takes you behind that scene and shows what happened and why. For those who have heard that the emails were taken out of context--we provide that context and show it is worse when context is provided. For those who have heard that this is a tempest in a teacup--we show why it will swamp the conventional wisdom on climate change. And for those who have heard that this scandal is just 'boys being boys'--well, boy. It's as seamy as what happened on Wall Street.
Main Stream Media picking up the story (2nd Feb 2010)
It looks like, in the UK at least, the Main Stream Media (as in traditional print media, rather than online blogs and forums) are starting to cover this side of the discussion. See here for 'amazed' online coverage of this fact (i.e. coverage in The Independent and The Guardian, who were previously ignoring the 'anti' side).
Something which requires looking into is how statistical methods concerning confidence are being used in these models. Basically, when you measure something there is a possibility of two kinds of error creeping into the measurement (a short numerical sketch follows the list below):
Measurement error itself, i.e. how accurate the equipment or process is at actually getting a valid reading.
Rounding error, which occurs when a reading is taken to a certain number of decimal places while the 'real' value itself has much higher precision (the car is recorded as doing 79 km/h when it is actually doing 79.99 km/h).
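To make the rounding point concrete, here is a minimal sketch in Python. The numbers are the hypothetical speed example from above, not climate data, and the variable names are our own; it simply shows how much of the true value a rounding or truncating readout can hide.

```python
# Illustrative only: hypothetical speed readings, not climate data.
true_speed = 79.99              # the 'real' value (km/h)
rounded = round(true_speed)     # a rounding display reports 80
truncated = int(true_speed)     # a truncating display reports 79

print(f"true: {true_speed} km/h, rounded: {rounded}, truncated: {truncated}")
print(f"rounding error:   {abs(true_speed - rounded):.2f} km/h (at most 0.5)")
print(f"truncation error: {abs(true_speed - truncated):.2f} km/h (at most just under 1.0)")
```

The point is simply that every recorded reading stands in for a small range of possible true values, and that range has to travel with the reading.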
So a measurement or data point in a series actually represents one point within a range of possible equivalent values. The other thing to remember is that the underlying error in the data is not something you can 'factor out'; it carries through the model all the way to the final results. You can apply averaging across a set of measurements to improve accuracy (see the sketch after this list), but this only works if:
The set of measurements occur in the same environment and on the same equipment.
Any corrected values (like transcription errors, etc) are factored out of the error calculations (you cannot assume your correction of the transcription error is actually correct; it could be out by 10x, out by +/-10, a 4 read as a 9, a 6 read as a 0, a transposition, etc).
Averaging across a time series also carries the derived standard deviation of the series with it into anything that uses the result.
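As a rough sketch of why averaging only helps under those conditions, the following Python snippet simulates readings of a made-up 'true' temperature with both random noise and a constant uncorrected instrument offset. All the values (15.0, 0.5, 0.3) are invented for illustration and have nothing to do with any real station record.

```python
import random

random.seed(0)

TRUE_VALUE = 15.0   # hypothetical 'true' temperature (deg C)
NOISE_SD = 0.5      # random measurement noise (same instrument, same environment)
BIAS = 0.3          # a constant, uncorrected instrument offset

def take_readings(n):
    """Simulate n readings: true value + fixed bias + random noise."""
    return [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD) for _ in range(n)]

for n in (1, 10, 100, 10000):
    mean = sum(take_readings(n)) / n
    print(f"n={n:6d}  mean={mean:7.3f}  error vs true={mean - TRUE_VALUE:+.3f}")

# The random part of the error shrinks roughly as 1/sqrt(n), but the mean
# never converges on 15.0: the +0.3 bias survives any amount of averaging
# and is carried straight through to whatever downstream model uses these
# 'cleaned' values.
```

Averaging knocks down the random scatter, but a systematic error (or a wrongly 'corrected' value) simply rides along into the final result.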
Also, depending on how the data gets used, in certain cases the errors can sum, in the same way that being constantly 1 degree off heading out at sea can leave you way off course in a short time (a toy illustration follows). See this paper for a more detailed analysis of the possible pitfalls with models.
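Purely as a toy version of that navigation analogy (nothing to do with any actual climate model), a constant one-degree heading error can be worked out like this:

```python
import math

heading_error_deg = 1.0   # a constant 1 degree error in heading
for distance_km in (10, 100, 500, 1000):
    # cross-track offset for a straight run with a fixed heading error
    off_course_km = distance_km * math.sin(math.radians(heading_error_deg))
    print(f"after {distance_km:4d} km: {off_course_km:5.1f} km off course")

# The offset never averages away -- it grows with every kilometre travelled,
# just as a small systematic bias in a data series propagates through every
# processing stage built on top of it.
```

After 1000 km you are roughly 17 km off course; small, constant errors compound rather than cancel.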
Note: this is not to say this wasn't being done; rather, if any one part of the statistical data processing is in error, it effectively calls the whole into doubt. This is why it could be so damaging: the CRU provided monthly temperature records, based on processing they did on a large set of sources around the world, to a whole group of academics.
Also not mentioned in most of the discussions online is the effect of 'joining' two data sets together and what that means for confidence and accuracy. For instance, it appears that the tree ring data after 1960 diverged from the direct temperature measurements, so the real temperature measurements were used for that period. The question is whether the earlier proxy data was given the same 'significance' in the model as the later real data. We suspect this will only be answered when statisticians and engineering modelling specialists work through the models.
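To illustrate what splicing could mean statistically, here is a hypothetical sketch: both series below are made up (they are not the Yamal or CRU data, and real reconstructions are far more sophisticated), but they show how a noisy 'proxy' segment and a much tighter 'instrumental' segment carry very different uncertainties once joined into a single record.

```python
import random
import statistics

random.seed(1)

# Hypothetical anomaly series (deg C) -- NOT real proxy or instrumental data.
proxy = [random.gauss(0.0, 0.6) for _ in range(50)]        # noisy tree-ring proxy
instrument = [random.gauss(0.4, 0.1) for _ in range(50)]   # direct measurements

spliced = proxy + instrument   # the 'joined' record

print(f"proxy segment      sd = {statistics.stdev(proxy):.2f}")
print(f"instrument segment sd = {statistics.stdev(instrument):.2f}")
print(f"spliced series     sd = {statistics.stdev(spliced):.2f}")

# Quoting one confidence band for the whole spliced series understates the
# uncertainty of the proxy half and overstates that of the instrumental half;
# the weight ('significance') each segment gets in a model needs to be stated
# explicitly rather than hidden by the join.
```

This is exactly the kind of question a statistician would want answered about the published reconstructions.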
Conclusion
What is there to conclude? Well:
The press, big business, pots of money, vested interest and academia just do not mix well in this space;
We only have ourselves to blame - if more people actually stood up and asked simple questions like 'why?', 'can you prove it?' and 'has this actually been independently peer reviewed?', we might not have got ourselves into this mess.
For such a critical area of research the peer review process as operated has been shown to be inadequate. A much higher degree of reviewing and disclosure must be enacted.
Technology has, to a certain extent, made this situation possible. Think about it: your average desktop computer now has the same computing power as a ten-year-old mainframe. What this means for running climatic models and simulations is that you can either go into more detail, or run a model many more times in the same total time. This makes it that bit easier to 'tweak and tune' models and simulations to get the results you want, or subconsciously want. See the links below on the code comments for examples of this effect.
In an ideal world, all the originally measured source data should be made freely and publicly available, i.e. the data before any adjustment, together with information on error rates for each data source. A 'standard' should also be created for indexing this data (Google or Yahoo up to the job?).
Similarly, the adjusted data should be made available to the public. There will need to be careful 'control scope' analysis done to prevent any one data set becoming too dependent upon other data sets for adjustments and corrections - ideally there should be at least 3 'vertical' stacks of climatic data that are totally independent of each other. This way, errors or bias in any one will not be transferred into the majority, making them easier to spot.
In an ideal world all publicly funded climate models source code should be hosted on public open source project sites. Thereby ensuring complete transparency and traceability.
Something is going on with our environment, whether that is a good thing or not is still unknown.
Is managing CO2 levels really the right climatic lever to be tweaking? A lot of the models have been fixated on this as the way to manage the problem, and those models are based on the work above that is now in doubt... This really all needs complete disclosure and reanalysis to find out what is going on. For instance, nitrous oxide is 298 times more powerful a greenhouse gas than CO2 and accounts for 6% of the greenhouse effect (we produce 30% of the N2O), yet you hear absolutely nothing about it. It also attacks the ozone layer.
So keep doing what you can to reduce your adverse effect on the environment, on the chance that they are right (plus cutting down your environmental pollution is a good thing anyway - you don't need a climate model to work that out); but do keep an open mind on this whole situation. The last thing we and the planet need is people jumping to the wrong conclusions for the wrong reasons; there is not much slack to play with if we are wrong.
What can I do about it?
We hope this has got you thinking and wondering about the science and the motivations in this space. We suggest you do the simple thing of discussing it with your fellow humans; then, like us, you will be utterly amazed by the almost total lack of awareness of this issue among the public at large. Remember, you don't need a PhD to follow this story; the science is just another 'tool' being used (or misused) by people to their own ends. The onus is on the scientists to be just as accountable to you as anybody else would be.
Why is this on here?
I've noticed some people commenting on other blogs (thanks for the links, and keep them coming) that it is strange for such an article to appear on an Eco focused website. What is actually strange to me is that no other Eco focused websites seem to be doing the same. The 'mission statement' behind this site is to provide the facts and educate people on how to be more eco friendly without it costing the Earth (in all senses) - so to me this coverage is squarely within its remit. Have a look at the other articles to see what I mean.
(BTW This site is not funded by any denialist or pro/anti AGW groups - we just want to get to the facts in this issue and keep people informed).
Next page: Related Articles, references and blow by blow coverage.