Well, bumps is over for another year, so some kind of normal service can resume.
WUWT was pushing “Investigation of methods for hydroclimatic data homogenization” by Steirou and Koutsoyiannis. Why? Because it appeared to show problems with station data homogenisation. There was some comedy before AW realised it wasn’t a peer-reviewed paper, just a conference presentation, although he’s still misleadingly calling it a “paper”. Open Mind has largely covered the substance already, but what I wanted to point to was a trail of (sane) blog conversations.
Marcel Crok has a largely uncritical post (uncritical meaning that he is mostly just reporting their results, and repeating their errors, like “Their analysis shows that 2/3 of the stations are adjusted upwards, where the expected proportion would be 1/2”, without actually thinking about them). Crok has another post about the history of the original work, and he also gets a bit sniffy about AW or SM not giving him (Crok) credit for discovering the paper. Expecting good behaviour from either of those two seems rather optimistic to me, but it’s nice to see not everyone has lost hope.
But a far more interesting analysis is presented by Victor Venema who points out the obvious – that the entire basis for the presentation,
In 2/3 of the stations examined the homogenization procedure increased positive temperature trends, decreased negative trends or changed negative trends to positive [but] The expected proportion would be 1/2.
is junk. Or in his words, “plainly wrong”. You can read the obvious reasons why at his posting, or you can think them up for yourself. Then you can spend some time wondering why S+K didn’t do the same.
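One way to see it for yourself is a toy simulation (my own illustration, not VV’s analysis and certainly not S+K’s): if the non-climatic breaks that homogenisation corrects share a preferred sign — as real ones can, from screen changes, relocations and the like — then even a perfect correction will raise the trend at well over half the stations. The break sizes, probabilities and “perfect correction” below are all made up for illustration; only the logic matters.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_years = 1000, 100
years = np.arange(n_years)

def slope(series):
    """Least-squares trend (units per year)."""
    return np.polyfit(years, series, 1)[0]

increased = 0
for _ in range(n_stations):
    # True climate: modest warming trend plus weather noise.
    climate = 0.01 * years + rng.normal(0, 0.2, n_years)

    # Non-climatic break with an ASYMMETRIC sign distribution:
    # here 70% of breaks are spurious cooling steps (illustrative numbers).
    sign = rng.choice([-1.0, 1.0], p=[0.7, 0.3])
    step = sign * rng.uniform(0.3, 0.8)
    break_year = rng.integers(20, 80)

    raw = climate.copy()
    raw[break_year:] += step          # what the station actually records

    homogenised = raw.copy()
    homogenised[break_year:] -= step  # idealised, perfectly correct adjustment

    if slope(homogenised) > slope(raw):
        increased += 1

frac = increased / n_stations
print(frac)  # tracks the share of cooling-biased breaks, not 1/2
```

Since a cooling step late in the record drags the raw trend down, removing it pushes the trend back up, so the fraction of stations with increased trends lands near the fraction of negative breaks (~0.7 here), nowhere near 1/2 — and nothing whatsoever is wrong with the adjustments. The 1/2 null only holds if the breaks are symmetric, which is precisely what you cannot assume.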
Now K has written a follow-up posting at Crok’s which starts off saying “isn’t blog science wonderful? We can all learn so much” and then follows up by learning nothing and simply repeating his errors:
when [VV] says that our (prior) estimates of the expected proportions of data corrected upward and corrected downward should be 1/2 is “plainly wrong”, he would be more convincing if he gave his own estimate, rather than telling there are biases. Also, I am surprised to see that he criticizes our statement that homogenization practices often lead to false results. Is he so sure that they always give correct results?
K simply isn’t thinking. For the first, it is easy to see that the assertion that 50% is the right answer is wrong, without having to propose a different right answer. For the second, the opposite of “homogenization practices often lead to false results” is not “they always give correct results”.
K claims to be writing this stuff up for peer-reviewed submission. Wouldn’t it be nice if he submitted it to one of the open-review EGU journals? After all, he presented it at an EGU session, so it would be entirely appropriate.