The fruitarians are lazy

No no don’t go away, there’s actually some science in this post, courtesy of the increasingly-heavyweight Nick Stokes. Or, perhaps more fairly, whatever science there is comes from NS. But there’s a lot of snark too, as I hope you’d expect. That comes from me. The title isn’t quite right; I could have tried anopsologists but I bet you wouldn’t have recognised that – I wouldn’t have, until I looked it up. But let me attempt to come to the point. Over at JoNova is yet another of those tedious posts where they complain that a weather station has been “adjusted” to show warming (and they’re still gnawing on the same dry bone), and that one should only ever eat raw data because only raw data is good for you. If you find raw “skepticism” too raw, you could try the WUWT echo chamber which, errm, just echoes JoNova.

In this case the station in question is Amberley, and if you care about what’s actually gone on then you should of course read Nick Stokes, and if that isn’t enough you could read part 2. Nick also has a convenient post showing the effects of adjustment on trends, which is handy for pushing at people who claim (without bothering to check, obviously) that adjustments always push the trends upwards. Or this one.

JoNova has a “home crowd” who can be relied on to say the right things, much like some of the WUWT regulars. Here’s one saying the artificial concept of homogenisation (which literally means removing all difference), is an act of unthinking vandalism, which is probably typical enough; I don’t think I’ll bother to quote any more. What’s unfortunately-not-at-all-surprising in this case is that they’re all mouthing off without having bothered to do even the most basic of checks. Is there good metadata for this station, to rule out a move or other change that might justify an adjustment? No. Is that the Bureau of Meteorology’s fault? Not really; the station was run by the RAAF when the “move” occurred. That doesn’t stop some of the commentators there actually literally accusing the BoM of criminal behaviour.

Trivia: Evan Jones makes a cameo appearance, saying “I hate and despise homogenization with an ice-cold passion” which is jolly fun, but he doesn’t tell us why. But it sounds like he’s a fruitarian.

Refs

* If you’re actually interested in the details of homogenisation, then VV is a good place to go. This one might be a good start.
* Oh no, now I need to eat my words! – ATTP on internal variability.
* Constructive and External Validity for Climate Modeling – Serendipity
* A database with parallel climate measurements – VV.
* Climate sceptics see a conspiracy in Australia’s record breaking heat – Graun

236 thoughts on “The fruitarians are lazy”

  1. You can get free software at http://www.homogenisation.org.

    I remember books, movies, even TV episodes from my arguably misspent youth concerning the identification of outliers. The question as to whether said outliers would be “corrected” or “dropped” made for amusing variation.

    Thing is, you start with wanting to remove all of the stations that don’t sing the same song. (And the less metadata, the better.) Whereas I want to find well sited stations with good metadata, no moves, and no TOBS shifts — and then let them sing whatever song they want.

    After my version of Big Brother, there are a lot fewer stations left standing. But those that are, are good. You have ten times as many, but four out of five are no good, and homogenization causes them to corrupt the others:

    Your trend result is higher than if you just took an average. And the trend average is way too high to begin with, thanks to microsite.


  2. Evan, if the new paper bears the same logic as AW’s Antarctica post – you might as well start building a good alias now. As one commenter said at HW:

    “So in one graph that they claim shows a cooling Antarctica, we have

    1 – RSS, an outlier data set
    2 – Tropospheric temps, not surface temps
    3 – Just 60S to 70S

    Is there any measure that would be less representative of antarctica than that?”

    Please tell us again how AW is really interested in the science.


  3. Besides, RSS does cover nearly all the sea ice area (which is the actual subject). It is also followed immediately by UAH, which covers from 60° to -85° and shows the exact same thing.

    And considering the article concerns winds, why is LT inappropriate? Especially in light of the — totally — inadequate (to understate it) surface station coverage.

    P.S., I am not interested in discussing criticisms of every little blog post Anthony ever made. And I only post under my own name.


  4. if it starts to cool, which it may

    LST only (which is all the USHCN measures, anyway). And you can confine that further to the US, since USHCN is CONUS only.


  5. Let me ignore all the other statements I do not agree with. You make one central statement I would love to understand the basis for.

    Evan Jones: “But bad, unchanging microsite is a subtle trend bias that seeps into every pairwise comparison they made.”

    I have asked several times for evidence for this statement and have the feeling I have never gotten one. What is your evidence for the claim that the problem is a gradual change in the temperature at the bad stations?

    Just finding a trend bias is no evidence whatsoever. You can get a different trend due to a jump or due to a gradual change.


  6. It seems pretty clear to me: when pairwise comparisons are made, microsite is not currently taken into account, so stations with good microsite are paired with those that are bad.

    Therefore, the bias is transmitted.

    If one wants to avoid this, one could either drop all the bad stations and homogenize only the good ones, or pairwise compare (both bad and good) with good stations only, using the good station average as a basis to establish outliers.

    The former would result in a homogenized set of Class 12 stations; the latter would be a homogenized full set, but one adjusted to conform with the Class 12 stations.

    As it is, 4 out of 5 pairwise comparisons are with stations with poor microsite. This not only occurs during overall homogenization, but also with MMTS-adjustment, which is also based on pairwise comparison. (I can easily see why Menne is playing with combining the two processes.)
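    Evan’s claim here, that a trend bias in poorly sited stations leaks into well-sited ones through homogenization, can be illustrated with a toy simulation. Everything below is an illustrative assumption of mine: the trends, the station counts, and the crude “nudge toward the network mean” stand-in for real pairwise homogenization.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(30)

def make_stations(trend_per_decade, n):
    # annual anomalies: a linear trend plus weather noise (sd ~0.7 C)
    return (trend_per_decade / 10) * years + rng.normal(0, 0.7, (n, years.size))

good = make_stations(0.185, 20)  # well-sited stations (1 in 5)
bad = make_stations(0.335, 80)   # poorly sited stations (4 in 5)

# crude stand-in for homogenization: nudge each good series
# halfway toward the all-station network mean
network_mean = np.vstack([good, bad]).mean(axis=0)
homog_good = 0.5 * good + 0.5 * network_mean

def trend(stations):
    # OLS slope of the station-average series, in C/decade
    return 10 * np.polyfit(years, stations.mean(axis=0), 1)[0]

print(trend(good))        # close to the assumed 0.185
print(trend(homog_good))  # pulled up toward the (higher) network mean
```

    Whether real microsite bias behaves like the smooth trend assumed here is, of course, exactly what Victor questions below.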


  7. By the way, Victor, I am coming up with interesting results in my regional-as-pairwise attempts to correct for MMTS. I am going 5 years per side (which is plenty, for a jump), which effectively homogenizes a third of our study period for three quarters of our stations. You fiend.

    I am not going to overdo it. One recursive only. I have made the first pairwise adjustments. USHCN has near-perfect metadata for MMTS conversion (and always did), so I need not (and must not) go hunting hobgoblins. Next time, maybe.

    I will now do the second (and final) pairwise between raw and 1st-pairwise-adjusted and that will be all the smoothing I can handle. (I wouldn’t even go that far, but I want to make some accounting for overlapping conversion times. Less of a problem for the Class345s because there are so many.)

    That will produce the final adjusted data. And lord have pity on my miserable homogenizing soul.

    As an aside, it will also be a crude but interesting look at what differences, if any, there are between regions. (Uniformly pouring on MMTS adjustments out of Hubbard or Menne won’t really do this.)
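    Evan’s “difference of means, five years per side” jump estimate can be sketched like this; the series, the step size, and the conversion year below are all invented for illustration:

```python
import numpy as np

def jump_estimate(series, convert_idx, window=5):
    # step size at an instrument change, estimated as the difference
    # of the mean over `window` years after vs. before the conversion
    before = series[convert_idx - window:convert_idx]
    after = series[convert_idx:convert_idx + window]
    return after.mean() - before.mean()

rng = np.random.default_rng(4)
anomalies = rng.normal(0, 0.1, 30)  # a synthetic anomaly series
anomalies[18:] -= 0.15              # a hypothetical MMTS cooling step

step = jump_estimate(anomalies, 18)
adjusted = anomalies.copy()
adjusted[18:] -= step               # remove the estimated step

print(round(step, 3))               # roughly -0.15, plus noise
```

    A real trend through the window would bias this estimate, which is one reason pairwise methods difference against neighbours first.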


  8. I am, of course, splitting the stations into compliant (Class12) and non-compliant groups (Class 345) and only homogenizing each with its own kind, never with the other. It will be interesting to see if MMTS effect on compliant is the same as on non-compliant stations.


  9. Yer honner,

    Stipulating that microsite is as I say, the question is answered each and every time the plaintiff pairwise-compares a Class 12 site with a Class 345 site.

    Try this test: Homogenize the Class 12s separately, pairwised only with other Class 12s. If the result is the same as for Class 12s homogenized the way NOAA does it, you will have proven your point. If the result is significantly different, then I have proven mine.

    I haven’t been plugging stuff into homogenization software. I am doing this myself, by hand, piece by piece, step by step. I can see what the numbers are doing every step of the way — and why.

    Besides, why would you object to binning GHCN by microsite? If I’m right, then it is completely necessary. And if I’m wrong, then no harm done. All you’d have to do is find someone to rate them . . . #B^)


  10. Seriously, my dear V^2, surely you can see that — IF — microsite does transmit a trend bias, that this will seep into all pairwise comparisons between well sited and poorly sited stations.

    If there is no bias, of course, this will have no effect. I am not talking “S”– but if Delta-S exists — if there is a delta — then surely you must understand that this will introduce a bias into such cross-comparisons.

    Do we at least agree this far — with me well understanding that you do not necessarily accept that microsite is a bias in the first place?


  11. Okay, the new study on tornadoes sure looks as if it could have profited from pre-pub independent review.

    Also, a revision to the above: it looks as if I am going to have to do the MMTS for all stations of all classes: There aren’t enough Class 12 stations per region to adequately cross-compare with each other (when I did so, the average was +0.193/decade adjusted vs. +0.192 raw).

    This will be criticized, so I will cave and do both well and poorly sited stations together and let the chips fall as they may.


  12. It’s not pretty when the anti-science denial thugs try to set up fake-front “legit” orgs to launder their ideas.

    How many have they burned through now? From SEPP and The Tobacco Institute to Heartland, the NIPCC and now Anthony’s little coin collector OAS. You have to be pretty gullible to see this as anything but exploitation of the mentally deficient.

    PT Barnum said there’s a sucker born every minute. What he didn’t know is that they’d tend to gather together under the umbrella of science denialism.

    Sou and Greg Laden have already put their take in.


  13. Evan writes: “Okay, the new study on tornadoes sure looks as if it could have profited from pre-pub independent review.”

    Perhaps you ought to read it. Or make an actual factual criticism. AW and Joe D’Aleo apparently didn’t read it before they issued an official OAS statement!? Else they just failed to understand it.

    Joe D’Aleo has been caught peddling crap many times before, but this time he and AW manage to use 4 different graphs all of which are irrelevant to the subject of the paper being discussed. In fact, reading their statement you can’t actually tell what the paper had to say. They never mention its main conclusion. LOL

    Oh, and they also wrote: “It is the opinion of The OAS that this sort of methodology to remove a portion of a dataset to cite a result is unsupportable and without justification.” Please bear that in mind as you slice and dice the temperature data sets. LOL. This level of stupidity and hypocrisy ought to win an award.


  14. For those who haven’t seen the paper, Peak tornado activity is occurring earlier in the heart of “Tornado Alley”, John A. Long and Paul C. Stoy, GRL, 10 September 2014, DOI: 10.1002/2014GL061385, it says:

    “We demonstrate that peak tornado activity has shifted 7 days earlier in the year over the past six decades in the central and southern US Great Plains, the area with the highest global incidence of tornado activity.”

    A fairly commonsense result given the dozens of phenological studies that show that spring is arriving earlier each decade. Obviously the tornado season is tied to the circulation patterns that develop in late spring/early summer and if they’re occurring earlier, then tornados should as well. D’Aleo and Watts apparently misinterpreted this as saying more tornados are occurring … reading is hard …. for deniers.


  15. Interesting that an earlier season does not result in an increase in tornadoes. Does it end earlier? Or is it just longer but less intense?


  16. I think you are missing the point of the “irrelevancy”. It’s not irrelevant:

    Climatic scale detection of earlier onset of tornado activity cannot be dependent upon removal of a portion of the dataset.

    Therefore the official earlier tornado season may be an artifact of improved detection.


  17. It isn’t dependent on the 3 outliers that were removed.

    Improved detection increases the number of tornadoes in a given year due to increasing the probability of detection. There is no reason to believe that tornadoes early (late) in a given year are more (less) likely to be detected.


  18. Evan – remember that they’re looking at the *peak* activity. The center of the distribution. We might expect that since more tornadoes are detected the very earliest and the very latest dates would change, but why would the center of the distribution shift? There is no logical reason. Simply stating more tornadoes are detected is irrelevant to where the center of the distribution resides.
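    The argument above, that detection probability changes the number of tornadoes seen but not the centre of their seasonal distribution, can be checked with a toy simulation (the dates, counts, and detection rates below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# "true" tornado dates (day of year), centred on day 140
true_days = rng.normal(140, 25, 10000)

results = {}
for p_detect in (0.3, 0.9):  # a poor vs. a good detection network
    # each event is detected independently with probability p_detect,
    # regardless of when in the season it occurs
    seen = true_days[rng.random(true_days.size) < p_detect]
    results[p_detect] = (seen.size, seen.mean())
    print(p_detect, results[p_detect])
# the detected count roughly triples, but the mean date barely moves
```

    Only a detection probability that itself varies across the season could shift the centre of the detected distribution.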


  19. Evan: “Okay, the new study on tornadoes sure looks as if it could have profited from pre-pub independent review.”

    Kevin: Perhaps you ought to read it. Or make an actual factual criticism.

    Evan still hasn’t learned that Anthony Watts is totally untrustworthy.

    Anthony says there’s some kind of problem with Long & Stoy 2014. But the problem is with Anthony’s lack of understanding, not with the paper itself. Nonetheless, Evan accepts Anthony’s judgement and, without checking it for himself, repeats it over here.

    Anthony Watts is a fool. Trusting Anthony Watts as a source makes you look like a fool too.

    Actually, it’s rather depressing to read the comments in the “OAS” thread about this paper at WUWT. There are literally dozens of comments that just mindlessly cheer on Anthony’s nonsense. Then Harold Brooks (an actual scientist) appears, and explains to Anthony exactly why he’s wrong. A few people seem to get it, but mostly Brooks’s comment is just wasted on them. Anthony, true to form, refuses to admit his own error and engages in a lot of handwaving instead.

    Pretty typical day at WUWT, all in all.


  20. Evan Jones: “The evidence is, simply, that I have made the year-to-year graphs and the divergence appears gradual, not jumps.”

    Do I recall it correctly that your claim is that during a period of 30 years, the well-sited stations have a trend of +0.185°C/decade and the badly-sited stations have a trend of +0.335°C/decade? The difference over the full 30-year period is thus (0.335 − 0.185) × 3 = 0.45°C.

    I do not think you can eyeball whether a time series has two inhomogeneities (a 30-year period will on average contain 2 inhomogeneities) of 0.2°C or a gradual trend of 0.45°C over 30 years. You would have to eyeball that in a station temperature time series with a standard deviation of about 0.7°C and a trend of about 1 degree Celsius. I think you should probably use nearby reference stations to reduce the noise and see the non-climatic changes more clearly. And you should use a statistical test to see if the statistical trend model is clearly superior to a statistical step model.
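    Victor’s suggestion, using a nearby reference series so that the shared climate signal cancels and any non-climatic break stands out, can be sketched as follows. All magnitudes here are invented, and real homogenization tests (e.g. SNHT) are more careful than this simple difference of means:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30  # years

climate = np.cumsum(rng.normal(0, 0.3, n))   # shared regional signal
candidate = climate + rng.normal(0, 0.3, n)  # station-specific noise
candidate[15:] += 0.5                        # a 0.5 C inhomogeneity
reference = climate + rng.normal(0, 0.3, n)  # homogeneous neighbour

diff = candidate - reference                 # the climate signal cancels
step = diff[15:].mean() - diff[:15].mean()   # crude break-size estimate
print(round(step, 2))  # expected value 0.5, far clearer than in
                       # the candidate series alone
```

    In the difference series only the station-specific noise remains, so a break of this size is detectable where it would be buried in the raw series.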


  21. Well, you have to remember that I have skinned out all the stations with recorded TOBS flips and moves. We are only using the 400 cleanest (in terms of metadata) remaining.

    So that really, really cuts down the jump rate and isolates microsite for study purposes. Whatever the cause of the divergence is, it is not TOBS or moves, and we adjust for MMTS.


  22. We are only using the 400 cleanest (in terms of metadata) remaining.

    So, is it possible what you’ve done is identified the 400 stations with the worst metadata records?


  23. Paul S, isn’t it touching. Almost as if they do not realize that the metadata also comes from evil climatologists.


  24. Not seeing a specific definition of ‘clean metadata’ I don’t want to jump the gun, but oftentimes ‘clean’ records are the ones where problems haven’t been identified *YET* 🙂


  25. THESE KILLING- AND DESTROYING-HABITS ARE TO AVOID ??? UNBEARABLY FOR CIVILIZED BRAINHAVING SOCIETIES RATHER THAN EXEMPLARY FOR CHILDREN.


  26. More knowledgeable people may correct me, but as far as I know it does matter where the tree rings come from. If you want to use tree rings to measure temperature, you have to take trees from regions where growth is mainly limited by temperature (near the tree line).


  27. Victor, I think you’re missing the point. The tree ring widths themselves tell us nothing directly about temperature, precipitation, or any other environmental factor. They’re just a normalized record of various growth width ratios.

    The analysis tells us whether the record is correlated to temperature, precipitation, PDSI, or any other factor under study. It’s possible for one data set to be highly correlated to more than one of these factors. It’s possible that the ratios aren’t correlated to *any* of the factors of interest. Again, it’s the analysis of the proxy record that determines its fitness – not how it’s used initially or how we briefly abbreviate and categorize it in a list.

    So the notion that Brandon latched onto – that MBH98’s proxy data was flawed because it used proxies marked in a list as ‘precip’ – is just ignorance.
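    The screening step described here, checking which environmental factor a normalized ring-width index actually correlates with, is at bottom just a correlation analysis. A minimal sketch with synthetic data and invented coefficients:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 80  # years of overlap with instrumental records

temp = rng.standard_normal(n)    # instrumental temperature anomalies
precip = rng.standard_normal(n)  # instrumental precipitation anomalies
# a ring-width index driven mostly by temperature, plus growth noise
rings = 0.7 * temp + 0.1 * precip + 0.4 * rng.standard_normal(n)

r_temp = np.corrcoef(rings, temp)[0, 1]
r_precip = np.corrcoef(rings, precip)[0, 1]
print(round(r_temp, 2), round(r_precip, 2))
# the analysis, not the label in a list, shows what the index records
```

    In practice the screening uses significance tests and verification periods, but the principle is the one shown: fitness for reconstruction is determined by the correlation analysis, not by how the proxy was categorised.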


  28. Come, now, Victor, you know that I have always given NOAA fair credit for their metadata, and for the fact that they oversample.

    Not seeing a specific definition of ‘clean metadata’ I don’t want to jump the gun, but oftentimes ‘clean’ records are the ones where problems haven’t been identified *YET* 🙂

    Poorly defined on my part. What I mean is stations that have not moved and are not subject to TOBS bias.

    So, is it possible what you’ve done is identified the 400 stations with the worst metadata records?

    An amusing concept.

    But even the stations with poor metadata support our hypothesis. The stations we dropped for poor metadata have even lower trends than the ones we kept, and the gap between well and poorly sited dropped stations is the same as for the ones we did not drop.


  29. Evan,

    What I mean is stations that have not moved and are not subject to TOBS bias.

    But you’re basing this assumption that they haven’t moved solely on the metadata? How do you know the metadata isn’t wrong?

    But even the stations with poor metadata support our hypothesis.

    What’s your criteria for determining the quality of the metadata?


  30. But you’re basing this assumption that they haven’t moved solely on the metadata? How do you know the metadata isn’t wrong?

    All I can say is that — for USHCN back to 1979 — it appears to be excellent. Menne (2010) gave the MoEs on the metadata from 1980-2009, and there has been dramatic improvement over the last 5 years — for USHCN. It was always spot on for MMTS conversions, at any rate.

    Besides (I repeat), the stations we dropped, on average, have lower trend than the “unperturbed” stations we kept.

    I don’t know what the state of the non-USHCN COOP stations is. We may be looking into that pretty soon. (I am not necessarily optimistic.)

    There is also evidence in the fact that the data shows no real step jumps: the divergence between compliant and non-compliant siting (and the “official” adjusted stats) is as smooth as silk, as far as these things go. (Dr. Venema will be most pleased, I’m sure.)

    What’s your criteria for determining the quality of the metadata?

    Oh, I didn’t mean it that way. I meant a station with “bad” metadata is one that shows moves or TOBS-bias. By now, all the USHCN stations have good metadata for the recent period we are studying.

    As an aside, I find my own MMTS adjustments are coming out too small. Plus, they are only regionally homogenized. So I will swallow the bitter pill and apply the Menne (2010) jumps (+0.0375 to Tmean, +0.1 to Tmax, and -0.25 from Tmin) after point of conversion. We may get into it deeper, but that would require another paper.

