Repeatability of Large Computations

Some parts of the discussion of “Oh dear, oh dear, oh dear: chaos, weather and climate confuses denialists” have turned into discussions of (bit) reproducibility of GCM code. mt has a post on this at P3 which he linked to, and I commented there, but most of the comments continued here. So it’s worth splitting out into its own thread, I think. The comments on this issue on that thread are mostly mt against the world; I’m part of the world, but nonetheless I think it’s worth discussing.

What is the issue?

The issue (for those not familiar with it, which I think is many. I briefly googled this and the top hit for “bit reproducibility gcm” is my old post, so I suspect there isn’t much out there. Do put any useful links into comments. Because the internet is well known to be write-only, and no-one follows links, I’ll repeat and amplify what I said there) is “can large-scale computer runs be (exactly) reproduced?”. Without any great loss of generality we can restrict ourselves to climate model runs. Since we know these are based effectively on NWP-type code, and since we know from Lorenz’s work or before that weather is chaotic, we know that reproducing a run means that on every time step, for every important variable, everything needs to be identical down to the very last bit of precision. Which is to say it’s all-or-nothing: if it’s not reproducible at every timestep down to the least significant bit, then it completely diverges weatherwise.

I think this can be divided down into a hierarchy of cases:

The same code, on the same (single-processor) machine

Nowadays this is trivial: if you run the same code, you’ll get the same answer (with trivial caveats: if you’ve deliberately included “true” random numbers then it won’t reproduce; if you’ve added pseudo-random numbers from a known seed, then it will). Once upon a time this wasn’t true: it was possible for OSs to dump your code to disk at reduced precision and restore it without telling you; I don’t think that’s true any more.
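
For what it’s worth, here’s the seeded-pseudo-random case in miniature: a toy sketch in Python (nothing to do with any real GCM code) showing that a perturbation drawn from a fixed seed reproduces exactly, while a different seed doesn’t.

```python
# Toy sketch only: a pseudo-random perturbation from a fixed seed is exactly
# reproducible from run to run; change the seed and it isn't.
import random

def perturbed_field(seed, n=5):
    rng = random.Random(seed)                      # fixed seed -> identical stream
    return [1.0 + 1e-6 * rng.random() for _ in range(n)]

print(perturbed_field(42) == perturbed_field(42))  # True: bit-identical
print(perturbed_field(42) == perturbed_field(43))  # False: different seed, different "weather"
```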

The (scientifically) same code, on different configurations of multiple processors

This is the “bit reproducibility” I’m familiar with (or was, 5+ years ago). And ter be ‘onest, I’m only familiar with HadXM3 under MPP decomposition. Do let me know if I’m out of date. In this version your run is decomposed, essentially geographically, into N x M blocks and each processor gets a block (how big you can efficiently make N or M depends on the speed of your processor versus the speed of your interconnect; in the cases I recall on our little Beowulf cluster, N=1 and M=2 was best; at the Hadley Center I think N = M = 4 was considered a fair trade-off between speed of completion of the run and efficiency).

Note that the decomposition is (always) on the same physical machine. It’s possible to conceive of a physically distributed system; indeed Mechoso et al. 1993 does just that. But AFAIK it’s a stupid idea and no-one does it; the network latency means your processors would block and the whole thing would be inefficient.

In this version, you need to start worrying about how your code behaves. Suppose you need a global variable, like surface temperature (this isn’t a great example, since in practice nothing depends on global surface temperature, but never mind). Then some processor, say P0, needs to call out to P0..Pn for their average surface temperatures on their own blocks, and (area-)average the result. Of course you see immediately that, due to rounding error, this process isn’t bit-reproducible across different decompositions. Indeed, it isn’t necessarily even bit-reproducible across the same decomposition, but with random delays meaning that different processors put in their answers at different times. That would depend on exactly how you wrote your code. But note that all possible answers are scientifically equivalent. They differ only by rounding errors. It makes a difference to the future path of your computation which answer you take, but (as long as you don’t have actual bugs in your code or compiler) it makes no scientific difference.
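
Here’s a toy Python illustration of that (purely schematic, not how any real model’s reduction code is written): the same “global mean” computed under two different decompositions differs in the last bits, simply because floating-point addition isn’t associative.

```python
# Schematic only: partial sums per "processor" block, combined by P0.
# Different block counts give answers that differ at rounding-error level.
import random

random.seed(0)
temps = [250.0 + 50.0 * random.random() for _ in range(100_000)]  # fake temperatures

def decomposed_mean(values, nblocks):
    size = len(values) // nblocks
    partials = [sum(values[i * size:(i + 1) * size]) for i in range(nblocks)]
    return sum(partials) / len(values)

m2 = decomposed_mean(temps, 2)   # one decomposition
m4 = decomposed_mean(temps, 4)   # the same field, decomposed differently
print(m2 == m4)                  # almost certainly False: not bit-identical
print(abs(m2 - m4))              # ...but the difference is pure rounding error
```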

Having this kind of bit-reproducibility is useful for a number of purposes. If you make a non-scientific change to the code, one which you are sure (in theory) doesn’t affect the computation – say, to the IO efficiency or something – then you can re-run and check this is really true. Or, if you have a bug that causes the model to crash, or behave unphysically, then you can run the code with extra debugging and isolate the problem; this is tricky if the code is non-reproducible and refuses to run down the same path a second time.

Obviously, if you make scientific changes to the code, it can’t be reproducible with code before the change. Indeed, this is practically the defn of a scientific change: something designed to change the output.

The same code, with a different compiler, on the same machine. Or, what amounts to much the same, the same code with “the same” compiler, on a different machine

Not all machines follow the IEEE model (VAXes didn’t, and I’m pretty sure DEC Alphas didn’t either). Fairly obviously (without massive effort and slowdown from the compiler) you can’t expect the bitwise same results if you change the hardware fundamentally. Nor would you expect identical results if you run the same code at 32 bit and 64 bit. But two different machines with the same processor, or with different processors nominally implementing IEEE specs, ought to be able to produce the same answers. However, compiler optimisations inevitably sacrifice strict accuracy for speed, and two different compiler vendors will make different choices, so there’s no way you’ll get bit repro between different compilers at anything close to their full optimisation level. Which level you want to run at is a different matter; my recollection is that the Hadley folk did sacrifice a little speed for reproducibility, but on the same hardware.
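
The underlying arithmetic fact that aggressive optimisation exploits is easy to show in any language (this is Python, but the same thing happens wherever a compiler is allowed to re-associate floating-point sums):

```python
# Floating-point addition is not associative, so a compiler that re-orders a
# sum for speed changes the last bits of the answer.
lhs = (0.1 + 0.2) + 0.3
rhs = 0.1 + (0.2 + 0.3)
print(lhs, rhs)    # 0.6000000000000001 vs 0.6
print(lhs == rhs)  # False: mathematically identical, not bit-identical
```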

Does it matter, scientifically?

In my view, no. Indeed, it’s perhaps best turned round: anything that does depend on exact bit-repro isn’t a scientific question.

Why bit-repro doesn’t really matter scientifically

When we’re running a GCM for climate purposes, we’re interested in the climate. Which is the statistics of weather. And a stable climate – which is a scientifically reliable result – means that you’ve averaged out the bit-repro problems. If you did the same run again, in a non-bit-repro manner, you’d get the same (e.g.) average surface temperature, plus or minus a small amount to be determined by the statistics of how long you’ve done the run for. Which may require a small amount of trickery if you’re doing a time-dependent run and are interested in the results in 2100, but never mind.
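
If you want to see the weather/climate distinction in toy form, here’s a sketch using the chaotic logistic map as a stand-in for a GCM (a huge simplification, obviously): two runs whose initial states differ at the last-bit level diverge completely as trajectories, but their long-run statistics agree.

```python
# Toy "weather vs climate": the chaotic logistic map (r = 4) standing in,
# very crudely, for a GCM.
def run(x, n):
    traj = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        traj.append(x)
    return traj

a = run(0.3, 100_000)
b = run(0.3 + 1e-15, 100_000)             # last-bit-level difference in the IC

print(abs(a[60] - b[60]))                 # typically O(1): the "weather" has diverged
print(sum(a) / len(a), sum(b) / len(b))   # both ~0.5: the "climate" is the same
```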

Similarly, if you’re doing an NWP run where you do really care about the actual trajectory and are trying to model the real weather, you still don’t care about bit-repro, because if errors down at the least-significant-bit level have expanded far enough to be showing measurable differences, then the inevitable errors in your initial conditions, which in any imaginable world are far far larger, have expanded too.

Related to this is the issue people sometimes bring up about being able to (bit?) reproduce the code by independent people starting from just the scientific description in the papers. But this is a joke. You couldn’t get close. Certainly not to bit-repro. In the case of a very very well documented GCM you might manage to get close to climate-reproducibility, but I rather doubt any current model comes up to this kind of documentation spec.

[Update: Jules, correctly, chides me for failing to mention GMD (the famous journal, Geoscientific Model Development), whose goal is what we call “scientific reproducibility”.]

Let’s look at some issues mt has raised

mt wrote “There are good scientific reasons for bit-for-bit reproducibility” but didn’t, in my view, provide convincing arguments. He provided a number of practical arguments, but that’s a different matter.

1. “A computation made only a decade ago on the top performing machines is in practice impossible to repeat bit-for-bit on any machines being maintained today.” I don’t think this is a scientific issue, it’s a practical one. But if we wanted to re-run, say, the Hansen ’88 runs that people talk about a lot then we could run them today, on different hardware and with, say, HadXM3 instead. And we’d get different answers, in detail, and probably on the large-scale too. But that difference would be a matter for studying differences between the models – an interesting subject in itself, but more a matter of computational science than atmospheric science. Though in the process you might discover what key differences in the coding choices lead to divergences, which might well teach you something about important processes in atmospheric physics.

2. “What’s more, since climate models in particular have a very interesting sensitivity to initial conditions, it is very difficult to determine if a recomputation is actually a realization of the same system, or whether a bug has been introduced.” Since this is talking about bugs it’s computational, not scientific. Note that most computer code can be expected to have bugs somewhere; it would be astonishing if the GCM codes were entirely bug-free. Correcting those bugs would introduce non-bit-repro, but (unless the bugs are important) that wouldn’t much matter. So, to directly address one issue raised by The Recomputation Manifesto that mt points to: “The result is inevitable: experimental results enter the literature which are just wrong. I don’t mean that the results don’t generalise. I mean that an algorithm which was claimed to do something just does not do that thing: for example, if the original implementation was bugged and was in fact a different algorithm.” I don’t think that’s true; or rather, it fails to distinguish between trivial and important bugs. Important bugs are bugs, regardless of the bit-repro issue. Trivial bugs (ones that lead, like non-bit-repro, to models with the same climate) don’t really matter. TRM is very much a computational scientist’s viewpoint, not an atmospheric scientist’s.

3. Refactoring. Perhaps you want to rework some ugly code into elegant and maintainable form. It’s a lot easier to test that you’ve done this right if the new and old are bit-repro. But again, it’s coding, not science.

4. “If you seek to extend an ensemble but the platform changes out from under you, you want to ensure that you are running the same dynamics. It is quite conceivable that you aren’t. There’s a notorious example of a version of the Intel Fortran compiler that makes a version of CCM produce an ice age, perhaps apocryphal, but the issue is serious enough to worry about.” This comes closest to being a real issue, but my answer is the section “Why bit-repro doesn’t really matter scientifically”. If you port your model to a new platform, then you need to perform long control runs and check that it’s (climatologically) identical. It would certainly be naive to swap platform (platform here can be hardware, or compiler, or both) and just assume all was going to be well. If there is an Intel Fortran compiler that makes CCM produce an ice age, then that is a bug: either in the model, or the compiler, or some associated libraries. It’s not a bit-repro issue (obviously; because it produces a real and obvious climatological difference).

Some issues that aren’t issues

A few things have come up, either here or in the original lamentable WUWT post, that are irrelevant. So we may as well mark them as such:

1. Moving to 32 / 64 / 128 bit precision. This makes no fundamental difference; it just shifts the size of the initial bit differences, but since this is weather / climate, any bit differences inevitably grow to macroscopic dimensions.
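
Continuing the toy logistic-map example from above (and assuming numpy, purely as a convenient way to get 32-bit floats): higher precision only delays the divergence, it doesn’t remove it.

```python
# Sketch: perturbations at the 32-bit and 64-bit rounding level both blow up;
# extra precision just buys a few dozen more steps before they do.
import numpy as np

def steps_to_diverge(eps, dtype):
    x = dtype(0.3)
    y = dtype(0.3) + dtype(eps)
    for n in range(10_000):
        if abs(float(x) - float(y)) > 0.1:
            return n
        x = dtype(4.0) * x * (dtype(1.0) - x)
        y = dtype(4.0) * y * (dtype(1.0) - y)
    return None

print(steps_to_diverge(1e-7, np.float32))    # diverges after roughly twenty steps
print(steps_to_diverge(1e-15, np.float64))   # still diverges, just somewhat later
```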

2. Involving numerical analysis folk. I’ve seen it suggested that the fundamental problem is one with the algorithms; or with the way those are turned into code. Just as with point 1, this is fundamentally irrelevant here. But, FWIW, the Hadley Centre (and, I assume, any other GCM builder worth their salt) have plenty of people who understand NA in depth.

3. These issues are new and exciting. No, these issues are old and well known. If not to you :-).

4. Climate is chaotic. No, weather is chaotic. Climate isn’t (probably).

Some very very stupid or ignorant comments from WUWT

Presented (almost) without further analysis. If you think any of these are useful, you’re lost. But if you think any of these are sane and you’re actually interested in having it explained why they are hopelessly wrong, do please ask in the comments.

1. Ingvar Engelbrecht says: July 27, 2013 at 11:59 am I have been a programmer since 1968 and I am still working. I have been programming in many different areas including forecasting. If I have undestood this correctly this type of forecasting is architected so that forecastin day N is built on results obtained for day N – 1. If that is the case I would say that its meaningless.

2. Frank K. says: July 27, 2013 at 12:16 pm … “They follow patterns of synthetic weather”?? REALLY? Could you expand on that?? I have NEVER heard that one before…

3. DirkH says: July 27, 2013 at 12:21 pm … mathematical definition of chaos as used by chaos theory is that a system is chaotic IFF its simulation on a finite resolution iterative model…

4. ikh says: July 27, 2013 at 1:57 pm I am absolutely flabbergasted !!! This is a novice programming error. Not only that, but they did not even test their software for this very well known problem. Software Engineers avoid floating point numbers like the plague…

5. Pointman says: July 27, 2013 at 2:19 pm Non-linear complex systems such as climate are by their very nature chaotic… (to be fair, this is merely wrong, not stupid)

6. Jimmy Haigh says: July 27, 2013 at 3:25 pm… Are the rounding errors always made to the high side?

7. RoyFOMR says: July 27, 2013 at 3:25 pm… Thank you Anthony and all those who contribute (for better or for worse) to demonstrate the future of learning and enquiry.

8. ROM says: July 27, 2013 at 8:38 pm… And I may be wrong but through this whole post and particularly the very illuminating comments section nary a climate scientist or climate modeler was to be seen or heard from. (He’s missed Nick Stokes’ valuable comments; and of course AW has banned most people who know what they’re talking about)

9. PaulM says: July 28, 2013 at 2:57 am This error wouldn’t be possible outside of academia. In the real world it is important that the results are correct so we write lots of unit tests. (Speaking as a professional software engineer, I can assure you that this is drivel).

10. Mark says: July 28, 2013 at 4:48 am Dennis Ray Wingo says: Why in the bloody hell are they just figuring this out? (They aren’t. Its been known for ages. The only people new to this are the Watties).

11. Mark Negovan says: July 28, 2013 at 6:03 am… THIS IS THE ACHILLES HEAL OF GCMs. (Sorry, was going to stop at 10, but couldn’t resist).

Refs

* Consistency of Floating-Point Results using the Intel® Compiler or Why doesn’t my application always give the same answer? Dr. Martyn J. Corden and David Kreitzer, Software Services Group, Intel Corporation

Happy Birthday to Watts’ paper!

[Don’t miss the 2nd birthday!]

About a year ago the entirety of the intertubes was rocked to its foundations by an announcement of epochal proportions: WUWT publishing suspended – major announcement coming. Or so we were told. Speculation was rife: had AW finally found those pix of Mann’s Prince Albert? But exactly a year ago the Truth Was Out and it all turned out to be very dull – it was just a paper preprint [*].

Most scientists of any kind of quality manage to produce at least a paper a year; anyone with ambitions for their step up is looking at two or more, in decent journals. Even a blog scientist (a bit close to the knuckle there, but the bit about guest posts about climate elves being utter sh*t resonates. AFAIK the true Blog Science manifesto is to be found at denialdepot) shouldn’t get too excited about a preprint.

However, a year has now passed with little public evidence of action. Perhaps the climate elves are working behind the scenes. I imagine that AW hasn’t given up all hope, because the “2012” graphic is still proudly on his blog. I wonder if any of the Watties ever notice it and wonder, or is it like the underpants on my bedroom floor, once they’ve been left lying there for a while they become invisible?

[*] The disparity between the mighty trailering and the feeble reality has led those few who still think AW has a strong grasp on reality to speculate that Something Else was originally intended, and when that Something Else went sour, a half-baked preprint had to be rushed out to fill the gap. We may never know.

Refs

* Top Physicist Withdraws Support For Climate Sceptic Professor Sacked By Australian University says JM
* 2014/06: we’ve spent two years reworking it and dealing with those criticisms. Our results are unchanged and will be published soon sez Watts in “Reason”.

Oh dear, oh dear, oh dear: chaos, weather and climate confuses denialists

It’s shooting fish in a barrel, of course, but you must go and read Another uncertainty for climate models – different results on different computers using the same code [WebCitation].

The issue here is a well-known one – it dates back to Lorenz’s original stuff on chaos. That trivial differences in initial conditions, or in processing methods, will lead to divergences in weather forecasts. The (entirely harmless) paper that has sparked all this off is an Evaluation of the Software System Dependency of a Global Atmospheric Model by Song-You Hong et al. and sez

There exist differences in the results for different compilers, parallel libraries, and optimization levels, primarily due to the treatment of rounding errors by the different software systems.

This astonishes the Watties, as though it was a new idea. To them I suppose it is. But it’s exactly what you’d expect, within a numerical weather prediction framework (though I’d expect you not to care within NWP: if differences in optimisation level have led to error growth large enough to see, I’d have expected uncertainties in initial conditions to have grown much more and made the whole output unreliable). I don’t think you’d expect it within a climate projection framework, at least atmospheric-wise. You might expect more memory from the ocean. JA and I have a post on RC from 2005 that might help, originating from a post on old-stoat by me where I was playing with HadAM3.

In the comments at WUWT Nick Stokes has done his best to explain to the Watties their mistake – but AW has just rubbed out NS’s comments, because they were too embarrassing.

There’s an important distinction to make here, which is that climate modelling isn’t an Initial Value Problem, as weather prediction is. It’s more of a Boundary Value Problem, with things like GHGs being the “boundaries”. Or at least, that’s the assumption and that is how people are approaching it (RP Sr disagrees, and you could discuss it with him. Except you can’t, because he doesn’t allow comments at his blog. RP Sr is too wise to value anyone else’s opinion). Potentially, there’s an interesting debate to be had about whether climate modelling can indeed be considered largely free of its initial conditions. But you can’t start such a debate from the level of incoherent rage displayed at WUWT.

Refs

* Initial value vs. boundary value problems – Serendipity
* Chaos, CFD and GCMs – Moyhu, 2016.

Wonga is “morally wrong”?

Non-beardy says “I’ve met the head of Wonga and I’ve had a very good conversation and I said to him quite bluntly we’re not in the business of trying to legislate you out of existence, we’re trying to compete you out of existence” (see-also the Gruan). When I first heard this while driving into work I mis-heard it (or slightly more accurately, at that point the news was new, and exactly what he meant by this wasn’t clear): I thought the CofE were intending to actually loan out money, on a commercial-but-nicer basis. Thankfully they aren’t going to do that: it would most certainly have been a total disaster (remember the Church Commissioners’ financial ineptitude). In principle I applaud his stated intent of out-competing rather than out-legislating them; that would be, in principle, the way to demonstrate that your system is better. But I think that while he might actually do some good, overall he is doomed.

[N.b.: while everyone in the current version of this argument is using Wonga in the generic sense that “Hoover” means vacuum cleaner, AFAIK they are just one of several such “pay-day lenders”.]

It fairly soon emerged that the CofE actually hold a stake in Wonga, albeit indirectly. That doesn’t directly affect the argument; but it would be a hint to the wise that modern finance is more complex than back in the good old days of clearing the moneylenders out of the temple.

I visited the CofE website to see if they’d laid out their plans carefully there, but they hadn’t. So I decided to use the FT to work out what they are proposing. First of all, there is some rhetoric, or perhaps scene-setting if you’re more generous:

Justin Welby, a former finance executive in the oil industry, has described lenders such as Wonga as “morally wrong” and has compared the industry to Old Testament usurers.

This, too, is a hint to the wise that they’re on the wrong path: traditionally the fight against usury has been a fight against reality. Even now the stricter bits of the Muslim world have absurd bits of financial engineering that dress up interest in order to pretend that it isn’t. But on to the plans:

Dr Welby has… laid out plans to help 500 financial co-operatives, which already provide small loans, to expand their reach by using the Church’s 16,000 premises. He said he was embarking on a “decade-long process” to make credit unions both more engaged in their communities and “much more professional”. He has already launched a new credit union for clergy and church staff at the General Synod in York earlier this month.

This might do some modest good. I have no personal experience of this stuff, but I can easily believe that there are a number of financially-pressed folk who could do with useful advice, and possibly some actual help.

However, I strongly suspect that there is also a block of people who have a reasonable understanding of what is going on, and simply need a loan, and no-one else is going to give it to them, which is why they go to the likes of Wonga. And if you’re making smallish loans to financially pressed people with little or no collateral, then you’re going to have high expenses and you need to make money to cover the inevitable default rate (see-also Timmy). I haven’t checked, but I rather doubt that Wonga is making ginormous profits. If it was, it wouldn’t be for long, as others would pile into the sector and margins would fall. If it isn’t making enormous profits, then its margins aren’t excessive. QED.

But I have to admit, Dr Welby is a model of sanity compared to idiot politicians such as:

Stella Creasy, a Labour MP who has campaigned for a cap on credit costs and a wider crackdown on payday lenders, welcomed Dr Welby’s intervention, but said: “It should not take divine intervention to deal with this problem. It is very easy to fix.”

You have to be a complete moron, or a complete liar, to assert that this problem is “very easy to fix”.

[Update: I’m pleased to say that the Tories, fed up with falling behind in the talking-utter-drivel stakes, have made a late – and, it looks to me, winning – entry in the “Oh good grief I really can’t believe that even a politician would be dumb enough to say that” competition:

Church should consider pulling money out of Google, government adviser says… Claire Perry, a Tory MP and David Cameron’s adviser on childhood, went a step further by urging the Church and other investors in Google to “put their money where their mouth is”.]

[Yes, I know. Another ill-advised foray into economics and politics. But at least you know what I think.]

Refs

* Timmy largely shares my views. But then again, I largely got them from him, though not about this story in particular.
* Wonga, in their own words

Arctic methane ‘time bomb’ could have huge economic costs?

So says Aunty. And the Graun says “Arctic thawing could cost the world $60tn, scientists say”. $60tn is a big number. But let’s not trouble ourselves with the popular press: let’s go straight to the source, which is Nature (“Vast costs of Arctic change”, by Gail Whiteman, Chris Hope and Peter Wadhams). Is that an impeccable source? Weeell, not quite. Nature whores after big-impact studies in a rather regrettable way, and more importantly this is but a “Comment”, not (as far as I can tell) a proper peer-reviewed article.

You’d certainly hope it wasn’t peer reviewed, because some of it is dodgy, most obviously the opening paragraph:

Unlike the loss of sea ice, the vulnerability of polar bears and the rising human population, the economic impacts of a warming Arctic are being ignored.

But let’s skip over that (and that Wadhams has form with AMEG), and proceed to the meat of the comment, which proves to be remarkably brief: all they’ve done is plug a 50 Gt methane “burp” into an integrated assessment model and read off the predicted damages. This is something that anyone could do, with no special insight required.

The first Key Point is: is 50 Gt believable? Wiki tells me that annual methane emissions, natural plus anthropogenic, are about 600 Tg, which is 0.6 Gt by my calculation; so WHW’s 50 Gt is close to 100 years’ emissions. So it’s a large number. I’ll come back to that, because I’ve realised I want to pause to put $60 tn in context, which is the second Key Point. As the article says:

This will lead to an extra $60 trillion (net present value) of mean climate-change impacts for the scenario with no mitigation, or 15% of the mean total predicted cost of climate change impacts (about $400 trillion).

So yes it’s a big number, but it’s “only” a 15% increase on what you’d get anyway. Also, I think (though I’m open to correction on this) that “net present value” means all future impacts, back-calculated to today using discount rates. As the comment puts it, that can be compared to “the size of the world economy in 2012 (about $70 trillion)”. Or put another way, all future impacts of this methane release can be paid for by a single year’s economic output. Which discount rates? Probably Stern-style very low ones, guessing from the Graun stating that WHW are “using the Stern review”. And my recollection is that while the Stern-type numbers are indeed very large, the damages (from GW, and out to 2100 at least) are smaller than the benefits (that we get, economically, from emitting the CO2; I don’t mean that the benefits of GW outweigh its costs). So given the vast uncertainties in the Stern-style process (you can certainly change Stern’s numbers by more than 15% just by tweaking his discount rates a bit) I don’t think an extra 15% is news, ter be ‘onest.
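
For what they’re worth, the back-of-envelope sums behind both Key Points (using only the figures quoted above):

```python
# Rough arithmetic only, using the numbers quoted in the post.
annual_emissions_gt = 600e-3          # ~600 Tg/yr of methane = 0.6 Gt/yr
burp_gt = 50.0                        # the postulated abrupt release
print(burp_gt / annual_emissions_gt)  # ~83, i.e. getting on for a century's emissions

extra_tn, total_tn, gdp_2012_tn = 60.0, 400.0, 70.0   # all $ trillion
print(100 * extra_tn / total_tn)      # 15%: the "extra" as a share of total impacts
print(extra_tn / gdp_2012_tn)         # ~0.86: under one year of 2012 world output
```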

Now, how about that 50 Gt? Wiki tells me that there are 1,400 Gt potentially going; so 50 is only a small fraction of that. Let’s have the whole para, since it’s quite informative:

Current methane release has previously been estimated at 0.5 Mt per year.[12] Shakhova et al. (2008) estimate that not less than 1,400 Gt of Carbon is presently locked up as methane and methane hydrates under the Arctic submarine permafrost, and 5-10% of that area is subject to puncturing by open taliks. They conclude that “release of up to 50 Gt of predicted amount of hydrate storage [is] highly possible for abrupt release at any time”. That would increase the methane content of the planet’s atmosphere by a factor of twelve.[13]

Ref 13 is only an abstract, ending:

The total value of ESS [East Siberian Shelf] carbon pool is, thus, not less than 1,400 Gt of carbon. Since the area of geological disjunctives (fault zones, tectonically and seismically active areas) within the Siberian Arctic shelf composes not less than 1-2% of the total area and area of open taliks (area of melt through permafrost), acting as a pathway for methane escape within the Siberian Arctic shelf reaches up to 5-10% of the total area, we consider release of up to 50 Gt of predicted amount of hydrate storage as highly possible for abrupt release at any time. That may cause ∼12-times increase of modern atmospheric methane burden with consequent catastrophic greenhouse warming

But read that again carefully, noting “fault zones, tectonically and seismically active areas”. They aren’t (as I read it) saying that the 50 Gt will or might be released due to human activity; they’re saying that geologic events, and leaks through existing holes in the permafrost, might lead to this release. At least I think that’s what they’re saying. In which case it’s an odd calculation, because they appear to be assuming that all the faults will become active at once. Aren’t they? And my reading of it completely decouples it from GW. So I don’t understand.

WHW don’t ref that abstract, of course. Instead they ref Predicted methane emission on the East Siberian shelf by the same lead author (Shakhova), but this time in a proper journal. Or… is it? The journal is “Doklady Earth Science”, which is a journal of the Presidium of the Russian Academy of Sciences. It contains English translations of papers published in Doklady Akademii Nauk (Proceedings of the Russian Academy of Sciences). Is it any good? I don’t know. I can only preview the first two pages of the paper and it looks suspiciously to me as though they are rather more assuming their emission rates than predicting them. Anyone who has access to the full journal, do let me know.

Conclusion: meh.

[Update: more unravelling: mt and Nisbet et al. And Notz et al. (Nature 500, 529 (29 August 2013) doi:10.1038/500529b) think it’s nonsense too.]

Refs

* It looks like the Graun has a rather worse take available, which mixes in implausibly early sea ice collapse: “Ice-free Arctic in two years heralds methane catastrophe – scientist”. Gavin isn’t impressed (detail), though as far as I can see he’s only unimpressed by the sea ice collapse bit. Elsetweet, Gavin also says “nowhere is the v. low plausibility of emission pulse discussed” but since twitter is a pile of dingo’s kidneys I can’t work out how to link to it.
* Good heavens, this is unprecedented: one of the comments at the Graun is sane.
* Ruppel, C. D. (2011) Methane Hydrates and Contemporary Climate Change. Nature Education Knowledge 3(10):29
* A couple of posts on SS by Andy Skuce.
* Methane mischief: misleading commentary published in Nature by Jason Samenow, WaPo; Wadhams replies but continues to not-impress. As Gavin says, “Eemian”.
* Toward Improved Discussions of Methane & Climate
* Nafeez for the Record at P3
* Arctic and American Methane in Context – RC

Curry’s wide Sargasso Sea of Stupidity

This entire episode is so depressingly stupid that I almost threw the post away. But, courage!

As my title suggests, this is a morass of stupidity, of interest only to the navel-gazers within the incestuous world of climate blogs. Anyone with an interest in the actual science should steer clear. Metaphorically: if you’re starting from one side of the Sargasso Sea and wish to reach clear water on the other side, you’re better off going round rather than pushing through and clearing an endless buildup of weed off your rudder.

The motive for this was, now that I have a moment from the rowing to pause to think, me thinking “hmm, I haven’t written about science much recently”. That is partly an inevitable, and predicted, consequence of me not doing science any more. But also, it seems to me, because there isn’t that much going on. So since James and Eli are on hols, and not much was showing up elsewhere, I thought I’d range off into Curry-land, to see what she had found. And it was looking pretty thin to me: weekend discussion threads and stuff. But then I found Ocean acidification discussion thread, and took a look. On the surface, it’s yet another of those rubbish posts that JC does which boil down to “I haven’t got a clue about subject X, but here are two people who disagree, errrm, well that didn’t teach anyone anything did it, never mind I got a pile of page hits”. But there is far more wrong with it than that.

Let’s do the surface stuff first. Actually, let’s not. Let’s first notice that she attempts to use her “Italian flag method” to reason about the situation. That method is drivel, as many people have pointed out. Set that aside, and return to the surface.

JC complains about

Today the surface ocean is almost 30% more acidic than it was in pre-industrial times

on the grounds that the speaker, Doney, provided “no evidence or reference”. But this is dishonest of her, because she’s clipped the preceding sentence:

Over the past two centuries, human activities have resulted in dramatic and well documented increases in atmospheric carbon dioxide and acidification of the upper ocean

That is a hint that he isn’t bothering to document the bleedin’ obvious. Anyone less clueless than JC, or the legions of fools who form the majority of her commentators, could simply look it up. And furthermore Doney (who I don’t know, but is a “Senior Scientist at the Woods Hole Oceanographic Institution”, and even JC pretends to respect his expertise) just isn’t going to lie about basic, easily-checked facts to the US Senate. There’s a basic dumb America fallacy to this, as to so much of the denialist tripe. But anyway.

Then we need to clear away some confusion: a change in pH from 8.25 to 8.14, which is what has happened, is indeed a change in H+ [*] ion concentration by ~30% (covered in some detail here, wot I got via DA, thanks). That little mathematical transformation ties up people for quite a long time over on the Dork Side.
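
The arithmetic, for anyone who wants to check it (treating pH simply as -log10 of the H+ concentration, as per the footnote at the end of the post):

```python
# pH 8.25 (pre-industrial) to pH 8.14 (today), figures as given in the post.
ratio = 10 ** -8.14 / 10 ** -8.25                   # H+ concentration now vs then
print(ratio)                                        # ~1.29
print(f"{(ratio - 1) * 100:.0f}% increase in H+")   # ~29%, i.e. "almost 30%"
```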

She then complains that Doney isn’t providing enough expressions of uncertainty. However, since we’re actually certain of this particular factoid, expressions of uncertainty would be wrong. But there seems to be no place in JC’s science-is-a-social-construct type worldview for this; expressions of scientific information have a strengthened credibility if they contain uncertainty.

And here we come to the nub of it all: JC is clueless, and is (if we pretend for a moment to believe what she says) attempting to evaluate these two competing views as texts to try and determine credibility. She needs to use this method, because she lacks the ability, or the time, to understand or verify what is being said. But the problem is that her method is worthless. The only way to evaluate such texts accurately is to read, understand and verify them. Or a useful shortcut is to depend on the authority of the speaker – this is, inevitably, what most of the populace are obliged to do, since “read and understand” simply isn’t open to them.

But the last (and to me worst) part of all this is that JC is just spraying disinformation around. If she finds this issue interesting, and finds this very basic fact to be beyond her ability to verify, then she can f*ck*ng well talk to some of her colleagues (assuming she hasn’t managed to cut all her ties to people of any quality). She’s at a university, no? She can talk to the prof of Chemistry. Or of Oceans. Or something. But one way or another, she can f*ck*ng well find out the truth, first. Then she could have written a post that might have been informative, and might have reduced people’s confusion on this issue. Instead she’s made the world a little bit worse.

[Update: hello, Watties! Are any of you brave enough to leave your walled garden and click through? How about being brave enough to comment. Not much hope there I suspect, you talk big in your little world but you’re not so brave in the Real World. You might be wondering why I’m not making sarky comments over at the Dork Side. The answer is that AW the gutless coward banned me for exposing his fantasies.

And to add a pic from a pdf suggested by NS]

[*] There’s a subtlety here that I hadn’t at first appreciated: pH is -log_10(a_H+), whereas the original p[H] is -log_10([H+]). See http://en.wikipedia.org/wiki/PH for details. For our purposes, the difference doesn’t matter.

Refs

* Eli will arise to say “the Sun”
* Eli again on pH measurement
* The Geological Record of Ocean Acidification, Bärbel Hönisch et al.; Science, 2012 (via MV).
* Curry, 2015: pushing “Quantifying (that is to say, denying) the anthropogenic contribution to atmospheric CO2” by Fred Haynie.