KK provokes again with Ecocide on the Docket, referencing Trial tests whether ‘ecocide’ could join genocide as global crime. If this is just PR then I’m with KK: it is stupid. If it is real, it is also stupid.

They have a definition of ecocide:

Ecocide: The extensive damage, destruction to or loss of ecosystems of a given territory, whether by human agency or by other causes, to such an extent that peaceful enjoyment by the inhabitants of that territory has been severely diminished.

The addition of “by human agency or by other causes” is curious: that would make, say, Mt St Helens, or Pinatubo, guilty of ecocide. Perhaps Katrina was, too. Why that is any concern of the courts, though, is a mystery to me. By “inhabitants” I assume they mean humans (mind you, what is a “territory”? A country? A geographical area? A deliberately vague word designed to generate income for lawyers?); that would mean that total environmental destruction of, say, Antarctica would not be ecocide. Or any uninhabited portion of the Sahara, or the Amazon rain forests. I don’t think that a definition of ecocide that only works in inhabited areas makes any sense at all.

They give some examples: the BP gulf spill is one. I find that hard to fit as “ecocide”: it was all really exciting at the time, of course, but is very largely over now. Some people are still inconvenienced, but the ecosystem is largely back.

The Canada tar sands are another example of ecocide but (and I say this in near total ignorance) isn’t that too a largely unpopulated area? Should it matter whether it is inhabited or not?

Old news

The comments over at More Misc are trailing off, but I am (as ever) astonished by people’s desire to have the last word. Let it never be said that KK is uncontroversial. Still, what more could he ask? So, time for something else.

I didn’t comment on Al Gore’s latest (did I?) or even watch it, but David Hone has what look like some perceptive comments.

I’m beginning to get google circle spam in unmanageable amounts: too many X’s are adding me to their circles, and I can no longer be bothered to check them all out, let alone reciprocate. Still, I did find Climate Deniers Campaign Against the BBC Backfires. Yes, it is old, like me.

In other old news, the UK shale gas find made the news over here and the blogs. That links somewhat to the maybe gas is as bad as coal idea, which I find dubious (but haven’t worked through yet). Various people don’t want to use the gas and the Grauniad reports Chris Huhne with: The UK’s “dash for gas” will be halted by the government because if unchecked it would break legally binding targets for carbon dioxide emissions, Chris Huhne, energy and climate change secretary, said on Monday evening. That seems weird to me because (pace Cornell) switching to gas is better than coal (certainly if measured in terms of CO2 emissions, which is the legally binding bit). However what he actually said was “We will not consent so much gas plant so as to endanger our carbon dioxide goals” which is ever so slightly different, and could be consistent with approving lots more gas. Apart from anything else, we could extract and sell it to johnny foreigner. Or then again, if the gas were too cheap to meter (ahem) it would make CCS economic. Curiously enough, Timmy in April had a rather different perspective. I’m very dubious that our so-called legally binding CO2 targets are meaningful, anyway.

And a sop: Erasing false balance: the right is more antiscience than the left.

[Update: on the gas-vs-coal: the thing I didn’t look at closely was how much Wigley’s numbers depend on aerosols from coal. I assume, a lot. But going forward, that is unlikely to be true: we just couldn’t increase coal use that much without cleaning it up a lot.]


* Coal to gas: the influence of methane leakage CLIMATIC CHANGE Volume 108, Number 3, 601-608, DOI: 10.1007/s10584-011-0217-3, 2011.
* Press Release: CMU Researchers Find Fewer Greenhouse Gas Emissions From Controversial Drilling at Marcellus Shale Sites Statewide
* Alberta leads Canada in Fight Against Global Warming

Galileo on infinity

Stung by some rather odd language on wikipedia at [[Two New Sciences#Infinite sets]], I’m reading the bits about infinity in “Two New Sciences” (Discourses and Mathematical Demonstrations Relating to Two New Sciences aka Discorsi e dimostrazioni matematiche, intorno à due nuove scienze, 1638) via the translation available online. If you look at the oldid of the wiki article, you’ll see the familiar problems: that all Galileo’s ideas have been translated into the language of modern (set) theory that he likely would not even have understood; and that the article is a travesty of what he actually thought – it makes him anticipate Cantor in a way that I don’t think he did (and so I’ve fixed it). Anyway, let’s read him, remembering that, approximately, The same three men as in the Dialogue carry on the discussion, but they have changed. Simplicio, in particular, is no longer the stubborn and rather dense Aristotelian; to some extent he represents the thinking of Galileo’s early years, as Sagredo represents his middle period. Salviati remains the spokesman for Galileo. That is also from the wiki article, but I’ll trust it nonetheless, because I know no better.

It probably helps to have searched for “infinit” in that text, which on Chrome at least helpfully highlights all the occurrences, and marks up the scroll bar, so you can see where the discussion occurs. We start around page 68. Galileo is groping towards ideas of limits. He is trying to understand what happens when two concentric cylinders roll along (apparently, this is a classic Aristotelian problem): if the outer one, B, doesn’t slip, then the inner one, C, must. But what does “slip” mean, he wonders? An infinite number of infinitely small slips? He motivates his discussion by considering polygons, shown in the pic, and then considers polygons with more and more sides.

But there is a problem, which I’ll try to illustrate here with my picture of a square (someone must have a better picture of the same idea). Consider the paths across the square: starting with the outer one (black), then the one using lines of half side (dark grey), half again (red) and so on down to the smallest I’ve drawn in a fetching shade of pale green. The limit of these paths is the diagonal (clearly). But the lengths of all of these paths are the same: 2 (assuming a unit side to the square). But the length of the diagonal is sqrt(2). So, as we know, f(lim(s)) != lim(f(s)), where f is a function (in this case “the length of”) and s is a sequence (in this case, of paths).
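The paradox is easy to check numerically; here is a sketch in C (the function name is my own, purely illustrative):

```c
#include <math.h>

/* Length of the n-step staircase path across a unit square: n treads
 * of width 1/n plus n risers of height 1/n.  However fine the steps,
 * the total is always exactly 2, even though the paths converge to
 * the diagonal, whose length is sqrt(2), about 1.414. */
static double staircase_length(int n)
{
    double len = 0.0;
    for (int i = 0; i < n; i++)
        len += 1.0 / n    /* one horizontal tread */
             + 1.0 / n;   /* one vertical riser   */
    return len;
}
```

Every staircase has length exactly 2, while their limit has length sqrt(2): the length function simply isn’t continuous under this kind of limit.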

Aside: Galileo is uneasily aware of problems like this; he uses the somewhat more complex example shown here, where we draw a circle based on some points (see p 84), whose radius approaches infinity as we move the point C towards O, finally becoming a straight line. He doesn’t resolve that problem, merely uses it as an example, and doesn’t answer his question Now what shall we say concerning this metamorphosis in the transition from finite to infinite?

So Galileo ends up with his explanation as “The line traversed by the larger circle consists then of an infinite number of points which completely fill it; while that which is traced by the smaller circle consists of an infinite number of points which leave empty spaces and only partly fill the line”. Which is meaningless.

There is also a failure to separate out the maths from the real world: SALV: …Thus one can easily imagine a small ball of gold expanded into a very large space without the introduction of a finite number of empty spaces, always provided the gold is made up of an infinite number of indivisible parts. SIMP. It seems to me that you are travelling along toward those vacua advocated by a certain ancient philosopher. SALV. But you have failed to add, “who denied Divine Providence,” an inapt remark made on a similar occasion by a certain antagonist of our Academician. I don’t think this is fatal (at least not as far as I’ve read) but it does introduce the possibility of confusion (and as SIMP points out, of conflict with Da Man and Da Church).

Now, comparing infinities: SIMP. Since it is clear that we may have one line greater than another, each containing an infinite number of points, we are forced to admit that, within one and the same class, we may have something greater than infinity, because the infinity of points in the long line is greater than the infinity of points in the short line. SALV. This is one of the difficulties which arise when we attempt, with our finite minds, to discuss the infinite, assigning to it those properties which we give to the finite and limited; but this I think is wrong, for we cannot speak of infinite quantities as being the one greater or less than or equal to another.

This is wrong (in modern terms). With suitable definitions, comparing infinities is fine; we say and prove that the “number of points” in a line of twice unit length is “the same as” the “number of points” in a line of unit length. SALV, however, undertakes to prove his point, but drops down into the integers to do it. The example is a now familiar one (did he originate it? I doubt it; this implies not) that if we consider numbers and their squares, there appear to be “more” plain numbers than squares; and yet we can map in a 1-1 way from numbers to their squares, so there must be “the same” number (the modern solution, if you don’t know it, is that if you can construct a 1-1 map from one set to another then they have “the same number of elements” (cardinality). And this works, and is free of contradiction).
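The tension between the squares “thinning out” and the pairing never missing anyone is easy to see with a toy count (in C; the function name is my invention):

```c
/* Count the perfect squares not exceeding m.  Up to N*N there are
 * only N squares among N*N integers, so the squares get ever sparser;
 * and yet n <-> n*n pairs integers and squares one-to-one, which is
 * the modern reason the two sets have the same cardinality. */
static long count_squares_up_to(long m)
{
    long count = 0;
    for (long k = 1; k * k <= m; k++)
        count++;
    return count;
}
```

So count_squares_up_to(100) is 10: a density of 10%, and falling as the range grows, despite the perfect pairing.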

So it isn’t entirely clear why Galileo insists infinities can’t be compared; in the case of the integers, he has almost everything he needs to get the right (i.e. modern, more powerful, interesting and useful) answer; but he funks it. Perhaps because he wasn’t really interested in the integers; they were merely a helpful expository device to him. He already “knew” the answer he wanted in terms of the continuum. But it is harder to reason about the reals.

Update: I have a thought about where Galileo went wrong. He was too much trapped in the way of thinking that the things you are reasoning about are known and all you need to do is to think more carefully. After all, people have been thinking about this for millennia. What he failed to see – what everyone failed to see – was that what was missing was a definition of equality for infinite quantities. Once you see that, then you see that you have the definition to hand – 1-1 correspondence – and all problems resolve themselves. But realising that you don’t understand equality is the hard part. In this, it is very like Einstein realising that a definition of simultaneity is needed.


* Galileo and Leibniz: Different Approaches to Infinity
* Me discussing infinity and stuff with E. Ockham

Climbing tales of terror

It’s a book, in fact. I remember browsing it in the good old days when I used to climb. But today I was browsing Rock Athlete which is Ron Fawcett’s book, and contains the following memorable story, which I will share with you because I liked it so much:

In the old days, a pair of his friends were climbing Malham Cove. They were aid climbing the extensive overhangs in very bad weather in winter, and were benighted. Not realising they were close to the top, they chose to ab off in darkness. The rope became tangled, and in the cold the first was unable to untangle it, and forced to cut it; he promptly fell 35 feet to the ground, breaking both legs. The second, feeling the rope slack, came down and also fell off the end of the rope, breaking both wrists. They slowly made their way to the nearest farmhouse, and the second knocked on the door – with his head, since both wrists were broken.

Which reminds me, I’ve been eyeing the Pembroke Rockfax.

More misc

Just some random jottings, none of which amount to much. I’m making quince jelly. Last Sunday was the first anniversary of my half-marathon running career, which I celebrated at Grunty Fen. Next Sunday will see me sculling the Boston marathon for the first time, after being in the ladies VIII last year. We’ve been painting the club blades recently; Amy has some nice pix.

Arctic sea ice may have hit its minimum this year and be on the way up. Neven certainly thinks so. I think it is likely; a few more days will make it clear. It’s quite early, though, for a minimum, but probably not unprecedentedly so. The familiar end-of-season post will appear at some point.

With all the fuss about the Euro and so on, Timmy would like to tell you about taxes and transfers. I’ve recently taken to reading Bronte Capital too. That is the problem with not being impecunious: you have to worry about where to put your pennies. Lest I be thought too blatantly libertarian, Bruce Schneier has a nice example of how regulation helps.

Eli has some stuff to say about the weird Trial of Charles M. Speaking of which,
I’m close to giving up KK for a bit; too many trolls.

[Update: JF notices Piers Corbyn, Twitter spammer?.]

Natural disasters (again)

Though of course by “natural” we’re thinking of with-a-human-contribution. My text is taken from the book of Grauniad:

On Friday a team of researchers in Boston calculated that even with only a 2C rise, summer temperatures now regarded as “extreme” will become normal. This is the second such warning from the US this summer. Europeans in 2003 and Russians in 2010 had lethal experience of heat waves. … Munich Re predicted that 2011 – on the evidence of the first six months alone – will be the costliest year ever for disasters triggered by natural hazard. Total global losses by June had reached $265bn, far outstripping the $220bn record set for the whole of 2005.

But without directly taking a position on the vexed issue of how natural disasters have increased due to GW, it is clear that the Grauniad is talking bollocks. Or rather, it is clear that while what they say is literally true, it is deeply misleading. Consider this slide from the Munich Re report:


So, that is at least $40 billion in unexpected-earthquake (insured) losses, or $230 billion in overall losses. And if you compare that with page 29 of the same report, you see that 2005 was ~$250B (not sure why the Grauniad says $220B, perhaps a different to-date adjustment). So the “not including earthquake” losses for 2011 are “only” ~$40B. Note that the Indonesian tsunami was 2004, but December 26th, so might have bled into 2005; but anyway according to p37 wasn’t very expensive – presumably because of where it occurred. Katrina was the big cost in 2005. The Spring 2011 tornado season in the US looks to have been costly, in comparison to other tornado seasons.

Incidentally, if you scroll down further you get various financial bits, which I don’t understand. p58 shows that underwriting capacity is at an all time high, which is presumably good.

h/t Timmy.


* Bootleggers, Baptists, and the Global Warming Battle

Ozone regs

According to the Economist:

Barack Obama socked it to the left on September 2nd, by backtracking on a new rule to mitigate air pollution. As proposed by the Environmental Protection Agency (EPA)–a hate object to many Republicans–the rule would have reduced ambient ozone, a toxic gas created by power-plant emissions and exhaust fumes, to less deadly levels than America currently permits. According to the EPA, this would by 2020 have saved up to 12,000 lives and 2.5m working days and school days lost to the toxic effect of ozone on American lungs each year. It would also have cost polluters and government up to $90 billion per year–a toll that, in hard times, Mr Obama felt unable to levy.

Doing a quick-n-dirty [ahem; but also wrong – see update] calculation, that is $50 million per life saved. Which isn’t economic, though it isn’t wildly out – as I recall, first-world lives go for about $10 million a go. Which is to say, if you’ve got that much money and want to save lives, there are ways to spend that money and get a far better result.

So Obama was correct to veto the regs; at least based on that analysis [but according to the update, things are much more ambiguous]. Not everyone is happy though. That analysis ends with “the EPA estimated that increasing the stringency of the standard would produce up to $37 billion in health benefits annually”, which sounds impressive, but they don’t even attempt to mention the costs, which seems odd (note, BTW, that if you’re up in arms about attempting to value a life, then you can’t accept that $37B figure either). As I’ve said, the Economist quotes $90 billion/year, so that too makes it a net loss.

Oddly enough, Krugman (although an economist, and a noted one at that) doesn’t even attempt a cost benefit analysis. As I read his argument, it goes “the country needs economic stimulus, ozone regs would have cost money, so that’s great, let’s do it”. But even based on that kind of analysis, perhaps there are other ways of achieving the same aim (i.e., that of throwing money at the economy).

[Update: thanks for a couple of refs found. Both say The Clean Air Act bars EPA from considering costs in setting or revising any national air quality standard. EPA analyzes the benefits and costs of any major rule under requirements of Executive Order 12866 and according to guidelines from the White House Office of Management and Budget. which is interesting and might be reasonable: they need to try to find out the cost/benefit, but aren’t allowed to use it. That, however, is only reasonable if there is a clear understanding that their regs can be overturned on cost/benefit grounds.

There are a couple of different proposed standards, which account for some of the vast range of costs/benefits. It looks like it is best to talk about 0.070 to 0.075 parts per million, which might be the EPA’s favourite. One source says: Annual net benefits for implementation of the proposed standards (i.e., 0.070 ppm to 0.075 ppm) in 2020 range between -$20 billion and +$23 billion. Because of the high degree of uncertainty in these calculations, EPA cannot estimate whether costs will outweigh benefits, or vice versa. which means the numbers I have must be wrong, since I get a clear net loss. I only used the deaths, not the days-lost stuff, but even so I doubt that fits. I think the answer is that I was wrong to think of the deaths as being “up to 2020”; that figure is annual, at 2020. Hence the deaths-saved are actually about 7 times bigger than I thought, which brings costs and benefits back into approximate balance. The original language of the EPA is a touch vague, but the Economist transformed “in 2020” into “by 2020” which didn’t help (but with some faint justification; it is the year EPA uses, and they use it because that is when they estimate the new unknown controls could come in). Another source says: The benefits estimates include the value of an estimated reduction in the following adverse health effects in 2020:… 0.060 ppm… Avoided premature death 4,000 to 12,000…The costs of reducing ozone to 0.070 ppm would range from an estimated $19 billion to $25 billion per year in 2020. For a standard of 0.060 ppm, the costs would range from $52 billion to $90 billion.
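For what it’s worth, the corrected back-of-envelope sum, sketched in C using only the figures quoted above (the function is my own, purely illustrative):

```c
/* Upper-bound figures from above: $90bn/year in costs against up to
 * 12,000 premature deaths avoided.  Read as deaths avoided per year
 * (at 2020), the implied cost per life is $7.5 million, in the same
 * ballpark as the ~$10 million conventionally assigned to a first-world
 * life; reading the 12,000 as a cumulative total spread over most of a
 * decade is roughly what produced the earlier $50 million figure. */
static double cost_per_life(double annual_cost_usd,
                            double deaths_avoided_per_year)
{
    return annual_cost_usd / deaths_avoided_per_year;
}
```

So cost_per_life(90e9, 12000) comes out at $7.5m, which is why costs and benefits end up in approximate balance rather than a clear net loss.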

There is clearly a pretty big range in all this: Setting a standard at 0.075 ppm would reduce ozone-related premature deaths by 200 deaths per year in 2020; using the three studies that synthesize data from a large number of individual studies leads to an estimated reduction in ozone related premature mortality of 900 to 1,100 per year. which gives a factor of 5 in the deaths-avoided calculation.

Also interesting is some breakdown of the costs: The annual control technology costs of implementing known controls as part of a strategy to attain either a standard of 0.075 ppm and 0.070 ppm in 2020 would be approximately $3.9 billion… For areas that cannot meet the standards using known controls, particularly in California, the estimate of additional control technology costs for unknown controls range from $5.9 billion to $18 billion annually for a 0.070 ppm standard. So it looks like most of the cost comes from “unknown controls”. That, inevitably, is going to make their costs hard to estimate and perhaps likely to inflate.

What is, however, entirely clear is that any of these measures would be vastly more cost effective than the “war on terror”; you could save a twin-towers worth of people each year at far far lower cost.

Global cooling, again

The Washington Post Continues to Publish George Will’s Climate Change Disinformation at thinkprogress. Just keeping track of these things, you understand. I thought the 70’s-cooling mole had been well whacked, but no.


* Now out in BAMS: The myth of the 1970s global cooling scientific consensus
* Global cooling: Inhofe talking sh*t* again
* Fuckem’s Razor and the solution to the climate question
* etc. etc.

I think we should expect that more-even most-papers from skeptics will be of poor quality

Not me guv, but Tom Fuller (just when I’d given up hope he would ever say something sensible). You might say, “well der”. But this chimes in very neatly with a not-fully-discussed problem with the Spencer and Braswell error, which Gavin talks about at RC: With better peer review, Spencer could perhaps have discovered these things for himself, and a better and more useful paper might have resulted. By trying to do an end run around his critics, Spencer ended up running into a wall.

Spencer and his ilk are afraid of peer review. Not for the reasons that they give – that the vast conspiracy will squelch them – but because they know their work is weak, and they really don’t want it exposed to proper scrutiny (you might disagree; but never mind that, my argument doesn’t depend on it. All you need to agree is that the sceptics avoid proper journals). So they send their stuff to journals where they know it won’t get proper review by experts, as happened in this case. This is intended as a cunning plan to evade scrutiny, but it ends up depriving them of the vital feedback and interaction with peers that improves papers. And not just at review stage: from Woy at least you can see a bunker mentality which means he won’t be discussing his ideas with others even as he tries to mature and work on them pre-publication. It’s the lack of this feedback/interaction that will doom future skeptic-type papers from the likes of Woy.


* Conspiracy Dog-whistling about GRL and the New Dessler Paper
* “Dr” Roy Spencer is sad and lonely and wrong.

Spot the problem

Just for once, a vaguely work-related post, but without any work.

What is wrong with this (it’s in C, of course):

switch (stoat) {
    bool goat = TRUE;
    case weasel:
        goat = FALSE
        /* Fallthrough */
    case ferret:
        mustelid;
}
where you should assume that “mustelid” represents a block of code large enough to be worth not repeating. You should find this not very hard to solve, once I’ve presented it in this format.

For bonus points, what does the compiler say? What does your editor do?

[Update: OK, so the answer is that the initialisation of goat is skipped. It’s a statement, but it isn’t behind a case label, so it is ignored. But the second part of the answer is that it is ignored silently, which I’ve got used to the compiler not doing: if you do something stupid which has no effect, it will normally warn you. quokka was the first to get both parts of the answer (those who said “there is a semi-colon missing” were correct, but fell for the “hide the hard answer behind the easy answer” test that I accidentally set. I wouldn’t bother to write a post about a missing semi-colon, and anyway the compiler easily spots that and fails to compile it).

There is a third part, which is that not only is the non-functional statement not warned about, but it also fails to warn that “goat” may be used uninitialised, which is very naughty of it indeed. That may be gcc4 brokenness.

Someone else who investigated this more thoroughly than I wrote:

“GCC 4 (4.1.2, 4.4.4 and 4.6.1) won’t warn you about this even with -Wall and -W (and -O which is necessary for some of the analyses to work). For GCC 4.1.2 and 4.4.4 it takes the specific command line option -Wunreachable-code to tell you that the first assignment won’t be executed. However, -Wunreachable-code is not normally recommended. The GCC manual says:

This option is not made part of -Wall because in a debugging version of a program there is often substantial code which checks correct functioning of the program and is, hopefully, unreachable because the program does work. Another common use of unreachable code is to provide behavior which is selectable at compile-time.

GCC 4.6.1 doesn’t provide -Wunreachable-code but the new option -Wjump-misses-init finds the problem (this is not part of -Wall or -Wextra).”

Isn’t C interesting?]
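For completeness, one way to rewrite the puzzle so the trap can’t bite is to hoist the declaration out of the switch. (This is my sketch, not the original post’s code: classify and the enum are invented for illustration, and “mustelid” stands in for the shared block.)

```c
#include <stdbool.h>

enum animal { weasel, ferret };

/* Hoisting the declaration and initialisation above the switch means
 * they always execute before control jumps to a case label, so goat
 * can no longer be read uninitialised. */
static bool classify(enum animal stoat)
{
    bool goat = true;       /* now runs unconditionally */
    switch (stoat) {
    case weasel:
        goat = false;       /* semi-colon restored, too */
        /* Fallthrough */
    case ferret:
        /* mustelid: the large shared block of code goes here */
        break;
    }
    return goat;
}
```

With this version classify(weasel) yields false via the fallthrough, classify(ferret) yields true, and there is nothing left for -Wjump-misses-init to complain about.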