Anticipating the facts
Diplomatic ingenuity
November 9, 2009
In the course of a review of the new Satow’s Diplomatic Practice, Jeremy Greenstock, former UK ambassador to the UN, writes in the TLS: ((Not yet renamed to Times Literary, though I suppose it is only a matter of time.))
The US and the UK famously came to grief when they tried too hard, when lacking proof, to be convincing about Iraq’s weapons of mass destruction. Those of us closely involved on the UK side believed we were illustrating a case that was bound to turn out to be true when the final evidence was collected. But it never was; and we had to take the rap for anticipating the facts.
Diplomatic language, of course, is celebrated for its litotes and subtly playful oxymoron, ((The phrase non-paper, also used by Greenstock in his review, is a particularly nice term of art.)) and here I believe that Greenstock has designed a masterpiece of the genre: anticipating the facts.
At first it looks like a mere exculpatory understatement, on the order of “jumping the gun”: a cover-your-ass euphemism, decorated with self-mocking paradox (things that aren’t true were never “facts”, anticipated or not). Yet the chronology implied in anticipating the facts is of course devastating: to say that the claims of the existence of WMD were anticipating the facts is to say that those claims (of certain knowledge of the weapons’ existence) were made before the facts were known. And so Greenstock is confirming — in the most delicate possible way! — what was already known but bears repeating: that the US and UK lied (about what they knew to be the case versus what they hoped “was bound to turn out to be true”).
In this way, to say that the US and UK governments were anticipating the facts is an even stronger verdict than saying the “facts were being fixed around the policy”. Greenstock’s formulation makes it clear that there were at the time no “facts” of the required sort to be had, let alone to be had and then fixed around the implacable plans for war.
What “facts” are you currently anticipating, readers?
Anticipating the facts = guessing (which contrasts nicely with the assertion at the time that it was an established fact that Iraq had WMD, an assertion which supported the contention that the invasion was legal)
And has anyone really taken the rap yet for being so completely wrong?
From Wikipedia:
John Scarlett became the head of SIS on 6 May 2004, before publication of the findings of the Butler Review. Although the review highlighted many failings in the intelligence behind the Iraq war and the workings of the Joint Intelligence Committee, it specifically stated that Scarlett should not resign as head of the Committee and SIS.
Quoth Butler: “We have a high regard for his abilities and his record.”
Guano — it’s not the being-wrong for which rap ought to be taken, in my view, but the demonstrable lying about the strength of the evidence available at the time.
Many people, even among those who opposed the war, and even some of my most forensically minded friends, are quite reluctant to accept that the governments lied — to say so evidently feels in some quarters somehow unserious, as though one is aligning oneself with rank conspiriological quarters of internet basement-dwellers and people who spell Blair “Bliar”. What complicates the matter is that the mainstream-lefty claim about what the governments lied about — ie, that they said there were “WMD” while knowing that none existed — is not, as far as I know, supported by any evidence at all. So it is tempting to assume that that is the case for government lies, properly reject it, and so conclude that the governments did not lie.
But of course they did lie, in that they claimed that they had hard proof, incontrovertible top-secret intelligence, certain knowledge of “stockpiles” and so forth, even though they knew that none of the “evidence” they were ordering their intelligence agencies to scrape up was of this reliable quality, and knew that it had to be massaged into more convincing shape for the purposes of public hoodwinking.
I agree with you completely, Steven. It was a lie to say “we know that Iraq has WMD” because the people who said that knew that the evidence was not conclusive. I have said this many times before.
However my point is that I don’t think that anyone has in fact taken the rap for “anticipating the facts” or lying or getting it wrong. In fact quite a number of people who went around the world repeating these lies have done quite well out of it.
Indeed: as long as you confess a struggle with mental illness or a belief in parthenogenesis, say, then all is forgiven.
I believe in…
parthenogenesis |ˌpärθənōˈjenəsis|
noun Biology
Reproduction from an ovum without fertilization, esp. as a normal process in some invertebrates and lower plants.
But only in some invertebrates and lower plants.
I look forward to seeing “anticipating the facts” and the far more splendid and sinister “tried too hard” in an upcoming episode of The Thick Of It. Lines worthy of Armando Iannucci.
Stan Halen is quite correct. For a belief in parthenogenesis to be omniexculpatory, it must relate specifically to parthenogenesis in humans. I apologize for my loose talk.
Thank you, Steven. And do you apologize for your loose typing or have you moved Statezide?
A move to Oxford would suffice. Or possibly suffize.
The recipe for the only really exculpatory version of parthenogenesis involves one human female (sex chromosomes XX) and produces a human male (sex chromosomes XY).
I once had the immense displeasure of seeing a TV drama that hinted (to a biologically illiterate character) at such a solution for reconciling science and religion, as if knowing and caring nothing about science were in some way an intriguing theological novelty.
Apparently I still bear a grudge.
@4: nice to hear there’s such a thing as a ‘mainstream-lefty’ claim, though I’m not sure I recognise the version of it you suggest.
There seemed various lies floating around before the war, in addition to claims of knowledge about WMD – e.g., that a decision had not been taken by the US or the UK to go to war; that – this esp. to the British public – Britain would abide by the decision of the United Nations (holding this line required some especially marvellous & heroic linguistic convolution from Greenstock); that – this esp. to the American public – Saddam Hussein and Osama Bin Laden were in cahoots…
Sometimes covert smearing of inconvenient individuals amounted to what I would call lying, as when press-releases characterised dead-in-the-woods Kelly as a ‘Walter Mitty’ character. To my mind there’s been one especially big smear told as an open lie after the war. In order to dismiss the estimates of the numbers who have died in Iraq which were published in the Lancet, UK government ministers have repeated claims that the estimates were based on ‘flawed methodology’. This is just groundless, and some of those who repeat the claim must know it.
Dave, re your curiously* off-topic point about “smears” and the Lancet study:
Some leading experts in the field claim there were indeed serious flaws in the Lancet study’s methodology (or rather its implementation & quality-control): Beth Osborne Daponte, Mark van der Laan, Debarati Guha-Sapir and Olivier Degomme (Centre for Research on the Epidemiology of Disasters, Brussels), Jon Pederson, etc. See: http://tinyurl.com/proj-censored
Of course, when the government spokespeople (together with Bush) said the methodology was flawed, they had no clue, as the study had only just been published. Still, that doesn’t change the fact that there are now grounds for claiming that the methodology (as implemented) was seriously flawed.
And it doesn’t help that a much larger (and better quality-controlled) study (IFHS) with a similar methodology produced an estimate of violent deaths that differed from the Lancet estimate by 450,000, or that the Lancet study’s lead author, Gilbert Burnham, was suspended for ethical violations over the study, and censured by AAPOR for violating “fundamental standards of science”.
* “Smears” and defense of the Lancet study are two of the main obsessions of the mainstream-lefty (or perhaps “Fundamentalist Chomsky-lite”) website, Medialens.
dave @ 13 —
You’re right: the Downing Street memo shows that to have been a lie.
I don’t think it’s really justified to call this a lie, however, since I think Blair et al at the time really believed they would get the second resolution, so of course they intended to abide by that decision. (I think it is in general hard if not impossible to show that statements about future intentions are ever lies.)
Mind you, that does remind me of another lie, viz the concerted falsification of what Jacques Chirac said.
What sort of ethics guide how we discuss the ethical violations of ourselves or others? Curious question, that.
Bruce writes:
What were the ‘ethical violations’ that took place? Did Burnham fabricate data, falsify data, or plagiarise data? Not only are these ethical violations that scientists do sometimes commit, but these violations would severely undermine any basis for the credibility of the research at hand. Was it one of these? Or did Burnham fail to disclose conflicts of interest perhaps? Such an ethical violation would not necessarily undermine the data, but it would cause us to pause before accepting the data and would certainly invite us to pause before accepting the author’s interpretations of the data. Or did Burnham sleep with one of his research scientists or harass somebody? Not nice, of course, and an ethical violation that does sometimes happen in the sweaty world of scientific research, but once again, not necessarily damning of the research.

Or perhaps there was a violation of subject anonymity in the study? That some of the data collectors in Iraq put down information on the sheets that was not supposed to be there – like names and addresses? I believe that’s what happened. And the study authors took responsibility for this. It’s a serious problem – indeed, given that one of the most serious challenges to conducting research during a war is ensuring that the research does not make people less safe, it’s a very serious problem. But, oddly enough, people who take ethical matters seriously will see that it is a very different ethical problem from, say, data fabrication, and so people who take ethical matters seriously would shy away from blanket, undetailed accusations like “was suspended for ethical violations”.
As for the AAPOR stuff – well, I’d be hard pressed to leave that lingering at the end of a sentence as though it were an objective, clear, and unambiguous measure of the man’s ethics, research or otherwise, especially given that this organisation was demanding information from a man who is not a member and who is affiliated with an institution that is not a member. It’d be a bit like me demanding that Burnham send me all his information and then formally sanctioning him for not doing so: in fact, you might add that to the end of your ethical assessment of Burnham “was suspended for ethical violations over the study, and censured by both the AAPOR for violating “fundamental standards of science” and sw for “failing to be a proper scientist”.
I love that in the link Bruce supplies there are tons of people saying, “Yeah, but I think the number is lower, ’cause it doesn’t feel right” and then sometimes, not always but sometimes, pointing to other sources, often their own, without mentioning the limitations of their own work. Still, there are some really smart people there. And I do sympathise. I can’t help but think that the number is lower.
I do appreciate the link within the link to Beth Osborne Daponte’s paper. Her paper is well worth reading because it details many of the problems involved in collecting this type of data; I can’t say that I would agree with her conclusions, any of them, or fully understand how she makes her choices as to what is best. But it begins with probably the ugliest, strangest and, not so much unspeakiest but obliviousnessiest, first line of any paper I’ve read on the subject:
“During wartime, the public and policymakers legitimately thirst for figures on the war’s civilian death toll due to the war’s direct violent and indirect health effects on a population.”
The line is crazily poetic. I love how you hope the next sentence will be, “Less legitimately, the public and policymakers thirst for blood! And guts!” I love the values and judgement implied by “legitimately”, in a context where legitimacy is so fraught. I love the use of “figures” – a word that can mean “bodies” as well as the “numbers” representing those bodies, and how one hopes that in an alternate universe the first line of this paper says that during wartime, “the public and policymakers legitimately thirst for figures who will bring peace.” I love her optimism that people are so intensely interested in wars’ effects that they have a bodily need for this data, thirsting for it. I love how the first two words make me think of David Byrne. I love the confidence that gets packaged into “civilian” and the separation of a “war’s direct violence” from “indirect health effects on a population” – sure, they’re kinda separate, right, but not always?
Maybe “anticipating facts” and our legitimate thirsts for the factual truth and the perfect methodology for studying how many civilians are killed in wartime need another reference point:
We’re still waiting.
Thank you to sw for that useful taxonomy of ethical violations!
I must say, when I first hear that a scientist has been found guilty of “ethical violations”, I find myself rather suspicious of the scientist, and the science.
But then when someone points out that the particular “ethical violations” in question are entirely irrelevant to any estimation of the science, then I instead find myself very suspicious of the person who casually mentioned the “ethical violations” as a way of trying to undermine the credibility of the science.
Jon Pedersen did not describe the Lancet study’s methodology as “flawed”; his view is simply that it is impossible to conduct a valid survey covering a time period as long as four years. Describing the IFHS survey as “better quality controlled” is very controversial indeed, as this survey a) was carried out by government employees who identified themselves as such and b) did not visit the province of Anbar at all because it was too dangerous to do so.
Steven @ 17: Understood, but the next step, perhaps, is for someone to point out that the “ethical violations” were not in fact “entirely irrelevant” to “any estimation of the science”. See, for example, Stephen Soldz’s ZNet piece: http://www.zcommunications.org.....icle/20890 (and that’s just for starters)
Incidentally, if this Unspeak page had been devoted to Iraq mortality studies, I would have written at greater length about it, rather than briefly include a few pointers to counter Dave’s off-topic assertion that claims about “flawed” methodology were “groundless”.
If anyone is under the illusion that these pointers do not point to things that bring Dave’s “groundless” assertion into question, then I’d be happy (with Steven’s permission) to go into great detail on Pedersen’s opinions and IFHS quality-control (see dsquared @ 18) and the Burnham suspension and AAPOR censure (see sw @ 16), and their implications specifically for evaluation of the Lancet study’s methodology, as implemented.
I’m afraid I can’t see how Soldz’s piece accomplishes the work you claim it does. As for the rest, go for it if you like.
Steven @ 20: I thought Soldz was pretty clear. Had you already read his piece before I linked to it?
Oh, it’s pretty clear what Soldz is saying. I find his attempt to argue that the particular “ethical violations” we are talking about — viz., writing down names and addresses — call into question the reliability of the report as a whole (“this error […] means that […] we can no longer rely upon the Lancet II mortality estimates”) perfectly unconvincing.
But I don’t myself intend to argue any further about this. Feel free to go into your threatened detail and argue with sw and dsquared, if they wish to reply. Do be careful that you are not underestimating the quality of your opposition.
Well, Soldz isn’t arguing that this alone caused him to question the study’s reliability. (Incidentally, Soldz was a long-time supporter of the Lancet study, apparently well-informed on the various criticisms of it – which he’d previously attempted to address by speaking directly with its authors and some of its critics, eg Pedersen. He’s been vocal in defending the study against several criticisms. I mention this just in case anyone thinks that he must be a rightwing pundit or something for finding the Lancet estimate unreliable). Burnham’s documented lapse (the reason for his suspension) seems to have confirmed some of Soldz’s doubts about quality control. When I said he was “clear” I meant specifically about his reasoning (in the postscript on the piece I linked to) in answer to the very point you raise, which he himself stated:
“Several readers have raised the question as to why the lapse committed by Burnham et al. in this study warrants dismissing the entire study. After all, they argue, the lapse of recording names was an ethical lapse, perhaps, but recording extra information should not affect the results. Let me take this opportunity to clarify my reasoning.”
He then explains:
“the Lancet study authors have been less than forthcoming with key details, such as their exact sampling procedure for selecting streets, which, under criticism, they admitted was not accurately described in the published paper. That we now know that another crucial detail, the collection of identifiable information, deviated from the published record, and that the authors failed to correct the public record on the matter until forced to, raises questions about what other aspects of the study may not have been conducted as described. As long as these questions remain, the study cannot be considered reliable.”
As for the comments from sw and dsquared —
sw writes: ‘I love that in the link Bruce supplies there are tons of people saying, “Yeah, but I think the number is lower, ’cause it doesn’t feel right”’
In fact many of the references (in the article I linked to) are peer-reviewed studies criticising the Lancet study, not merely “feeling”-based opinions (as sw characterises it). And if leading researchers such as Jon Pedersen and Paul Spiegel, etc, are offering their opinions (even if they’re not providing formal studies), we perhaps shouldn’t dismiss their views so lightly. We’re talking about some of the leading authorities in the field here, in some cases with expert knowledge of the region. I don’t think sw’s characterisation (“Yeah, but I think the number is lower, ’cause it doesn’t feel right”) accurately covers their contribution.
To address dsquared’s point: whether or not Jon Pedersen used the term “flawed” with regard to the implementation of the Lancet study’s methodology, he clearly thinks there were several problems with it. He stated, for example “I very much agree with the MSB-team that there is some main stream bias” (which points to a problem with the sampling methodology), and that “I find it difficult to separate that problem from a number of other problems in the study. A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.”
As for dsquared’s other point about quality control: I stated that the IFHS had better quality control than the Lancet study. Both Nature and Science journals pointed out problems with the Lancet study’s quality control, as did a number of peer-reviewed studies. Burnham’s suspension is also partly about quality control. (Meanwhile, the AAPOR censure drew attention to the fact that essential details regarding sampling methodology have not been made available to anyone by the Lancet study’s authors – Soldz mentions this important point, which again is partly about quality control). Nobody is arguing that IFHS had perfect quality control, but I don’t see anything like the criticism that the Lancet study has received in published studies, journals etc (the only place I see it is on discussion forums – but perhaps dsquared can point me to scholarly sources that I’ve missed, and we can compare it to criticism of the Lancet study on this matter).
Finally: Soldz suddenly found himself attacked and under “suspicion” for changing his mind about the Lancet study (a study which he’d spent a lot of time and effort researching and defending). He was even accused of providing “propaganda for the mass murderers” because of his criticism of the Lancet study. It goes with the territory of criticising this particular study, unfortunately. And I find that I’m suddenly “suspect” because I “casually” point to things which may counter equally “casual” claims of the “groundlessness” of any suggestion that the Lancet methodology was flawed.
Er, that’s exactly what he did say in the sentence of his I quoted:
(Of course Soldz provides no evidence for his casual accusation of a “possible coverup”.) The lengthy parts of his postscript that you now copy-paste don’t make his argument any more convincing to my mind. YMMV.
You did indeed at #14 casually mention the “ethical violations” without specifying their nature, implying that they constituted grounds for doubting the study’s conclusions. In explanation, you now pray in aid, wholesale, the reasoning of one article on ZNet. This is no longer quite “casual”, though I continue to find it perfectly unpersuasive.
I’ve been following Soldz’s contributions on this topic elsewhere (eg on Soldz’s own site and at the Deltoid forum) – as a result I find it difficult to see his remarks about a “possible coverup”, etc, as “casual”. (Soldz has been one of the most vocal defenders of the Lancet authors on most points).
But I suppose I can see how the ZNet article (in isolation from everything else Soldz has written) gave that impression.
So can I! It gave that impression because he did indeed allege a “cover-up” while offering no reasoning or evidence that a cover-up had occurred. Of course, your allusion to his many other writings on the internet will be relevant here only if they actually contain the reasoning or evidence supporting his cover-up theory that is so glaringly absent here — in which case perhaps you can show it to us?
Steven writes “he did indeed allege a “cover-up” while offering no reasoning or evidence that a cover-up had occurred”
In fact he provides a link to a long piece which quotes public statements from Burnham and JHU supporting his earlier statement (in his ZNet article) that they “implicitly denied in numerous public pronouncements” that IRB protocol was deviated from – and he adds that the linked piece points out that “Burnham’s public statements were, in spirit if not in legalistic wording, not accurate”. He does all that in the ZNet article.
Steven writes: “your allusion to his many other writings on the internet will be relevant here only if they actually contain the reasoning or evidence supporting his cover-up theory
They’d be relevant to the point I made if they demonstrated Soldz’s record of being a loyal defender of the Lancet study (which makes it difficult for me to see his remark as casually made – as I clearly stated). And they do demonstrate this. Forgive me if I casually point to a section of articles at Deltoid (on which Soldz was a frequent poster) rather than do hours of research finding specific posts: http://scienceblogs.com/deltoid/lancetiraq/
I really couldn’t care less if Soldz was, as you keep repeating, “a loyal defender of the Lancet study” before he started attacking it; what is at issue is the quality of his attack in the piece you eventually offered as sole backup for your initial casual insinuation that the “ethical violations” were grounds for doubting the report’s conclusions, and I continue to think the quality very feeble.
I’m sorry if this upsets you, or encourages you to think: “Oh noes, I am being attacked for daring to criticize the Lancet report!”; I am here making no argument about the general reliability of the Lancet report, but I cheerfully reserve the right to point out, as previously, that feeble attacks on it are feeble.
But, to quote your earlier comment, “the particular ‘ethical violations’ in question are entirely irrelevant to any estimation of the science”
Entirely irrelevant? Really? Perhaps you should do an Unspeak piece on “entirely”. I wonder who is being casual here?
Yup. Qua ethical issue, the writing down of identifying information in this context, as sw said, is serious. But it has no bearing at all on a judgment of whether the science of the paper is correct.
Blair said that he would abide by a UN decision, but then he didn’t. According to an earlier commenter, Blair wasn’t lying because he really thought that he would get a positive UN decision. It appears to me that he was lying, because he failed to tell us that his statement only applied when there was a positive UN decision and if he didn’t realise that there was a possibility of a negative decision then he was very stupid indeed. So is this lying? Or is there some other word to describe this kind of deception?
Oh hey we’re back on topic? I think it plausible that Blair sincerely believed he would abide by what he sincerely believed would be the UN decision he wanted, and was able to put out of his mind any possibility that things would not go his way.
Part of the problem, indeed, was always that there was far too much sincere belief floating around in Blair’s head: as he said about the necessity to go to war, “I may be wrong in believing it, but I do believe it.” You might say he had a genius for adjusting his perception of the world to accord with what he believed. For this trait, “lying” is not quite the right word.
And BBC political editor Andrew Marr stated that if the UN didn’t give the green light for war, then it would be a “nightmare scenario” for the UK government. (That “nightmare scenario” later turned into: “France has hijacked the democratic process”). I think it was apparent that commentators like Marr really believed Blair would get his UN go-ahead, and this probably reflected Blair’s own beliefs.
Jon Pedersen on Main Street Bias. This isn’t a serious issue; it’s ginned up by Michael Spagat.
Meanwhile, whatever was written in scholarly journals, it’s clear from the IFHS that they didn’t visit clusters representing 10% of the population because they were too violent. Nobody needs to write a scholarly paper about how that impairs the reliability of your numbers – it’s obvious.
Curious. On the article dsquared links to, it says “Jon Pedersen said:” There then follows a quote which isn’t, in fact, from Pedersen (it’s from Soldz, funnily enough). But this is a misrepresentation of Pedersen’s views in any case. What Pedersen actually said was:
“1) I very much agree with the MSB-team that there is some main stream bias, and that this is certainly an important problem for many surveys – not only the Iraq Lancet one.
2) I am unsure about how large that problem is in the Iraq case – I find it difficult to separate that problem from a number of other problems in the study. A main street bias of the scale that we are talking about here, is very, very large, and I do not think that it can be the sole culprit.
3) The MSB people have come up with some intriguing analysis of these issues.”
(That’s from an email from Pedersen quoted lower down on the page. I’ve personally confirmed with Pedersen himself that these are in fact his views).
Also, dsquared writes: “This isn’t a serious issue; it’s ginned up by Michael Spagat.”
Well, it was serious enough to be the subject of a peer-reviewed paper which was awarded “Article of the Year” by the Journal of Peace Research: http://tinyurl.com/msb-award
But, hey, an award like that, and an endorsement from someone like Pedersen – it really must just amount to more of that “feeble” criticism of the Lancet’s methodology. We can dismiss it out of hand, it’s all “groundless”.
The thing is, if your main tactic in such a debate is the argument from authority, it does look a little self-defeating when you start accusing one of your own authorities of misrepresenting the views of another one.
That’s a cheap shot. Soldz isn’t one of my “authorities”. He simply wrote something which (IMO) refutes your notion that Burnham’s lapse was “entirely irrelevant to any estimation of the science”.
I was at a conference recently where an editor of a prominent British medical journal mentioned an idea heard at another conference (the provenance of this idea is unclear, to me at least); the gist of it is that each published paper should have a box – called, according to this editor, the Oops box – in which the authors can say what sort of shit really happened in their experiment, the things everybody leaves out of their study or glosses over. The editor gave an example, something like “we lost four specimens because the lab tech dropped them, and although we said we would re-test all subjects within four weeks, we were unable to reach two of them until two days after that deadline, but included their data too (we’re not going to underpower our three-year study because of what we consider to be a non-relevant and insignificant violation of protocol)”. You know, the real world stuff that goes on. And it would be exempted from Peer Review; so you get to put your best foot forward, get judged on that, and then, if the paper is accepted, you get the space to acknowledge those intrusions of reality. I suspect that there are very, very few scientists working in any branch of science, and very, very few original articles based on experimentation published even in such journals as Science or Nature without something to say if there were an Oops box.
Why bring up such a point? When the editor suggested it, everybody laughed, but also shuddered a bit. It’s about the problem of science and reality. And it’s about how every methodology is flawed, because it is a methodology: a methodology consists of a series of decisions about how and where there will be approximations, a series of decisions about how nature will be carved and how close you will get to the joint, a series of decisions about how and why the observer will understand the observed – and, as the Oops box shows us, each one of these series is subject to unimagined, unforeseen intrusions of reality, of gravity, of people not checking their phone messages.

Science is all about these methodological flaws, about spotting them and pre-empting them, figuring out how to eliminate them, figuring out how eliminating some flaws creates other flaws elsewhere, and learning from these flaws. In no small way, each of these decisions that make up a methodology is guilty of anticipating the facts. A research protocol, a methodology, is a model for anticipating the facts, recasting some of them as hypotheses, and others as means, instruments, tools towards testing these hypotheses – the genius of science is how this anticipation of the facts becomes knowledge, by turning anticipation of the facts into a form of question; the tragedy of science is that anticipating the facts will also become ignorance – not simply because the more we know, the less we know, but because we commit to the facts we anticipate and cannot always be sure of how this commitment obscures the falsehoods in our knowledge, how our commitment to the question determines the answer. Science is always apologising for itself; hubris and humility walk hand in hand.
It is groundless to dismiss a study for “flawed methodologies” because the term is fundamentally tautologous, to any discerning eye. It is not groundless to show how a flawed study fails to answer the question it seeks to answer, but that requires a lot of work – and I’m not convinced that people I’ve read have done that work with Burnham’s paper (with one exception, or rather, one flaw, one decision). Burnham’s study, and all the others, including IBC, have flawed methodologies – and these methodologies tell us something about war and mortality and epidemiology and “human rights”; and Christ knows some small part of Burnham must have wondered what an Oops Box in his paper would have said; but they are all trying to answer a question – how many Iraqis have died because of the 2003 invasion – whose only answer is “too many”. So, take your pick.
sw writes: “It is groundless to dismiss a study for ‘flawed methodologies’ because the term is fundamentally tautologous, to any discerning eye.”
What utter nonsense. If a study depends on, say, random sampling, but implements that methodology in a way that fails to achieve random sampling, do we have grounds for saying the methodology (or rather its implementation) is flawed? At least one published study says so.
Is it just a case of “oops”? Not when an estimate of 601,000 violent deaths (extrapolated from 300 recorded deaths) is based on a claim by the authors that random sampling did take place.
Bruce, it doesn’t go well for your credibility when you say things like this and don’t mention the issues with respect to the IFHS survey.
Note that Lancet 1 (Roberts et al, 2004) used a GPS-grid methodology that could not possibly have had “Main Street Bias” because it didn’t use streets. However for Lancet 2 (Burnham et al, 2006), the GPS-grid technique wasn’t usable because Iraq was so violent that everyone involved thought that if they walked down the street with a GPS unit they’d be killed.
Burnham et al came up with a sampling scheme based on roads off main streets (not main streets themselves, a misconception of Spagat et al) to deal with this problem. How did the IFHS deal with it?
Basically, by not visiting some of the areas which were considered too dangerous.
Can you see which one of these is more likely to give a non-random sample?
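The distinction dsquared is drawing here can be made concrete with a minimal simulation sketch (all figures invented for illustration, nothing drawn from either survey): a uniformly random sample of clusters versus an exclusion rule that is, by construction, correlated with the very violence being measured.

```python
import random

random.seed(0)

# Hypothetical illustration: 100 clusters whose violent-death rates
# vary widely, so the most violent clusters are precisely the ones
# a survey team might skip as "too dangerous".
clusters = [random.expovariate(1.0) for _ in range(100)]
true_mean = sum(clusters) / len(clusters)

# Scheme A: draw 30 clusters uniformly at random -- noisy, but
# unbiased in expectation.
sample_a = random.sample(clusters, 30)
est_a = sum(sample_a) / len(sample_a)

# Scheme B: exclude the 10 most violent clusters outright, then
# average the rest. The exclusion is nonrandom *and* correlated with
# the quantity being measured, so the estimate is biased downward
# no matter how many clusters are eventually visited.
accessible = sorted(clusters)[:90]
est_b = sum(accessible) / len(accessible)

print(f"true {true_mean:.3f}  random-sample {est_a:.3f}  exclusion {est_b:.3f}")
```

Scheme A can miss high or low on any given draw; Scheme B misses low every time, which is the sense in which the two kinds of nonrandomness are not comparable.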
dsquared writes: Bruce, it doesn’t go well for your credibility when you say things like this and don’t mention the issues with respect to the IFHS survey.
That’s because nobody is arguing that claims of flaws in IFHS are “groundless”.
dsquared: Burnham et al came up with a sampling scheme based on roads off main streets (not main streets themselves, a misconception of Spagat et al)
You talk about “credibility”, but earlier you got it wrong over Jon Pedersen, and here you get it wrong over Spagat et al. It’s easy enough to check – you just have to read their study. From the abstract: “The stated methodology of Burnham et al. is to (1) select a random main street, (2) choose a random cross street to this main street, and (3) select a random household on the cross street to start the process.” http://jpr.sagepub.com/cgi/con.....t/45/5/653
Bruce, you stated that:
the IFHS had better quality control than the Lancet study
Your evidence for this appears to be that the Lancet study had a methodology for selecting households which might (under very tendentious assumptions made by Spagat et al) lead to an unknown degree of nonrandomness in the sample. You really do need an explanation of why this is worse than the IFHS methodology, which clearly selected entire areas for nonsampling, and did so in a very obviously nonrandom way.
Good spot on the Spagat et al paper and I apologise – I misremembered this analysis of Tim’s which demonstrated how absurdly sensitive their results in the award-winning paper were to amazingly extreme and tendentious assumptions.
Bruce, mate, please? Perhaps I might incur the wrath of the unspeak community for saying that your determination that I am speaking nonsense is based on taking what I said, ahem, out of context? Without using a full-stop, I will re-state the argument: I was arguing that one cannot dismiss a study by stating that a study has “flawed methodology” because every study has a “flawed methodology”, for reasons I outlined (and which may be wrong!), but then that the onus is on the critic to explain how a “flawed methodology” specifically undermines the question being asked or the answers proffered, and that this onus of criticism is a thrilling aspect of the scientific project, which ought to be treated as seriously and carefully as every other aspect of the scientific project. Right? Something like that?
Very careful bombing?
Here’s a suggestion for an Unspeak piece which I’ve sent to Steven Poole. I doubt he’ll use it, although it does contain a pretty striking example of Unspeak (in fact it contains two in one brief sentence):
The quote is from Les Roberts, co-author of the Lancet 2006 study – he was explaining why the study didn’t indicate a high spike in deaths during the March 2003 intense bombing period. Source: http://www.dailylobo.com/index.....es_roberts
If you find it more palatable to imagine the words being spoken by Chris Morris on The Day Today War Special, please do so.
? It is perfectly sensible to say that bombing campaigns can be more or less carefully carried out, with respect to the amount of care taken to only bomb genuine military targets (and “genuine military targets”, as opposed to civilian targets spuriously declared to have a military aspect, like airports, isn’t Unspeak either).
Is it “perfectly sensible” to say that shock-and-awe was “very careful”?
“Bruce”, since you are now just ignoring good-faith responses like #43 and #44, you might finally save us all some time on this particular topic if you just linked to your blog devoted to criticising the Lancet reports and the defence of the Lancet reports by Medialens and the silliness of Medialens in general?
Thanks, Steven, but I’m not “ignoring” any responses – or at least I’m not “ignoring” them any more than you “ignore” posts which you choose not to respond to for whatever reason.
On the topic of “good faith”, perhaps you’d like to go the whole hog and reveal whatever you think my identity is? Alternatively, we could ignore such matters and continue with what we were discussing. Or if you want me to stop posting in this, or any Unspeak topics, just say so, and I’ll oblige.
dsquared writes: and “genuine military targets”, as opposed to civilian targets spuriously declared to have a military aspect, like airports, isn’t Unspeak either
Well, there seems to be an assumption there that “military targets” automatically excludes “civilian targets”. Isn’t that just PR built into words?
dsquared @ 43 – I thought I’d already responded at length over quality control (IFHS & Lancet), in post 23, to which you responded @ 34
sw @ 44 – despite your explanation, my response is still what I expressed in post 40. Sorry.
*shrugs
Another resource which may give people some idea why it’s not easy to separate so-called “ethical” issues from the “correctness of the science” in these surveys is Prof Michael Spagat’s long paper on “Ethical and data-integrity problems” in the Lancet 2006 survey (soon to be published in Defence and Peace Economics):
http://tinyurl.com/4xsjtl
I’m aware that Spagat is attacked at every opportunity on certain blogs – usually by people who evidently haven’t read his studies (I doubt for example that dsquared could have made such a fundamental misrepresentation of one of Spagat’s papers – see posts 41 & 42 – if he’d actually read it), but please don’t let that stop you looking into it and weighing things up for yourself.
Bruce, are you really trying to suggest that the seriousness of a problem can be measured by the amount of commentary about it in journals, particularly on a politically sensitive subject? That’s quite silly – for example (and this is not a randomly selected example) it would mean that you would tend to overestimate the importance of tiny and debatable issues that caused huge debate precisely because a substantial proportion of the statistics profession thought they were not important at all, while underestimating the importance of glaring, massive problems that nobody wrote follow-up articles about because they were so obvious.
Think about it; Main Street Bias is a possible, hypothetical source of nonrandomness, where the nonrandomness might or might not be correlated with violence levels. “Not visiting the most violent clusters because they’re too violent bias” is by definition nonrandom and correlated. Your answer above was really quite embarrassing and I thought at the time that it was rather polite of me to ignore it.
@53 – it’s quite embarrassing for Spagat that his new paper is still banging on about the AAPOR non-story. The New Scientist reported at the time,
According to New Scientist’s investigation, however, Burnham has sent his data and methods to other researchers, who found it sufficient. A spokesman for the Bloomberg School of Public Health at Johns Hopkins, where Burnham works, says the school advised him not to send his data to AAPOR, as the group has no authority to judge the research. The “correct forum”, it says, is the scientific literature. …
Michael Spagat, an economist at the University of London and longtime critic of Burnham’s study, told New Scientist that he asked for Burnham’s data and was refused, though he says the detailed data on households has been sent to other researchers…
In fact, in March 2008, AAPOR’s own journal, Public Opinion Quarterly, published an analysis of Burnham’s Iraq survey by David Marker of Westat, a consultancy in Maryland that designs surveys. “I received the dataset they distributed. I also saw presentations they made about their methodology and they responded to a number of inquiries I made,” he says.
Marker says Burnham’s methods were “preferable to most of the other counting methods out there”, albeit not perfect. “I suggested small changes in their procedures that could have produced more easily defensible results, including procedures called for by AAPOR.”
The lack of such procedures, he says, “doesn’t invalidate their estimates, but opens them up to attack from those who don’t like their results.”
http://www.newscientist.com/ar.....imate.html
Um, Dr Evil… David Marker also said:
But Marker is, it should be said, largely supportive of the Lancet study.
Dsquared, I’m sympathetic to your general point (on commentary in journals @54), but I see more evidence of scientific method and informed evaluation in published, refereed articles than in blogs run by people who are clearly advocates for given studies. Most of the criticism of IFHS that I’ve seen has appeared in blogs that fall into that category. It’s not that all of the comment in these blogs is worthless, it’s just that much of it seems bogged down with axe-grinding and adjectives such as “tendentious”, etc. (At the “science” blog, Deltoid, “plucked-from-ass” was one considered opinion of the MSB work, along with “bogus” – difficult to argue on a reasonable basis with such views, especially when the people saying those things evidently haven’t read the studies they’re criticising).
(One “interesting” criticism of IFHS, incidentally, came from John Tirman, who commissioned the Lancet 2006 study. He said the IFHS estimate for violent deaths was “not credible” [1] – curious echoes of Bush’s dismissal of the Lancet study there. That’s the kind of thing Tirman was happy to write in blog-world. In his articles published by Truthout, etc, he was a bit more careful/precise with his language, although by no means error-free – he wrote that IFHS “found” 400,000 excess deaths [2]. In a refereed scientific journal, he wouldn’t get away with that, I’d hope.)
More specifically, you asked me to support my view that IFHS is better quality controlled than Lancet 2006, and I gave you my reasons. I said I’d seen very little scholarly work raising problems with IFHS’s quality control. That doesn’t mean to say there isn’t any. If you want to provide a list of the quality-control problems that you think IFHS has, regardless of whether it’s published work or scrawlings on envelopes, please do so. But remember: nobody is arguing that IFHS is perfect; my point, rather, has been to show that it’s absurd to claim that all the criticisms of the Lancet study’s methodology are “groundless”.
So, there’s a general point about quality control which I’d addressed at #23 (if you don’t like the way I’ve addressed it then I’m sorry, but that can’t be helped). Then there’s the more specific point about sampling methodology (which forms one aspect of quality control), and the even more specific point about Main Street Bias, which you only started to ask me about much later in the thread (you’d earlier tried to dismiss the MSB issue by writing: “This isn’t a serious issue; it’s ginned up by Michael Spagat”, at #34. “Ginned up” – that sounds suspiciously like a casual insinuation?)
One thing to emerge from the Main Street Bias criticism was the admission from the Lancet authors that there was indeed a bias of this type to take into account. As Gilbert Burnham put it:
That’s quite important. The lead author of the Lancet study is saying here that the sampling methodology as implemented included procedures for reducing this bias from “busy streets”. But no such procedures are mentioned in the published account of the sampling methodology, so what exactly were these procedures? To date, as far as I’m aware (and despite requests from numerous researchers, AAPOR, Science and Nature journals, etc) these crucial details haven’t been released by the Lancet authors.
In other words, they haven’t disclosed the basic level of information which is necessary to assess how their claim of giving all households “an equal chance of selection” holds up. If you’re extrapolating from 300 actual violent deaths to 601,000 estimated violent deaths, based on this claimed sample-randomness, then it would seem pretty important that the sampling scheme could be assessed in some way. Currently it can’t be, because nobody outside the Lancet team knows what that sampling scheme entailed.
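The scaling Bruce is describing is simple enough to sketch in a back-of-envelope form. The round figures below approximate public summaries of Burnham et al. 2006; the study’s actual estimator worked with person-years and cluster weighting, so this is an illustrative sketch of the extrapolation, not the published calculation.

```python
# Round figures only (approximations, not the study's own inputs):
recorded_violent_deaths = 300      # violent deaths recorded in the sample
people_surveyed = 12_800           # individuals in the sampled households
population = 26_000_000            # rough population of Iraq at the time

# The scaling is valid only if the sample is representative -- which
# is exactly what the sampling details would be needed to check.
estimate = recorded_violent_deaths / people_surveyed * population
print(round(estimate))
```

On these round numbers the headline figure lands in the region of 600,000, and a shift of a few dozen deaths in the sample moves it by tens of thousands, which is why the randomness claim bears so much weight.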
[1] http://scienceblogs.com/deltoi.....ent-709043
[2] http://www.truthout.org/articl.....ontroversy
@56 – just give it up on AAPOR is my advice. It’s a non-story, as the New Scientist article shows, and you’re embarrassing yourself by keeping on bringing it up (just like Spagat is).
Perhaps you should tell David Marker (whom you quoted) to give up on AAPOR.
According to the news reports, AAPOR last brought such a charge of ethics violation 12 years ago, against rightwing pollster Frank Luntz. I’m not sure whether they’re a serious professional organisation (as Marker seems to think), or part of a shady conspiracy to spread doubt about the validity of the Lancet estimates. Whatever, here’s what AAPOR’s President, Richard Kulka, said over the censure of Burnham:
@56, @59 – evasive resort to ctrl-C/ctrl-V operations is insufficient for your case, I’m afraid. What part of “the school advised him not to send his data to AAPOR, as the group has no authority to judge the research” or “doesn’t invalidate their estimates” did you not understand? Or are you dumb as well as a (quite boring, I may say) troll?
I wonder if the school also advised him not to provide answers to “basic questions about how their research was conducted”? And if so, why would the school do that? If you publish a study which records 300 violent deaths but estimates 601,000 based on a claim of random sampling (in a country ripped to pieces by an illegal war), can you see any good reason why basic details of the sampling methodology (ie how random sampling was achieved) are not made available to any interested party (never mind AAPOR in particular)? See my post #57 for background on this.
Unfortunately the brief New Scientist piece doesn’t address that question. Earlier articles in Nature and Science journals went into a little more depth on the subject.
Incidentally, there’s an error in your post #55. You write: “it’s quite embarrassing for Spagat that his new paper is still banging on about the AAPOR non-story”
Spagat’s paper was written in 2008, so I’d be interested to see how you support your assertion that it “bangs on” about a story that arose in 2009. (Spagat’s paper did of course reference AAPOR’s code, but since Spagat didn’t have a time machine there’s no reference to the 2009 censure). You’re not the first person in this thread to make a criticism about a Spagat paper without making an effort to actually read it.
If you want to provide a list of the quality-control problems that you think IFHS has, regardless of whether it’s published work or scrawlings on envelopes, please do so
The IFHS, as I believe I mentioned, did not visit clusters representing 10% of the Iraqi population because they were too violent. My source for this is the IFHS.
With my good friend Senator Inhofe, I have recently founded an important public body called the Agglomeration of Truthiness in Scientist Harrassment. Burnham is not a member, and nor is his institution, and his institution is indeed at this moment telling him that our organization does not have the authority to judge his work, but I am going to write to Burnham anyway demanding that he send me cloned copies of all his hard drives, plus receipts for any food he has eaten over the past three years. Unfortunately, I’m sure he will decline, so he is hereby pre-emptively CENSURED for laughing in the face of scientific discipline and committing ethicopocalypse, as part of the AoTiSH’s ongoing commitment to servicing democracy everywhere?
With this final, devastating nail in the coffin of the Lancet report’s credibility, I feel that further discussion on the subject would be superfluous.