Scientists criticise Lancet 2006 study
on Iraqi deaths

by Robert Shone

In addition to several research papers and articles (including pieces published in Nature and Science) and some peer-reviewed studies, all critical of the Lancet 2006 study on Iraqi deaths, the following letters of criticism were published in the Lancet in 2007.

[Please note that I am anti-war; however, I think the case against war should not rest on faith in a given study - RS].

The Lancet 2007; 369:102 - Correspondence

Mortality in Iraq

Debarati Guha-Sapir (a), Olivier Degomme (a) and Jon Pedersen (b)

Gilbert Burnham and colleagues' Iraq mortality study(1) fills an important information gap in a country where reliable mortality statistics are rare. It transforms anecdotes of violence into systematic evidence. However, the paper could have addressed some methodological issues which might have strengthened the credibility of the estimates.

First, according to Burnham and colleagues' results, there were nearly 600 war deaths per day—an unusually high number compared with almost any other armed conflict or indeed with other Iraqi mortality estimates.(2) Burnham and colleagues' figure 4, in which cumulative Iraq Body Count deaths parallel their study's mortality rates, is misleading. Rates cannot be compared with numbers, much less with cumulative numbers. The correct comparison would be the one presented here (figure), in which the Iraq Body Count numbers are transformed into rates by period. In that case, there is no similarity between the trends in the study and Iraq Body Count.
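The rates-versus-counts point can be made concrete with a small sketch. The monthly figures below are hypothetical, chosen only for illustration (they are not actual Iraq Body Count data); the population figure is the approximate total used in the study.

```python
# Hypothetical monthly death counts - illustrative only, not real IBC data.
monthly_deaths = [500, 600, 800, 1200]
population = 26_100_000  # approximate population covered by the study

# Cumulative counts rise by construction, whatever the underlying trend.
cumulative = []
total = 0
for d in monthly_deaths:
    total += d
    cumulative.append(total)

# Period rates (deaths per 1000 population per year) are the comparable quantity.
rates = [d * 12 / population * 1000 for d in monthly_deaths]

print(cumulative)                      # [500, 1100, 1900, 3100]
print([round(r, 2) for r in rates])    # [0.23, 0.28, 0.37, 0.55]
```

A cumulative series can only go up, so plotting it next to a rate series will always suggest a parallel upward trend; converting counts into per-period rates, as the letter argues, is what allows a like-for-like comparison.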

Second, the study suggests that, over a 3-year period, around 90% of the deaths were directly related to violence. However, experience from other conflicts indicates that indirect causes (disease, malnutrition) typically outnumber the deaths due to violence (bombs, gunshots, etc).(3) Burnham and colleagues' figure remained high for a long period of time. By comparison, only one of 17 surveys in Darfur reported a similar level of violent deaths, and this level only persisted for 3 months of a 6-month period.(4)

Third, the heterogeneity of the pattern of violence in Iraq argues for a differentiated estimation across the governorates. Insurgency and coalition action is still concentrated mainly in the Sunni triangle, but large tracts in the rest of the country are relatively peaceful. A better accounting for differences in violence by governorate separately and the effect of excluding the Sunni triangle would have strengthened the study.

We declare that we have no conflict of interest.


1. Burnham G, Lafta R, Doocy S, Roberts L. Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey. Lancet 2006; 368: 1421-1428.

2. UN Development Programme. Iraq living conditions survey 2004 (accessed Oct 13, 2006).

3. Coghlan B, Brennan RJ, Ngoy P, et al. Mortality in the Democratic Republic of Congo: a nationwide survey. Lancet 2006; 367: 44-51.

4. Guha-Sapir D, Degomme O. Darfur: counting the deaths. Mortality estimates from multiple survey data. Brussels: Center for Research on the Epidemiology of Disasters, 2006.

The Lancet 2007; 369:101-102 - Correspondence

Mortality in Iraq

Madelyn Hsiao-Rei Hicks (a)

Crucial weaknesses exist in Gilbert Burnham and colleagues' study of Iraq's war-related mortality.(1)

First, 47 clusters seem to be too few for a large population experiencing highly localised violent events.

Second, household sampling within clusters was not random: only households located on or near residential streets crossing a main street had a chance of inclusion,(2) and only if located near the “start household” for that cluster.

Third, it is infeasible that “One team could typically complete a cluster of 40 households in 1 day”. Assuming continuous interviewing for 10 h despite 55°C heat,(3) this allows 15 min per interview including walking between households and obtaining informed consent and death certificates. The improbability of so many interviews being done so quickly and reliance on “word of mouth among households” during selection and recruitment suggest potential sources of bias, ethical compromise, and risk to interviewees during interview-gathering.(4)
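The timing objection is simple arithmetic, using the figures quoted in the letter (40 households per team per day, 10 hours of continuous interviewing):

```python
# Figures quoted in the letter, not assumptions of my own.
households = 40
working_minutes = 10 * 60  # 10 hours of continuous interviewing

# Time available per household, including walking between houses,
# obtaining informed consent, and checking death certificates.
minutes_per_interview = working_minutes / households
print(minutes_per_interview)  # 15.0
```

The 15-minute figure is the entire basis of the feasibility challenge; as the authors' reply later notes, the calculation changes if each team in fact split into pairs working in parallel.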

Iraq's suffering from war is properly reflected not by producing high-mortality findings, but by producing accurate mortality findings. The Iraq Living Conditions Survey(5) provided such an example. In that study, ten randomly sampled households were interviewed per cluster in 2200 clusters across all governorates of Iraq to provide an estimate of conflict-related deaths under the same difficult field conditions.

I declare that I have no conflict of interest.


1. Burnham G, Lafta R, Doocy S, Roberts L. Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey. Lancet 2006; 368: 1421-1428.

2. Johnson NF, Spagat M, Gourley S, Onnela JP, Reinert G. Bias in epidemiological studies of conflict mortality (accessed Dec 19, 2006).

3. Burnham G, Doocy S, Dzeng E, Lafta R, Roberts L. The human cost of the war in Iraq: a mortality study, 2002-2006. Johns Hopkins Bloomberg School of Public Health and Al Mustansiriya University School of Medicine (accessed Oct 23, 2006).

4. Hicks MH. Mortality after the 2003 invasion of Iraq: were valid and ethical field methods used in this survey? (accessed Dec 19, 2006).

5. UN Development Programme. Iraq living conditions survey 2004 (accessed Oct 23, 2006).


The Lancet 2007; 369:101 - Correspondence

Mortality in Iraq

Prabhat Jha (a), Vendhan Gajalakshmi (b), Neeraj Dhingra (c) and Binu Jacob (a)

Gilbert Burnham and colleagues(1) do a commendable study of mortality in Iraq in difficult circumstances. Our concerns are two: the relatively small number of clusters, which might generate random errors, and selection bias if households over-reported mortality during the conflict period. The survey work was done by physicians, and it might well be that households reported mortality in homes other than their own.

To address possible biases, Burnham and colleagues might wish to report three specifics: (a) were the proportions of households who could produce a death certificate similar during the pre-conflict and conflict periods (and did the survey team have any way of assessing whether identifier information on the death certificates matched household details)? (b) was there any specific digit or date preference pattern in the deaths reported in the post-conflict period that might suggest false reporting? and (c) was there any difference in the death rates for the first, middle, and last thirds of the sampling period? (if households wanted to over-report mortality, news of the survey would have spread to other areas only after the survey began).

Similarly, as an additional validity check on rates, they might apply "capture-recapture" methods to their earlier study(2) and their current study in the areas common to both samples for the pre-conflict period. A general weakness of the method was the lack of resampling by independent teams. Our large-scale mortality studies in India(3-5) find that a repeat survey of at least 5-10% of households provides far more stable cause-specific mortality rates than do single surveys.
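In its simplest two-sample form, the capture-recapture method referred to here is the Lincoln-Petersen estimator: the overlap between two independent lists of deaths is used to estimate how many were missed by both. A minimal sketch, using made-up counts rather than data from either Iraq survey:

```python
def lincoln_petersen(n1, n2, m):
    """Estimate total deaths from two overlapping, independently collected lists.

    n1: deaths found by survey 1
    n2: deaths found by survey 2
    m:  deaths appearing on both lists (must be > 0 for the estimator to work)
    """
    if m == 0:
        raise ValueError("no overlap between lists: estimator undefined")
    return n1 * n2 / m

# Hypothetical example: two surveys of the same area and period.
# If 60 of the deaths are on both lists, the estimated true total is 300.
print(lincoln_petersen(120, 150, 60))  # 300.0
```

The authors' reply explains why this check was not possible in practice: the sampled fraction of the population in each survey was too small for the two samples to overlap.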

We declare that we have no conflict of interest.


1. Burnham G, Lafta R, Doocy S, Roberts L. Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey. Lancet 2006; 368: 1421-1428.

2. Roberts L, Lafta R, Garfield R, Khudhairi J, Burnham G. Mortality before and after the 2003 invasion of Iraq: cluster sample survey. Lancet 2004; 364: 1857-1864.

3. Registrar General of India. Special fertility and mortality study, 1998: a report of 1·1 million households. New Delhi: Registrar General, 2005.

4. Jha P, Gajalakshmi V, Gupta PC, et al, for the RGI-CGHR Prospective Study Collaborators. Prospective study of one million deaths in India: rationale, design, and validation results. PLoS Med 2006; 3: e18.

5. Gajalakshmi V, Peto R, Kanaka S, Jha P. Smoking and mortality from tuberculosis and other diseases in India: retrospective study of 43000 adult male deaths and 35000 controls. Lancet 2003; 362: 507-515.

The Lancet 2007; 369:101 - Correspondence

Mortality in Iraq

Johan von Schreeb (a), Hans Rosling (a) and Richard Garfield (b)

The uncertainty of estimates from retrospective mortality surveys in humanitarian emergencies is composed of both sampling and reporting errors. Gilbert Burnham and colleagues, in their mortality study in Iraq (Oct 21, p 1421),(1) quantify the sampling error, but the security situation did not allow for the supervision and repeat interviews needed to estimate reporting errors.

Over-reporting of deaths was regarded as limited because 92% of reported deaths were supported by death certificates, but Burnham and colleagues do not report who issued these certificates. Neither do they discuss why the availability of death certificates increased from 81% in 2004.(2)

The existence of a substantial reporting error is supported by the finding of low child mortality. The study population only reported 54 non-violent deaths in those younger than 15 years, and 1474 births—ie, an under-15 mortality of 36 per 1000 births. This is a third of the estimated preinvasion under-5 mortality.(3) Since nothing indicates that child mortality has decreased,(4) the results suggest that fewer than half of child deaths were reported.
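The under-15 figure in this paragraph follows directly from the two counts quoted (54 non-violent deaths in under-15s, 1474 births):

```python
# Counts quoted in the letter, taken from the study's reported data.
nonviolent_under15_deaths = 54
births = 1474

# Deaths per 1000 births over the study period.
deaths_per_1000_births = nonviolent_under15_deaths / births * 1000
print(round(deaths_per_1000_births, 1))  # 36.6, the "36 per 1000 births" in the letter
```

Comparing that figure with the much higher pre-invasion under-5 mortality estimate is what leads the correspondents to infer substantial under-reporting of child deaths.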

Without an explanation for the high availability of death certificates, one could assume that the reporting error is of the same size as the sampling error (±30%). This assumption still yields at least a five-fold higher number of violent deaths than the passive surveillance mortality numbers.(5) If the death certificates are valid and the availability above 90%, it seems better to monitor mortality by compiling data from the local agencies that issue these certificates than by doing further dangerous household surveys.

We declare that we have no conflict of interest.


1. Burnham G, Lafta R, Doocy S, Roberts L. Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey. Lancet 2006; 368: 1421-1428.

2. Roberts L, Lafta R, Garfield R, Khudhairi J, Burnham G. Mortality before and after the 2003 invasion of Iraq: cluster sample survey. Lancet 2004; 364: 1857-1864.

3. Ali MM, Blacker J, Jones G. Annual mortality rates and excess deaths of children under five in Iraq, 1991-98. Popul Stud 2003; 57: 217-226.

4. UNICEF. The State of the world's children 2007. New York: United Nations Children's Fund, 2006.

5. Iraq Body Count (accessed Dec 18, 2006).

The Lancet 2007; 369:102-103 - Correspondence

Mortality in Iraq

Josh Dougherty

Gilbert Burnham and colleagues state in their latest Iraq mortality study(1) that the US Department of Defense (DoD) has published civilian death estimates and that these corroborate their findings. Burnham and colleagues are mistaken in these assertions.

The claimed corroboration is illustrated by their figure 4, which compares trends in their data with those from the DoD and truncated data from Iraq Body Count. The original DoD data seem to be sourced from a graph on page 32 of the Aug 29, 2006, “Measuring stability and security in Iraq” report(2) published by the DoD. However, Burnham and colleagues' assertion that the DoD “estimated the civilian casualty rate at 117 deaths per day” is mistaken, as is their figure 4, which repeats this error in graphic form.

These data refer to Iraqi civilians and security-force personnel, not just to civilians, and to casualties (ie, deaths or injuries), not just deaths. The DoD numbers do not refer to Iraqi “deaths per day” and do not offer any direct means by which to calculate what number might be deaths, let alone civilian deaths. What is clear, however, is that the number in the DoD data is unlikely to be anywhere close to 117, as can be confirmed by a cursory analysis of the blue “Coalition” columns included alongside those for Iraqis in the DoD graph. These columns show that non-Iraqi Coalition forces have suffered roughly 17000 “casualties” since January, 2004. The current official total for all Coalition deaths since the beginning of the conflict in March, 2003, stands at just over 3000, or less than 20% of Burnham and colleagues' interpretation of these figures.
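Dougherty's plausibility check reduces to a single division, using the two figures he cites (roughly 17000 Coalition "casualties" in the DoD graph versus just over 3000 official Coalition deaths):

```python
# Figures quoted in the letter (both approximate).
coalition_casualties = 17_000  # deaths plus injuries, per the DoD graph, since Jan 2004
coalition_deaths = 3_000       # official Coalition death toll since March 2003

# If "casualties" meant deaths, this ratio would be close to 1; it is under 0.2.
fraction_deaths = coalition_deaths / coalition_casualties
print(round(fraction_deaths, 2))  # 0.18
```

Since deaths make up well under 20% of "casualties" for Coalition forces, reading the Iraqi "casualty" columns as deaths per day, as the study's figure 4 did, cannot be correct.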

I declare that I have no conflict of interest.


1. Burnham G, Lafta R, Doocy S, Roberts L. Mortality after the 2003 invasion of Iraq: a cross-sectional cluster sample survey. Lancet 2006; 368: 1421-1428.

2. US Department of Defense. Measuring stability and security in Iraq. Washington: US Department of Defense, 2006.


The Lancet 2007; 369:103-104 - Correspondence

Mortality in Iraq – Authors' reply

Gilbert Burnham (a), Riyadh Lafta (b), Shannon Doocy (a) and Les Roberts (a)

Johan von Schreeb and colleagues point out that our interviewed households were only visited once. This is true, but past efforts at repeat interviewing in Iraq, Liberia, and Zaire have yielded more deaths on a second interview.(1–3)

The deaths that were not confirmed by a certificate were too few to do any meaningful trend analysis. The higher proportion of death certificate confirmations in our second study might reflect the re-establishment of issuance procedures in recent years. In the 2004 data (but not in the 2006 data), the period most associated with no death certificates was the weeks following the invasion, and this period constituted a smaller fraction of all deaths in the second study.

The observation that the non-violent death rate among those younger than 15 years is lower than in 2004 (4·8 per 1000 per year) and strikingly lower than during a period of sanctions a decade earlier is interesting but ignores the fact that this rate is similar to those reported by the UN for 2004 in Syria and Kuwait.(4) We openly acknowledge in the Discussion section of the paper that under-reporting of deaths might have occurred. In a stable and functioning environment where death certificates are universally completed and fully accessible, the method von Schreeb and colleagues suggest would be an excellent check on under-reporting from survey or census results.

Prabhat Jha and colleagues and Madelyn Hsiao-Rei Hicks express concern about the number of clusters. The confidence intervals presented in the paper were calculated with robust standard errors that account for cluster sampling. Virtually all clusters experienced violent deaths (figure), and if visiting only 47 randomly selected areas missed significant areas of violence, this can only mean that our estimate is too low.

Our criteria for a household death included the decedent having lived with the interviewed family continuously for the 3 months before death. We suspect that this definition would have excluded deaths rather than included deaths from other households, as Jha and colleagues suggest. Death certificate documentation was high both before and after the invasion period, and it is not plausible that people fabricated death certificates after hearing about our study because (a) most small cities and villages were visited for only one day, and (b) interviewing began just minutes after the selection of a specific location. For such mass falsification to happen, millions of Iraqis would have needed to fabricate death certificates, and their motive for doing so is not clear.

In both this survey and the 2004 survey, the proportion of the entire population surveyed was so small that a capture-recapture analysis is not possible. We agree that revisiting households with different interviewers would be ideal but security concerns made this seem imprudent.

Although, as Hicks states, two governorates were not sampled, the rates calculated were applied to the population of Iraq minus the population of these governorates, or 26·1 million.

If neighbourhoods selected also included residential streets that did not cross main streets, these additional streets were also included in the random sampling process. The results of this current survey closely paralleled those from the 2004 survey which selected start houses by global positioning system (GPS) methods, suggesting that random selection of start households used in this study did not introduce a measurable systematic bias. Selecting the house with the nearest front door is a standard field method, and the author supervising field work ensured this approach was consistently followed. Sampling in each cluster involved two teams working together. With this arrangement, sampling 40 households in a day was indeed feasible.

The Iraq Living Conditions Survey was carried out after barely a year of conflict by government employees who did interviews that took 82 min on average.(1) The survey included 2200 clusters to provide development information by governorate. Only one question was asked about deaths, and one of the senior researchers in that study has stated that he knows his estimate was an underestimate.(1) He knows this because the baseline non-violent death rate measured was implausibly low and because revisits to ask about mortality in those younger than 5 years in a sample of the same houses after survey completion found about 50% more deaths than initially reported.

Josh Dougherty and Debarati Guha-Sapir and colleagues all point out that figure 4 of our report mixes rates and counts, creating a confusing image. We find this criticism valid and accept this as an error on our part. Moreover, Dougherty rightly points out that the data in the US Department of Defense source were casualties, not deaths alone. We regret this labelling error. But the graph presented by Guha-Sapir and colleagues uses a scale that masks the fact that there are roughly three times as many deaths reported by Iraq Body Count in recent months as during the same post-invasion months of 2003. We had wanted to show that the three sources all similarly pointed to an escalating conflict, but neither graph shows that well, and we regret the confusion that this created.

Among the comments by Guha-Sapir and colleagues, we do not see the inconsistency they describe. First, by looking at mortality data from the Democratic Republic of Congo collected more than a year after the main conflict was resolved and the warring armies returned home, they conclude that the ratio of indirect excess deaths to violent deaths seems low in Iraq. We feel a better comparison would be to the data collected during that war which showed that 1·8% of the 19·9 million people in the eastern part of the country died of violence in the first 33 months of the conflict, a proportion similar to that measured in Iraq.(5)

It is believed that the population of Iraq is not as susceptible to death from malnutrition and disease as that of Darfur. Wars in countries with widespread access to high-powered weaponry, such as Kosovo and Bosnia, where violence accounted for most excess wartime deaths, are more fitting comparisons.

We declare that we have no conflict of interest.


1. UN Development Programme. Iraq living conditions survey 2004 (accessed Nov 29, 2006).

2. Becker SR, Diop F, Thornton JN. Infant and child mortality in two counties of Liberia: results of a survey in 1988 and trends since 1984. Int J Epidemiol 1993; 22: S56-S63.

3. Taylor WR, Chahnazarian A, Weinman J, et al. Mortality and use of health services surveys in rural Zaire. Int J Epidemiol 1993; 22: S15-S19.

4. WHO. Life tables for WHO member states (accessed Dec 18, 2006).

5. Roberts L, Belyakdoumi F, Hale C, et al. Mortality in eastern Democratic Republic of Congo: results from eleven mortality surveys. New York: International Rescue Committee, 2001 (accessed Nov 28, 2006).

Nature journal:
Death toll in Iraq: survey team takes on its critics

Jim Giles

Nature 446, 6-7 (1 March 2007) | doi:10.1038/446006a

Raw data should settle arguments over study methods.

It's not often that George W. Bush takes time out to attack a scientific paper on the day that it's released. But then few papers attract as much attention as the one that claimed that more than half a million people, or 2.5% of the population, had died in Iraq as a result of the 2003 invasion. Published last October in the run-up to the US mid-term elections, the interview-based survey attracted huge press interest and controversy.

The media spotlight has moved on, but interest within the scientific community has not. The paper has been dissected online, graduate classes have been devoted to it and critiques have appeared in the literature with more in press. So far, the discussion has created more heat than light. Many of the criticisms that dogged the study are unresolved. For example, Nature has discovered that different authors give conflicting accounts of exactly how the survey was carried out. And although many researchers say the questions hanging over the study are not substantial enough for it to be dismissed, a vocal minority disagrees.

The controversy creates extra interest in the authors' decision, made last week, to release the raw data behind the study. Critics and supporters will finally have access to information that may settle disputes.

On paper, the study seems simple enough. Eight interviewers questioned more than 1,800 households throughout Iraq. After comparing the mortality rate before and after the invasion, and extrapolating to the total population, they concluded that the conflict had caused 390,000–940,000 excess deaths (G. Burnham, R. Lafta, S. Doocy and L. Roberts Lancet 368, 1421–1428; 2006). This estimate was much higher than those based on media reports or Iraqi government data, which put the death toll at tens of thousands, and the authors, based at Johns Hopkins University in Baltimore, Maryland, and Al Mustansiriya University in Baghdad, have found their methods under intense scrutiny.

Much of the debate has centred on exactly how the survey was run, and finding out exactly what happened in Iraq has not been straightforward. The Johns Hopkins team, which dealt with enquiries from other scientists and the media, was not able to go to the country to supervise the interviews. And accounts of the method given by the US researchers and the Iraqi team do not always match up.

Several researchers, including Madelyn Hicks, a psychiatrist at King's College London, recently published criticisms of the study's methodology in The Lancet (369, 101–105; 2007). One key question is whether the interviews could have been done in the time stated. The October paper implied that the interviewers worked as two teams of four, each conducting 40 interviews a day — a very high number given the need to obtain consent and the sensitive nature of the questions.

The US authors subsequently said that each team split into two pairs, a workload that is "doable", says Paul Spiegel, an epidemiologist at the United Nations High Commission for Refugees in Geneva, who carried out similar surveys in Kosovo and Ethiopia. After being asked by Nature whether even this system allowed enough time, author Les Roberts of Johns Hopkins said that the four individuals in a team often worked independently. But an Iraqi researcher involved in the data collection, who asked not to be named because he fears that press attention could make him the target of attacks, told Nature this never happened. Roberts later said that he had been referring to the procedure used in a 2004 mortality survey carried out in Iraq with the same team (L. Roberts et al. Lancet 364, 1857–1864; 2004).

Other arguments focus on the potential for 'main-street bias', first proposed by Michael Spagat, an expert in conflict studies at Royal Holloway, University of London. In each survey area, the interviewers selected a starting point by randomly choosing a residential street that crossed the main business street. Spagat says this method would have left out residential streets that didn't cross the main road and, as attacks such as car bombs usually take place in busy areas, introduced a bias towards areas likely to have suffered high casualties.

The Iraqi interviewer told Nature that in bigger towns or neighbourhoods, rather than taking the main street, the team picked a business street at random and chose a residential street leading off that, so that peripheral parts of the area would be included. But again, details are unclear. Roberts and Gilbert Burnham, also at Johns Hopkins, say local people were asked to identify pockets of homes away from the centre; the Iraqi interviewer says the team never worked with locals on this issue.

Many epidemiologists say such discrepancies are understandable given that Roberts and Burnham could not directly oversee the survey, and do not justify accusations that the process was flawed. For those who disagree, access to the raw data is essential. Although previously reluctant to release them, Roberts and Burnham now say they are removing information that could be used to identify interviewers or respondents and will release the data within the next month to people with appropriate "technical competence".

One researcher keen to see the numbers is Spagat. The 2004 survey used GPS coordinates instead of the main-street system to identify streets to sample, and when Spagat used the limited data available so far to compare the two studies for the period immediately following the invasion, he found that the 2006 study turned up twice as many violent deaths, suggesting that main-street bias may be present.

Roberts and others question Spagat's methods. But the issue could be checked using the raw data. If main-street bias exists, says Spagat, then death rates will fall as the interviews move away from the main street.

The raw data may also help address a fear that some researchers are expressing off the record: that the Iraqi interviewers might have inflated their results for political reasons. That could show up in unusual patterns within the data.

Roberts and Burnham say they have complete confidence in the Iraqi interviewers, after working with them directly for the 2004 study. And supporters say that criticisms should not detract from the fact that the Iraqi team managed to produce a survey under extremely difficult circumstances. Security threats forced the team to change travel plans and at one point to consider cancelling the survey altogether. Since its completion, one interviewer has been killed and another has left Baghdad, although it is not known whether either case is linked to their involvement in the survey. Either way, the continuing violence in the country is enough for the remaining interviewers to say that they are not willing to repeat the exercise.
