Sample / survey methodology

The statistical method used in the Lancet study to estimate deaths relies on the selection of a representative sample from the population. This section provides information about how this was addressed in the Lancet study.

1: Is the sample size large enough to yield reliable estimates?

Summary: The sample size is sufficiently large to establish that the number of excess deaths in Iraq is 95 percent certain to be in the interval of 393-943 thousand, and much more likely to be near 655 thousand than either of these extremes. The use of a relatively small number of clusters does not "bias" the results but explains why the estimates are less precise than in some other studies.

The study was based on a "cluster sample survey" of 1849 households with 12,800 members. The households were distributed in 47 clusters across the whole of Iraq.

Some commentators have claimed that this makes the results unreliable. The UK Prime Minister's spokesman said:

The problem with this was that they were using an extrapolation technique, from a relatively small sample

Similarly, Steven Moore wrote a Wall Street Journal Op-ed describing the survey as "bogus" largely because it used "an extraordinarily small number of cluster points".

The sample size affects the precision of the estimate

Fortunately, it is not necessary to speculate about the effects of the small sample size. Any sampling leads to some uncertainty about the "true" number in the total population. However, standard statistical methods make it possible to put upper and lower bounds on the numbers that are consistent with the sample (the "confidence interval"). In the MIT/Bloomberg study, the number of excess deaths is 95 percent certain to be in the interval 393-943 thousand.

While there is uncertainty about the exact number in this range, the most likely number is 655 thousand excess deaths. Numbers close to the extremes of the interval are very unlikely. For example, there is only a 1/6 chance that the number is lower than 500 thousand excess deaths, and numbers higher than 655 thousand are just as likely as lower numbers.
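To illustrate how such probability statements follow from a confidence interval, the sketch below fits a simple normal approximation to the reported interval. This is our own illustrative assumption: the study's actual uncertainty distribution is somewhat asymmetric, so exact tail probabilities (such as the 1/6 figure above) differ a little from this toy calculation.

    from statistics import NormalDist

    # Illustrative normal approximation to the study's uncertainty (our
    # assumption; the actual distribution is somewhat asymmetric).
    lo, hi = 393_000, 943_000            # reported 95% confidence interval
    point_estimate = 655_000

    # A normal 95% interval spans +/- 1.96 standard errors.
    se = (hi - lo) / (2 * 1.96)
    dist = NormalDist(mu=point_estimate, sigma=se)

    print(f"approximate standard error: {se:,.0f}")
    print(f"P(excess deaths < 500,000) ~ {dist.cdf(500_000):.2f}")
    print(f"P(excess deaths > 655,000) ~ {1 - dist.cdf(655_000):.2f}")  # 0.50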

A larger sample would have narrowed the interval. The results nonetheless are highly informative, showing that a very large number of excess deaths have occurred in Iraq since the 2003 invasion.

A small sample size does not "bias" the results

Steven Moore also misunderstands the effects of a small sample when he writes:

With so few cluster points, it is highly unlikely the Johns Hopkins survey is representative of the population in Iraq.

An unrepresentative sample leads to "bias", i.e., a systematic under- or over-estimation of the true number. However, standard statistics shows that this is unrelated to the sample size. Provided the sample is "random" -- meaning that all members of the population are equally likely to be included in the sample -- it is unbiased whether the number of clusters is 5, 47, or 470.

With a random sample, the only effect of the sample size is to widen the confidence interval. As noted above, with 47 clusters the study shows with high certainty that hundreds of thousands of deaths have occurred in Iraq.
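This point can be checked directly by simulation. The sketch below (our illustration, using a synthetic population rather than the study data) draws cluster samples of different sizes and shows that the average estimate stays centred on the true value for any number of clusters, while the spread of the estimates shrinks as the number of clusters grows:

    import random

    # Toy simulation (our illustration, synthetic data): a population of
    # 1,000 "clusters", each with its own death rate. Random samples of k
    # clusters are unbiased for any k; only the spread changes.
    random.seed(1)
    population = [random.gauss(10, 4) for _ in range(1000)]
    true_mean = sum(population) / len(population)

    for k in (5, 47, 470):
        estimates = [sum(random.sample(population, k)) / k
                     for _ in range(10_000)]
        mean = sum(estimates) / len(estimates)
        sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
        print(f"{k:3d} clusters: mean estimate {mean:5.2f} "
              f"(true {true_mean:5.2f}), spread {sd:4.2f}")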

The required sample size does not vary with the total population

The same Wall Street Journal Op-ed also claimed that the number of clusters used by the sample is too small because the population of Iraq is large:

Another study in Kosovo cites the use of 50 cluster points, but this was for a population of just 1.6 million, compared to Iraq's 27 million.

This also is a misunderstanding of statistics. The number of observations or clusters required for a particular degree of precision does not depend on the total population. For example, opinion polls in the UK typically use around 1,000 respondents, but polls in the United States use the same number even though the US population is more than four times as large as that of the UK. Similarly, when surveying a small town of just 5,000 people it would not be adequate to ask just a handful of people -- 1,000 individuals still would be required to yield the same precision as a nationwide opinion poll of that size. In brief: it is the size of the sample that matters, not the size of the total population surveyed.
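The standard formula for the margin of error of a sampled proportion makes this concrete. In the sketch below (our illustration), the finite-population correction is the only place the total population size enters, and it is negligible whenever the population is much larger than the sample:

    import math

    # Margin of error for a sampled proportion p with n respondents; the
    # total population N enters only through the finite-population
    # correction, which is ~1 whenever N >> n.
    def margin_of_error(n, N, p=0.5):
        fpc = math.sqrt((N - n) / (N - 1))
        return 1.96 * math.sqrt(p * (1 - p) / n) * fpc

    for N in (60_000_000, 300_000_000):   # UK- and US-sized populations
        print(f"N = {N:>11,}: +/- {margin_of_error(1000, N):.3f}")

For n = 1,000 the margin of error is about plus or minus 3 percentage points whether N is 60 million or 300 million.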

Biostatistician Steve Simon explains these considerations in a nice analogy:

"Every cook knows that it only takes a single sip from a well-stirred soup to determine the taste. It's a nice analogy because you can visualize what happens when the soup is poorly stirred.

With regards to why a sample size characterizes a population of 10 million and a population of 10 thousand equally well, use the soup analogy again. A single sip is sufficient both for a small pot and a large pot."

2: Does the study sample accurately represent the Iraqi population?

The requirement for a sample that accurately represents the Iraqi population is essentially that all members of the population have an equal chance of being included in the sample. If this is the case, the sample is said to be "random", which in turn allows standard statistical techniques to be used for inferences about overall mortality.

The Lancet article contains a compressed discussion of the method by which the sample was constructed for the mortality study (p. 2):

  1. 50 clusters were selected systematically by Governorate with a population proportional to size approach, on the basis of the 2004 UNDP/Iraqi Ministry of Planning population estimates.
  2. Each Governorate's constituent administrative units were listed by population or estimated population, and location(s) were selected randomly proportionate to population size.
  3. A main street was randomly selected within the administrative unit from a list of all main streets.
  4. A residential street was then randomly selected from a list of residential streets crossing the main street.
  5. On the residential street, houses were numbered and a start household was randomly selected.
  6. From this start household, the team proceeded to the adjacent residence until 40 households were surveyed.

The resulting sample is a random sample provided that none of the steps means that some households are more likely to be included in the sample than others.
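As an illustration of step 1, the sketch below implements one standard form of systematic probability-proportional-to-size (PPS) selection, using the Table 1 population figures. This is our reconstruction of the general technique named in the article, not the team's actual procedure, which the paper only summarises (of the 50 clusters selected, 47 were ultimately surveyed):

    import random

    # Our reconstruction (illustrative) of systematic PPS selection: lay
    # the Governorates end to end along a line of cumulative population,
    # then take equally spaced points from a random start. Each
    # Governorate receives clusters roughly in proportion to its population.
    populations = {
        "Baghdad": 6_554_126, "Ninewa": 2_554_270, "Basrah": 1_797_758,
        "Sulamaniyah": 1_715_585, "Thi-Qar": 1_493_781,
        "Babylon": 1_472_405, "Erbil": 1_418_455, "Diyala": 1_392_093,
        "Anbar": 1_328_776, "Salah al-Din": 1_119_369, "Najaf": 978_400,
        "Wassit": 971_280, "Qadissiya": 911_640, "Tameem": 854_470,
        "Missan": 787_072, "Kerbala": 762_872, "Muthanna": 554_994,
        "Dahuk": 472_238,
    }

    def pps_allocate(populations, n_clusters, seed=0):
        random.seed(seed)
        step = sum(populations.values()) / n_clusters
        start = random.uniform(0, step)
        points = [start + i * step for i in range(n_clusters)]
        counts = {}
        items = iter(populations.items())
        name, pop = next(items)
        upper = pop
        for pt in points:            # points are in ascending order
            while pt >= upper:       # advance to the Governorate
                name, pop = next(items)   # containing this point
                upper += pop
            counts[name] = counts.get(name, 0) + 1
        return counts

    print(pps_allocate(populations, n_clusters=50))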

If one of the steps caused some types of household to be more likely to be in the sample, this could lead to "bias", a statistical term that refers to systematic over- or under-estimation of the true number. For this to substantially affect the results, the sample would have to systematically select households with significantly different mortality characteristics from the "average" household. Some commentators have contended that the use of main streets to construct the sample could be one such source of bias. We discuss this possibility here.

There also has been a good deal of misunderstanding of the method used. For example, some commentators have suggested that "administrative units" somehow meant all clusters were in urban areas. This is not the case: about one-quarter of the clusters were in rural areas (see discussion here).

We discuss more general possible sources of bias in the sample here.

3: What are potential sources of bias in the study?

All surveys have potential for bias if not all households were equally likely to be included in the sample. The Lancet study was conducted in very dangerous conditions, which constrained various aspects of the sampling process (for example, the study team had used GPS units to choose random locations for clusters in 2004, but in 2006 this was felt to be too dangerous).

In the article in the Lancet, the authors discuss how this and a number of other considerations could have affected the results, including the following issues:

  1. Restrictions on survey resources. Security concerns for the interviewers restricted the size of teams, number of supervisors, and the length of time spent in each household location, which in turn affected the size and nature of questionnaires. A common problem in mortality surveys is under-reporting of deaths, especially deaths of women or young children. In general, limited time therefore would serve to underestimate the number of deaths, although the size of any such bias is unclear.

  2. Unavailable households. The researchers felt it was too dangerous to call back at households that were not available on the initial visit. The direction of any resulting bias is unclear, but the effect is in any case small, as the number of households not responding was very small (only 15 out of 1,849; see the sensitivity sketch after this list).

  3. Response bias. If households declining to answer the survey have different mortality characteristics than other households, a bias could be introduced. However, the response rate was very high (>99%), so the size of any such bias would be very small.

  4. "Hidden deaths". Some deaths may not have been reported, especially in households with combatants killed. This would lead to an underestimtae of mortality.

  5. Reporting of false deaths. If respondents stated that deaths had taken place that had in fact not, this would lead to an overestimate of mortality. However, as death certificates were available in 92% of the cases asked for (and in 80% of cases overall) any such effect would be small.

  6. Under-reporting categories of deaths. Under-reporting of infant deaths is a wide-spread concern in surveys of this type. This would lead to an underestimate of mortality, and to a skewed distribution of the sex and age of deaths.

  7. Survivor bias. Where entire households had been killed no deaths would be recorded. This would lead to an underestimate of mortality.

  8. Emigration. Large-scale emigration out of Iraq would reduce the population size compared to the mid-2004 population estimate used in the study to calculate the total number of excess deaths. This would over-estimate total excess deaths. It also could bias the death estimate if emigrating households had particularly high or low mortality, although the direction of any such bias is unclear.

  9. Migration within Iraq. Migration from areas with high mortality to areas with low mortality could theoretically affect the results. However, the survey recorded demographic data and found a similar number of households with in-migration as with out-migration. The effect therefore is likely to be small and the direction of any bias unclear.

  10. Exclusion of Duhuk and Muthanna. The researchers treated the missing clusters from two provinces by assuming that no excess deaths occurred in those provinces. This leads to an underestimate of mortality. However, as the provinces are known to be less violent than average and small relative to the total population, the underestimate also is likely to be small (see this FAQ for more discussion).

  11. Interview bias. The article explains:

    "the potential exists for interviewers to be drawn to especially affected houses through conscious or unconscious processes. Although evidence of this bias does not exist, its potential cannot be dismissed.

    (The reverse also is possible, that interviewers avoided dangerous areas with more deaths within clusters, which would exert a downward bias on the estimate.)

  12. "Recall bias". Families might not accurately recall all deaths in the period 2002-2006. This could lead to a general underestimate of mortality rates. However, if recall bias is different over time, so that deaths in 2002 are less likely to be recalled than deaths in later years, then the change in mortality would be overestimated and the number of excess deaths bias upward. The size of this potential effect cannot easily be measured, but is likely to be limited as deaths were recorded by death certificates. It would be large if households systematically were unable to recall deaths of household members within a five-year time window.

  13. Misclassification of deaths. It is possible that either the circumstances or sex of the decedents was misreported, either because the household was mistaken or because of misunderstandings in the survey. This would not affect the total death estimate (and thus is not a source of bias) but could lead to an inaccurate break-down of the cause of death, or other similar characteristics.
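The quantitative point behind items 2 and 3, mentioned above, is that a handful of missing households cannot move the estimate much, whatever their characteristics. A minimal sensitivity check (our illustration; the deaths-per-household figure is a made-up assumption, not study data):

    # Worst-case sensitivity check (our illustration) for the 15 selected
    # households that could not be interviewed, out of 1,849.
    surveyed, missing = 1_849 - 15, 15
    deaths_per_household = 0.3     # illustrative assumption, not study data

    reference = (surveyed + missing) * deaths_per_household
    for assumed in (0.0, deaths_per_household, 4 * deaths_per_household):
        total = surveyed * deaths_per_household + missing * assumed
        print(f"missing households at {assumed:.1f} deaths each: "
              f"total shifts by {total / reference - 1:+.1%}")

Even assuming the unavailable households had four times the average number of deaths shifts the total by only a couple of percent.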

In addition to these concerns, other commentators have asked whether additional sources of bias exist, including "main street bias" (see this FAQ).

4: Did the study survey an unusually violent area of Iraq?

The UK Prime Minister's Official Spokesman criticised the survey for not sampling all of Iraq:

"The problem with this was that they were using an extrapolation technique, from a relatively small sample, from an area of Iraq which was not representative of the country as a whole." link

Contrary to these statements, the study did in fact survey households in all of Iraq (not just "an area of Iraq"). The observations were spread across all of Iraq's governorates, with a larger number of households surveyed in Governorates with larger populations. This population-weighting meant that all households in Iraq had an equal chance of being included in the sample, a prerequisite for a representative sample.

The detailed distribution of observations is shown in this table, reproducing the information in Table 1 in the study:

Governorate     Mid-year 2004 population     Number of clusters
Baghdad         6,554,126                    12
Ninewa          2,554,270                    5
Basrah          1,797,758                    3
Sulamaniyah     1,715,585                    3
Thi-Qar         1,493,781                    3
Babylon         1,472,405                    3
Erbil           1,418,455                    3
Diyala          1,392,093                    3
Anbar           1,328,776                    3
Salah al-Din    1,119,369                    2
Najaf           978,400                      2
Wassit          971,280                      1
Qadissiya       911,640                      1
Tameem          854,470                      1
Missan          787,072                      1
Kerbala         762,872                      1
Muthanna        554,994                      0
Dahuk           472,238                      0
Total           27,139,584                   47

Some commentators have speculated that the population data may not be accurate. As discussed here, this is unlikely to have affected the results significantly, and it is unclear whether it would lead to an over- or under-estimate of the number of deaths.

Others have contended that, by chance and because of rounding errors, the clusters may have ended up over-sampling violent governorates. A detailed examination (here) shows that this is unlikely to have happened.

It can also be seen that the two smallest Governorates, Muthanna and Dahuk, were not sampled. This can lead to an under-estimate of the total number of deaths, a possibility we discuss here.

5: Does the survey suffer from "main street bias", i.e., the flaw of choosing households near main streets that may be particularly likely to have been exposed to violence?

The Lancet article provides the following details of how households were selected in the survey:

The third stage [of constructing the sample] consisted of random selection of a main street within the administrative unit from a list of all main streets. A residential street was then randomly selected from a list of residential streets crossing the main street. On the residential street, houses were numbered and a start household was randomly selected.

Some commentators have suggested the method of using main streets could have biased the sample. If households near main streets were more likely to be included in the sample, and such households had a higher mortality rate than other households, then the total mortality rate could be biased upward.

This possibility was raised by Neil Johnson and Sean Gourley, physicists at Oxford University, and Michael Spagat, an economist at Royal Holloway, University of London. The suggestion was first publicised in an article in Science:

"Neil Johnson and Sean Gourley, physicists at Oxford University in the U.K. who have been analyzing Iraqi casualty data for a separate study, also question whether the sample is representative. The paper indicates that the survey team avoided small back alleys for safety reasons. But this could bias the data because deaths from car bombs, street-market explosions, and shootings from vehicles should be more likely on larger streets, says Johnson."

The claim subsequently attracted widespread media attention (e.g., BBC, the Guardian).

The authors of the MIT/Bloomberg study have responded that this is a misunderstanding of a very compressed discussion of the sample methodology in the original article. Les Roberts and Gilbert Burnham write in a reply:

Sampling for our study was designed to give all households an equal chance of being included. In this multistage cluster sampling, random selections were made at several levels ending with the "start" house being randomly chosen. From there, the house with the nearest front door was sampled until 39 consecutive houses were selected. This usually involved a chain of houses extending into two or three adjacent streets. Using two teams of two persons each, 40 houses could be surveyed in one day. Of our 47 clusters, 13 or 28% were rural, approximating the UN estimates for the rural population of Iraq. ... In no place does our Lancet paper say that the survey team avoided small back alleys. The methods section of the paper was modified with the suggestions of peer reviewers and the editorial staff.

Johnson and Gourley's original suggestion has subsequently been transformed further; for example, Fred Kaplan suggested in Slate that:

"if a household wasn't on or near a main road, it had zero chance of being chosen" (emphasis added).

The authors wrote in reply:

Our study team worked very hard to ensure that our sample households were selected at random. We set up rigorous guidelines and methods so that any street block within our chosen village had an equal chance of being selected. Once we started, we went to the next nearest 39 doorways in a chain that typically spanned two to three blocks. Thus, the first-picked block usually did not provide most of the houses within a given cluster. It is also important that most violent deaths probably happened outside of the home, making the location of the house on the street irrelevant.

In an interview with the BBC, Les Roberts clarified that "people being shot was by far the main mechanism of death, and we believe this usually happened away from home" and said:

"there would have to be both a systematic selection of one kind of street by our process and a radically different rate of death on that kind of street in order to skew our results. We see no evidence of either."

Sean Gourley has indicated that he is undertaking a simulation exercise to attempt to quantify the size of any such bias.

6: Did the survey allow enough time for each interview?

Some have questioned whether the survey used by the MIT/Bloomberg team allowed enough time for the interviews to obtain reliable results.

For example, Madelyn Hsiao-Rei Hicks, a psychiatrist at the Institute of Psychiatry, University of London, writes in a letter to the Lancet:

"In their Lancet paper, Burnham and co-authors write, “The two survey teams each consisted of two female and two male interviewers, with the field manager [co-author Prof. Riyadh Lafta] serving as supervisor.” They go on to write, “One team could typically complete a cluster of 40 households in 1 day”. My assessment that 40 household interviews per day is unfeasible is based on my own experience doing house-to-house epidemiological interviews and on doing some basic calculations: Based on a scenario generously estimating that a team manages to complete 10 hours of continuous, back-to-back interviews, despite the 130ºF (55ºC) heat described by the authors’ elsewhere (Burnham et al., 2006, ‘The Human Cost of War’), this would allow 15 minutes per interview, maximum. "

Hicks points out that enough time is important to "explain the study, ensure that the interviewee understood its purpose, and allow time for the interviewee to deliberate on his or her decision to participate without feeling coerced", and further argues:

"How could they ask their questions and get complete and accurate responses in such a short time, as well as locating and examining corroborating death certificates? How could they manage to keep interviews private and confidential within such a short time, especially when the whole neighborhood was aware of who was being interviewed and why?"

The authors recognise in the article in the Lancet that the security situation and resources available put limitations on the survey:

"The extreme insecurity during this survey could hae introduced bias by restricting the size of the teams, the number of supervisors, and the length of time that could be prudently spent in all locations, which in turn affected the size and nature of questionnaires" (p. 1427)

However, Les Roberts, one of the study co-authors, nonetheless has argued that the amount of time was adequate, and responded in an exchange hosted by the BBC:

"In Iraq in 2004, the surveys took about twice as long and it usually took a two-person team about three hours to interview a 30-house cluster. I remember one rural cluster that took about six hours and we got back after dark. Nonetheless, Dr. Hicks' concerns are not valid as many days one team interviewed two clusters in 2004"

However, Dr. Hicks does not consider this plausible:

"Roberts is saying here that in their 2004 Iraq mortality survey, a team would spend three hours (180 minutes) to interview the 30 households in a cluster. 180 minutes divided by 30 households gives 6 minutes per household interview. He also says that this was twice the time that was spent in their 2006 interviews, meaning that Roberts is here establishing that an interview team typically spent about 3 minutes per household interview in their 2006 study. Roberts then goes on to make the incredible statement that “…many days one team interviewed two clusters in 2004”, which would apparently be about 60 interviews in one day plus traveling between the two clusters.

7: Could there have been a self-selection bias in who agreed to take part in the survey?

Surveys sometimes suffer from "self-selection bias". This results when only a sub-group of the respondents selected for the survey agrees, or is available, to take part, and the decision to take part is related to the variable being measured.

The possibility that this may have affected the MIT/Bloomberg study was discussed at UK Polling Report:

several people have pondered the “word of mouth” effect. The researchers state in their report that having explained to the first house in a cluster their good intentions, word of mouth travelled ahead of them and made it easier to persuade the rest of the cluster of their good intentions. Some people have, quite reasonably, asked whether this could skew the result - could people with deaths in the family have become more or less likely to take part in the survey? In theory yes, they could, but given the response rate of 98% there is very little space for it to have made a difference. If it made people with deaths more likely to take part, they are 98% likely to have done so anyway. If it made them less likely to take part, it obviously didn’t have much effect.
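This argument can be quantified. In the sketch below (our illustration, with a made-up share of households with a death), even the most extreme assumption, that every non-responding household had a death, barely moves the observed share, because a 99.2% response rate leaves so few non-responders:

    # Sketch (our illustration) of the limited room a 99.2% response rate
    # leaves for self-selection. Suppose a fraction d of households have
    # a death; non-response can distort the observed share of such
    # households only by the tiny non-responding fraction.
    d, overall = 0.20, 0.992      # d is a made-up illustrative share

    # Extreme case: every non-responder was a household with a death.
    p_other = 1.0                                # others always respond
    p_death = (overall - (1 - d) * p_other) / d  # implied response rate
    observed = d * p_death / (d * p_death + (1 - d) * p_other)
    print(f"true share {d:.1%}, observed share {observed:.1%}")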

8: Did the distribution of clusters across Governorates lead to over-sampling of violent areas?

Summary: clusters were distributed according to the population of Governorates, and all Governorates included in the calculation of total excess deaths had at least one cluster. With a sufficient number of clusters, there is no reason that this would lead to over- or under-sampling of Governorates with particular characteristics. This is borne out by a more detailed examination of the data.

The survey was designed so that the number of clusters in each Governorate varied with its share of the total population. For example, as one-quarter of the Iraqi population lives in Baghdad, one-quarter of the clusters (12 out of 47) were located there. The other Governorates had 1 to 5 clusters.

On average, each of the 47 clusters represents 577 thousand people. However, distributing whole clusters across Governorates inevitably involves some rounding, with some Governorates having very slightly "too many" clusters and some "too few" relative to their share of the population. As the only principle for distributing clusters was population, and there is no reason to think that such rounding errors should be correlated with mortality, this general principle is a valid method for constructing a sample.

Nonetheless, the small theoretical possibility that there could be a systematic correlation between the degree of over/under-representation and the level of violence has been raised by some commentators:

This creates an instant problem because the areas where they round down the number of clusters will be underrepresented and the areas where they round up will be overrepresented in the final numbers. As it works out, the most overrepresented governorates are two of the most violent and most populous ... very violent areas were probably oversampled.link

Detailed analysis reveals this to be an unwarranted worry. The table below calculates the degree of over- or under-representation in each Governorate, using the information in Table 1 in the Lancet article. The second column shows the extent to which each Governorate (excluding Dahuk and Muthanna, which were not surveyed) was over- or under-represented. For example, Baghdad has 25.53% of the total number of clusters but 25.10 percent of the population, and therefore was "over-represented" by an amount corresponding to 0.43% of the surveyed population due to rounding.

In column three, each Governorate is given a rating of the degree of excess mortality in categories high/mid/low, using the information in Figure 3 of the article.

Province Degree of "over-representation" Mortality level
Baghdad 0.43% Mid
Ninewa 0.86% High
Basrah -0.50% Mid
Sulamaniyah -0.19% Mid
Thi-Qar 0.66% Low
Babylon 0.74% Low
Erbil 0.95% Low
Diyala 1.05% High
Anbar 1.29% High
Salah al-Din -0.03% High
Najaf 0.51% Low
Wassit -1.59% Low
Qadissiya -1.36% Mid
Tameem -1.14% Mid
Missan -0.89% Mid
Kerbala -0.79% Mid

Summing by the degree of mortality, this gives the following results:
High-mortality: over-represented by 3.2%
Medium-mortality: under-represented by 4.3%
Low-mortality: over-represented by 1.1%

As this illustrates, and as would be expected, there is no correlation between rounding error and the degree of mortality. This is an illustration of a more general principle: if the number of clusters is sufficiently large and distributed without bias according to the distribution of the population of the country, the sample also is representative of the population as a whole.
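The over/under-representation column above can be reproduced directly from the Table 1 population and cluster counts; a minimal sketch (small differences against the table come only from rounding):

    # Reproducing the "over-representation" column (Dahuk and Muthanna
    # excluded, as in the text): cluster share minus population share.
    data = {  # Governorate: (mid-2004 population, clusters)
        "Baghdad": (6_554_126, 12), "Ninewa": (2_554_270, 5),
        "Basrah": (1_797_758, 3), "Sulamaniyah": (1_715_585, 3),
        "Thi-Qar": (1_493_781, 3), "Babylon": (1_472_405, 3),
        "Erbil": (1_418_455, 3), "Diyala": (1_392_093, 3),
        "Anbar": (1_328_776, 3), "Salah al-Din": (1_119_369, 2),
        "Najaf": (978_400, 2), "Wassit": (971_280, 1),
        "Qadissiya": (911_640, 1), "Tameem": (854_470, 1),
        "Missan": (787_072, 1), "Kerbala": (762_872, 1),
    }
    total_pop = sum(pop for pop, _ in data.values())
    total_clusters = sum(c for _, c in data.values())
    for name, (pop, clusters) in data.items():
        diff = clusters / total_clusters - pop / total_pop
        print(f"{name:<13} {diff:+.2%}")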

9: Did the study inflate death estimates by sampling predominantly urban areas?

The Lancet article contains the following explanation of how clusters were selected:

At the second stage of sampling, the Governorate’s constituent administrative units were listed by population or estimated population, and location(s) were selected randomly proportionate to population size.

This has been taken by some commentators to mean that only urban areas were included in the sample. However, this concern appears to be unfounded. Gilbert Burnham, the lead author, has explained that 28 percent of the clusters (and thus households) in the sample were in rural areas. This corresponds closely to estimates of the proportion of the total population living in rural areas. For example, the 2004 UNDP Iraq Living Conditions Survey found that 7.1 million, or 26 percent, of Iraq's total population of 27.1 million live in rural areas [pdf link].

The fact that the proportion of the population and the proportion of clusters in rural areas are similar is an illustration of how random sampling works. Rebecca Goldin, Associate Professor in Mathematical Sciences at George Mason University, writes more about this:

[it has been argued] that the sampling method would invariably favor densely populated areas, and that these areas would have disproportionate levels of bombs. It is certainly true that densely populated areas are more likely to be sampled – but only proportional to their population. In other words, if ten times as many people live in Region A than live in rural Region B, then Region A is ten times as likely to be chosen as a sampling destination. Overall, this will not have the effect of oversampling cities; it will have the effect of sampling cities proportional to their population, and rural areas proportional to theirs.

10: Did the survey rely on accurate estimates of the total population of Iraq?

The MIT/Bloomberg study used the UNDP/Iraqi Ministry of Planning population estimates for mid-2004 (the middle of the study period) to construct the study sample and to calculate excess deaths.

As these estimates were not based on a census, they are uncertain, and some commentators have noted that other available estimates could lead to different results. For example, blogger Mike Dunford points out that the population figures differ from the data from the Iraqi Ministry of Health used for the 2004 Bloomberg study.

Inaccurate population figures could have two effects:

First, population data were used to distribute clusters between Governorates (see this FAQ), and inaccurate data thus could lead to over-sampling of some Governorates and under-sampling of others. While this is a potential source of bias, it is likely to be small, as each Governorate contained at least one cluster. It also is unknown whether any remaining bias would lead to an over- or under-estimate of the true number of excess deaths. The only way to establish this would be to compare against an accurate population estimate, and the researchers arguably used the most reliable recent estimate available.

Second, total population data are used to calculate the total number of excess deaths (see this FAQ). If the total population number were too high, then the estimated total number of excess deaths also would be too high. However, while some earlier population estimates (e.g., UNDP) are smaller, it is not clear that these are more accurate. As the population structure of Iraq changes quickly through migration and high fertility and death rates, older estimates rapidly become outdated. The use of the joint UN/Iraqi Ministry of Planning estimate only would bias the results if there were reason to believe that these institutions had a biased population estimate. (Mike Dunford acknowledges this in a follow-up post here.)

More to the point, the total population estimate does not affect mortality rates, expressed in deaths per thousand. The survey found that, for every thousand Iraqis, an additional 7.8 deaths have occurred per year since the 2003 invasion. Whether the population of Iraq is 25 million or 27 million, this implies that in the region of 600 thousand excess deaths have occurred since 2003.
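A back-of-envelope version of this argument (our arithmetic, cruder than the study's person-years calculation, so it lands slightly high) multiplies the excess death rate by the population and the length of the post-invasion period; the two candidate population figures give totals that differ by less than 10 percent:

    # Back-of-envelope (our arithmetic, not the study's person-years
    # method): excess rate x population x years since the invasion.
    excess_rate = 7.8 / 1000      # excess deaths per person per year
    years = 3.3                   # roughly March 2003 to mid-2006

    for population in (25_000_000, 27_000_000):
        total = excess_rate * population * years
        print(f"population {population:,}: ~{total:,.0f} excess deaths")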

11: How does the exclusion from the sample of clusters in Dahuk, Muthanna, and Wassit affect the result?

Summary: Due to mistakes, some Governorates were not sampled as intended. These were treated as though no excess deaths had occurred there, leading to a known under-estimate of the total number of excess deaths in Iraq.

Exclusion of Muthanna and Dahuk Governorates

Mistakes caused the researchers not to visit clusters in the Governorates of Muthanna and Dahuk.

In the absence of firm information about mortality in these Governorates, the study researchers assumed that the mortality rate in these Governorates did not change in the study period. The figure of 655 thousand excess deaths therefore excludes any excess deaths in Muthanna and Dahuk. This is a conservative assumption, as any actual excess deaths in these provinces are not counted, and thus risks under-estimating the number of excess deaths in Iraq as a whole. (Equivalently, it is correct to say that the 655 thousand excess deaths is an estimate of deaths in Iraq excluding Muthanna and Dahuk.)

However, the underestimate is not likely to be large, for two reasons. First, the population of Muthanna and Dahuk is relatively small (c. 3.7% of the total population) and the number of deaths excluded from the study therefore also will only form a small proportion of the total number of deaths. Second, these two Governorates are areas with relatively little violence and the number of excess deaths not estimated by the study therefore is likely to be small.
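This reasoning can be made concrete with a rough bound (our illustration; the relative death rates are assumptions, not data): scale the estimate by the excluded provinces' population share and an assumed relative rate of excess mortality:

    # Rough bound (our illustration) on the underestimate from excluding
    # Muthanna and Dahuk, for a range of assumed relative death rates.
    estimate = 655_000        # excess deaths estimated for the rest of Iraq
    excluded_share = 0.037    # Muthanna + Dahuk share of total population

    for relative_rate in (0.25, 0.5, 1.0):
        missed = (estimate * excluded_share / (1 - excluded_share)
                  * relative_rate)
        print(f"excluded provinces at {relative_rate:.2f}x national rate: "
              f"~{missed:,.0f} deaths missed ({missed / estimate:.1%})")

Even if Muthanna and Dahuk had experienced excess mortality at the full national rate, the shortfall would be under 4 percent of the estimate; at the more plausible lower rates it is 1-2 percent.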

Exclusion of cluster in Wassit Governorate

A cluster in Wassit province could not be sampled because of insecurity. This left only one cluster surveyed in this Governorate, instead of the two implied by the population weighting.

This could lead to a slight under-representation of conditions in Wassit in the calculation of total mortality. For example, if Wassit had a lower number of excess deaths than the Iraqi average, then under-sampling this Governorate could lead to a slight overestimate of overall mortality in Iraq. By symmetry, if Wassit were more violent than the average Governorate, it would lead to an underestimate.

In either case, the effect would be very small, as Wassit has only about 3.6 percent of the total population. By way of illustration, even if the excess death rate in this Governorate were just half of that in the rest of the country and no clusters at all were sampled in this area, this would only lead to an over-estimate of the number of deaths of around 1 percent.

12: Is it plausible that such a high proportion of households could produce death certificates within the time available for the interviews?

Some have questioned whether death certificates could be produced within the short time (15-20 minutes) available for each survey interview. One commentator put it this way:

And 92% produced death certificates? Maybe households in poorer countries keep that stuff in a special place forever, where they can instantly put their hands on it . . . but I certainly wouldn't want to bet that my family of pack rats could immediately produce a death certificate when asked for any of my dead relatives, much less all of them. And the average household had six people, yet the interviews supposedly took only 15 minutes. link

The survey results do imply that Iraqis generally keep death certificates available for production at short notice. One possible explanation is that death certificates are important documents, used for a number of purposes after a death. Situations where relatives may be required to produce a death certificate include:

  • highway checkpoints,
  • entitlement to government pensions and other benefits,
  • burial (cemeteries do not take bodies for burial without a certificate),
  • insurance claims, and
  • compensation claims.
    (See the New York Times and the study companion document for more details.)

For more general considerations about the time available for each interview in the survey, see this FAQ.

13: The response rate to the survey was very high (99.2%). Is this a plausible response rate?

Summary: 99.2 percent of the households approached by the researchers agreed to supply the requested information. Some commentators have suggested that this number is not plausible and indicates some error with the survey. However, the response rate is consistent with other experience from Iraq.

A response rate of 99.2 percent seems very high compared to the response rates of market research or polls in developed countries. However, very high response rates appear to be common in Iraq. This was discussed in a paper by Johnny Heald, Managing Director of market research company Opinion Research Business, which has carried out more than 150 polls in Iraq. He wrote in a September 2006 paper Measuring Opinion in a 'War Zone':

As an industry, there is a great deal of concern with declining response rates. With face-to-face interviewing, a recent tender from the UK Government demanded a response rate of at least 70% – a figure which concerned some agencies enough not to submit proposals. There are no such problems in Iraq.

Following the invasion, the average response rate on randomly generated face-to-face samples was 95+%. While we in the West often switch off when asked about our politics and attitudes towards the Government, Iraqi’s are only too willing to have their views heard.

The paper indicates that a November 2003 poll had an astonishing 100% response rate: of the 1,067 people approached, all 1,067 responded.

Other pollsters have had similar experiences. For example, a 2003 report by the former Coalition Provisional Authority summarised surveys conducted by the Office of Research and Gallup. Most of the surveys had high response rates, some approaching those of the survey used for the mortality study. For example, for one survey, the "overall response rate was 89 percent, ranging from 93% in Baghdad to 100% in Suleymania and Erbil."

Another point of comparison is the 2004 Iraqi Living Conditions Survey. This employed a questionnaire that took a median time of 83 minutes to complete, and 21,688 out of 22,000 households (98.5%) agreed to answer. The ILCS reported this as unusual:

The extremely high response rate on a long and taxing questionnaire is testimony to the interest the people had in telling the real story about their current situation and in contributing to building a better future.

In summary, the high response rate of the Lancet study appears to be consistent with other experience from Iraq.

14: Were the same fatalities reported multiple times?

Some have wondered whether the same death could have been double-counted by more than one household, which would lead to an over-estimate of the number of deaths. The issue was raised in an exchange hosted by the BBC, where one commenter noted that:

if you were to ask people in the UK if they know anyone who has been involved in a traffic accident most would say they do. Applying your logic that means there are 60 million accidents every year.

Les Roberts, one of the authors of the study, gave the following explanation of the methodology for avoiding this in the Lancet study:

That is an excellent question. To be recorded as a death in a household, the decedent had to have spent most of the nights during the 3 months before their death "sleeping under the same roof" with the household that was being interviewed. This may have made us undercount some deaths (soldiers killed during the 2003 invasion for example) but addressed your main concern that no two households could claim the same death event.