Methods Commentary: Risk of Bias in Cross-Sectional Surveys of Attitudes and Practices

The following commentary has been contributed by the CLARITY Group at McMaster University.

A. Agarwal [1,2], G. Guyatt [2,3], J. Busse [4,5]

Keeping Terminology Consistent

In our commentaries addressing how to rate the conduct of randomized controlled trials (RCTs) and cohort studies, we argued for the use of the term “risk of bias” rather than “quality” of the studies, “methodological quality”, “validity”, or “internal validity”. We noted that each of these terms may refer to risk of bias: the likelihood that, because of flaws in the design and execution of a study, its results represent a systematic deviation from the truth (i.e. an overestimation or underestimation). We believe risk of bias is the optimal term not only for RCTs and cohort studies, but also for cross-sectional surveys.

Assessing Risk of Bias in Cross-Sectional Surveys of Attitudes and Practices

Unlike clinical trials, in which participants are assigned to an intervention and then followed prospectively to determine outcome status, or cohort studies, in which individuals exposed or unexposed to a treatment or potentially harmful agent are followed forward in time (with either prospective or retrospective data collection), cross-sectional surveys of attitudes and practices do not involve the study of an intervention (though respondents may be selected on the basis of a given exposure status) and collect responses at a single time point only. Such surveys require specific criteria for establishing risk of bias. We are not aware of any existing instrument that specifically addresses risk of bias in surveys of attitudes and practices. Drawing on instruments that address other study designs [1-5], we have selected five items to assess risk of bias in cross-sectional surveys of attitudes and practices (a minimal schematic of these items appears after the list):

  1. Representativeness of the sample: The selection of a representative population is important to ensure that the results of a given survey provide an unbiased estimate of the attitudes or practices of the target population.
  2. Adequacy of response rate: Ensuring the survey response rate is sufficiently high is important to minimize the likelihood that any systematic differences between respondents and non-respondents will influence results.
  3. Missing data within completed questionnaires: In addition to the response rate and differences between respondents and nonrespondents, one must also consider the extent of missing data within a questionnaire. A survey may be completed by the majority of a study sample, but a substantial amount of missing data due to items that were not answered by survey respondents may introduce bias.
  4. Conduct of pilot testing: Risk of bias is decreased if investigators have conducted a formal assessment of the comprehensiveness, clarity and face validity of a questionnaire through a field test in a subset (e.g. 5 to 10 individuals) drawn from the larger sample. Such “pilot” assessments help establish the feasibility of the survey, the readability of included items, and whether respondents perceive the items as addressing what they are designed to measure.
  5. Established validity of the survey instrument: The degree to which survey items evaluate the theoretical concept(s) on which the survey is focused is an important consideration. A survey should produce responses similar to those of established surveys evaluating related constructs.
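As a purely illustrative aid, the sketch below encodes the five items as a simple checklist data structure. The class and label names are our own shorthand for this commentary, not part of any published instrument.

```python
from enum import Enum

class SurveyRobDomain(Enum):
    """Illustrative labels for the five risk-of-bias items described above."""
    REPRESENTATIVENESS = "Representativeness of the sample"
    RESPONSE_RATE = "Adequacy of response rate"
    MISSING_DATA = "Missing data within completed questionnaires"
    PILOT_TESTING = "Conduct of pilot testing"
    INSTRUMENT_VALIDITY = "Established validity of the survey instrument"
```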

We have applied our enhancement of the response options from the Cochrane risk of bias instrument [6] to our instrument for assessing risk of bias in cross-sectional surveys. We frame each criterion as a question and provide four response options for each one: definitely yes (low risk of bias), probably yes (low risk of bias), probably no (high risk of bias), and definitely no (high risk of bias). The response options are framed to facilitate dichotomization of studies as either “low risk of bias” or “high risk of bias” on an item-by-item basis, which may be especially relevant for subgroup analyses based on risk of bias or for pooled estimates restricted to studies at low risk of bias. For each item, the instrument provides examples of study designs that would lead to low risk of bias, higher risk of bias, and high risk of bias. In situations involving higher risk of bias, evaluators may need to consider carefully the context of an item in relation to the study being evaluated when choosing between “probably yes” (low risk of bias) and “probably no” (high risk of bias).
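To show how the four response options can be dichotomized on an item-by-item basis, the following sketch maps each response to a low or high risk-of-bias judgment and filters studies accordingly. The function and variable names are assumptions introduced for illustration only; they do not come from the instrument itself.

```python
from enum import Enum

class Response(Enum):
    """The four response options for each item of the instrument."""
    DEFINITELY_YES = "definitely yes"   # low risk of bias
    PROBABLY_YES = "probably yes"       # low risk of bias
    PROBABLY_NO = "probably no"         # high risk of bias
    DEFINITELY_NO = "definitely no"     # high risk of bias

# "Yes" responses map to low risk of bias; "no" responses map to high risk.
LOW_RISK_RESPONSES = {Response.DEFINITELY_YES, Response.PROBABLY_YES}

def is_low_risk(response: Response) -> bool:
    """Dichotomize a single item's response into low (True) or high (False) risk of bias."""
    return response in LOW_RISK_RESPONSES

def studies_at_low_risk(item_ratings: dict) -> list:
    """Given study label -> Response for one item, return the studies judged low risk,
    e.g. for a subgroup analysis or a pooled estimate restricted to low-risk studies."""
    return [study for study, response in item_ratings.items() if is_low_risk(response)]
```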

Rating of Risk of Bias Should Be Domain-Specific

In our commentary addressing how to rate the conduct of RCTs, we noted that, traditionally, systematic review authors have provided a single overall rating of risk of bias for a particular study. When dealing with cross-sectional surveys of attitudes and practices, authors should instead consider reporting risk of bias on a domain-by-domain basis rather than assigning an overall rating. Such an approach may be especially relevant when not all domains carry the same weight with respect to risk of bias, or when domains are associated with one another.
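A minimal sketch of what domain-by-domain reporting might look like is given below, assuming hypothetical per-study ratings keyed by illustrative domain labels; nothing here reflects a prescribed output format.

```python
# Hypothetical per-study ratings: each study is judged "low" or "high" on every domain.
ratings = {
    "Study A": {"Representativeness": "low", "Response rate": "high", "Missing data": "low"},
    "Study B": {"Representativeness": "low", "Response rate": "low", "Missing data": "high"},
}

def domain_summary(ratings: dict) -> dict:
    """Count low/high judgments per domain across studies, with no overall rating."""
    summary = {}
    for study_ratings in ratings.values():
        for domain, judgment in study_ratings.items():
            summary.setdefault(domain, {"low": 0, "high": 0})[judgment] += 1
    return summary

print(domain_summary(ratings))
# e.g. {'Representativeness': {'low': 2, 'high': 0}, 'Response rate': {'low': 1, 'high': 1}, ...}
```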

References

  1. Wells G, Shea B, O’Connell D, et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of non randomized studies in meta-analyses. [http://www.ohri.ca/programs/clinical_epidemiology/oxford.Asp]
  2. Deeks JJ, Dinnes J, D’Amico R, et al. Evaluating non-randomised intervention studies. Health Technol Assess. 2003;7:iii-x,1-173.
  3. Downs S, Black N. The feasibility of creating a checklist for the assessment of methodological quality both of randomized and nonrandomized studies of health care interventions: Summation of the conference. J Epidemiol Community Health. 1998;52:377-384.
  4. Reeves B, Deeks J, Higgins JP, et al. Including non-randomized studies. In: Higgins J, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions 5.0.1. Chichester, U.K.: John Wiley & Sons, 2008.
  5. Sterne JAC, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016;355:i4919.
  6. Akl EA, Sun X, Busse JW, et al. Specific instructions for estimating unclearly reported blinding status in randomized trials were reliable and valid. J Clin Epidemiol. 2012;65(3):262-7.

[1] School of Medicine, University of Toronto, Toronto, Ontario, Canada
[2] Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada
[3] Department of Medicine, McMaster University, Hamilton, Ontario, Canada
[4] Michael G. DeGroote National Pain Center, McMaster University, Hamilton, Ontario, Canada
[5] Department of Anesthesia, McMaster University, Hamilton, Ontario, Canada

Correspondence:

Dr. Jason W. Busse, DC, PhD
Department of Anesthesia, McMaster University – HSC-2U1
1200 Main St. West
Hamilton, ON, Canada, L8S 4K1
Email: [email protected]
Phone: (905) 525-9140 x21731 Fax: (905) 523-1224

Copyright The Clarity Group and Evidence Partners 2011