In cluster-randomized trials, particular biases to consider include: (i) recruitment bias; (ii) baseline imbalance; (iii) loss of clusters; (iv) incorrect analysis; and (v) comparability with individually randomized trials.
(i) Recruitment bias can occur when individuals are recruited to the trial after the clusters have been randomized, as knowledge of whether each cluster is an ‘intervention’ or a ‘control’ cluster could affect the types of participants recruited. Farrin et al. showed differential participant recruitment in a trial of low back pain randomized by primary care practice; more participants with less severe symptoms were recruited to the ‘active management’ practices (Farrin 2005). Puffer et al. reviewed 36 cluster-randomized trials and found possible recruitment bias in 14 (39%) (Puffer 2003).
(ii) Cluster-randomized trials often randomize all clusters at once, so lack of concealment of the allocation sequence should not usually be an issue. However, because only small numbers of clusters are randomized, there is a possibility of chance baseline imbalance between the randomized groups, in terms of either the clusters or the individuals they contain. Although not a form of bias as such, the risk of baseline differences can be reduced by using stratified or pair-matched randomization of clusters. Reporting the baseline comparability of clusters, or statistical adjustment for baseline characteristics, can help reduce concern about the effects of any imbalance.
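Pair-matched randomization of clusters, mentioned above, can be sketched briefly: clusters are ordered on a baseline characteristic, adjacent clusters are paired, and one member of each pair is randomized to the intervention. The following minimal Python sketch assumes an even number of clusters and represents each cluster as a dictionary with a `name` field and a matching characteristic (the practice names and list sizes are purely illustrative):

```python
import random

def pair_matched_allocation(clusters, key, seed=0):
    """Pair-matched randomization: sort clusters on a baseline
    characteristic, pair adjacent clusters, then randomize one
    member of each pair to the intervention arm.

    Assumes an even number of clusters; each cluster is a dict
    with a 'name' field (an illustrative convention, not from
    any specific trial).
    """
    rng = random.Random(seed)
    ordered = sorted(clusters, key=key)
    allocation = {}
    for a, b in zip(ordered[::2], ordered[1::2]):
        if rng.random() < 0.5:  # coin flip within each matched pair
            a, b = b, a
        allocation[a["name"]] = "intervention"
        allocation[b["name"]] = "control"
    return allocation

# Hypothetical primary care practices matched on list size:
# sorting pairs A (10) with C (12), and D (48) with B (50).
practices = [
    {"name": "A", "size": 10},
    {"name": "B", "size": 50},
    {"name": "C", "size": 12},
    {"name": "D", "size": 48},
]
alloc = pair_matched_allocation(practices, key=lambda c: c["size"])
```

Because each matched pair contributes one cluster to each arm, the design guarantees balance on the matching characteristic while preserving randomization within pairs.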
(iii) Occasionally complete clusters are lost from a trial, and have to be omitted from the analysis. Just as for missing outcome data in individually randomized trials, this may lead to bias. In addition, missing outcomes for individuals within clusters may also lead to a risk of bias in cluster-randomized trials.
(iv) Many cluster-randomized trials are analysed by incorrect statistical methods that do not take the clustering into account. For example, Eldridge et al. reviewed 152 cluster-randomized trials in primary care, of which 41% did not account for clustering in their analyses (Eldridge 2004). Such analyses create a ‘unit of analysis error’ and produce over-precise results (the standard error of the estimated intervention effect is too small) and P values that are too small. They do not lead to biased estimates of effect; however, if left uncorrected, such trials will receive too much weight in a meta-analysis. Approximate methods for correcting the results of trials that did not allow for clustering are suggested in Section . Some of these can be implemented by review authors.
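One widely used approximate correction inflates the variance (or, equivalently, deflates the sample size) by the ‘design effect’, 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal Python sketch of this calculation follows; the trial figures (20 clusters of 30 participants per arm) and the ICC of 0.05 are illustrative assumptions, not values from any cited trial:

```python
def design_effect(avg_cluster_size: float, icc: float) -> float:
    """Design effect = 1 + (m - 1) * ICC, with m the average cluster size."""
    return 1 + (avg_cluster_size - 1) * icc

def effective_sample_size(n: int, avg_cluster_size: float, icc: float) -> float:
    """Deflate a sample size from an analysis that ignored clustering."""
    return n / design_effect(avg_cluster_size, icc)

# Hypothetical arm: 20 clusters of 30 participants, assumed ICC = 0.05
n_per_arm = 20 * 30                              # 600 participants
deff = design_effect(30, 0.05)                   # 1 + 29 * 0.05 = 2.45
n_effective = effective_sample_size(n_per_arm, 30, 0.05)  # about 245
```

Dividing each arm's sample size (or event counts) by the design effect before meta-analysis gives the trial a weight closer to what a correct clustered analysis would have produced.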
(v) In a meta-analysis including both cluster and individually randomized trials, or including cluster-randomized trials with different types of clusters, possible differences between the intervention effects being estimated need to be considered. For example, in a vaccine trial of infectious diseases, a vaccine applied to all individuals in a community would be expected to be more effective than if the vaccine were applied to only half of them. Another example is provided by Hahn et al., who discussed a Cochrane review of hip protectors (Hahn 2005). The cluster trials showed a large positive effect, whereas the individually randomized trials showed no clear benefit. One possibility is that there was a ‘herd effect’ in the cluster-randomized trials (which were often performed in nursing homes, where compliance with using the protectors may have been enhanced). In general, such ‘contamination’ would lead to under-estimates of effect. Thus, if an intervention effect is still demonstrated despite contamination in those trials that were not cluster-randomized, a confident conclusion about the presence of an effect can be drawn. However, the size of the effect is likely to be underestimated. Contamination and ‘herd effects’ may differ for different types of cluster.