Unfortunately, many cluster-randomized trials have in the past failed to report appropriate analyses. They are commonly analysed as if the randomization had been performed on the individuals rather than the clusters. In this situation, approximately correct analyses may be performed if the following information can be extracted:
The number of clusters (or groups) randomized to each intervention group; or the average (mean) size of each cluster;
The outcome data ignoring the cluster design for the total number of individuals (for example, number or proportion of individuals with events, or means and standard deviations);
An estimate of the intracluster (or intraclass) correlation coefficient (ICC).
The ICC is an estimate of the relative variability within and between clusters (Donner 1980). It describes the ‘similarity’ of individuals within the same cluster. In practice, however, the ICC is seldom available in published reports. A common approach is to use external estimates obtained from similar studies, and several resources are available that provide examples of ICCs (Ukoumunne 1999, Campbell 2000, Health Services Research Unit 2004). ICCs may appear small compared with other types of correlations: values lower than 0.05 are typical. However, even small values can have a substantial impact on confidence interval widths (and hence weights in a meta-analysis), particularly if cluster sizes are large. Empirical research has observed that larger cluster sizes are associated with smaller ICCs (Ukoumunne 1999).
An approximately correct analysis proceeds as follows. The idea is to reduce the size of each trial to its ‘effective sample size’ (Rao 1992). The effective sample size of a single intervention group in a cluster-randomized trial is its original sample size divided by a quantity called the ‘design effect’. The design effect is
1 + (M – 1) ICC,
where M is the average cluster size and ICC is the intracluster correlation coefficient. A common design effect is usually assumed across intervention groups. For dichotomous data, both the number of participants and the number experiencing the event should be divided by the same design effect. Since the resulting data must be rounded to whole numbers for entry into RevMan, this approach may be unsuitable for small trials. For continuous data, only the sample size need be reduced; means and standard deviations should remain unchanged.
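The effective-sample-size adjustment above can be sketched in a few lines of code. This is a minimal illustration, not part of any published analysis tool; the trial figures (10 clusters of 30 per arm, an external ICC of 0.02, 90 events among 300 participants) are hypothetical and chosen only to show the arithmetic.

```python
def design_effect(avg_cluster_size, icc):
    """Design effect for a cluster-randomized trial: 1 + (M - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc


def effective_dichotomous(n, events, avg_cluster_size, icc):
    """Divide both the sample size and the event count by the design
    effect, rounding to whole numbers for entry into RevMan."""
    de = design_effect(avg_cluster_size, icc)
    return round(n / de), round(events / de)


def effective_continuous(n, avg_cluster_size, icc):
    """For continuous data only the sample size is reduced; means and
    standard deviations are left unchanged."""
    return round(n / design_effect(avg_cluster_size, icc))


# Hypothetical arm: 300 participants in clusters of average size M = 30,
# with an externally sourced ICC of 0.02.
de = design_effect(30, 0.02)                      # 1 + 29 * 0.02 = 1.58
n_eff, events_eff = effective_dichotomous(300, 90, 30, 0.02)
```

Note that even this small ICC (0.02) gives a design effect of 1.58, reducing the arm's effective sample size from 300 to about 190; with clusters of 100 the same ICC would yield a design effect of 2.98, nearly trebling the variance.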