%0 Journal Article %T Small studies may overestimate the effect sizes in critical care meta-analyses: a meta-epidemiological study %A Zhongheng Zhang %A Xiao Xu %A Hongying Ni %J Critical Care %D 2013 %I BioMed Central %R 10.1186/cc11919 %X Critical care meta-analyses that involved randomized controlled trials and reported mortality as an outcome measure were considered eligible for the study. Component trials were classified as large (≥100 patients per arm) or small (<100 patients per arm) according to their sample sizes. The ratio of odds ratios (ROR) was calculated for each meta-analysis, and the RORs were then combined using a meta-analytic approach; an ROR <1 indicated a larger beneficial effect in small trials. Small and large trials were compared on methodological quality, including sequence generation, blinding, allocation concealment, intention-to-treat analysis and sample size calculation. A total of 27 critical care meta-analyses involving 317 trials were included. Of these, five meta-analyses showed statistically significant RORs <1, and the other meta-analyses did not reach statistical significance. Overall, the pooled ROR was 0.60 (95% CI: 0.53 to 0.68); heterogeneity was moderate, with an I2 of 50.3% (chi-squared = 52.30; P = 0.002). Large trials showed significantly better reporting quality than small trials in terms of sequence generation, allocation concealment, blinding, intention-to-treat analysis, sample size calculation and incomplete follow-up data. Small trials are more likely to report larger beneficial effects than large trials in critical care medicine, which could be partly explained by the lower methodological quality of small trials. Caution should be exercised in the interpretation of meta-analyses involving small trials. Small-study effects refer to the pattern that small studies are more likely to report a beneficial effect in the intervention arm, which was first described by Sterne et al. [1].
This effect can be explained, at least in part, by the combination of the lower methodological quality of small studies and publication bias [2,3]. Typically, such small-study effects can be evaluated with a funnel plot, which plots the effect size against the precision of the effect size. Small studies with effect size %U http://ccforum.com/content/17/1/R2