@GabrielHoffman
Last active July 29, 2020 22:22
The goal of meta-analysis is to test whether an effect is significant by combining multiple studies. One way of doing this is to use the effect size and standard error from each study. In a fixed effect meta-analysis (i.e. FE), the null hypothesis is that the effect sizes from all studies are the same and equal zero. A random effects meta-analysis assumes that the true effect sizes can vary across studies; here the null hypothesis is that the mean of the observed effect sizes is zero and the variance of the effect sizes is zero (i.e. all effect sizes are constant and zero). In REML (i.e. random effects), the variance of the effect sizes is estimated, and a test can be significant if there is sufficient variability.
The choice between FE and REML comes down to one question: should effects of opposite signs count as significant? In practice, if two studies have effect sizes of -10 and 10 (i.e. opposite signs), FE will say that the mean effect size is zero and the test is not significant. But REML will say that the effect sizes vary substantially, so the test is significant. In this case, based on the biology, I don't want to count opposite signs as significant.
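The -10/+10 case above can be worked through numerically. This is a minimal sketch, not code from the analysis itself: it uses the standard inverse-variance fixed effect estimate, Cochran's Q heterogeneity statistic, and the DerSimonian-Laird moment estimator of the between-study variance as a simpler stand-in for REML. The effect sizes and standard errors are the toy values from the example.

```python
# Toy illustration of why FE and random effects meta-analyses disagree on
# two studies with opposite effects (-10 and +10, SE = 1 each).
import math

def meta(beta, se):
    # Inverse-variance weighted fixed effect estimate and its standard error
    w = [1.0 / s**2 for s in se]
    b_fe = sum(wi * bi for wi, bi in zip(w, beta)) / sum(w)
    se_fe = math.sqrt(1.0 / sum(w))
    z = b_fe / se_fe
    p_fe = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

    # Cochran's Q heterogeneity statistic, chi-square with k-1 df under H0
    Q = sum(wi * (bi - b_fe)**2 for wi, bi in zip(w, beta))
    df = len(beta) - 1
    p_Q = math.erfc(math.sqrt(Q / 2.0))  # exact for df = 1 (two studies)

    # DerSimonian-Laird moment estimator of between-study variance tau^2
    tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    return b_fe, p_fe, Q, p_Q, tau2

b_fe, p_fe, Q, p_Q, tau2 = meta([-10.0, 10.0], [1.0, 1.0])
print(f"FE estimate = {b_fe:g}, p = {p_fe:.3f}")       # mean effect is 0, p = 1
print(f"Q = {Q:g}, p = {p_Q:.2e}, tau^2 = {tau2:g}")   # massive heterogeneity
```

The fixed effect estimate is exactly 0 with p = 1, while the heterogeneity test is overwhelmingly significant (Q = 200 on 1 df, tau^2 = 199), which is how a random effects framework can flag two perfectly discordant studies as "significant".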
I just picked a simple example. In practice, a differential expression analysis of a given gene across 2+ studies is quite likely to give opposite signs (see https://www.synapse.org/#!Synapse:syn22087539) due to 1) random noise, 2) confounding, and 3) different biology across studies. We tend to believe that #1 and #2 are much larger drivers of this discordance than #3. Therefore we don't want to count discordant results as significant, at least on the first try.