Daniel Lakens (Lakens) / GitHub gists
Lakens / 4study_meta_50%_true_effects.R
Created April 30, 2016 16:29
Internal meta-analysis on 4 studies, 50% of which are true effects
if(!require(meta)){install.packages('meta')}
library(meta)
nSims <- 1000000 #number of simulated experiments
numberstudies<-4 #number of studies per meta-analysis; nSims/numberstudies should be a whole number
p <-numeric(nSims) #set up empty container for all simulated p-values
metapran <-numeric(nSims/numberstudies) #set up empty container for all simulated p-values for random effects MA
metapfix <-numeric(nSims/numberstudies) #set up empty container for all simulated p-values for fixed effects MA
heterog.p<-numeric(nSims/numberstudies) #set up empty container for test for heterogeneity
d <-numeric(nSims) #set up empty container for all simulated d's
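The preview stops after the setup. A minimal sketch of the simulation loop that might follow, assuming two-group t-tests with n = 50 per group and a true effect of d = 0.5 in half of the simulated studies (the sample size and effect size are assumptions, not taken from the gist):
n <- 50 #assumed sample size per group (not shown in the preview)
for(i in 1:nSims){
  trueD <- ifelse(i %% 2 == 0, 0.5, 0) #50% of studies get a true effect (assumed d = 0.5)
  x <- rnorm(n, mean = 0, sd = 1)
  y <- rnorm(n, mean = trueD, sd = 1)
  p[i] <- t.test(x, y, var.equal = TRUE)$p.value
  d[i] <- (mean(y) - mean(x)) / sqrt((sd(x)^2 + sd(y)^2) / 2) #Cohen's d
}
#Each consecutive set of numberstudies effect sizes would then be combined,
#e.g., with meta::metagen(), to fill metapfix, metapran, and heterog.p.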
Lakens / Fdist_tdist.R
Created April 7, 2016 18:12
F-distribution and t-distribution
df1<-1
df2<-100
critF<-qf(.95, df1=df1, df2=df2) #determine critical F-value
critT<-qt(.975, df2) #determine critical t-value
critF #critical F-value
critT^2 #Critical t squared is the same as critical F-value
critT #critical t-value
x<-seq(0, 10, length.out = 10000) #x-axis values for the density plot
maxy<-ifelse(max(df(x,df1,df2))==Inf,1, max(df(x,df1,df2))) #set maximum y axis
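The preview ends at the y-axis maximum. A plausible continuation (assumed, not shown in the gist) plots the F(df1, df2) density and marks the critical value:
plot(x, df(x, df1, df2), type = "l", ylim = c(0, maxy),
     xlab = "F-value", ylab = "Density") #plot the F(1, 100) density
abline(v = critF, lty = 2) #dashed line at the critical F-value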
Lakens / GKPW.R
Created March 6, 2016 08:47
GKPW.R
setwd("C:/Users/Daniel/Downloads/Gilbert, King, Pettigrew, Wilson 2016 replication files/variability analysis replication files/data")
load("many labs replication cis.RData")
## Drop the top rows which are statistics from pooling together all the replications
res <- lapply(res, function(x) x[-c(1:2),])
res[[12]] <- res[[12]][-1,]
names(res[[16]])[3:5] <- names(res[[15]])[3:5]
## For each replicated study, get the number of the other replicated
## studies that were outside the CI
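The counting step itself is cut off in the preview. A rough sketch under assumed column names (est for each replication's estimate, ci.lo/ci.hi for the CI bounds; the actual data frames in res almost certainly use different names):
count.outside <- function(x){ #for one study, count other replications outside each replication's CI
  sapply(seq_len(nrow(x)), function(i)
    sum(x$est[-i] < x$ci.lo[i] | x$est[-i] > x$ci.hi[i], na.rm = TRUE))
}
outside <- lapply(res, count.outside)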
Lakens / CI_vs_CP.R
Created March 2, 2016 04:54
confidence intervals vs capture percentages
if(!require(ggplot2)){install.packages('ggplot2')}
library(ggplot2)
n=20 #set sample size
nSims<-100000 #set number of simulations
x<-rnorm(n = n, mean = 100, sd = 15) #create sample from normal distribution
#95%CI
CIU<-mean(x)+qt(0.975, df = n-1)*sd(x)*sqrt(1/n)
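The preview stops at the upper bound. Completing the interval and estimating the capture percentage might look like this (a sketch, not the gist's own continuation):
CIL<-mean(x)-qt(0.975, df = n-1)*sd(x)*sqrt(1/n) #lower bound, mirroring CIU
#Capture percentage: how often do the means of new samples fall inside this one CI?
newmeans<-replicate(nSims, mean(rnorm(n = n, mean = 100, sd = 15)))
100*mean(newmeans > CIL & newmeans < CIU) #on average ~83.4%, not 95% (Cumming & Maillardet, 2006)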
Lakens / spuriouscorrelation.R
Created January 29, 2016 17:21
spuriouscorrelationRPP
if(!require(ggplot2)){install.packages('ggplot2')}
library(ggplot2)
if(!require(MBESS)){install.packages('MBESS')}
library(MBESS)
#Set color palette
cbbPalette<-c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7")
N<-20
#Set mean and SD
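The preview breaks off at the mean and SD. One way the demonstration might continue (values assumed, not from the gist): draw two independent samples and show that, at N = 20, sizeable correlations appear by chance alone:
mu<-100; SD<-15 #assumed values; the gist's own settings are cut off
x<-rnorm(N, mean = mu, sd = SD)
y<-rnorm(N, mean = mu, sd = SD) #independent of x, so any observed correlation is spurious
cor.test(x, y) #with N = 20, |r| > .3 occurs by chance reasonably often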
Lakens / BayesianPowerTtest.R
Created January 14, 2016 12:37
Bayesian Power Analysis for an Independent t-test
#Bayesian Power Analysis
if(!require(BayesFactor)){install.packages('BayesFactor')}
library(BayesFactor)
D<-0.0 #Set the true effect size
n<-50 #Set sample size of your study (number in each group)
nSim<-100000 #Set number of simulations (it takes a while, be patient)
rscaleBF<-sqrt(2)/2 #Set effect size of alternative hypothesis (default = sqrt(2)/2, or 0.707)
threshold<-3 #Set threshold for 'support' - e.g., 3, 10, or 30
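The simulation loop itself is not in the preview; a hedged sketch of what it presumably looks like, using BayesFactor's ttestBF (slow at nSim = 100000, as the comment above warns):
bf<-numeric(nSim) #container for Bayes factors
for(i in 1:nSim){
  x<-rnorm(n, mean = 0, sd = 1)
  y<-rnorm(n, mean = D, sd = 1)
  bf[i]<-extractBF(ttestBF(x = x, y = y, rscale = rscaleBF))$bf #BF10
}
mean(bf > threshold) #proportion of simulated studies supporting H1
mean(bf < 1/threshold) #proportion supporting H0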
Lakens / ErrorControlANOVA.R
Created January 1, 2016 09:50
Holm Error Control Simulation 2x2x2 ANOVA
if(!require(reshape2)){install.packages('reshape2')}
library(reshape2)
if(!require(mvtnorm)){install.packages('mvtnorm')}
library(mvtnorm)
if(!require(ez)){install.packages('ez')}
library(ez)
#Install multtest, a Bioconductor dependency of mutoss
#(biocLite is deprecated; current Bioconductor versions use BiocManager::install("multtest"))
source("https://bioconductor.org/biocLite.R")
biocLite("multtest")
if(!require(mutoss)){install.packages('mutoss')}
library(mutoss) #load multiple testing library for the Holm function
#2x2x2 within design
N<-50 #sample size per group
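As a minimal illustration of the correction being simulated (this snippet is not from the gist), Holm's procedure over the seven F-tests a 2x2x2 ANOVA yields (three main effects, three two-way interactions, one three-way interaction) can also be done with base R:
pvals<-runif(7) #placeholder p-values for the seven ANOVA effects
p.adjust(pvals, method = "holm") #Holm-adjusted p-values; compare against .05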
Lakens / PlotScopusData.R
Created December 13, 2015 13:35
PlotScopusData.R
if(!require(ggplot2)){install.packages('ggplot2')}
library(ggplot2)
#Save downloaded Scopus data in your working directory
scopusdata<-read.csv("scopusPS2010_2015.csv")
plot1<-ggplot(scopusdata, aes(x=Cited.by)) +
geom_histogram(colour="#535353", fill="#84D5F0", binwidth=2) +
xlab("Number of Citations") + ylab("Number of Articles") +
ggtitle("Citation Data for Psychological Science 2011-2015") +
coord_cartesian(xlim = c(-5, 250))
plot1 #display the histogram
#Additional Analyses of Nuijten et al: https://mbnuijten.files.wordpress.com/2013/01/nuijtenetal_2015_reportingerrorspsychology1.pdf
#First run the original script to read in the data: https://osf.io/e9qbp/
#Select only errors.
subdata<-subset(data, data$Error == TRUE)
subdata$pdif<-subdata$Reported.P.Value-subdata$Computed #Compute difference in p-values.
#Plot differences in reported and computed p-values for all errors
ggplot(subdata, aes(x = pdif)) +
  geom_histogram(colour = "black", fill = "grey", binwidth = 0.01) +
  ggtitle("All Errors") +
  xlab("Reported P-value minus Computed P-value") +
  ylab("Frequency") +
  theme_bw(base_size = 20)
Lakens / MetaAnalyticThinking.R
Last active April 23, 2021 02:38
People find it difficult to think about random variation. Our minds are more strongly geared towards recognizing patterns than randomness. In this blog post, you can practice getting used to what random variation looks like, see how to reduce it by running well-powered studies, and learn how to meta-analyze multiple small studies.
# # # # # # # # # # #
#Initial settings----
# # # # # # # # # # #
if(!require(ggplot2)){install.packages('ggplot2')}
library(ggplot2)
if(!require(MBESS)){install.packages('MBESS')}
library(MBESS)
if(!require(pwr)){install.packages('pwr')}
library(pwr)
if(!require(meta)){install.packages('meta')}
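The preview cuts off mid-setup; given the install/load pattern of the preceding lines, the next line is presumably:
library(meta)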