@nicebread
nicebread / Elsevier review reply.txt
Last active January 21, 2019 06:17
Reply to review request from Elsevier journal
Dear Dr. XXX,
Traditional scientific publishers make higher profits than almost every other legal industry, using scholars' labor at no cost (as we are subsidized by taxpayers) and draining university library funds (see, e.g., [1, 2]).
Your journal has a venerable history of publishing high-quality research, but all the credit for that goes to the authors, the editorial board, and the reviewers, not the publisher. Yet somehow the publisher owns the content and hides it behind paywalls. I am not interested in further supporting that sort of unhealthy relationship between academia and (some) traditional publishers.
I have made a pledge not to review, do editorial work for, or submit papers to Elsevier journals any more [3]. Instead, I reallocate my reviewing and editorial contributions to sustainable and fair ways of publishing (see, e.g., [4]). I think JRP is a good journal, which has taken a laudable leading role in some of our field's reforms. I encourage the editorial board to end their relationship with Elsevier.
# I try to build a prior distribution for effect sizes that closely resembles meta-analytical summaries of effect sizes. Based on nearly 34,000 effect sizes, Richard, Bond, and Stokes-Zoota (2003) conclude that about 30% of published effect sizes are smaller than d=0.20, about 76% are smaller than 0.63, and 95% are smaller than 1.15.
#
# A normal distribution with mu=0, sigma=0.55 matches these percentages almost perfectly, implying theoretical proportions of 28%, 75%, and 96%:
# 1-pnorm(0.20, 0, 0.55, lower.tail=FALSE)*2 # 28% are smaller than 0.20
# 1-pnorm(0.63, 0, 0.55, lower.tail=FALSE)*2 # 75% are smaller than 0.63
# 1-pnorm(1.15, 0, 0.55, lower.tail=FALSE)*2 # 96% are smaller than 1.15
# However, we know that published effect sizes are overestimated due to publication bias (in RP:P, and also in Franco, Malhotra, & Simonovits, 2015, the reported ES was about twice as large as the replication ES / the unreported ES).
# Therefore, we apply more shrinkage, using N(0, SD=0.3). With these settings, the
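# As a quick sketch (not part of the original snippet), the same pnorm check as above
# shows what the stronger-shrinkage prior N(0, SD=0.3) implies for the three
# Richard et al. benchmarks:

```r
# Proportion of effect sizes |d| below each benchmark under N(0, SD=0.3):
1 - pnorm(0.20, 0, 0.3, lower.tail=FALSE)*2  # ~50% are smaller than 0.20
1 - pnorm(0.63, 0, 0.3, lower.tail=FALSE)*2  # ~96% are smaller than 0.63
1 - pnorm(1.15, 0, 0.3, lower.tail=FALSE)*2  # >99.9% are smaller than 1.15
```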
library(BayesFactor)  # provides ttestBF() and posterior(); HPDinterval() comes from coda

BF <- ttestBF(...)  # fill in your data and prior arguments here
BF.post <- posterior(BF, iterations=10000)  # draw 10,000 MCMC samples from the posterior
HPD <- HPDinterval(BF.post)
HPD["delta", "lower"]  # lower bound of the 95% HPD interval
HPD["delta", "upper"]  # upper bound of the 95% HPD interval
median(BF.post[, "delta"])  # posterior median
# "delta" is the posterior distribution of the standardized effect size (Cohen's d)
nicebread / show_r_code_checkbox
Created March 16, 2015 18:45
Add "Show R code" checkbox to .Rmd files
<!-- Some custom CSS styles -->
<style>
.title{
font-size: 3em;
}
body, td {
font-family: sans-serif;
background-color: white;
font-size: 16px;
}
</style>
These code snippets have been tested on R 3.1.0 and Mac OS 10.9.3. They presumably do *not* work on R 2.X!
## Enter these commands in the Mac OS Terminal
# use faster vecLib library
cd /Library/Frameworks/R.framework/Resources/lib
ln -sf /System/Library/Frameworks/Accelerate.framework/Frameworks/vecLib.framework/Versions/Current/libBLAS.dylib libRblas.dylib
# return to default settings
cd /Library/Frameworks/R.framework/Resources/lib
ln -sf libRblas.0.dylib libRblas.dylib  # restore R's reference BLAS
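One way to check whether the BLAS swap took effect (a hedged sketch, not part of the original gist) is to time a matrix cross-product in R before and after the symlink change:

```r
# With vecLib active, the elapsed time should drop substantially
# compared to R's reference BLAS; the result itself is identical.
set.seed(1)
m <- matrix(rnorm(1000 * 1000), nrow = 1000)
system.time(cp <- crossprod(m))
dim(cp)  # 1000 x 1000 either way; only the speed differs
```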
nicebread / Ruscio2008_cor_generate.R
Created November 9, 2012 13:42
Ruscio - Code for generating correlating variables with arbitrary distributions
## This code snippet is taken from Ruscio, J., & Kaczetow, W. (2008). Simulating Multivariate Nonnormal Data Using an Iterative Algorithm. Multivariate Behavioral Research, 43(3), 355–381.
## The original code contained some errors; this is a cleaned version.
## 2012 Felix Schönbrodt
#' @param Pop List with empirical samples
#' @param rho The correlation to be induced