r_version_table <- data.frame(stringsAsFactors=FALSE,
version = c("R-3.0.0", "R-3.0.1", "R-3.0.2", "R-3.0.3", "R-3.1.0",
"R-3.1.1", "R-3.1.2", "R-3.1.3", "R-3.2.0", "R-3.2.1",
"R-3.2.2", "R-3.2.3", "R-3.2.4-revised", "R-3.2.5", "R-3.3.0",
"R-3.3.1", "R-3.3.2", "R-3.3.3", "R-3.4.0", "R-3.4.1", "R-3.4.2",
"R-3.4.3", "R-3.4.4", "R-3.5.0", "R-3.5.1", "R-3.5.2", "R-3.5.3",
"R-3.6.0"),
date = c("2013-04-03", "2013-05-16", "2013-09-25", "2014-03-06",
"2014-04-10", "2014-07-10", "2014-10-31", "2015-03-09",
"2015-04-16", "2015-06-18", "2015-08-14", "2015-12-10", "2016-03-16",
"2016-04-14", "2016-05-03", "2016-06-21", "2016-10-31", "2017-03-06",
"2017-04-21", "2017-06-30", "2017-09-28", "2017-11-30", "2018-03-15",
"2018-04-23", "2018-07-02", "2018-12-20", "2019-03-11", "2019-04-26"))

library(dslabs)    # provides the gapminder data set
library(dplyr)
library(ggplot2)

filter(gapminder, year == 1962) %>%
  ggplot(aes(fertility, life_expectancy)) +
  geom_point()

Option x: software peer review by specialized communities

Or this could go in the community engagement section

Perhaps one of the most impactful outcomes of having software thoroughly reviewed by a community of peers, whether or not it results in a journal publication, is the improvement of the software itself. One prominent exemplar of a community-led software review system is the software onboarding process (https://ropensci.org/blog/2017/09/01/nf-softwarereview/) created by the rOpenSci project (https://ropensci.org/), a non-profit initiative that promotes reproducible research through software development, advocacy, and community outreach.

rOpenSci follows an open peer-review model, and the entire review process is designed to be non-adversarial and constructive. rOpenSci blends best practices from publication peer review with newer practices that address the unique challenges of reviewing software. This system combines elements of traditional academic peer review (relying on external reviewers and an editorial team) with practices drawn from open-source code review.

# Create a research compendium skeleton
rrtools::use_compendium("pkgname")
# Ideally, pass the full path so the compendium is created where you expect
rrtools::use_compendium("/full/path/to/pkgname")

# Add a license and publish the compendium to GitHub
usethis::use_mit_license(name = "Karthik Ram")
devtools::use_github(".", auth_token = "xxxx", protocol = "https", private = FALSE)
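After the compendium exists, a typical rrtools session continues with a few more scaffolding steps. This is a sketch of that follow-on workflow; the function names come from the rrtools package, and which of them a given project needs will vary:

```r
# Add an analysis/ directory with paper/, figures/, and data/ subfolders
rrtools::use_analysis()

# Add a README.Rmd documenting how to reproduce the analysis
rrtools::use_readme_rmd()

# Add a Dockerfile so the compendium can be rebuilt in a known environment
rrtools::use_dockerfile()
```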
library(ggplot2)
library(patchwork)

# Simulate example data: a uniform variable, a two-level group,
# and two correlated beta-distributed variables
set.seed(42)  # for reproducibility
d1 <- runif(500)
d2 <- rep(c("Treatment", "Control"), each = 250)
d3 <- rbeta(500, shape1 = 100, shape2 = 3)
d4 <- d3 + rnorm(500, mean = 0, sd = 0.1)
plotData <- data.frame(d1, d2, d3, d4)
str(plotData)

p1 <- ggplot(data = plotData) + geom_point(aes(x = d3, y = d4))
p2 <- ggplot(data = plotData) + geom_boxplot(aes(x = d2, y = d1))
# patchwork lets ggplot objects be combined with +
p1 + p2
library(rAltmetric)
library(tidyverse)

# DOIs to look up
ids <- list(
  "10.1016/j.kint.2017.06.017",
  "10.1016/j.kint.2017.08.025",
  "10.1016/j.kint.2017.07.024",
  "10.1016/j.kint.2017.09.013",
  "10.1016/j.kint.2017.06.026"  # ... continues in the full list
)
library(rcrossref)
library(tibble)
library(lubridate)
library(ggplot2)
library(dplyr)
library(readr)
library(tidyr)
# These next few lines of code create a set of start and end dates
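The lines that actually build those dates are not shown here. A minimal base-R sketch of what such a sequence might look like (the monthly windows and the 2018 date range are assumptions for illustration):

```r
# Hypothetical reconstruction: monthly query windows covering 2018
start_dates <- seq(as.Date("2018-01-01"), as.Date("2018-12-01"), by = "month")
# End of each month = first day of the next month, minus one day
end_dates <- seq(as.Date("2018-02-01"), as.Date("2019-01-01"), by = "month") - 1
date_ranges <- data.frame(start = start_dates, end = end_dates)
head(date_ranges)
```

Each row of `date_ranges` can then drive one date-bounded query against the Crossref API.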
library(googlesheets)
library(tidyverse)
# List the functions the package exports
ls("package:googlesheets")
# Register the sheet by its title, then read the first worksheet
data <- gs_title("2018_unconf_nominations")
kr <- data %>% gs_read("Sheet1")
names(kr)
# Drop the first row (a non-data header row)
kr <- kr[-1, ]
kr %>% group_by(support_needed) %>% tally()
library(ggplot2)
library(dplyr)
library(png)

# `acuna` holds altmetric data for one paper; pull out the
# cited_by_* counts into a two-column data frame
df <-
  acuna[grepl("cited_by", names(acuna))] %>% unlist %>% data.frame %>% tibble::rownames_to_column()
names(df) <- c("Provider", "Hits")
df <- df %>% mutate(Provider = gsub("cited_by_", "", Provider))

# Download the Altmetric donut badge and read it as a PNG
tmp <- tempfile(fileext = ".png")
curl::curl_download(acuna$images.small, destfile = tmp)
donut <- readPNG(tmp)
library(rAltmetric)
library(purrr)
# purrr provides safely(), which wraps a function so errors are
# captured instead of stopping the loop over DOIs
ids <- c("10.1371/journal.pbio.0000002",
         "10.1371/journal.pbio.0000003",
         "10.1371/journal.pbio.0000004",
         "10.1371/journal.pbio.0000005",
         "10.1371/journal.pbio.0000006")  # ... continues in the full list
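With the DOIs in hand, a typical pattern is to wrap the query function in `safely()` so that one bad DOI does not abort the whole run. A sketch, assuming the rAltmetric `altmetrics()`/`altmetric_data()` API and a vector `ids` of DOIs:

```r
# Query one DOI and flatten the result to a data frame
alm <- function(x) altmetric_data(altmetrics(doi = x))

# safely() returns, for each call, a list with $result and $error
safe_alm <- purrr::safely(alm)
results <- purrr::map(ids, safe_alm)

# Keep only the successful queries and bind them into one data frame
ok <- results %>%
  purrr::map("result") %>%
  purrr::compact() %>%
  dplyr::bind_rows()
```

Failed lookups can be inspected afterwards via `purrr::map(results, "error")`.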