luke reding (lukereding)
@lukereding
lukereding / gist:b560a4b8f2185bd8e5cc8e29e2af41b9
Created May 1, 2017 14:02
deck_gl_geojson output run locally
0214-lreding:deck_gl_geojson lreding$ ap run
+ yarn add deck.gl
yarn add v0.23.2
warning No license field
[1/4] 🔍 Resolving packages...
[2/4] 🚚 Fetching packages...
[3/4] 🔗 Linking dependencies...
[4/4] 📃 Building fresh packages...
warning Your current version of Yarn is out of date. The latest version is "0.23.3" while you're on "0.23.2".
success Saved 1 new dependency.
@lukereding
lukereding / swag.yml
Created June 2, 2017 15:19
test_swag_yaml
swagger: '2.0'
info:
  description: >-
    This is a simple example of a REST API deployed on Anaconda Enterprise and
    documented with Swagger UI.
  version: 2017.06.02
  title: quotes
  termsOfService: 'http://swagger.io/terms/'
  contact:
    email: lreding@continuum.io

swagger: '2.0'
info:
  description: >-
    This API implements some simple natural language processing tasks like
    tokenization, sentiment analysis, and retrieving parts of speech from text.
  version: 2017.06.08
  title: NLP with Anaconda
  termsOfService: ''
  contact:
    email: lreding@continuum.io
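
Only the info header of this spec survives in the snippet above; the paths section that would actually expose the NLP tasks is not shown. As a rough illustration of how one of those tasks could be documented in Swagger 2.0 (the /sentiment path, its body schema, and the wording are hypothetical, not taken from the gist), a path entry might look like:

paths:
  /sentiment:
    post:
      summary: Return a sentiment score for a block of text (hypothetical endpoint)
      consumes:
        - application/json
      produces:
        - application/json
      parameters:
        - in: body
          name: body
          required: true
          schema:
            type: object
            properties:
              text:
                type: string
      responses:
        '200':
          description: sentiment score for the supplied text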
random_csv <- "http://datadryad.org/bitstream/handle/10255/dryad.37977/Cranby%20E4%20pheno.csv?sequence=1"
# read in a csv
df <- read.csv(random_csv)
# print it to the console
print(df)
# print the raw characters from the csv file to the console
readLines(random_csv)
swagger: '2.0'
info:
  description: >-
    This API implements some simple natural language processing tasks like
    tokenization, sentiment analysis, finding the most common words, and
    retrieving parts of speech from text.
  version: 2017.06.14
  title: Natural Language Processing with Anaconda
  termsOfService: ''
  contact:
@lukereding
lukereding / example_ggraph.R
Created August 21, 2017 22:28
example of ggraph usage
# packages used below, loaded so the snippet runs on its own
library(readr)
library(dplyr)
library(tidyr)
library(magrittr)
library(igraph)

df <- read_csv("http://datadryad.org/bitstream/handle/10255/dryad.152300/forDryad.csv?sequence=1")
df %<>% unite(uni, source, target)
# count how many times each source-target pair occurs
edges <- df %>%
  group_by(uni) %>%
  count() %>%
  separate(col = "uni", into = c("source", "target"), sep = "_")
edges <- rename(edges, number = n)
net <- graph_from_data_frame(edges)
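
The preview stops after building the igraph object, even though the gist is titled "example of ggraph usage". A minimal continuation that would plot the network with ggraph could look like this; the layout and aesthetic choices are guesses for illustration, not the gist's actual code:

library(ggraph)

ggraph(net, layout = "fr") +
  geom_edge_link(aes(edge_width = number)) +
  geom_node_point() +
  geom_node_text(aes(label = name), repel = TRUE)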
@lukereding
lukereding / gist:9bacdc6e42dff610b91d3cdd94a5eb1a
Created August 24, 2017 13:36
cowplot, marginal plots, and coord_flip
library(ggplot2)
library(cowplot) # github version
# create dataframe with x and y with different ranges
df <- data.frame(x = rnorm(20, mean = 5, sd = 2), y = rnorm(20, mean = 20, sd = 5))
# create scatterplot
p <- ggplot(df, aes(x = x, y = y)) +
geom_point()
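
The preview cuts off before the marginal plots and coord_flip that the gist description mentions. The usual cowplot recipe for attaching marginal densities to a scatterplot, sketched here as an assumption about where the snippet was headed rather than its exact code, is:

# marginal density for x, drawn on a canvas aligned with p's x axis
xdens <- axis_canvas(p, axis = "x") +
  geom_density(data = df, aes(x = x), fill = "grey70")

# marginal density for y; coord_flip = TRUE keeps the canvas aligned after flipping
ydens <- axis_canvas(p, axis = "y", coord_flip = TRUE) +
  geom_density(data = df, aes(x = y), fill = "grey70") +
  coord_flip()

p1 <- insert_xaxis_grob(p, xdens, grid::unit(0.2, "null"), position = "top")
p2 <- insert_yaxis_grob(p1, ydens, grid::unit(0.2, "null"), position = "right")
ggdraw(p2)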
library(dplyr)

data_frame(x = 1:10) %>%
  mutate(
    # x2 = x * 2,
    x3 = x * 3) %>%
  filter(x > 5)
@lukereding
lukereding / make_video.sh
Created October 18, 2017 20:53
example shell script
#!/usr/bin/env bash

# this script downloads an example animation from the internet and creates a looped version of the animation that you could then use in a mate choice study

# the advantages of this script over doing the same thing in something like iMovie:
### time. On my computer this script takes <30 seconds to execute. Doing this in iMovie could easily take half an hour.
### repeatability. I can send this script to you and you can make the video yourself.
### easy to remake videos when you make a mistake
### easy to keep track of what you've done. A plain-text script like this allows you to _version control_ your work, letting you revert to a previous version and always know when you changed parts of your code

# make a directory to keep things tidy
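
The preview ends right after the comment about making a working directory, so the download and looping steps themselves are cut off. Picking up from there, a minimal sketch assuming curl for the download and ffmpeg's concat demuxer for the looping (the URL and filenames are placeholders, not the gist's own):

# download an example animation (placeholder URL)
curl -L -o animation.mp4 "https://example.com/animation.mp4"

# write a concat list that repeats the clip four times, then stitch it into one looped video
for i in 1 2 3 4; do echo "file 'animation.mp4'"; done > loop_list.txt
ffmpeg -f concat -safe 0 -i loop_list.txt -c copy looped_animation.mp4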
@lukereding
lukereding / make_vids_variable_speeds.sh
Created November 3, 2017 15:20
bash script to download an animation and create video with variable speeds
#!/usr/bin/env bash

# the goal:
## download the animation
## use ffmpeg to make it playable
## change the speed of the animation
## randomly change the speed of the animation
## concatenate multiple speeds together
# make a directory to keep things tidy
mkdir create_video
cd create_video
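
The preview stops after the mkdir/cd lines, so the speed-changing and concatenation steps listed in the goals are not shown. One common way to do those steps uses ffmpeg's setpts filter and its concat demuxer; the sketch below assumes that approach, with illustrative speeds, URL, and filenames rather than the gist's own:

# download an example animation (placeholder URL)
curl -L -o animation.mp4 "https://example.com/animation.mp4"

# re-time the clip at a randomly chosen playback speed, three times over (-an drops any audio)
speeds=(0.5 1.0 2.0)
for i in 1 2 3; do
  s=${speeds[$RANDOM % ${#speeds[@]}]}
  ffmpeg -y -i animation.mp4 -filter:v "setpts=PTS/${s}" -an "clip_${i}.mp4"
  echo "file 'clip_${i}.mp4'" >> clips.txt
done

# concatenate the variable-speed clips into a single video
ffmpeg -y -f concat -safe 0 -i clips.txt -c copy variable_speed.mp4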