# Inspired by Nadieh Bremer's lasso annotations here: https://twitter.com/NadiehBremer/status/1277622602735865856
# h/t Cedric Scherer: https://twitter.com/CedScherer/status/1278351840074240001
library(ggplot2); library(ggforce)

add_lasso_layer <- function(x1, x2, y) {
  width = x2 - x1
  space = width * 0.05
  lasso_points = data.frame(
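# A hedged sketch of how the truncated helper might finish: place control points
# around the span [x1, x2] at height y plus a short trailing "tail", and smooth them
# into a hand-drawn-looking loop with ggforce::geom_bspline(). The extra height
# argument and the exact point layout are my assumptions, not the original gist's code.
add_lasso_layer_sketch <- function(x1, x2, y, height = 1) {
  width <- x2 - x1
  space <- width * 0.05
  t <- seq(0.15, 1.95 * pi, length.out = 25)           # open ellipse: gap where the tail attaches
  loop <- data.frame(x = (x1 + x2) / 2 + cos(t) * (width / 2 + space),
                     y = y + sin(t) * height / 2)
  tail <- data.frame(x = x2 + c(2, 4) * space,          # short tail trailing off to the right
                     y = y + height * c(0.35, 0.7))
  geom_bspline(data = rbind(loop, tail), aes(x = x, y = y),
               n = 300, colour = "grey30")
}

# Example: lasso a cluster of points in a scatterplot
ggplot(mtcars, aes(wt, mpg)) +
  geom_point() +
  add_lasso_layer_sketch(x1 = 3, x2 = 4, y = 19, height = 6)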
devtools::install_github("coolbutuseless/poissoned")
library(poissoned)
library(tidyverse)

# v1
poisson_disc(ncols = 100, nrows = 50, cell_size = 10) %>%
  as_tibble() %>%
  arrange(y) %>%
  mutate(rando = runif(n()),
         visibility = case_when(
           y >= 300 ~ (500 - y) / 200,
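# A hedged guess at where the truncated pipeline was going: treat `visibility` as a
# keep-probability so the Poisson-disc points thin out toward the top of the canvas,
# then plot the survivors. The fade threshold and the filter are my assumptions.
fading_points <- poisson_disc(ncols = 100, nrows = 50, cell_size = 10) %>%
  as_tibble() %>%
  arrange(y) %>%
  mutate(rando = runif(n()),
         visibility = case_when(y >= 300 ~ (500 - y) / 200,   # fade from y = 300 up to y = 500
                                TRUE     ~ 1)) %>%             # fully visible below the threshold
  filter(rando <= visibility)

ggplot(fading_points, aes(x, y)) +
  geom_point(size = 0.6) +
  coord_equal() +
  theme_void()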
library(tidyverse)

example_df <- tribble(
  ~Year, ~Category, ~Name,
  2015,  "A",       "1",
  2015,  "A",       "3",
  2015,  "A",       "5",
  2015,  "C",       "2",
  2015,  "C",       "4",
  2016,  "A",       "1"
)
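# The preview does not show what the gist does with example_df; as an assumed
# illustration, count names per Year/Category combination, keeping zero-count
# combinations with tidyr::complete().
example_df %>%
  count(Year, Category) %>%
  complete(Year, Category, fill = list(n = 0))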
library(palmerpenguins)
library(tidyverse)
library(umap)
library(gganimate)

penguins_clean <- penguins %>% filter(!is.na(bill_length_mm), !is.na(sex))

penguinos <- function(neigh = 13) {
  penguins_clean %>%
    select(bill_length_mm:body_mass_g) %>%
    scale() %>%
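# A hedged sketch of how the truncated penguinos() helper might continue: run UMAP on
# the scaled measurements for a given n_neighbors and return the 2-D layout, so that
# layouts for several n_neighbors values can be animated. The column names, the id
# grouping, and the animation are my assumptions, not the gist's exact code.
penguinos_sketch <- function(neigh = 13) {
  layout <- penguins_clean %>%
    select(bill_length_mm:body_mass_g) %>%
    scale() %>%
    umap(n_neighbors = neigh) %>%
    pluck("layout")
  tibble(id        = seq_len(nrow(layout)),
         x         = layout[, 1],
         y         = layout[, 2],
         species   = penguins_clean$species,
         neighbors = neigh)
}

# One frame per n_neighbors value, tweening each penguin between layouts
umap_frames <- map_dfr(c(5, 13, 30), penguinos_sketch)

ggplot(umap_frames, aes(x, y, colour = species, group = id)) +
  geom_point() +
  transition_states(neighbors) +
  labs(title = "UMAP with n_neighbors = {closest_state}")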
library(tidyverse)
library(tsibble)
library(fable)
library(gganimate)

set.seed(1)
df <- data.frame(x = 1:365,
                 y = cumsum(runif(365, min = -1)))

smoothy <- function(alpha1) {
  df %>%
    as_tsibble(index = x) %>%
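# A hedged guess at how smoothy() might continue: fit an exponential-smoothing model
# with the level parameter fixed at alpha1 via fable::ETS(), and return the fitted
# values so frames can be animated across alpha values. The column names and the
# animation below are my assumptions, not the gist's exact code.
smoothy_sketch <- function(alpha1) {
  df %>%
    as_tsibble(index = x) %>%
    model(ETS(y ~ error("A") + trend("N", alpha = alpha1) + season("N"))) %>%
    fitted() %>%
    as_tibble() %>%
    transmute(x, smoothed = .fitted, alpha = alpha1)
}

smoothed_frames <- map_dfr(seq(0.1, 0.9, by = 0.2), smoothy_sketch)

ggplot(smoothed_frames, aes(x, smoothed)) +
  geom_line(data = df, aes(x, y), colour = "grey70") +   # raw series, static in every frame
  geom_line(colour = "firebrick") +                       # smoothed series for the current alpha
  transition_states(alpha) +
  labs(title = "ETS smoothing with alpha = {closest_state}")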
dog_moves <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-12-17/dog_moves.csv')
dog_travel <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-12-17/dog_travel.csv')
dog_descriptions <- readr::read_csv('https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2019/2019-12-17/dog_descriptions.csv')

library(tidyverse)
library(tidytext)
library(lubridate)

doggo_names <- dog_descriptions %>%
  select(name, breed_primary, size, sex, contact_state, contact_zip, posted)
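# The preview stops here; as an assumed illustration of where this setup might go,
# tally the most common names among the listed dogs. This summary is my guess, not
# the gist's own analysis.
doggo_names %>%
  filter(!is.na(name)) %>%
  count(name, sort = TRUE) %>%
  slice_head(n = 10)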
# Some ideas to try out on the data from:
# https://gist.github.com/brooke-watson/ccf3d1b1f4449ab55a72f7835a52e599
library(tidyverse)

# 0. Describe what type of data is in each row
sw1_annotated <- starwars_garbage_data1 %>%
  # Start a new group at each "Character Name" row
  mutate(group = cumsum(v1 == "Character Name")) %>%
  # Assign rows within each group
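# A hedged continuation of the annotation step: number the rows inside each group so
# every value can later be matched to the field it belongs to. The row_number()
# approach and column names are my assumptions about where the truncated pipe was headed.
sw1_annotated <- starwars_garbage_data1 %>%
  mutate(group = cumsum(v1 == "Character Name")) %>%
  group_by(group) %>%
  mutate(row_in_group = row_number()) %>%
  ungroup()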
# Sunsets for pundit: when the lights go down in the city... local sunset times
# https://twitter.com/jonlovett/status/1191490424965218304
# Jon Lovett: "What I'm curious about is how we're distributed inside of our time zones, because the further east you are inside of a time zone, the harder standard time hits. Boston's sunset is 4:34pm today. Brutal. Detroit's sunset, on the other side of the same time zone, is 5:22pm."
# I'm a big fan of Jon Lovett's podcasts, and I love trying new things in R, so it was game on when I saw the tweet above. I googled around and combined a few data sources to get US counties' populations, coordinates, sunsets, and time zones. From those you can see who has it worst in the evenings when standard time kicks in.
# Curiously, big cities in the US tend to sit at the eastern, "darker evening" end of their time zones.
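# A minimal sketch (my own, with assumed package choices) of the core computation:
# given coordinates and a time zone, get the local sunset time on a standard-time
# date with the suncalc package. Boston and Detroit sit at opposite ends of the same zone.
library(suncalc)
library(dplyr)
library(lubridate)

cities <- tibble::tibble(
  city = c("Boston", "Detroit"),
  lat  = c(42.36, 42.33),
  lon  = c(-71.06, -83.05),
  tz   = "America/New_York"
)

cities %>%
  rowwise() %>%
  mutate(sunset_local = getSunlightTimes(date = as.Date("2019-11-04"),
                                         lat = lat, lon = lon,
                                         keep = "sunset")$sunset %>%
           with_tz(tz) %>%
           format("%H:%M")) %>%
  ungroup()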
####### Libraries #####
library(rtweet)
library(tidyverse)
library(tidytext)
library(ggforce)

sw <- search_tweets("Rogue, IV", n = 90000, include_rts = FALSE)

sw_clean <- sw %>%
  unnest_tokens(word, text, drop = FALSE, to_lower = TRUE) %>%
  filter(word %in% c("i", "ii", "iii", "iv", "v", "vi",
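# A hedged completion of the truncated filter: keep tokens that look like Star Wars
# episode numerals (plus "rogue"), then tally mentions per token. The exact token list
# and the summary are my assumptions about where the snippet was headed.
sw_clean <- sw %>%
  unnest_tokens(word, text, drop = FALSE, to_lower = TRUE) %>%
  filter(word %in% c("i", "ii", "iii", "iv", "v", "vi",
                     "vii", "viii", "ix", "rogue"))

sw_clean %>%
  count(word, sort = TRUE)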