Increasing quantities of, and access to, both wildlife survey data and non-designed incidental or citizen science data have left us with a rather big problem: how do we put all of these disparate pieces together and build species distribution models that use as much of the available data as possible? This leads to a series of sub-questions that I will address in this talk: should we combine the data and then model it all at once, or build multiple models and figure out how to combine their outputs (or couple their fitting)? How can we find equivalences in recorded effort (and what can we do when no effort is recorded)? I'll illustrate these issues and offer some solutions using example data from aerial and shipboard surveys of seabirds in New England, as well as from large-scale surveys of marine mammals in the North Atlantic.
# get EXIF and make a map
library(leaflet)
library(lubridate)
library(plyr)
# all my photos are in a directory pre with subdirectories
pre <- "~/Dropbox/Photos/"
paths_to_photos <- c("some_sub_directory"
)
#273649 #647184 #B1B2C8 #A7755D #5D2E1C #38201C
#0F2B5F #5991C7 #8EC1E7 #B9DBF1 #D5A370 #7B4F37
#5A7362 #6B867C #A1A897 #9A8D6B #8E6341 #432B21
#1F150D #2B190C #513B2C #9B4D44 #893D37 #3E1B17
#9C8A45 #CABE85 #678B88 #9CADAF #CCCCCC #EFEFEF
#1C3333 #226060 #639CA4 #D2AD7C #BE7245 #46211C
#0D1723 #112040 #204D88 #96ABC6 #D1DDE2 #EFEFEF
#000000 #350E16 #5E1521 #A72C29 #C44221 #EC702E
#A56B47 #C79982 #0D8EDA #23ADED #6BC6F5 #EFEFEF
#2A2432 #4F3855 #846D86 #EFEFCF #D5B77D #A89E5E
Thanks to Wikipedia for the data :)
library(emoGG)
library(ggplot2)
# recode the am variable as emoji codepoints (car and bus)
mtcars$am[mtcars$am==1] <- "1f697"
mtcars$am[mtcars$am==0] <- "1f68c"
# use am as the emoji aesthetic
ggplot(mtcars, aes(wt, mpg, emoji=am)) + geom_emoji()
# convert mp4 to gif
# converts in.mp4 to out.gif, using 0-20s of the mp4 at resolution 640x480
ffmpeg -ss 00:00:00.000 -i in.mp4 -pix_fmt rgb24 -r 10 -s 640x480 -t 00:00:20.000 out.gif
# alternative mp4 to gif (using imagemagick for the second step)
ffmpeg -i input.mp4 -r 10 output%05d.png
convert output*.png output.gif
# strip audio out of a video file (to aac)
# e.g. (assuming the source audio is already AAC: -vn drops video, copy avoids re-encoding)
ffmpeg -i in.mp4 -vn -acodec copy out.aac
# cyclic-random effects tensor for Noam
library(mgcv)
# code adapted from ?gam.models
dat <- gamSim(1, n=400, scale=2) ## simulate 4 term additive truth
## Now add some random effects to the simulation. Response is
## grouped into one of 20 groups by `fac' and each group has a
## random effect added....
# you need my horse library from here: https://github.com/dill/horse
library(horse)
library(magrittr)
library(knitr)
# setup_twitter_oauth() comes from the twitteR package
library(twitteR)
setup_twitter_oauth("your auth stuff",
                    "goes here",
                    "you can't use mine",
                    "it's mine")
### blah
library(exif)
# get the files
files <- dir(".", full.names=TRUE)
# match only .JPG extensions (escape the dot, anchor to end of filename)
files <- files[grepl("\\.JPG$", files)]
# get the data
exifs <- read_exif(files)
# overlap of:
# - US state 2 letter codes from state.abb in R
# - 2-letter Scrabble words https://en.wiktionary.org/wiki/Appendix:Official_English_Scrabble_2-letter_words
# - chemical element symbols https://en.wikipedia.org/wiki/Symbol_%28chemistry%29
# get data
states <- tolower(state.abb)
scrabble <- c('aa','ab','ad','ae','ag','ah','ai','al','am','an','ar','as','at','aw','ax','ay','ba','be','bi','bo','by','ch','da','de','di','do','ea','ed','ee','ef','eh','el','em','en','er','es','et','ex','fa','fe','fy','gi','go','gu','ha','he','hi','hm','ho','id','if','in','io','is','it','ja','jo','ka','ki','ko','ky','la','li','lo','ma','me','mi','mm','mo','mu','my','na','ne','no','nu','ny','ob','od','oe','of','oh','oi','om','on','oo','op','or','os','ou','ow','ox','oy','pa','pe','pi','po','qi','re','sh','si','so','st','ta','te','ti','to','ug','uh','um','un','up','ur','us','ut','we','wo','xi','xu','ya','ye','yo','yu','za','zo')
elements <- tolower(c('Ac','Ag','Al','Am','Ar','As','At','Au','B','Ba','Be','Bh','Bi','Bk','Br' |