Increasing quantities of, and access to, both designed wildlife survey data and non-designed incidental or citizen science data have left us with a rather big problem: how do we put all of these disparate pieces together and build species distribution models that use as much of the available data as possible? This leads to a series of sub-questions that I will address in this talk: should we combine the data and then model it all at once, or build multiple models and work out how to combine their outputs (or couple their fitting)? How can we find equivalences in recorded effort (and what can we do when no effort is recorded at all)? I'll illustrate these issues and offer some solutions using example data from aerial and shipboard surveys of seabirds in New England, as well as from large-scale surveys of marine mammals in the North Atlantic.
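One simple way to handle differing recorded effort is to model counts from both survey types together in a single Poisson GAM, with the log of each survey's effort included as an offset so the smooth estimates one common density surface. The sketch below is purely illustrative (simulated data, made-up column names), not the analysis from the talk:

```r
# Minimal sketch: combining counts from two surveys with unequal effort,
# via a log(effort) offset in a Poisson GAM (mgcv).
# All data here are simulated; nothing is from the talk's datasets.
library(mgcv)

set.seed(1)
n <- 200
dat <- data.frame(x = runif(n), y = runif(n))
# survey 1 (e.g. aerial) searches small areas, survey 2 (e.g. shipboard)
# larger ones, but both efforts are in the same units (say km^2 searched)
dat$effort <- c(runif(n/2, 1, 5), runif(n/2, 10, 50))
# true density declines with x; expected count = density * effort
dat$count <- rpois(n, lambda = dat$effort * exp(1 - dat$x))

# the smooth models log density; the offset accounts for unequal effort
m <- gam(count ~ s(x, y) + offset(log(effort)),
         family = poisson(), data = dat)
summary(m)
```

With the offset in the formula, predictions can be made per unit effort (set `effort = 1` in the prediction data), giving a density surface that is comparable across both survey types.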
library(dsm)
library(raster)
load("best_model.Rdata")
# lazily get the plot data for the rug plot
plotdat <- plot(M)
# load the raster and munge it into the format I want
dists <- stack("NA_Shore_Dist_10km_mean_10km.img")
dists <- as.data.frame(dists)
# get EXIF and make a map
library(leaflet)
library(lubridate)
library(plyr)
# all my photos are in a directory `pre` with subdirectories
pre <- "~/Dropbox/Photos/"
paths_to_photos <- c("some_sub_directory"
)
#273649 #647184 #B1B2C8 #A7755D #5D2E1C #38201C
#0F2B5F #5991C7 #8EC1E7 #B9DBF1 #D5A370 #7B4F37
#5A7362 #6B867C #A1A897 #9A8D6B #8E6341 #432B21
#1F150D #2B190C #513B2C #9B4D44 #893D37 #3E1B17
#9C8A45 #CABE85 #678B88 #9CADAF #CCCCCC #EFEFEF
#1C3333 #226060 #639CA4 #D2AD7C #BE7245 #46211C
#0D1723 #112040 #204D88 #96ABC6 #D1DDE2 #EFEFEF
#000000 #350E16 #5E1521 #A72C29 #C44221 #EC702E
#A56B47 #C79982 #0D8EDA #23ADED #6BC6F5 #EFEFEF
#2A2432 #4F3855 #846D86 #EFEFCF #D5B77D #A89E5E
### analysis of ribbon seal data using a frequentist
### approach to a GAM
### David L Miller dave@ninepointeightone.net
### License: GNU GPL v3
# load data from
# https://github.com/pconn/SpatPred/blob/master/SpatPred/data/Ribbon_data.rda
load("Ribbon_data.rda")
Thanks to wikipedia for the data :)
#!/bin/bash
# alarm clock script
#
# requires: get-iplayer
# git clone git@github.com:dinkypumpkin/get_iplayer.git
# get_iplayer --prefs-add --rtmp-tv-opts="--swfVfy=http://www.bbc.co.uk/emp/releases/iplayer/revisions/617463_618125_4/617463_618125_4_emp.swf"
# need to install rtmpdump
# cyclic-random effects tensor for Noam
library(mgcv)
# code adapted from ?gam.models
dat <- gamSim(1,n=400,scale=2) ## simulate 4 term additive truth
## Now add some random effects to the simulation. Response is
## grouped into one of 20 groups by `fac' and each group has a
## random effect added....
## (completing the simulation along the lines of the mgcv help examples)
dat$fac <- fac <- as.factor(sample(1:20, 400, replace=TRUE))
dat$y <- dat$y + model.matrix(~ fac - 1) %*% rnorm(20) * 0.5
### read EXIF data from all JPGs in the current directory
library(exif)
# get the files
files <- dir(".", full.names=TRUE)
files <- files[grepl("\\.JPG$", files)]
# get the data
exifs <- read_exif(files)
# overlap of:
# - US state 2 letter codes from state.abb in R
# - 2-letter Scrabble words https://en.wiktionary.org/wiki/Appendix:Official_English_Scrabble_2-letter_words
# - chemical element symbols https://en.wikipedia.org/wiki/Symbol_%28chemistry%29
# get data
states <- tolower(state.abb)
scrabble <- c('aa','ab','ad','ae','ag','ah','ai','al','am','an','ar','as','at','aw','ax','ay','ba','be','bi','bo','by','ch','da','de','di','do','ea','ed','ee','ef','eh','el','em','en','er','es','et','ex','fa','fe','fy','gi','go','gu','ha','he','hi','hm','ho','id','if','in','io','is','it','ja','jo','ka','ki','ko','ky','la','li','lo','ma','me','mi','mm','mo','mu','my','na','ne','no','nu','ny','ob','od','oe','of','oh','oi','om','on','oo','op','or','os','ou','ow','ox','oy','pa','pe','pi','po','qi','re','sh','si','so','st','ta','te','ti','to','ug','uh','um','un','up','ur','us','ut','we','wo','xi','xu','ya','ye','yo','yu','za','zo')
elements <- tolower(c('Ac','Ag','Al','Am','Ar','As','At','Au','B','Ba','Be','Bh','Bi','Bk','Br'