Ben Ogorek (baogorek)

  • spencer Health Solutions, Inc.
  • Raleigh, NC
  • X @benogorek
baogorek / gist:9013206
Last active August 29, 2015 13:56
A demonstration of socket programming. Open two Python shells and follow the instructions below; you'll send a message from one shell to the other using sockets. Motivated by the Python sockets HOWTO at http://docs.python.org/3.1/howto/sockets.html.
# Open two python shells
# Shell 1 will be our server, Shell 2 will be our client
##### In Shell 1 #####
import socket
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("127.0.0.1", 5000))
server_socket.listen(5) # Max 5 connect requests in queue
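The excerpt shows only the Shell 1 (server) side. A self-contained sketch of the full round trip, with the client half of the exchange written from the same `socket` API (an ephemeral port replaces the gist's fixed 5000 so it runs anywhere; the message text is made up):

```python
import socket
import threading

def serve_once(server_socket, received):
    # Accept a single connection and record the message it sends
    conn, _ = server_socket.accept()
    received.append(conn.recv(1024))
    conn.close()

# Server side (the gist's Shell 1), bound to an ephemeral port
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(("127.0.0.1", 0))   # port 0 -> OS picks a free port
server_socket.listen(5)                # max 5 connect requests in queue
port = server_socket.getsockname()[1]

received = []
t = threading.Thread(target=serve_once, args=(server_socket, received))
t.start()

# Client side (the gist's Shell 2)
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"Hello from shell 2!")
client.close()

t.join()
server_socket.close()
print(received[0])  # b'Hello from shell 2!'
```

A thread stands in for the second shell; in the gist's two-shell setup the `accept`/`recv` calls simply block until the client connects.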
baogorek / lmer_nested.R
Last active August 29, 2015 13:56
Three variations of the same nested model in lmer. See http://lme4.r-forge.r-project.org/book/Ch2.pdf for more detail.
library(lme4)
head(Pastes)
# Three casks are nested within each of 10 batches.
# No cask is used twice! (30 total casks)
# Note that the variable "sample" is the concatenation of "batch" and "cask"
lmer1 <- lmer(strength ~ 1 + (1 | batch) + (1 | sample), Pastes, REML = FALSE)
lmer2 <- lmer(strength ~ 1 + (1 | batch) + (1 | batch:cask), Pastes, REML = FALSE)
lmer3 <- lmer(strength ~ 1 + (1 | batch/cask), Pastes, REML = FALSE)
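Why the three calls fit the same model comes down to a counting argument: cask labels repeat across batches, but the batch:cask interaction (which is exactly what Pastes' "sample" encodes) gives one level per physical cask. A quick sanity check of that argument, with made-up labels mirroring Pastes' structure:

```python
from itertools import product

# Mimic the Pastes layout: 10 batches, casks labeled a-c within each batch
batches = [chr(ord("A") + i) for i in range(10)]
casks = ["a", "b", "c"]

# "sample" in Pastes is the batch:cask concatenation, e.g. "A:a"
samples = [f"{b}:{c}" for b, c in product(batches, casks)]

# Only 3 distinct cask labels, but 30 distinct batch:cask groups --
# one per physical cask, so (1 | sample) and (1 | batch:cask) coincide.
print(len(set(casks)), len(set(samples)))  # 3 30
```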
baogorek / Orange_R_color_theme.xml
Created November 1, 2012 17:52
Orange_R_color_theme
<?xml version="1.0" encoding="utf-8" ?>
<colorTheme id="1234" name="new" modified="2011-02-01 01:56:53" author="Bob Forrest" website="anythingbutrbitrary.blogspot.com">
<occurrenceIndication color="#616161" />
<findScope color="#191919" />
<deletionIndication color="#FF0000" />
<singleLineComment color="#D6D6D6" />
<multiLineComment color="#666666" />
<commentTaskTag color="#666666" />
<javadoc color="#666666" />
<javadocLink color="#666666" />
baogorek / Replicate_Lmer.R
Last active December 20, 2015 22:39
This example uses functions in base R, namely matrix math and the optim function, to replicate the output of the lmer function from the lme4 package. It's a toy example, so efficient computation is not an issue; that's what made it fun. The estimates match those from lmer, and it's satisfying to see the algorithm work at this lower level.
rm(list = ls())
# Setup: 3 subjects, 5 total measurements, one covariate
df <- data.frame(x1 = c(1,5,2,3,4), subject = c(1,2,3,1,2))
df$subject <- factor(df$subject)
n <- 5
beta <- matrix(c(1.3, 2), ncol = 1) # intercept 1.3; coefficient for x1 is 2
u <- matrix(c(-.5, .6, 1.5), ncol=1) # the three random effects for subjects
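The setup above translates directly into matrix terms (a NumPy sketch of the data-generating side only; the variable names and the random-effects indicator matrix `Z` are mine, not the gist's):

```python
import numpy as np

rng = np.random.default_rng(0)

x1 = np.array([1.0, 5.0, 2.0, 3.0, 4.0])
subject = np.array([0, 1, 2, 0, 1])        # 3 subjects, 5 measurements

X = np.column_stack([np.ones(5), x1])      # fixed-effects design: intercept + x1
Z = np.zeros((5, 3))
Z[np.arange(5), subject] = 1.0             # one indicator column per subject

beta = np.array([1.3, 2.0])                # fixed effects, as in the gist
u = np.array([-0.5, 0.6, 1.5])             # the three subject random effects

y = X @ beta + Z @ u + rng.normal(size=5)  # mixed-model data: y = Xb + Zu + e
print(y.shape)  # (5,)
```

With `X`, `Z`, and `y` in hand, the lmer-replication step is then a matter of profiling the likelihood over the variance components, which is what the gist hands to optim.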
baogorek / Pixar.R
Created August 18, 2013 23:47
My take on Pixar's situation: a statistical process control viewpoint, as opposed to The Atlantic's linear regression.
rm(list = ls())
library(qcc)
dat <- data.frame(
  name = c("Toy Story", "A Bug's Life", "Toy Story 2", "Monsters Inc",
           "Nemo", "Incredibles", "Cars", "Ratatouille", "Wall-E", "Up",
           "Toy Story 3", "Cars 2", "Brave", "Monsters U"),
  x = c(76, 77, 161, 185, 235, 224, 143, 213, 228, 273, 261, 79, 169, 137),
  n = c(76, 84, 161, 193, 237, 231, 194, 222, 237, 278, 264, 203, 217, 175)
)
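The qcc package builds a p-chart from these counts (x positive reviews out of n). The control limits behind that chart can be sketched by hand: a rough Python translation of the standard 3-sigma p-chart formulas, not the gist's code:

```python
import math

# Counts from the gist: x positive reviews out of n total, per film
x = [76, 77, 161, 185, 235, 224, 143, 213, 228, 273, 261, 79, 169, 137]
n = [76, 84, 161, 193, 237, 231, 194, 222, 237, 278, 264, 203, 217, 175]

p_bar = sum(x) / sum(n)  # overall proportion positive

# 3-sigma p-chart limits vary with each film's review count n_i
limits = []
for n_i in n:
    se = math.sqrt(p_bar * (1 - p_bar) / n_i)
    limits.append((max(0.0, p_bar - 3 * se), min(1.0, p_bar + 3 * se)))

out_of_control = [xi / ni < lo or xi / ni > hi
                  for xi, ni, (lo, hi) in zip(x, n, limits)]
print(round(p_bar, 3), sum(out_of_control))
```

Films whose proportion falls outside its limits (Cars 2, for example, at 79/203) signal a shift in the process rather than ordinary variation, which is the SPC framing of the argument.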
baogorek / nonlinear_least_squares.R
Created November 3, 2013 00:44
Nonlinear least squares example. Notice the "normal equations" on line 10. Three things differ from the linear case: (1) the X matrix is now the derivative of the mean function with respect to beta; (2) we solve for the change in beta from the last iteration; (3) we have to iterate. See http://en.wikipedia.org/wiki/Non-linear_least_squares.
N <- 10000
x <- rnorm(N)
y <- exp(-.75 + 1.2*x) + rnorm(N)
alpha <- -.5
beta <- 1
for (i in 1:10) { # 10 iterations, for no particular reason
  X <- cbind(exp(alpha + beta * x), x * exp(alpha + beta * x)) # Jacobian
  pred <- exp(alpha + beta * x)
  diff.soln <- solve(t(X) %*% X) %*% t(X) %*% (y - pred) # normal equations
  alpha <- alpha + diff.soln[1, 1]
  beta <- beta + diff.soln[2, 1]
}
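The same Gauss-Newton iteration carries over to NumPy almost line for line (a sketch under the gist's model y = exp(alpha + beta*x) + noise; `lstsq` replaces the explicit normal-equations solve, which is numerically safer but algebraically equivalent):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000
x = rng.normal(size=N)
y = np.exp(-0.75 + 1.2 * x) + rng.normal(size=N)  # true alpha=-0.75, beta=1.2

alpha, beta = -0.5, 1.0                        # starting values, as in the gist
for _ in range(10):
    pred = np.exp(alpha + beta * x)
    J = np.column_stack([pred, x * pred])      # d(pred)/d(alpha), d(pred)/d(beta)
    step, *_ = np.linalg.lstsq(J, y - pred, rcond=None)
    alpha += step[0]                           # update by the solved *change*
    beta += step[1]

print(round(alpha, 2), round(beta, 2))
```

The three differences the description lists are all visible here: `J` holds derivatives rather than raw covariates, `step` is the change in the parameters, and the whole thing sits inside a loop.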
baogorek / create-fitness-and-fatigue.R
Last active December 23, 2018 15:35
Fitness and fatigue variables created from intensities w in data frame train_df
# exp_decay is not shown in this excerpt; the standard fitness-fatigue
# choice is exponential decay with time constant tau:
exp_decay <- function(t, tau) exp(-t / tau)

convolve_training <- function(training, n, tau) {
  sum(training[1:(n - 1)] * exp_decay((n - 1):1, tau))
}
fitness <- sapply(1:nrow(train_df),
function(n) convolve_training(train_df$w, n, 60))
fatigue <- sapply(1:nrow(train_df),
function(n) convolve_training(train_df$w, n, 13))
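A pure-Python rendering of the same convolution (with exp_decay assumed to be exp(-t/tau), the usual fitness-fatigue formulation): each day's value sums all earlier training doses, discounted by how long ago they occurred.

```python
import math

def exp_decay(t, tau):
    return math.exp(-t / tau)

def convolve_training(w, n, tau):
    # Day n's accumulated impact: each past dose w[i] decayed by its age
    return sum(w[i] * exp_decay(n - 1 - i, tau) for i in range(n - 1))

# A single dose of 100 on day 1, then rest: the impact decays afterwards
w = [100.0, 0.0, 0.0, 0.0]
fitness = [convolve_training(w, n, tau=60) for n in range(1, len(w) + 1)]
fatigue = [convolve_training(w, n, tau=13) for n in range(1, len(w) + 1)]
print([round(f, 1) for f in fitness])  # [0.0, 98.3, 96.7, 95.1]
```

The only difference between fitness and fatigue is the time constant: tau = 60 decays slowly, tau = 13 quickly, so fatigue fades first.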
baogorek / fitness-fatigue-optim.R
Last active December 24, 2018 16:58
Nonlinear optimization to fit the fitness-fatigue model
# Recover parameters using non-linear regression
rss <- function(theta) {
  int  <- theta[1]  # performance baseline
  k1   <- theta[2]  # fitness weight
  k2   <- theta[3]  # fatigue weight
  tau1 <- theta[4]  # fitness decay
  tau2 <- theta[5]  # fatigue decay
  fitness <- sapply(1:nrow(train_df),
                    function(n) convolve_training(train_df$w, n, tau1))
  fatigue <- sapply(1:nrow(train_df),
                    function(n) convolve_training(train_df$w, n, tau2))
  # Completion of the truncated excerpt; assumes performance is train_df$perf
  perf_hat <- int + k1 * fitness - k2 * fatigue
  sum((train_df$perf - perf_hat) ^ 2)
}
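The objective being minimized can be sketched end to end in Python (synthetic data; the parameter order mirrors the gist's theta, and the training doses are made up):

```python
import math

def exp_decay(t, tau):
    return math.exp(-t / tau)

def convolve_training(w, n, tau):
    return sum(w[i] * exp_decay(n - 1 - i, tau) for i in range(n - 1))

def rss(theta, w, perf):
    # theta = (baseline, fitness weight, fatigue weight, fitness tau, fatigue tau)
    int_, k1, k2, tau1, tau2 = theta
    n_days = len(w)
    fitness = [convolve_training(w, n, tau1) for n in range(1, n_days + 1)]
    fatigue = [convolve_training(w, n, tau2) for n in range(1, n_days + 1)]
    return sum((p - (int_ + k1 * f1 - k2 * f2)) ** 2
               for p, f1, f2 in zip(perf, fitness, fatigue))

# Synthetic check: data generated at the true theta gives zero RSS there
true_theta = (400.0, 0.1, 0.3, 60.0, 13.0)
w = [float(10 + (d % 7)) for d in range(50)]   # made-up training doses
fitness = [convolve_training(w, n, 60.0) for n in range(1, 51)]
fatigue = [convolve_training(w, n, 13.0) for n in range(1, 51)]
perf = [400.0 + 0.1 * f1 - 0.3 * f2 for f1, f2 in zip(fitness, fatigue)]

print(rss(true_theta, w, perf))  # 0.0
```

Handing this `rss` to a nonlinear optimizer (optim in the gist's R) over theta recovers the five parameters.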
baogorek / spline-convolution-estimation.R
Last active March 5, 2019 14:15
Code Block 3: Introducing a spline-based method for modeling cumulative impact.
library(splines)

my_spline <- ns(1:259, Boundary.knots = c(1, 259), knots = c(14, 40, 100))

z_vars <- list()
for (n in 1:nrow(train_df)) {
  spline_pred  <- predict(my_spline, newx = (n - 1):1)
  spline_vars  <- colSums(spline_pred * train_df$w[1:(n - 1)]) # convolution
  spline_const <- sum(train_df$w[1:(n - 1)])
  z_vars[[n]]  <- c(spline_const, spline_vars)
}
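The spline idea generalizes the exponential kernel: the lag-response curve h(age) becomes flexible, and each z variable is the convolution of training with one basis column. A toy Python sketch with a hand-rolled hat-shaped basis (a stand-in for the spline columns, not the ns basis itself) shows the mechanics:

```python
def lag_basis(age, knot):
    # Toy basis function peaking at `knot` days of age (stand-in for one
    # spline column); real code would use a natural cubic spline basis
    return max(0.0, 1.0 - abs(age - knot) / knot)

def convolve_basis(w, n, knot):
    # z variable for day n: past doses weighted by the basis at their age
    return sum(w[i] * lag_basis(n - 1 - i, knot) for i in range(n - 1))

# Constant training of 10 units/day for 30 days
w = [10.0] * 30
z14 = [convolve_basis(w, n, knot=14) for n in range(1, 31)]
z40 = [convolve_basis(w, n, knot=40) for n in range(1, 31)]
print(round(z14[-1], 1), round(z40[-1], 1))
```

Regressing performance on these z variables then estimates the lag curve's shape from data, instead of assuming exponential decay up front.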