Representations:
- Hierarchical models
- Hidden Markov models
- Graphical models
- Non-parametric Bayes (distributions over functions)
Inference Approaches:
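As a minimal sketch connecting the representations above to inference, here is exact inference in a toy two-state hidden Markov model using the standard forward algorithm; the transition, emission, and initial-state parameters are illustrative values, not taken from these notes:

```python
import numpy as np

# Illustrative HMM parameters (assumed for this sketch).
# A[i, j] = P(state j at t+1 | state i at t)
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
# B[i, k] = P(observation k | state i)
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])  # initial state distribution

def forward(obs):
    """Return P(obs) by summing over all hidden state paths."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward([0, 1, 0]))
```

The same recursion underlies the other inference routines listed below (e.g. posterior marginals via forward-backward).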
on alfred_script(q)
    tell application "iTerm"
        activate
        try
            set _term to last terminal
        on error
            set _term to (make new terminal)
        end try
## load the packages we'll need
## (set the library path before calling library() so Rmpi is found)
RLIBS <- "~/R/x86_64-redhat-linux-gnu-library/2.13"
.libPaths(c(RLIBS, .libPaths()))
library(Rmpi)

### Direct Rmpi way:
mpi.spawn.Rslaves(nslaves=15)
slavefn <- function() { print(paste("Hello from", foldNumber)) }
mpi.bcast.cmd(foldNumber <- mpi.comm.rank())
#!/bin/bash
# usage: mkramdisk 1024 ~/scratch
function mkramdisk() {
    ramfs_size_mb=$1
    mount_point=$2
    ramfs_size_sectors=$((${ramfs_size_mb}*1024*1024/512))
    ramdisk_dev=`hdid -nomount ram://${ramfs_size_sectors}`
    newfs_hfs -v 'ram disk' ${ramdisk_dev}
    # attach the freshly formatted RAM disk at the requested mount point
    mkdir -p ${mount_point}
    mount -o noatime -t hfs ${ramdisk_dev} ${mount_point}
}
################################################################################
# Copyright 2011
# Andrew Redd
# 11/23/2011
#
# Description of File:
# Makefile for knitr compiling
#
################################################################################
all: pdf # default rule DO NOT EDIT
""" | |
Reimplementation of nanonet using keras. | |
Follow the instructions at | |
https://www.tensorflow.org/install/install_linux | |
to setup an NVIDIA GPU with CUDA8.0 and cuDNN v5.1. | |
virtualenv venv --python=python3 | |
. venv/bin/activate | |
pip install numpy |
#!/usr/bin/env bash
# Assuming OS X Yosemite 10.10.4
# Install XCode and command line tools
# See https://itunes.apple.com/us/app/xcode/id497799835?mt=12#
# See https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/xcode-select.1.html
xcode-select --install
The following instructions are for creating your own animations using the style transfer technique described by Gatys, Ecker, and Bethge, and implemented by Justin Johnson. To see an example of such an animation, see this video of Alice in Wonderland re-styled by 17 paintings.
The easiest way to set up the environment is to load Samim's pre-built Terminal.com snap, or to use another cloud service like Amazon EC2. Unfortunately the g2.2xlarge GPU instances cost $0.99 per hour, and depending on the parameters selected, it may take 10-15 minutes to produce a 512px-wide image, so it can cost $2-3 to generate one second of video at 12fps.
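A quick sanity check on that estimate, using the same numbers quoted above ($0.99/hour, 10-15 minutes per frame, 12 frames per second of video):

```python
# Cost per second of 12fps video on a $0.99/hr GPU instance,
# assuming 10-15 minutes of render time per frame.
rate_per_hour = 0.99
fps = 12
for minutes_per_frame in (10, 15):
    hours = fps * minutes_per_frame / 60.0
    cost = hours * rate_per_hour
    print(f"{minutes_per_frame} min/frame -> ${cost:.2f} per second of video")
```

This gives $1.98 at 10 minutes per frame and $2.97 at 15, matching the $2-3 range above.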
If you do load the
one <- 1:10
two <- rnorm(10)
three <- runif(10, 1, 2)
four <- -10:-1
df <- data.frame(one, two, three)
df2 <- data.frame(one, two, three, four)
str(df)
import s3fs
import pickle
import json
import numpy as np

BUCKET_NAME = "my-bucket"

# definitions, keras/tf/... imports...

if __name__ == "__main__":