
FROM allanino/nupic
# Clone Cerebro repository
RUN git clone https://github.com/numenta/nupic.cerebro.git /usr/local/src/nupic.cerebro
# Install dependencies
# Install Mongo
RUN apt-get install -y libevent-dev && \
    apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 7F0CEB10 && \

First of all, the major change was in how the anomaly scores are used. We can clearly see from the monitor's plots that anomalies do occur, but we don't use them in any practical manner. The threshold line at an anomaly score of 0.6 is there for visual purposes only, since it would catch too many events if we used it as an actual threshold. So one way to look at it is to seek patterns within the anomaly scores. The first thing I thought of was to use the frequency of anomalies beyond the threshold as a metric, since it is more probable that something is wrong if we get many anomalous patterns in a short period of time. But the key word here is probable, so why not use a probability distribution to estimate the likelihood that something is really anomalous? Interestingly, the people at Grok also realized that the raw anomaly score is not a very good metric, as we can see in this excellent talk by Grok's engineer Subutai. As they released
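That likelihood idea can be sketched in a few lines: fit a Gaussian to a rolling window of recent raw anomaly scores and take the tail probability of the newest score. The function below is only a rough illustration of the concept, not NuPIC's actual anomaly-likelihood implementation; the window size, the plain Gaussian fit, and the function name are my own simplifications.

```python
import math
import statistics

def anomaly_likelihood(scores, window=100):
    """Tail probability of the newest raw anomaly score under a Gaussian
    fit to the recent history. A sketch of the idea, not NuPIC's code."""
    history = scores[-window:-1]  # recent scores, excluding the newest
    if len(history) < 2:
        return 0.5  # not enough history to say anything
    mu = statistics.fmean(history)
    sigma = statistics.stdev(history) or 1e-9  # avoid division by zero
    z = (scores[-1] - mu) / sigma
    # P(X >= newest) for X ~ N(mu, sigma); a tiny tail means "very unusual"
    tail = 0.5 * math.erfc(z / math.sqrt(2))
    return 1.0 - tail
```

A likelihood near 1.0 then means the newest score is far above what the recent history would predict, which is a much rarer event than simply crossing a fixed 0.6 line.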

@allanino
allanino / Dockerfile
Created August 6, 2014 20:28
Minimal Dockerfile for installing NuPIC
FROM ubuntu:14.04
RUN apt-get update
# Install curl (for downloading pip) and other dependencies
RUN apt-get install -y curl clang cmake git python2.7-dev python2.7 python-numpy
# Install pip
RUN curl --silent --show-error --retry 5 https://bootstrap.pypa.io/get-pip.py | python2.7
@allanino
allanino / pr-fetch.md
Created October 17, 2014 17:50
Fetch a PR
git fetch origin pull/1234/head:pr-1234

Here, `1234` is the PR number and `pr-1234` is your local branch name.

@allanino
allanino / generate.rb
Created February 25, 2015 15:00
Generate sequential 3-byte sequences with characters in [0-9a-z]
def character(i)
  return i < 10 ? (i + 48).chr : (i + 87).chr
end

def generate(n)
  i1 = n % 36
  i3 = n / (36 * 36)
  i2 = n / 36 - i3 * 36
  return "#{character(i3)}#{character(i2)}#{character(i1)}"
end
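In effect, `generate` is a fixed-width base-36 encoding of `n`. A quick Python restatement (mine, for illustration; it mirrors the Ruby names but uses `divmod` to make the place-value arithmetic explicit):

```python
def character(i):
    # 0-9 map to '0'-'9' (ASCII 48+), 10-35 map to 'a'-'z' (ASCII 87+)
    return chr(i + 48) if i < 10 else chr(i + 87)

def generate(n):
    # split n into three base-36 digits, most significant first
    i3, rest = divmod(n, 36 * 36)
    i2, i1 = divmod(rest, 36)
    return character(i3) + character(i2) + character(i1)
```

So `generate(0)` gives `"000"`, `generate(36)` gives `"010"`, and the sequence runs out at `generate(36**3 - 1)`, which is `"zzz"`.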
@allanino
allanino / modular_inversion.c
Created March 17, 2015 22:15
Compute the inverse of n modulo m.
#include <stdio.h>

// Compute the solution x to x*n % m == 1 using the extended Euclidean algorithm
int inverse(int n, int m){
    int t0 = 0, t1 = 1;
    int s0 = 1, s1 = 0;
    int r = m - 1; // Just to get started
    int a = m;
    int b = n;
    int q, s, t;
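The loop body is cut off in the preview, so here is my own reconstruction of the standard extended-Euclid iteration in Python (not the gist's exact remaining lines): track the coefficient t with t*n ≡ r (mod m) alongside the remainders, and reduce the final coefficient modulo m.

```python
def inverse(n, m):
    """Return x such that (x * n) % m == 1, via the extended Euclidean algorithm."""
    t0, t1 = 0, 1          # coefficients of n in the running remainders
    a, b = m, n
    while b != 0:
        q = a // b
        a, b = b, a - q * b          # Euclidean remainder step
        t0, t1 = t1, t0 - q * t1     # same recurrence on the coefficients
    if a != 1:
        raise ValueError("n is not invertible modulo m (gcd != 1)")
    return t0 % m
```

For example, `inverse(3, 7)` returns 5, since 3 * 5 = 15 ≡ 1 (mod 7).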
@allanino
allanino / get_ip.sh
Created June 22, 2015 17:38
Command to get external IP
#!/bin/bash
wget -qO- http://ipecho.net/plain ; echo
@allanino
allanino / gist:8f8eb50a62e450980cbc
Last active August 29, 2015 14:25 — forked from karpathy/gist:587454dc0146a6ae21fc
An efficient, batched LSTM.
"""
This is a batched LSTM forward and backward pass
"""
import numpy as np
import code
class LSTM:

    @staticmethod
    def init(input_size, hidden_size, fancy_forget_bias_init=3):
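For reference, a single LSTM timestep over a batch looks roughly like this in NumPy. The `[i, f, o, g]` gate ordering, the separate `Wx`/`Wh` weight matrices, and the function names are illustrative assumptions on my part, not necessarily the layout the gist's batched implementation uses.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    """One LSTM timestep for a whole batch; gates stacked as [i, f, o, g]."""
    H = h_prev.shape[1]
    z = x @ Wx + h_prev @ Wh + b        # all four gates at once, (batch, 4H)
    i = sigmoid(z[:, :H])               # input gate
    f = sigmoid(z[:, H:2 * H])          # forget gate
    o = sigmoid(z[:, 2 * H:3 * H])      # output gate
    g = np.tanh(z[:, 3 * H:])           # candidate cell update
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c
```

Batching means `x` is `(batch, input_size)` and the whole step is two matrix multiplies, which is what makes this formulation efficient; a `fancy_forget_bias_init` like the one above simply initializes the forget-gate slice of `b` to a positive value so the cell retains memory early in training.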
@allanino
allanino / keras_silly_example.py
Created November 5, 2015 19:12
A fully working (but silly) example of an MLP in Keras.
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
import numpy as np
from keras.utils import np_utils
# Create a random matrix with 1000 rows (data points) and 15 columns (features)
train_rows = 1000
X_train = np.random.rand(train_rows, 15)
@allanino
allanino / tensorflow_docker.sh
Created November 11, 2015 15:01
Start Jupyter server with TensorFlow demo notebooks.
#!/bin/bash
# Run this
docker run -it -p 8888:8888 b.gcr.io/tensorflow/tensorflow-full /run_jupyter.sh --notebook-dir=/tensorflow/tensorflow/tools/docker/notebooks/
# Go to http://localhost:8888