Yen-Chen Lin yenchenlin

@yenchenlin
yenchenlin / README.md
Created June 25, 2019 06:16
How to run Yen-Chen's code?

Environment

First, make sure you have the right environment. Uncomment the conda command in ~/.bashrc and run

source ~/.bashrc
conda activate corl

Afterwards, comment the conda command out again and open a new tab to get back to the Python 2.7 environment.
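To confirm which interpreter is active after switching, here is a minimal check (a sketch; the exact paths and version strings depend on your setup):

# Print the interpreter path and version to tell the `corl` conda
# environment apart from the system Python 2.7.
import sys
print(sys.executable)
print(sys.version)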

# TensorFlow 1.x command-line flags: random seed and network layer sizes.
import tensorflow as tf
import numpy
from sklearn.datasets import fetch_mldata

FLAGS = tf.app.flags.FLAGS
tf.app.flags.DEFINE_integer('seed', 1, "initial random seed")
tf.app.flags.DEFINE_string('layer_sizes', '784-1200-600-300-150-10', "layer sizes")
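The layer sizes are passed as a single dash-separated string; a hypothetical helper (not part of the original gist) shows how such a flag might be turned into a list of layer widths:

def parse_layer_sizes(spec):
    # '784-1200-600-300-150-10' -> [784, 1200, 600, 300, 150, 10]
    return [int(n) for n in spec.split('-')]

print(parse_layer_sizes('784-1200-600-300-150-10'))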
require 'torch'
require 'nn'
require 'optim'
-- to specify these at runtime, you can do, e.g.:
-- $ lr=0.001 th main.lua
opt = {
  dataset = 'video2',   -- which dataset loader to use (see data.lua)
  nThreads = 32,        -- number of threads used to pre-fetch data
  batchSize = 64,       -- mini-batch size

I've tried to make SequentialDataset support Cython fused types, but it seems really expensive. You can find the modified code in this branch.

tl;dr - seq_dataset.pyx is tightly coupled with sag_fast.pyx and sgd_fast.pyx.

After I modified seq_dataset.pyx, this line in sag_fast.pyx needs to change as well, since the pointer there is passed into one of SequentialDataset's functions. However, in my experience one can only declare a local variable of the fused floating type when at least one of the function's arguments is also of floating type. That's not the case here, unless we also change these arguments of the function:

np.ndarray[double, ndim=2, mode='c'] weights_array
np.ndarray[double, ndim=1, mode='c'] intercept_array
@yenchenlin
yenchenlin / caffe_install.md
Created July 12, 2016 13:49 — forked from titipata/caffe_install.md
My notes on how to install caffe on Ubuntu

Caffe Installation

Notes on how to install Caffe on Ubuntu. Successfully installed using the CPU only; for more information on GPU support, see this link.

### Installation

  • verify all the pre-installation steps according to the CUDA guide, e.g.
lspci | grep -i nvidia
import timeit
import numpy as np
from sklearn.cluster import KMeans

np.random.seed(5)
# 200,000 random samples with 20 features, cast to float32.
X = np.random.rand(200000, 20)
X = np.float32(X)
estimator = KMeans()
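The snippet imports timeit; a minimal sketch of how the fit could be timed with it, assuming the estimator and X defined above:

# Time a single KMeans fit on the 200,000 x 20 float32 matrix.
elapsed = timeit.timeit(lambda: estimator.fit(X), number=1)
print('KMeans.fit took %.2f s' % elapsed)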

In case you don't know, HTTP is stateless, which means the server you are communicating with will not remember who you are or what you've said to it before.

Say you logged in to a website; you will notice that you don't need to type your username, password, etc. when you visit the site again.

It looks like the server knows who you are. How is this possible?

That's because a "session" is handling this for you.

Remember that HTTP is stateless: the server has no memory of what it said or what it heard.
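A minimal sketch of the idea, assuming Flask (the routes and secret key here are made up for illustration): the server hands the browser a signed session cookie, and because the browser sends that cookie back on every later request, the server can recognize you without asking for your credentials again.

from flask import Flask, session

app = Flask(__name__)
app.secret_key = 'change-me'  # used to sign the session cookie

@app.route('/login/<username>')
def login(username):
    # Flask serializes the session dict into a signed cookie on the response.
    session['username'] = username
    return 'Logged in as %s' % username

@app.route('/')
def index():
    # On later requests the browser resends the cookie, so the server
    # knows who you are without you typing the password again.
    return 'Hello, %s' % session.get('username', 'stranger')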

import numpy as np
from scipy import sparse as sp
from sklearn.datasets.samples_generator import make_blobs
from csr_row_norms import csr_row_norms
import timeit

# Cluster centers used to generate the synthetic blobs.
centers = np.array([
    [0.0, 5.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 4.0, 0.0, 0.0],