Yen-Chen Lin (yenchenlin)
yenchenlin /
Created Jun 25, 2019
How to run Yen-Chen's code?


First, make sure we have the right environment. Un-comment the conda command in ~/.bashrc and run

source ~/.bashrc
conda activate corl

After that, commenting the conda command out again and opening a new tab will take you back to the Python 2.7 environment.

import tensorflow as tf
import numpy
from sklearn.datasets import fetch_mldata

flags = tf.app.flags
flags.DEFINE_integer('seed', 1, "initial random seed")
flags.DEFINE_string('layer_sizes', '784-1200-600-300-150-10', "layer sizes")
FLAGS = flags.FLAGS
View main.lua
require 'torch'
require 'nn'
require 'optim'
-- to specify these at runtime, you can do, e.g.:
-- $ lr=0.001 th main.lua
opt = {
  dataset = 'video2',   -- which data loader to use (see data.lua)
  nThreads = 32,        -- how many threads to pre-fetch data with
  batchSize = 64,       -- number of samples per batch
}
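The runtime-override pattern the comments describe (environment variables like `lr=0.001 th main.lua` overriding table defaults) can be sketched in Python; the names below are illustrative, not from the original repo:

```python
import os

# Defaults mirroring the Lua `opt` table above; an environment variable
# with the same name overrides the default, coerced to the default's type.
DEFAULTS = {
    "dataset": "video2",   # which data loader to use
    "nThreads": 32,        # data-loading threads
    "batchSize": 64,       # samples per batch
}

def load_opt(defaults=DEFAULTS, env=os.environ):
    """Return options with env-var overrides applied, e.g. `batchSize=16 python main.py`."""
    opt = dict(defaults)
    for key, default in defaults.items():
        if key in env:
            opt[key] = type(default)(env[key])
    return opt
```

Coercing through `type(default)` keeps numeric options numeric even though environment variables are always strings.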

I've tried to make SequentialDataset support Cython fused types, but it seems really expensive. You can find the modified code in this branch.

tl;dr - seq_dataset.pyx is tightly coupled with sag_fast.pyx and sgd_fast.pyx.

After I modified seq_dataset.pyx, this line in sag_fast.pyx needs to change as well, since the pointer is passed into a SequentialDataset function. However, in my experience one can only declare a local floating variable when at least one of the function's arguments is also of floating type. That's not the case here, unless we change this function's arguments to

np.ndarray[double, ndim=2, mode='c'] weights_array
np.ndarray[double, ndim=1, mode='c'] intercept_array
yenchenlin /
Created Jul 12, 2016 — forked from titipata/
My notes on how to install caffe on Ubuntu

Caffe Installation

Notes on how to install Caffe on Ubuntu. Successfully installed using CPU only; for more information on GPU support, see this link.


  • verify all the pre-installation requirements according to the CUDA guide, e.g.
lspci | grep -i nvidia
import timeit
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200000, 20)
X = np.float32(X)
estimator = KMeans()
# time a single fit over the 200k x 20 float32 matrix
print(timeit.timeit(lambda: estimator.fit(X), number=1))
View gist:eadaccf6ee986f08e7083c7db8b3589d

In case you don't know, HTTP is stateless, which means the server you are communicating with does not know who you are or what you've said to it before.

Say you log in to a website; you will notice that you don't need to type your username, password, etc. when you visit the site again.

It looks like the server knows who you are. How could this be possible?

That's because a "session" is handling this for you.

Remember that HTTP is stateless: the server itself has no memory of what it said or what it heard.
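A minimal sketch of the idea, using a plain dict as the server-side session store. The function and variable names here are illustrative, not from any particular framework; in practice the session id travels in Set-Cookie / Cookie headers:

```python
import uuid

sessions = {}  # session_id -> per-user state kept on the server

def handle_login(username):
    """On login, create a session and return the id the client keeps in a cookie."""
    session_id = uuid.uuid4().hex
    sessions[session_id] = {"username": username}
    return session_id

def handle_request(session_id):
    """On later requests, the cookie's session id recovers who the user is."""
    session = sessions.get(session_id)
    if session is None:
        return "anonymous"       # no valid session: HTTP itself remembers nothing
    return session["username"]   # the session, not HTTP, supplies the memory
```

The "memory" lives entirely on the server; the client only carries an opaque id, which is why each individual HTTP request can stay stateless.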

import numpy as np
from scipy import sparse as sp
from sklearn.datasets.samples_generator import make_blobs
from csr_row_norms import csr_row_norms
import timeit

centers = np.array([
    [0.0, 5.0, 0.0, 0.0, 0.0],
    [1.0, 1.0, 4.0, 0.0, 0.0],