Tao Hu dongzhuoyao

dongzhuoyao / bp.py
Created March 27, 2016 04:40
three layer back propagation
# http://www.cnblogs.com/hhh5460/p/4304628.html
import math
import random

random.seed(0)

# generate a random number in the interval [a, b)
def rand(a, b):
    return (b - a) * random.random() + a
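The gist preview stops at `rand`; a three-layer backprop implementation of this kind next defines an activation function and its derivative. A minimal sketch with illustrative names (not copied from the gist; some variants of this code use tanh instead of the logistic sigmoid):

```python
import math

def sigmoid(x):
    # logistic activation used by the hidden and output layers
    return 1.0 / (1.0 + math.exp(-x))

def dsigmoid(y):
    # derivative of the sigmoid, written in terms of its output y,
    # so it can be applied directly to stored layer activations
    return y * (1.0 - y)
```

During the backward pass, `dsigmoid` is applied to each unit's stored output when computing its error term.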
dongzhuoyao / Caffe + Ubuntu 14.04 64bit + CUDA 6.5 配置说明.md
Last active May 28, 2016 08:06 — forked from bearpaw/Caffe + Ubuntu 12.04 64bit + CUDA 6.5 配置说明.md
Caffe + Ubuntu 12.04 / 14.04 64bit + CUDA 6.5 / 7.0 setup guide

Caffe + Ubuntu 14.04 64bit + CUDA 6.5 setup guide

With this setup, the Intel integrated graphics drive the display while the NVIDIA GPU is reserved for computation.

1. Install the development dependencies

Install the basic packages required for development:

sudo apt-get install build-essential  # basic requirement
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler  # required by Caffe
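The gist preview ends here; guides of this kind typically continue with a BLAS backend and the Python headers needed for pycaffe. A hedged sketch of those usual next commands (package names are the stock Ubuntu 14.04 ones, not copied from the truncated gist):

```shell
sudo apt-get install libatlas-base-dev   # BLAS backend (OpenBLAS or MKL also work)
sudo apt-get install python-dev          # Python headers, needed to build pycaffe
```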
dongzhuoyao / build-caffe.md
Created August 16, 2016 01:17 — forked from kylemcdonald/build-caffe.md
How to build Caffe for OS X.

Theory of Building Caffe on OS X

Introduction

Our goal is to run python -c "import caffe" without crashing. For anyone who doesn't spend most of their time with build systems, getting to this point can be extremely difficult on OS X. Instead of providing a list of steps to follow, I'll try to explain why each step happens.
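The stated goal, `python -c "import caffe"`, can be probed without crashing the interpreter; a small sketch using only the standard library (the helper name is mine, not from the guide):

```python
import importlib.util

def caffe_importable():
    # True when a 'caffe' module can be located on sys.path,
    # i.e. the pycaffe build products are visible to this interpreter.
    # find_spec only searches; it does not execute the module,
    # so a broken native extension cannot crash this check.
    return importlib.util.find_spec("caffe") is not None
```

If this returns False after a build, the usual fix is adding the caffe/python directory to PYTHONPATH before retrying the real import.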

This page has OS X specific install instructions.

I assume:

dongzhuoyao / multiclass_svm.py
Created November 13, 2016 07:48 — forked from mblondel/multiclass_svm.py
Multiclass SVMs
"""
Multiclass SVMs (Crammer-Singer formulation).
A pure Python re-implementation of:
Large-scale Multiclass Support Vector Machine Training via Euclidean Projection onto the Simplex.
Mathieu Blondel, Akinori Fujino, and Naonori Ueda.
ICPR 2014.
http://www.mblondel.org/publications/mblondel-icpr2014.pdf
"""
dongzhuoyao / deeplab-attention-to-scale
Created December 14, 2016 13:46
deeplab-attention-to-scale
# VGG 16-layer network convolutional finetuning
# Network modified to have smaller receptive field (128 pixels)
# and smaller stride (8 pixels) when run in convolutional mode.
#
# In this model we also change max pooling size in the first 4 layers
# from 2 to 3 while retaining stride = 2
# which makes it easier to exactly align responses at different layers.
#
# For alignment to work, we set (we choose 32x so as to be able to evaluate
# the model for all different subsampling sizes):
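The 32x alignment rule in the comment above can be sanity-checked in code: DeepLab-style crop sizes are chosen of the form 32·n + 1 (e.g. 321 and 513) so that responses align across all the subsampling levels. A small sketch (the helper name is mine, not from the prototxt):

```python
def is_deeplab_aligned(size, factor=32):
    # True when a crop size has the form factor * n + 1, which keeps
    # feature maps exactly alignable across subsampling levels
    return (size - 1) % factor == 0
```

For example, is_deeplab_aligned(321) and is_deeplab_aligned(513) hold, while a crop of 320 would break the alignment.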
dongzhuoyao / deeplabv2_resnet_test
Last active December 25, 2016 05:02
deeplabv2_resnet_test
name: "deeplabv2_resnet101_test"
layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  top: "data_dim"
  include {
    phase: TEST
dongzhuoyao / gist:7e1101209dffcc9264d082d2803ca0ce
Last active December 19, 2016 01:53
deeplabv2_resnet101_deploy
name: "deeplabv2_resnet101_deploy"
layers {
  name: "data"
  type: MEMORY_DATA
  top: "data"
  top: "label"
  top: "data_dim"
  memory_data_param {
dongzhuoyao / deeplabv2_resnet101_train
Created December 25, 2016 05:11
deeplabv2_resnet101_train
name: "deeplabv2_resnet101_train"
layer {
  name: "data"
  type: "ImageSegData"
  top: "data"
  top: "label"
  #top: "data_dim"
  include {
    phase: TRAIN
dongzhuoyao / deeplabv2_vgg16_train
Created December 25, 2016 05:14
deeplabv2_vgg16_train
# VGG 16-layer network convolutional finetuning
# Network modified to have smaller receptive field (128 pixels)
# and smaller stride (8 pixels) when run in convolutional mode.
#
# In this model we also change max pooling size in the first 4 layers
# from 2 to 3 while retaining stride = 2
# which makes it easier to exactly align responses at different layers.
#
# For alignment to work, we set (we choose 32x so as to be able to evaluate
# the model for all different subsampling sizes):