JayKimBravekjh (Data Scientist, LA, USA): public gists

Build tensorflow on OSX with NVIDIA CUDA support (GPU acceleration)

These instructions are based on Mistobaan's gist, expanded and updated to work with the latest TensorFlow OS X CUDA PR.

Requirements

OS X 10.10 (Yosemite) or newer

@JayKimBravekjh
JayKimBravekjh / DistBelief.md
Created March 25, 2018 04:09 — forked from shagunsodhani/DistBelief.md
Notes for "Large Scale Distributed Deep Networks" paper

Large Scale Distributed Deep Networks

Introduction

  • In machine learning, accuracy tends to increase with the number of training examples and the number of model parameters.
  • For large datasets, training becomes slow even on a GPU (due to increased CPU-GPU data transfer).
  • Solution: distributed training and inference with DistBelief.
  • Link to paper
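The paper's Downpour SGD has workers compute gradients on their own data shards and push them asynchronously to a shared parameter server, tolerating stale reads. A minimal single-process sketch of that pattern, using Python threads and a toy quadratic loss (class and function names here are illustrative, not from the paper's code):

```python
import threading

# Toy objective: minimize f(w) = (w - 3)^2, with gradient f'(w) = 2*(w - 3).
# A "parameter server" holds w; workers fetch it, compute a gradient on
# their shard, and push an update asynchronously (Downpour-style).

class ParameterServer:
    def __init__(self, w0=0.0, lr=0.1):
        self.w = w0
        self.lr = lr
        self.lock = threading.Lock()

    def fetch(self):
        with self.lock:
            return self.w

    def push_gradient(self, grad):
        with self.lock:
            self.w -= self.lr * grad

def worker(ps, steps):
    for _ in range(steps):
        w = ps.fetch()            # possibly stale read, as in Downpour SGD
        grad = 2.0 * (w - 3.0)    # gradient on this worker's shard
        ps.push_gradient(grad)    # asynchronous update

ps = ParameterServer()
threads = [threading.Thread(target=worker, args=(ps, 200)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(round(ps.w, 3))  # converges near the optimum w = 3
```

The lock stands in for the server's atomic update; the real system shards parameters across many machines and drops this synchronization between model replicas entirely.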

DistBelief


@JayKimBravekjh
JayKimBravekjh / keras_xent_rnn.py
Created February 24, 2018 16:28 — forked from DingKe/keras_xent_rnn.py
Reproduce Keras TF backend sparse crossentropy issue
'''
All cases pass with the Theano backend, but some fail with the TensorFlow backend
'''
from __future__ import print_function
from keras.layers import Input, Embedding, GRU, TimeDistributed, Dense
from keras.models import Model
def build_model(batch_size, input_length):
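The issue being reproduced concerns sparse categorical crossentropy (integer labels) versus its one-hot counterpart, which should compute the same loss. As a backend-free sketch of what the two formulations must agree on (a NumPy illustration, not the gist's Keras code):

```python
import numpy as np

def categorical_crossentropy(y_onehot, probs):
    # standard crossentropy against one-hot targets
    return -np.sum(y_onehot * np.log(probs), axis=-1)

def sparse_categorical_crossentropy(y_int, probs):
    # same loss, but targets are integer class indices
    return -np.log(probs[np.arange(len(y_int)), y_int])

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
y_int = np.array([0, 1])
y_onehot = np.eye(3)[y_int]

dense = categorical_crossentropy(y_onehot, probs)
sparse = sparse_categorical_crossentropy(y_int, probs)
print(np.allclose(dense, sparse))  # True: the two formulations match
```

When a backend disagrees between these two paths (e.g. over time-distributed outputs), that mismatch is exactly the kind of bug the gist exercises.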
from threading import Thread
from time import sleep
import uuid
from dask.distributed import LocalCluster, Client
import dask.dataframe as dd
import pandas as pd
import pyspark
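The imports above suggest the gist bridges pandas, Dask, and PySpark. The core Dask dataframe idea, splitting a pandas frame into partitions and processing them independently, can be illustrated without Dask itself (a hand-rolled sketch; `partitioned_apply` and `process_partition` are made-up names, and Dask's real `from_pandas` partitions contiguously rather than by stride):

```python
import pandas as pd
from concurrent.futures import ThreadPoolExecutor

df = pd.DataFrame({"x": range(10)})

def process_partition(part):
    # per-partition work; Dask would apply this to each partition lazily
    return part.assign(y=part["x"] * 2)

def partitioned_apply(df, fn, n_partitions=3):
    # mimic dask.dataframe.from_pandas(...) followed by map_partitions(fn)
    parts = [df.iloc[i::n_partitions] for i in range(n_partitions)]
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(fn, parts))
    return pd.concat(results).sort_index()

out = partitioned_apply(df, process_partition)
print(out["y"].tolist())  # [0, 2, 4, ..., 18]
```

A real Dask `LocalCluster`/`Client`, as imported above, would schedule these partition tasks across worker processes instead of threads in one process.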
@JayKimBravekjh
JayKimBravekjh / test_multi_gpu.py
Created February 6, 2018 09:08 — forked from j-min/test_multi_gpu.py
TensorFlow multi GPU example
from __future__ import print_function
'''
Basic Multi GPU computation example using TensorFlow library.
Author: Aymeric Damien
Project: https://github.com/aymericdamien/TensorFlow-Examples/
'''
'''
This tutorial requires your machine to have 2 GPUs
"/cpu:0": The CPU of your machine.