@bstriner
bstriner / eigen-win32-patch.txt
Created August 16, 2018 03:28
TensorFlow patch for eigen3 (win32 build): cast the Eigen `half` operands to plain `::__half` so that `__hadd` resolves to the CUDA intrinsic instead of recursing back into Eigen's own `half` operator overloads.
--- eigen_archive/Eigen/src/Core/arch/CUDA/Half.h 2018-06-22 18:09:44.000000000 -0400
+++ eigen_archive/Eigen/src/Core/arch/CUDA/Half.h 2018-07-25 01:19:55.462313100 -0400
@@ -209,7 +209,7 @@
// conversion steps back and forth.
EIGEN_STRONG_INLINE __device__ half operator + (const half& a, const half& b) {
- return __hadd(a, b);
+ return __hadd(::__half(a), ::__half(b));
}
EIGEN_STRONG_INLINE __device__ half operator * (const half& a, const half& b) {
bstriner / protobuf-win32-patch.txt
Created August 16, 2018 03:25
TensorFlow patch for protobuf: `#undef GetMessage` so that protobuf's `GetMessage()` methods are not mangled by the `GetMessage` macro that `<windows.h>` defines on Windows.
diff --git a/src/google/protobuf/generated_message_reflection.h b/src/google/protobuf/generated_message_reflection.h
index 177312cf..9a2e8e26 100644
--- a/src/google/protobuf/generated_message_reflection.h
+++ b/src/google/protobuf/generated_message_reflection.h
@@ -58,6 +58,8 @@
#error "You cannot SWIG proto headers"
#endif
+#undef GetMessage
+
bstriner / eigen-patch.txt
Created July 25, 2018 05:52
Patch for eigen3 to build TensorFlow (the same `::__half` cast fix as in eigen-win32-patch.txt above).
--- Eigen/src/Core/arch/CUDA/Half.h 2018-06-22 18:09:44.000000000 -0400
+++ Eigen/src/Core/arch/CUDA/Half.h 2018-07-25 01:19:55.462313100 -0400
@@ -209,7 +209,7 @@
// conversion steps back and forth.
EIGEN_STRONG_INLINE __device__ half operator + (const half& a, const half& b) {
- return __hadd(a, b);
+ return __hadd(::__half(a), ::__half(b));
}
EIGEN_STRONG_INLINE __device__ half operator * (const half& a, const half& b) {
bstriner / lstm_speed_test.py
Last active November 10, 2020 02:09
Performance tests for Pytorch LSTMs
"""
A series of speed tests on pytorch LSTMs.
- LSTM is fastest (no surprise)
- When you have to go timestep-by-timestep, LSTMCell is faster than LSTM
- Iterating using chunks is slightly faster than __iter__ or indexing depending on setup
**Results**
My Ubuntu server:
OS: posix, pytorch version: 0.4.0a0+67bbf58
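The three iteration styles compared in the docstring can be sketched without torch, using a nested list as a stand-in "tensor" of shape (timesteps, features); the gist itself times real torch tensors, and `torch.chunk` is approximated here by pre-splitting into one-step slices.

```python
import timeit

# Stand-in "tensor": 512 timesteps of 32 features each (plain lists, so the
# sketch runs without torch; the gist times real torch tensors).
data = [[0.0] * 32 for _ in range(512)]

def by_index():
    # x[t] indexing, one lookup per timestep
    steps = 0
    for t in range(len(data)):
        step = data[t]
        steps += 1
    return steps

def by_iter():
    # __iter__ over the first dimension
    steps = 0
    for step in data:
        steps += 1
    return steps

def by_chunks():
    # analogue of torch.chunk(x, len(x)): pre-split into one-step views,
    # then iterate the views
    chunks = [data[i:i + 1] for i in range(len(data))]
    steps = 0
    for chunk in chunks:
        step = chunk[0]
        steps += 1
    return steps

for fn in (by_index, by_iter, by_chunks):
    print(fn.__name__, timeit.timeit(fn, number=20))
```

All three visit the same 512 timesteps; which is fastest depends on the container, which is why the docstring hedges with "depending on setup".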
import keras.backend as K
from keras.callbacks import CSVLogger
from keras.datasets import mnist
from keras.layers import Input, Lambda, Dense, Flatten, BatchNormalization, Activation
from keras.models import Model
def main():
    # Both inputs and targets are `Input` tensors
    input_x = Input((28, 28), name='input_x', dtype='uint8')  # uint8 [0-255]
bstriner / wikimodel.py
Created March 8, 2017 03:44
Read output from wikiextractor
import glob
import os
import json
class WikiDoc(object):
    def __init__(self, url, text, id, title):
        self.url = url
        self.text = text
        self.id = id
        self.title = title
2017-01-25 03:23:43,801 - root - INFO - Checking rate limit for user github-bot-bot
2017-01-25 03:23:44,107 - root - INFO - Limit: 5000, Remaining: 97, Reset: 2017-01-25 04:00:30
2017-01-25 03:23:44,108 - root - INFO - Starting close_inactive_issues
2017-01-25 03:23:44,108 - root - INFO - Repo: fchollet/keras, User: github-bot-bot
2017-01-25 03:23:44,108 - root - INFO - warning_start: 14, warning_frequency: 7, closing: 56
2017-01-25 03:23:44,200 - root - INFO - Current limit: 97. Sleeping for 300 seconds.
2017-01-25 03:28:44,315 - root - INFO - Current limit: 97. Sleeping for 300 seconds.
2017-01-25 03:33:44,428 - root - INFO - Current limit: 97. Sleeping for 300 seconds.
2017-01-25 03:38:44,538 - root - INFO - Current limit: 97. Sleeping for 300 seconds.
2017-01-25 03:43:44,653 - root - INFO - Current limit: 97. Sleeping for 300 seconds.
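The backoff visible in these log lines (poll the remaining API quota, sleep a fixed interval until it recovers) can be sketched as follows; `wait_for_rate_limit` and `get_remaining` are hypothetical names, standing in for whatever GitHub API client call the bot uses.

```python
import time

def wait_for_rate_limit(get_remaining, minimum=100, poll_seconds=300,
                        sleep=time.sleep):
    """Block until the remaining request quota is at least `minimum`.

    `get_remaining` is a callable returning the current remaining-request
    count; `sleep` is injectable so tests need not actually wait.
    """
    while True:
        remaining = get_remaining()
        if remaining >= minimum:
            return remaining
        # mirrors the log: "Current limit: 97. Sleeping for 300 seconds."
        print("Current limit: %d. Sleeping for %d seconds." %
              (remaining, poll_seconds))
        sleep(poll_seconds)
```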
bstriner / keras_backend_optimizer_example.py
Last active October 13, 2021 01:23
How to use Keras backend and optimizers directly outside of a Keras model
from keras.optimizers import Adam
from keras import backend as K
from keras.datasets import mnist
from keras.utils.np_utils import to_categorical
from keras.metrics import categorical_accuracy
from keras.initializations import glorot_uniform, zero
import numpy as np
# inputs and targets are placeholders
input_dim = 28*28
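The pattern the gist demonstrates — hold parameters yourself, compute a loss, and apply an optimizer's update rule directly, with no Model object in between — can be sketched without Keras at all. This stand-in uses plain Python, a hand-written gradient for a 1-D quadratic, and a bare SGD step in place of `optimizer.get_updates` plus `K.function`.

```python
def loss(w):
    # toy objective, minimized at w = 3
    return (w - 3.0) ** 2

def grad(w):
    # d/dw of the loss above
    return 2.0 * (w - 3.0)

def sgd_step(w, lr=0.1):
    # the "update" an optimizer object would emit for parameter w
    return w - lr * grad(w)

w = 0.0
for _ in range(200):
    w = sgd_step(w)
print(round(w, 4))  # → 3.0
```

The gist's version builds the same loop symbolically: placeholders for inputs and targets, `optimizer.get_updates(params, constraints, loss)` for the update ops, and `K.function` to run one training step per call.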