tap222
Tech Lead, Capgemini R&D
@lyastro
lyastro / readme.md
Last active January 7, 2024 15:28
Install TensorFlow with GPU support on CentOS 7

Install TensorFlow with GPU support on CentOS 7

System: a fully installed CentOS 7

  • System update and build dependencies
$ sudo yum -y update
$ sudo yum -y install epel-release
$ sudo yum -y install gcc gcc-c++ python-pip python-devel atlas atlas-devel gcc-gfortran openssl-devel libffi-devel
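
The yum commands above only prepare the system packages and build tools. Once TensorFlow itself is installed (for example via pip), a quick Python check confirms that the GPU build can actually see the card. This check is an added convenience, not part of the gist:

# Quick sanity check that the GPU build is active (not part of the original gist).
# On the TensorFlow 1.x builds this guide targets:
from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())   # should list a GPU device entry

# On TensorFlow 2.x the equivalent check would be:
#   import tensorflow as tf
#   print(tf.config.list_physical_devices('GPU'))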
@aunyks
aunyks / snakecoin-server-full-code.py
Last active March 8, 2024 19:22
The code in this gist isn't as succinct as I'd like it to be. Please bear with me and ask any questions you may have about it.
from flask import Flask
from flask import request
import json
import requests
import hashlib as hasher
import datetime as date
node = Flask(__name__)
# Define what a Snakecoin block is
class Block:
    def __init__(self, index, timestamp, data, previous_hash):
        self.index = index
        self.timestamp = timestamp
        self.data = data
        self.previous_hash = previous_hash
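
The preview stops before the block computes its own hash, which is what actually chains blocks together. Below is a minimal, self-contained sketch of the same class with that step filled in; the hash_block name and the SHA-256-over-fields scheme are illustrative assumptions, not necessarily the gist's exact code.

import hashlib as hasher


class Block:
    def __init__(self, index, timestamp, data, previous_hash):
        self.index = index
        self.timestamp = timestamp
        self.data = data
        self.previous_hash = previous_hash
        # Each block stores a hash of its own contents (illustrative sketch).
        self.hash = self.hash_block()

    def hash_block(self):
        # Hashing the block's fields means any change to this block, or to the
        # previous block's hash it references, changes this hash as well.
        sha = hasher.sha256()
        sha.update((str(self.index) + str(self.timestamp) + str(self.data) +
                    str(self.previous_hash)).encode('utf-8'))
        return sha.hexdigest()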
@oarriaga
oarriaga / spatial_transformer_network.py
Last active January 5, 2021 06:51
Implementation of Spatial Transformer Networks (https://arxiv.org/abs/1506.02025) in Keras 2.
from keras.layers.core import Layer
import keras.backend as K
if K.backend() == 'tensorflow':
    import tensorflow as tf

    def K_arange(start, stop=None, step=1, dtype='int32'):
        result = tf.range(start, limit=stop, delta=step, name='arange')
        if dtype != 'int32':
            result = K.cast(result, dtype)
        return result
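
The K_arange helper above wraps tf.range with NumPy-style arange semantics; inside a spatial transformer such a helper is typically used to build the regular sampling grid that the predicted transform then warps. A tiny standalone check of that equivalence (an illustration, not part of the gist, and assuming TensorFlow 2.x eager execution):

import numpy as np
import tensorflow as tf

# tf.range follows the same (start, limit, delta) semantics as np.arange,
# which is what the gist's K_arange helper relies on.
t = tf.range(0, 10, 2)          # -> [0, 2, 4, 6, 8]
n = np.arange(0, 10, 2)

assert np.array_equal(t.numpy(), n)
print(t.numpy(), n)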
@aparrish
aparrish / understanding-word-vectors.ipynb
Last active April 29, 2024 17:57
Understanding word vectors: A tutorial for "Reading and Writing Electronic Text," a class I teach at ITP. (Python 2.7) Code examples released under CC0 https://creativecommons.org/choose/zero/, other text released under CC BY 4.0 https://creativecommons.org/licenses/by/4.0/
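
Since the notebook itself cannot be previewed here, the following is a tiny standalone illustration of the core idea the tutorial covers: representing words as vectors and comparing them with cosine similarity. The vectors below are invented for illustration and are not from the tutorial's data.

import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional "word vectors" (made up for illustration only).
vectors = {
    'cat': np.array([0.9, 0.8, 0.1]),
    'dog': np.array([0.8, 0.9, 0.2]),
    'car': np.array([0.1, 0.2, 0.9]),
}

print(cosine_similarity(vectors['cat'], vectors['dog']))  # high: related words
print(cosine_similarity(vectors['cat'], vectors['car']))  # lower: unrelated words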
@prakashjayy
prakashjayy / Tranfer_Learning_Keras_02.py
Last active September 17, 2019 17:10
Transfer Learning using Keras
from keras import applications
from keras.preprocessing.image import ImageDataGenerator
from keras import optimizers
from keras.models import Sequential, Model
from keras.layers import Dropout, Flatten, Dense, GlobalAveragePooling2D
from keras import backend as k
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TensorBoard, EarlyStopping
img_width, img_height = 256, 256
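
The preview stops after the imports and the input size. As a rough sketch of the kind of transfer-learning setup these imports suggest (freeze a pretrained base, train a small new head), and not the gist's exact code: the choice of VGG16, the head sizes, and num_classes below are placeholders.

from keras import applications
from keras.models import Model
from keras.layers import Dense, GlobalAveragePooling2D

img_width, img_height = 256, 256

# Load a convolutional base pretrained on ImageNet, without its classifier head.
base_model = applications.VGG16(weights='imagenet', include_top=False,
                                input_shape=(img_width, img_height, 3))

# Freeze the pretrained layers so only the new head is trained at first.
for layer in base_model.layers:
    layer.trainable = False

# New classification head (num_classes is a placeholder for your dataset).
num_classes = 5
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(num_classes, activation='softmax')(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])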
@Vestride
Vestride / encoding-video.md
Last active April 24, 2024 09:59
Encoding video for the web

Encoding Video

Installing

Install FFmpeg with Homebrew. You'll need to install it with a couple of flags for WebM and the AAC audio codec.

brew install ffmpeg --with-libvpx --with-libvorbis --with-fdk-aac --with-opus
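
Once FFmpeg is installed, encoding for the web typically means producing a WebM (VP9/Opus) alongside an H.264/AAC MP4. As a rough illustration only, the conversion can be driven from Python; the file names and quality settings below are placeholders, not values from this guide.

import subprocess

# Encode input.mp4 to WebM with VP9 video and Opus audio.
# The CRF value is a placeholder quality setting; lower means higher quality.
subprocess.run([
    'ffmpeg', '-i', 'input.mp4',
    '-c:v', 'libvpx-vp9', '-crf', '32', '-b:v', '0',
    '-c:a', 'libopus',
    'output.webm',
], check=True)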
@miguelmalvarez
miguelmalvarez / kaggle_digits_23-02-2015.py
Last active September 11, 2018 07:01
kaggle_digits.py
import pandas as pd
import numpy as np
import logging
import time
import datetime
from sklearn.ensemble import RandomForestClassifier
from sklearn import cross_validation  # removed in scikit-learn 0.20; use sklearn.model_selection on current versions
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from sklearn.feature_selection import VarianceThreshold
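
The preview only shows the imports. As a minimal sketch of the kind of pipeline these imports suggest (train a random forest on Kaggle's digit data and score it); the file path, split, and hyperparameters are placeholders rather than the gist's actual values.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Placeholder path: Kaggle's "Digit Recognizer" train.csv (label + pixel columns).
data = pd.read_csv('train.csv')
X = data.drop(columns=['label']).values
y = data['label'].values

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=42)
clf.fit(X_train, y_train)

print('validation accuracy:', accuracy_score(y_val, clf.predict(X_val)))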
@kastnerkyle
kastnerkyle / gmmhmm.py
Last active March 9, 2023 06:14
GMM-HMM (Hidden Markov model with Gaussian mixture emissions) implementation for speech recognition and other uses
# (C) Kyle Kastner, June 2014
# License: BSD 3 clause
import scipy.stats as st
import numpy as np
class gmmhmm:
    # This class converted with modifications from https://code.google.com/p/hmm-speech-recognition/source/browse/Word.m
    def __init__(self, n_states):
        self.n_states = n_states
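
The preview ends at the constructor. To illustrate the core computation such a class has to implement, here is a small, self-contained scaled forward pass with single-Gaussian emission densities, using the same libraries as the gist. It is a conceptual sketch, not the gist's own code: a full GMM-HMM would use mixture emissions and add Baum-Welch training.

import numpy as np
import scipy.stats as st

def forward(obs, pi, A, means, stds):
    # Scaled forward pass: returns per-state probabilities and the log-likelihood.
    # obs: (T,) observations; pi: (n_states,) initial distribution;
    # A: (n_states, n_states) transitions with A[i, j] = P(j | i);
    # means, stds: per-state Gaussian emission parameters.
    T, n_states = len(obs), len(pi)
    alpha = np.zeros((T, n_states))
    log_likelihood = 0.0

    # Emission likelihoods for every (time, state) pair.
    B = st.norm.pdf(obs[:, None], loc=means, scale=stds)

    alpha[0] = pi * B[0]
    c = alpha[0].sum()
    alpha[0] /= c
    log_likelihood += np.log(c)
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[t]
        c = alpha[t].sum()
        alpha[t] /= c
        log_likelihood += np.log(c)
    return alpha, log_likelihood

# Toy 2-state example with made-up parameters.
obs = np.array([0.1, 0.3, 2.9, 3.1])
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
alpha, ll = forward(obs, pi, A, means=np.array([0.0, 3.0]), stds=np.array([1.0, 1.0]))
print('log-likelihood:', ll)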
@ttezel
ttezel / gist:4138642
Last active March 24, 2024 03:24
Natural Language Processing Notes

# A Collection of NLP notes

## N-grams

### Calculating unigram probabilities:

P(wi) = count(wi) / count(total number of words)

In English: the probability of a word is simply the number of times it appears divided by the total number of words in the corpus.
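
As a quick, self-contained illustration of this formula (an addition, not part of the original notes):

from collections import Counter

def unigram_probabilities(tokens):
    # P(wi) = count(wi) / total number of tokens
    counts = Counter(tokens)
    total = len(tokens)
    return {word: count / total for word, count in counts.items()}

tokens = "the cat sat on the mat".split()
print(unigram_probabilities(tokens))
# 'the' appears 2/6 of the time; every other word appears 1/6 of the time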