Jovan Sardinha (jovsa)
Mountain View, California
from pdb import set_trace
from tqdm import tqdm
from heapq import *

class Node:
    def __init__(self, val, next=None):
        self.val = val
        self.next = next
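
The gist stops at the class definition; here is a hypothetical usage sketch (the from_list helper is illustrative, not part of the gist): build a linked list from a Python list and walk it.

# Hypothetical helper: prepend values in reverse so the list reads in order.
def from_list(values):
    head = None
    for v in reversed(values):
        head = Node(v, head)
    return head

node = from_list([1, 2, 3])
while node is not None:
    print(node.val)   # prints 1, 2, 3
    node = node.next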
from functools import lru_cache
from typing import Dict, List, NamedTuple, Optional, Tuple
from pdb import set_trace
import numpy as np

class Config(NamedTuple):
    idx: int
    rotation: int  # 0, 1, 2, 3
    flipped: bool
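
A hypothetical illustration (not from the gist): a Config with 4 rotations, each optionally flipped, yields 8 distinct placements for a given piece index.

# Enumerate all 8 placements of piece 0 (4 rotations x 2 flip states).
placements = [Config(idx=0, rotation=r, flipped=f)
              for r in range(4)
              for f in (False, True)]
assert len(placements) == 8
print(placements[0])   # Config(idx=0, rotation=0, flipped=False)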
# Topic: My Notes for Neural Machine Translation with Attention
# source: https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb#scrollTo=yJ_B3mhW3jFk
import tensorflow as tf

def gru(units):
    # If you have a GPU, we recommend CuDNNGRU (it provides a ~3x speedup over GRU);
    # the code below picks it automatically.
    # You could also use an RNN or LSTM, depending on your requirements.
    if tf.test.is_gpu_available():
        return tf.keras.layers.CuDNNGRU(units,
                                        return_sequences=True,
                                        return_state=True,
                                        recurrent_initializer='glorot_uniform')
    return tf.keras.layers.GRU(units,
                               return_sequences=True,
                               return_state=True,
                               recurrent_activation='sigmoid',
                               recurrent_initializer='glorot_uniform')
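
A hedged usage sketch (assumes TF 1.x with eager execution, as in the source notebook; the shapes are illustrative): with return_state=True the layer returns both the full output sequence and the final hidden state.

tf.enable_eager_execution()
layer = gru(64)                           # 64 GRU units
inputs = tf.random_normal((16, 10, 32))   # batch=16, timesteps=10, features=32
output, state = layer(inputs)
print(output.shape)                       # (16, 10, 64)
print(state.shape)                        # (16, 64)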
Here are the steps to initialize a virtual environment:

$ pip install virtualenv            # installs virtualenv (for Python 3: pip3 install virtualenv)
$ cd my_project_folder
$ python -m virtualenv venv         # initializes it (for Python 3: python3 -m virtualenv venv)
$ source venv/bin/activate          # activates it
$ pip install -r requirements.txt   # installs pinned dependencies
$ pip freeze > requirements.txt     # creates the requirements file, overwriting any existing one
OR
$ pip freeze >> requirements.txt    # appends to an existing requirements file
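
On Python 3.3+, the standard library's venv module can replace virtualenv for the same workflow; a minimal sketch (the directory name is illustrative):

import venv

# Equivalent to `python3 -m venv venv`: creates the environment directory
# with its own interpreter and installs pip into it.
venv.create('venv', with_pip=True)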
@jovsa
jovsa / important_numbers.txt
Last active January 14, 2019 10:04 — forked from jboner/latency.txt
Numbers Every Programmer Should Know (2019)
IMPORTANT NOTE: The numbers reported below are just estimates that are meant to be used for back-of-the-envelope calculations
========================================================================================================================================
Software Engineering Numbers (~2019)
========================================================================================================================================
Cache/Lookup
----------------------------------
L1 cache reference                  0.5 ns
L2 cache reference                    4 ns    8x L1 cache
Main memory reference               100 ns    25x L2 cache, 200x L1 cache
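
A back-of-the-envelope example of how these estimates get used (the workload is hypothetical; the numbers are from the table above):

MAIN_MEMORY_NS = 100   # main memory reference
L1_NS = 0.5            # L1 cache reference

hops = 1_000_000       # dependent pointer-chasing loads
print(hops * MAIN_MEMORY_NS / 1e6, "ms")   # ~100 ms if every hop misses to main memory
print(hops * L1_NS / 1e6, "ms")            # ~0.5 ms if every hop hits L1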
@jovsa
jovsa / oof_regression_stacker.py
Last active September 3, 2018 21:49
Ensembling Script: Implementing a regression stacker.
"""Ensembling Script: Implementing a regression stacker.

An ensembling script that can stack models from XGBoost and sklearn.
Inspiration has been drawn from Emanuele Olivetti (emanuele)
[link: https://github.com/emanuele/kaggle_pbr/blob/master/blend.py].
This script uses a K-fold split to train the base models and generate
out-of-fold predictions that form the blended dataset.
The first half contains the function definitions and the second half
contains the function calls.
Use this script as a framework and tweak it to fit your needs.
"""
@jovsa
jovsa / bagging-sonar.py
Last active February 17, 2020 00:39
Experiments with Bagging (without sklearn)
# Bagging Algorithm on the Sonar dataset (ref: http://machinelearningmastery.com/implement-bagging-scratch-python/)
from random import seed
from random import randrange
from csv import reader

# Load a CSV file into a list of rows, skipping empty lines
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        csv_reader = reader(file)
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset
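
Only the loader survives in this excerpt; as a hedged sketch of the two core pieces of bagging the referenced tutorial builds next (the names are illustrative): bootstrap-resample the training set, then aggregate predictions by majority vote.

from random import randrange

# Draw a bootstrap sample: pick len(dataset)*ratio rows with replacement.
def subsample(dataset, ratio=1.0):
    n = round(len(dataset) * ratio)
    return [dataset[randrange(len(dataset))] for _ in range(n)]

# Aggregate: each model votes, and the most common prediction wins.
def bagging_predict(models, predict, row):
    votes = [predict(model, row) for model in models]
    return max(set(votes), key=votes.count)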
@jovsa
jovsa / bagging-sonar-with-sklearn.py
Last active January 27, 2019 23:15
Experiments with Bagging (using sklearn)
from csv import reader
import numpy as np
from random import seed
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

## Pre-processing functions

# Load a CSV file into a list of rows, skipping empty lines
def load_csv(filename):
    dataset = list()
    with open(filename, 'r') as file:
        csv_reader = reader(file)
        for row in csv_reader:
            if not row:
                continue
            dataset.append(row)
    return dataset
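
This gist is truncated as well; a minimal sketch of where it is headed (the synthetic stand-in data and hyperparameters are assumptions, not the gist's): cross-validate a BaggingClassifier over decision trees.

from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in shaped like the Sonar data (208 rows, 60 features).
X, y = make_classification(n_samples=208, n_features=60, random_state=0)
bagging = BaggingClassifier(DecisionTreeClassifier(), n_estimators=10, random_state=0)
print(cross_val_score(bagging, X, y, cv=5).mean())   # 5-fold accuracy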