Jaemin Cho (j-min)
@danijar
danijar / blog_tensorflow_variable_sequence_labelling.py
Last active May 15, 2022 14:28
TensorFlow Variable-Length Sequence Labelling
# Working example for my blog post at:
# http://danijar.com/variable-sequence-lengths-in-tensorflow/
import functools
import sets
import tensorflow as tf
from tensorflow.models.rnn import rnn_cell
from tensorflow.models.rnn import rnn
def lazy_property(function):
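The preview cuts off at the decorator definition. A hedged reconstruction of how this memoizing lazy_property pattern is typically completed in the accompanying blog-post examples (my completion, not the gist's verbatim code):

def lazy_property(function):
    # Cache the wrapped method's result on the instance the first time it is
    # accessed, so the corresponding TensorFlow graph parts are built only once.
    attribute = '_' + function.__name__
    @property
    @functools.wraps(function)
    def wrapper(self):
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)
    return wrapper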
@danijar
danijar / blog_tensorflow_sequence_classification.py
Last active December 24, 2021 03:53
TensorFlow Sequence Classification
# Example for my blog post at:
# https://danijar.com/introduction-to-recurrent-networks-in-tensorflow/
import functools
import sets
import tensorflow as tf
def lazy_property(function):
attribute = '_' + function.__name__
@vadimkantorov
vadimkantorov / compact_bilinear_pooling.py
Last active September 22, 2021 07:51
Compact Bilinear Pooling in PyTorch using the new FFT support
# References:
# [1] Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, Fukui et al., https://arxiv.org/abs/1606.01847
# [2] Compact Bilinear Pooling, Gao et al., https://arxiv.org/abs/1511.06062
# [3] Fast and Scalable Polynomial Kernels via Explicit Feature Maps, Pham and Pagh, https://chbrown.github.io/kdd-2013-usb/kdd/p239.pdf
# [4] Fastfood — Approximating Kernel Expansions in Loglinear Time, Le et al., https://arxiv.org/abs/1408.3060
# [5] Original implementation in Caffe: https://github.com/gy20073/compact_bilinear_pooling
# TODO: migrate to use of new native complex64 types
# TODO: change strided x coo matmul to torch.matmul(): M[sparse_coo] @ M[strided] -> M[strided]
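For reference, a minimal hedged sketch of the same Count Sketch + FFT construction using the native complex support in torch.fft that the first TODO mentions; the class name, the (batch, dim) input shapes, and the interface are my assumptions, not the gist's actual code:

import torch

class CompactBilinearPooling(torch.nn.Module):
    def __init__(self, dim1, dim2, output_dim):
        super().__init__()
        self.output_dim = output_dim
        # Fixed random hash indices and signs for each input (Count Sketch).
        for name, dim in (("1", dim1), ("2", dim2)):
            self.register_buffer("h" + name, torch.randint(output_dim, (dim,)))
            self.register_buffer("s" + name, 2 * torch.randint(2, (dim,)).float() - 1)

    def _sketch(self, x, h, s):
        # Scatter-add the signed features into the sketch vector.
        out = x.new_zeros(x.shape[0], self.output_dim)
        return out.index_add_(1, h, x * s)

    def forward(self, x1, x2):
        # Circular convolution of the two sketches via elementwise product in
        # the frequency domain (torch.fft returns native complex tensors).
        f1 = torch.fft.rfft(self._sketch(x1, self.h1, self.s1), dim=1)
        f2 = torch.fft.rfft(self._sketch(x2, self.h2, self.s2), dim=1)
        return torch.fft.irfft(f1 * f2, n=self.output_dim, dim=1)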

NLTK API to Stanford NLP Tools compiled on 2015-12-09

Stanford NER

With NLTK version 3.1 and the Stanford NER tool (2015-12-09 release), it is possible to hack StanfordNERTagger._stanford_jar to include the other .jar files that the new tagger needs.

First, set up the environment variables as instructed at https://github.com/nltk/nltk/wiki/Installing-Third-Party-Software
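A hedged sketch of the hack described above, assuming NLTK 3.1 and the Stanford NER 2015-12-09 release unpacked at a local stanford_dir; the directory, model path, and sample sentence are illustrative, not prescribed:

import os
from nltk.internals import find_jars_within_path
from nltk.tag import StanfordNERTagger

stanford_dir = os.path.expanduser("~/stanford-ner-2015-12-09")  # assumed location
tagger = StanfordNERTagger(
    os.path.join(stanford_dir, "classifiers/english.all.3class.distsim.crf.ser.gz"),
    os.path.join(stanford_dir, "stanford-ner.jar"),
)
# Point _stanford_jar at every .jar shipped with the release, not just
# stanford-ner.jar, so the newer tagger finds its dependencies.
tagger._stanford_jar = ":".join(find_jars_within_path(stanford_dir))

print(tagger.tag("Rami Eid is studying at Stony Brook University in NY".split()))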

@odashi
odashi / chainer_encoder_decoder.py
Last active January 22, 2021 14:03
Training and generation processes for neural encoder-decoder machine translation.
#!/usr/bin/python3
import datetime
import sys
import math
import numpy as np
from argparse import ArgumentParser
from collections import defaultdict
from chainer import FunctionSet, Variable, functions, optimizers
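Only the imports survive in this preview. As one illustration of what the defaultdict import is commonly used for in such training scripts, here is a hedged word-to-id vocabulary sketch (my assumption, not odashi's actual code):

from collections import defaultdict

def make_vocab(sentences, size):
    # Count word frequencies over the training corpus.
    freq = defaultdict(int)
    for sentence in sentences:
        for word in sentence.split():
            freq[word] += 1
    # Keep the most frequent words; unseen words fall back to id 0 (<unk>).
    vocab = defaultdict(int)
    for i, (word, _) in enumerate(sorted(freq.items(), key=lambda x: -x[1])[:size - 1]):
        vocab[word] = i + 1
    return vocab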
@lampts
lampts / gensim2projector_tf.py
Last active December 7, 2020 22:37
How to convert/port a gensim word2vec model to the TensorFlow embedding projector (TensorBoard).
# requires tensorflow 0.12
# requires gensim 0.13.3+ for the new model.wv.index2word API, or just use model.index2word
from gensim.models import Word2Vec
import tensorflow as tf
from tensorflow.contrib.tensorboard.plugins import projector
# loading your gensim
model = Word2Vec.load("YOUR-MODEL")
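A hedged sketch of how the conversion typically continues from this point, using the TF 1.x-era contrib projector API shown in the imports above; the log directory, tensor name, and use of model.wv.syn0 are my assumptions, not the gist's exact code:

import os
import tensorflow as tf
from gensim.models import Word2Vec
from tensorflow.contrib.tensorboard.plugins import projector

model = Word2Vec.load("YOUR-MODEL")
log_dir = "projector"
os.makedirs(log_dir, exist_ok=True)

# Write one vocabulary word per line; the projector pairs row i of the
# embedding tensor with line i of this metadata file.
with open(os.path.join(log_dir, "metadata.tsv"), "w") as f:
    for word in model.wv.index2word:
        f.write(word + "\n")

embedding_var = tf.Variable(model.wv.syn0, name="w2v_embedding")  # model.syn0 on older gensim
config = projector.ProjectorConfig()
embedding = config.embeddings.add()
embedding.tensor_name = embedding_var.name
embedding.metadata_path = "metadata.tsv"

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter(log_dir, sess.graph)
    projector.visualize_embeddings(writer, config)
    tf.train.Saver([embedding_var]).save(sess, os.path.join(log_dir, "model.ckpt"))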
@ririw
ririw / RWA.py
Created April 16, 2017 10:22
Recurrent Weighted Average RNN in pytorch
# An implementation of "Machine Learning on Sequential Data Using a Recurrent Weighted Average" using pytorch
# https://arxiv.org/pdf/1703.01253.pdf
#
#
# This is an RNN (recurrent neural network) type that uses a weighted average of values seen in the past, rather
# than a separate running state.
#
# Check the test code at the bottom for an example of usage, where you can compare its performance
# against LSTM and GRU on a classification task from the paper. It handily beats both the LSTM and
# GRU :)
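A minimal hedged sketch of the RWA recurrence described above (unstabilized; the paper and the gist track a running maximum of the attention scores for numerical stability), with shapes and names assumed rather than taken from the gist:

import torch
import torch.nn as nn

class RWACell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.u = nn.Linear(input_size, hidden_size)                             # features of x_t
        self.g = nn.Linear(input_size + hidden_size, hidden_size)               # gate over [x_t, h]
        self.a = nn.Linear(input_size + hidden_size, hidden_size, bias=False)   # attention scores
        self.h0 = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x):
        # x: (batch, time, input_size)
        batch, steps, _ = x.shape
        h = torch.tanh(self.h0).expand(batch, -1)
        n = torch.zeros_like(h)   # running numerator of the weighted average
        d = torch.zeros_like(h)   # running denominator (sum of weights)
        outputs = []
        for t in range(steps):
            xh = torch.cat([x[:, t], h], dim=-1)
            z = self.u(x[:, t]) * torch.tanh(self.g(xh))
            w = torch.exp(self.a(xh))        # unnormalized attention weight
            n = n + z * w
            d = d + w
            h = torch.tanh(n / d)            # output is the running weighted average
            outputs.append(h)
        return torch.stack(outputs, dim=1), h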
@monikkinom
monikkinom / rnn-lstm.py
Last active September 3, 2019 04:44
TensorFlow RNN-LSTM implementation to count the number of set bits in a binary string
# Source code accompanying the blog post at http://monik.in/a-noobs-guide-to-implementing-rnn-lstm-using-tensorflow/
import numpy as np
import random
from random import shuffle
import tensorflow as tf
# from tensorflow.models.rnn import rnn_cell
# from tensorflow.models.rnn import rnn
NUM_EXAMPLES = 10000
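A hedged sketch of the data setup this task implies (my assumptions, not the gist's exact code): 20-bit strings become length-20 sequences of single bits, with a one-hot target over the 21 possible set-bit counts:

import numpy as np
from random import sample, seed

seed(0)
SEQ_LEN = 20
num_examples = 10000  # mirrors NUM_EXAMPLES above

ints = sample(range(2 ** SEQ_LEN), num_examples)                     # distinct 20-bit numbers
bits = np.array([[int(b) for b in format(i, "020b")] for i in ints])
inputs = bits[:, :, None].astype(np.float32)                         # (examples, time, 1)
targets = np.eye(SEQ_LEN + 1, dtype=np.float32)[bits.sum(axis=1)]    # one-hot bit counts
train_x, test_x = inputs[:num_examples // 2], inputs[num_examples // 2:]
train_y, test_y = targets[:num_examples // 2], targets[num_examples // 2:]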
@volkancirik
volkancirik / treernn.py
Last active October 9, 2018 13:01
Pytorch TreeRNN
"""
TreeLSTM[1] implementation in Pytorch
Based on dynet benchmarks:
https://github.com/neulab/dynet-benchmark/blob/master/dynet-py/treenn.py
https://github.com/neulab/dynet-benchmark/blob/master/chainer/treenn.py
Other References:
https://github.com/pytorch/examples/tree/master/word_language_model
https://github.com/pfnet/chainer/blob/29c67fe1f2140fa8637201505b4c5e8556fad809/chainer/functions/activation/slstm.py
https://github.com/stanfordnlp/treelstm
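A minimal hedged sketch of a binary (two-child) TreeLSTM composition step; the fused linear layer and the names are my assumptions, not this gist's code:

import torch
import torch.nn as nn

class BinaryTreeLSTMCell(nn.Module):
    def __init__(self, hidden_size):
        super().__init__()
        # One linear map over the concatenated child states produces the
        # input, output, and candidate gates plus the two forget gates.
        self.comp = nn.Linear(2 * hidden_size, 5 * hidden_size)

    def forward(self, left, right):
        (h_l, c_l), (h_r, c_r) = left, right
        i, o, u, f_l, f_r = self.comp(torch.cat([h_l, h_r], dim=-1)).chunk(5, dim=-1)
        c = torch.sigmoid(i) * torch.tanh(u) \
            + torch.sigmoid(f_l) * c_l + torch.sigmoid(f_r) * c_r
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c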
@ckinsey
ckinsey / chain.py
Last active August 19, 2018 04:05
chain.py: builds the Markov models from Slack
import json
import markovify
import re
import time
from slackclient import SlackClient
BOT_TOKEN = "insert bot token here"
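A hedged sketch of the overall flow such a bot typically follows with these imports (the legacy slackclient API plus markovify); the channel ID and the mention-stripping regex are my assumptions, not the gist's code:

import re
import markovify
from slackclient import SlackClient

BOT_TOKEN = "insert bot token here"
sc = SlackClient(BOT_TOKEN)

# channels.history is the pre-Conversations Slack Web API method exposed
# through the old slackclient's api_call().
history = sc.api_call("channels.history", channel="C00000000", count=1000)
messages = [m.get("text", "") for m in history.get("messages", [])]

# Strip <@USERID> mentions before fitting the Markov model on the raw text.
corpus = "\n".join(re.sub(r"<@[^>]+>", "", text) for text in messages)
model = markovify.NewlineText(corpus)
print(model.make_sentence(tries=100))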