
View Tensorflow_intro-Ru.ipynb
View DeepNLP.md

DeepNLP

  1. Intro

    • backprop,
    • theano,
    • mnist classification;
  2. Embeddings

    • Text classification: bag of words, TF-IDF
    • NLTK: lemmatization, stemming (see the sketch after this list);
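A quick sketch of those topics (my own illustration, not course code): bag-of-words and TF-IDF features with scikit-learn, and stemming/lemmatization with NLTK. The WordNet lemmatizer assumes the 'wordnet' corpus has been downloaded.

from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from nltk.stem import PorterStemmer, WordNetLemmatizer

docs = ["dogs are running in the park", "a dog runs across the park"]

bow = CountVectorizer().fit_transform(docs)      # bag-of-words: raw token counts
tfidf = TfidfVectorizer().fit_transform(docs)    # TF-IDF: counts reweighted by inverse document frequency

print(PorterStemmer().stem("running"))           # -> 'run' (crude suffix stripping)
print(WordNetLemmatizer().lemmatize("dogs"))     # -> 'dog' (dictionary-based lemma)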
View minQueue.cpp
#include <iostream>
#include <algorithm>
#include <stack>
#include <stdexcept>
#include <map>
#include <sstream>
#include <utility>
#include <climits>
using namespace std;
View sort_my_photos.py
import os
src_path = '/Users/fogside/Desktop/all_photos/Camera/'
dst_path = '/Users/fogside/Desktop/all_photos/'
###############################################################################
# Create folders named <year-month> in dst_path and move photos from all
# folders under src_path into the corresponding <year-month> folder.
# Filenames must be like:
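A minimal sketch of that idea (my own illustration; the gist's real filename rules are truncated above). It hypothetically assumes names such as 'IMG_20160131_123456.jpg', with the date embedded as YYYYMMDD after the first underscore.

import shutil

def sort_photos(src_path, dst_path):
    for root, _, files in os.walk(src_path):
        for name in files:
            parts = name.split('_')
            if len(parts) < 2 or len(parts[1]) < 6:
                continue                                   # skip files that don't match the assumed pattern
            date_part = parts[1]                           # hypothetical 'YYYYMMDD' chunk
            folder = '{}-{}'.format(date_part[:4], date_part[4:6])   # '<year-month>'
            target_dir = os.path.join(dst_path, folder)
            if not os.path.exists(target_dir):
                os.makedirs(target_dir)
            shutil.move(os.path.join(root, name), os.path.join(target_dir, name))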
View lstm_1_2.ipynb
View LSTM_1_2.py
## Task 1 -- only a single matrix multiplication was needed in the LSTM cell implementation:
in_mtx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes*4], -0.1, 0.1))
out_mtx = tf.Variable(tf.truncated_normal([num_nodes, num_nodes*4], -0.1, 0.1))
b_vec = tf.Variable(tf.zeros([1, num_nodes*4]))
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
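A minimal sketch of how a cell could use these variables (my assumption of the intended shape, not the gist's exact code): one matmul per input against the concatenated gate matrices, then a four-way split; tf.split here uses the TensorFlow >= 1.0 argument order.

def lstm_cell(i, o, state):
    # a single multiplication per input produces the pre-activations of all four gates at once
    gates = tf.matmul(i, in_mtx) + tf.matmul(o, out_mtx) + b_vec
    input_gate, forget_gate, update, output_gate = tf.split(gates, 4, axis=1)
    state = tf.sigmoid(forget_gate) * state + tf.sigmoid(input_gate) * tf.tanh(update)
    return tf.sigmoid(output_gate) * tf.tanh(state), state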
View Word2Vec_CBOW.py
from itertools import compress
data_index = 0
def generate_batch_cbow(batch_size, num_skips, skip_window):
    '''
    Batch generator for CBOW (Continuous Bag of Words).
    `batch` should be of shape (batch_size, num_skips).

    Parameters
    ----------
    batch_size: number of words in each mini-batch
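The preview is cut off here; a minimal sketch of how such a generator could be completed (my own illustration, assuming a global list `data` of word ids as in the classic word2vec assignment):

import collections
import numpy as np

def generate_batch_cbow_sketch(batch_size, num_skips, skip_window):
    global data_index
    assert num_skips <= 2 * skip_window
    batch = np.ndarray(shape=(batch_size, num_skips), dtype=np.int32)
    labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
    span = 2 * skip_window + 1                    # [ skip_window ... target ... skip_window ]
    buffer = collections.deque(maxlen=span)
    for _ in range(span):
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    mask = [1] * span
    mask[skip_window] = 0                         # drop the central word: it becomes the label
    for i in range(batch_size):
        batch[i] = list(compress(buffer, mask))[:num_skips]   # context words around the target
        labels[i, 0] = buffer[skip_window]
        buffer.append(data[data_index])
        data_index = (data_index + 1) % len(data)
    return batch, labels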
View knn.py
import math
import warnings

import numpy as np
from scipy import stats
from sklearn.neighbors import KDTree
from sklearn.metrics import confusion_matrix

warnings.filterwarnings("ignore", category=DeprecationWarning)
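A minimal sketch of k-nearest-neighbours prediction with these imports (my own illustration, not the gist's actual implementation):

def knn_predict(X_train, y_train, X_test, k=5):
    tree = KDTree(X_train)                        # KD-tree over the training points
    _, idx = tree.query(X_test, k=k)              # indices of the k nearest neighbours, shape (n_test, k)
    neighbour_labels = np.asarray(y_train)[idx]
    # majority vote among the neighbours' labels
    return stats.mode(neighbour_labels, axis=1).mode.ravel()

# usage, e.g.: confusion_matrix(y_test, knn_predict(X_train, y_train, X_test, k=5))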
View script.py
#!/usr/local/bin/python
# -*- coding: utf-8 -*-
"""
Summer School problem.
usage:
http letnyayashkola.org/api/v1.0/workshop-programs/ | ./script.py
"""