
Install a few dependencies

sudo apt-get install python-pip apache2 libapache2-mod-wsgi

Edit Apache's configuration file

If you're starting from a fresh Apache install, then the following configuration may simply work for you.
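Once Apache and mod_wsgi are set up, a quick way to sanity-check the stack is a minimal WSGI application. The file name and the idea of pointing a WSGIScriptAlias at it are illustrative, not part of the original note:

```python
# hello.wsgi -- a minimal WSGI application (illustrative; an Apache
# WSGIScriptAlias directive would point at this file).

def application(environ, start_response):
    """Entry point mod_wsgi looks for: a callable named `application`."""
    body = b"Hello from mod_wsgi!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

The callable can also be exercised directly from Python (with a stub `start_response`), which is handy before involving Apache at all.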

container=$1
for i in $(docker ps -a | grep "$container" | awk '{print $1}'); do
    docker rm "$i"
done
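The grep/awk step above just pulls the container ID (first column) out of each matching `docker ps -a` line. That extraction logic can be sketched in Python; the sample output below is simulated, since only the parsing is being illustrated:

```python
def matching_container_ids(ps_output, fragment):
    """Extract the container ID (first column) from each `docker ps -a`
    line whose text contains `fragment`, skipping the header line."""
    ids = []
    for line in ps_output.splitlines()[1:]:  # skip the CONTAINER ID header
        if fragment in line:
            ids.append(line.split()[0])
    return ids

# Simulated `docker ps -a` output (IDs and names are made up).
sample = """CONTAINER ID   IMAGE          NAME
1a2b3c4d5e6f   redis:latest   my-redis
9f8e7d6c5b4a   nginx:latest   web-front
"""

print(matching_container_ids(sample, "redis"))  # -> ['1a2b3c4d5e6f']
```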
# python module.py
# |
# (thread)
# |
# |
# (GIL unlocked or in a state where it can
# be released upon request, or when doing
# a blocking operation)
# |
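The diagram above describes when the GIL gets released. A small timing experiment illustrates the blocking-operation case: threads that spend their time in a blocking call (here `time.sleep`) overlap almost perfectly, because each one releases the GIL while it blocks:

```python
import threading
import time

def blocking_work():
    # time.sleep releases the GIL, so several threads can block concurrently.
    time.sleep(0.2)

start = time.time()
threads = [threading.Thread(target=blocking_work) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start

# Four 0.2 s sleeps overlap: total wall time stays close to 0.2 s, not 0.8 s.
print("elapsed: %.2fs" % elapsed)
```

A CPU-bound loop in place of the sleep would show the opposite behavior: the threads would contend for the GIL and run largely serialized.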
@alreadytaikeune
alreadytaikeune / features_config.json
Created September 8, 2017 13:55
Example features config
{
  "format": {
    "type": "aggregate_n_features",
    "merge_channels": true
  },
  "features": {
    "MelSp1": {
      "feature_type": "PostProcessedMelSpectrum",
      "parameters": {
        "blockSize": 1024,
        "stepSize": 1024,
        "MelMaxFreq": 10000.0,
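The `blockSize`/`stepSize` parameters describe a framed spectral analysis. As a sketch of what a consumer of this config might compute, here is the framing/FFT skeleton in plain numpy; this is an illustration only, not the actual `PostProcessedMelSpectrum` implementation (the mel filtering and post-processing are omitted):

```python
import numpy as np

def framed_magnitude_spectrum(signal, block_size=1024, step_size=1024):
    """Slice `signal` into frames of `block_size` samples taken every
    `step_size` samples, and return the magnitude of the real FFT of
    each frame. Framing/FFT skeleton only; no mel post-processing."""
    n_frames = 1 + (len(signal) - block_size) // step_size
    frames = np.stack([
        signal[i * step_size : i * step_size + block_size]
        for i in range(n_frames)
    ])
    return np.abs(np.fft.rfft(frames, axis=1))

# A one-second 440 Hz tone at 44.1 kHz, as a quick shape check.
sr = 44100
t = np.arange(sr) / sr
spec = framed_magnitude_spectrum(np.sin(2 * np.pi * 440 * t))
print(spec.shape)  # -> (43, 513)
```

With `blockSize == stepSize` the frames are non-overlapping, which is what the config above requests.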
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
# Copyright 2018, Anis KHLIF
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
@alreadytaikeune
alreadytaikeune / logs_base_word2vec_no_loss
Last active July 23, 2018 12:21
logs_base_word2vec_no_loss
2018-07-23 09:35:00,063 : MainThread : INFO : running /usr/local/lib/python2.7/site-packages/gensim-3.5.0-py2.7-linux-x86_64.egg/gensim/scripts/word2vec_standalone.py -train data/text9 -output /tmp/test -window 5 -negative 5 -threads 4 -min_count 5 -iter 5 -cbow 0
2018-07-23 09:35:00,064 : MainThread : INFO : collecting all words and their counts
2018-07-23 09:35:11,715 : MainThread : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2018-07-23 09:35:29,976 : MainThread : INFO : PROGRESS: at sentence #10000, processed 100000000 words, keeping 694463 word types
2018-07-23 09:35:40,645 : MainThread : INFO : collected 833184 word types from a corpus of 124301826 raw words and 12431 sentences
2018-07-23 09:35:40,645 : MainThread : INFO : Loading a fresh vocabulary
2018-07-23 09:35:42,254 : MainThread : INFO : effective_min_count=5 retains 218316 unique words (26% of original 833184, drops 614868)
2018-07-23 09:35:42,254 : MainThread : INFO : effective_min_count=5 leaves 123353509 word corpu
2018-07-23 09:55:28,275 : MainThread : INFO : running /usr/local/lib/python2.7/site-packages/gensim-3.5.0-py2.7-linux-x86_64.egg/gensim/scripts/word2vec_standalone.py -train data/text9 -output /tmp/test -window 5 -negative 5 -threads 4 -min_count 5 -iter 5 -cbow 0
2018-07-23 09:55:28,276 : MainThread : INFO : collecting all words and their counts
2018-07-23 09:55:39,924 : MainThread : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2018-07-23 09:55:57,855 : MainThread : INFO : PROGRESS: at sentence #10000, processed 100000000 words, keeping 694463 word types
2018-07-23 09:56:08,638 : MainThread : INFO : collected 833184 word types from a corpus of 124301826 raw words and 12431 sentences
2018-07-23 09:56:08,638 : MainThread : INFO : Loading a fresh vocabulary
2018-07-23 09:56:10,308 : MainThread : INFO : effective_min_count=5 retains 218316 unique words (26% of original 833184, drops 614868)
2018-07-23 09:56:10,308 : MainThread : INFO : effective_min_count=5 leaves 123353509 word corpu
@alreadytaikeune
alreadytaikeune / logs_new_word2vec_loss
Created July 23, 2018 12:22
logs_new_word2vec_loss
2018-07-23 10:34:49,659 : MainThread : INFO : running /usr/local/lib/python2.7/site-packages/gensim-3.5.0-py2.7-linux-x86_64.egg/gensim/scripts/word2vec_standalone.py -train data/text9 -output /tmp/test -window 5 -negative 5 -threads 4 -min_count 5 -iter 5 -cbow 0 -loss
2018-07-23 10:34:49,661 : MainThread : INFO : collecting all words and their counts
2018-07-23 10:35:01,095 : MainThread : INFO : PROGRESS: at sentence #0, processed 0 words, keeping 0 word types
2018-07-23 10:35:18,754 : MainThread : INFO : PROGRESS: at sentence #10000, processed 100000000 words, keeping 694463 word types
2018-07-23 10:35:29,348 : MainThread : INFO : collected 833184 word types from a corpus of 124301826 raw words and 12431 sentences
2018-07-23 10:35:29,349 : MainThread : INFO : Loading a fresh vocabulary
2018-07-23 10:35:30,889 : MainThread : INFO : effective_min_count=5 retains 218316 unique words (26% of original 833184, drops 614868)
2018-07-23 10:35:30,889 : MainThread : INFO : effective_min_count=5 leaves 123353509 word
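The vocabulary-trimming lines in both logs follow a fixed format, so comparing runs is easy to automate. A small parser (the regex is written against the sample lines above) can pull out the retained/dropped counts:

```python
import re

VOCAB_RE = re.compile(
    r"effective_min_count=(\d+) retains (\d+) unique words "
    r"\((\d+)% of original (\d+), drops (\d+)\)"
)

def parse_vocab_line(line):
    """Extract (min_count, retained, percent, original, dropped) from a
    gensim 'retains ... unique words' log line, or None if no match."""
    m = VOCAB_RE.search(line)
    if m is None:
        return None
    return tuple(int(g) for g in m.groups())

line = ("2018-07-23 10:35:30,889 : MainThread : INFO : "
        "effective_min_count=5 retains 218316 unique words "
        "(26% of original 833184, drops 614868)")
print(parse_vocab_line(line))  # -> (5, 218316, 26, 833184, 614868)
```

Both the baseline and the loss-enabled runs report identical vocabulary numbers, which is what you would expect: the `-loss` flag should not change vocabulary construction.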
bazel build tensorflow/tools/graph_transforms:summarize_graph
# While you are at it, you can also build other very helpful utilities that you may need:
bazel build tensorflow/python/tools:freeze_graph
bazel build -c opt tensorflow/tools/benchmark:benchmark_model
import tensorflow as tf
from tensorflow.python.framework import graph_util

# Suppose you have obtained, one way or another, a graph object, and suppose
# you have a list of the output node names (collected manually after inspection
# with TensorBoard, for example). Then one way to build a frozen graph is the following:
with tf.Session(graph=graph) as sess:
    graph_def = graph.as_graph_def()
    frozen_graph_def = graph_util.convert_variables_to_constants(
        sess, graph_def, output_node_names)