Let machine M be the main machine with the repo, and A the auxiliary machine that wants to help out.
- Machine M creates a bundle containing the complete repo:
git bundle create repo.bundle HEAD master
- M sends repo.bundle to A.
- A clones the repo from the bundle:
git clone -b master repo.bundle repo
=== Use conda environment to run a python script ===
SHELL=/bin/bash
CONDA_PREFIX=/home/code/packages/miniconda3
CONDA_INIT="/home/code/packages/miniconda3/etc/conda/activate.d/proj4-activate.sh"
PYTHON=/home/code/packages/miniconda3/bin/python
# At 13:12 every day: source the conda activation script, then run the script
12 13 * * * . $CONDA_INIT ; $PYTHON <python file>
=== Use date utility for naming a log file ===
# NB: % is special in a crontab command field (escape it there as \%); here it
# appears only in the variable definition, where it stays literal.
DATEVAR="date +%Y%m%d_%H%M"
# At minute 17 of every hour, print a timestamped log-file name
17 * * * * echo "logs_$($DATEVAR).txt"
=======================
FLASK module ->
from flask import Flask, g, request, redirect, url_for
=======================
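As a quick sketch of how those imports fit together (route names and behavior here are illustrative, not from the original notes):

```python
from flask import Flask, g, request, redirect, url_for

app = Flask(__name__)

@app.before_request
def load_user():
    # g holds per-request state, e.g. a user looked up from the query string
    g.user = request.args.get("user", "anonymous")

@app.route("/")
def index():
    return f"hello {g.user}"

@app.route("/old-home")
def old_home():
    # url_for builds the URL for the index view; redirect sends a 302 there
    return redirect(url_for("index"))
```

Run with `flask run` or `app.run()`; `/old-home` then redirects to `/`.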
## Save tf model under graph-model
saver = tf.train.Saver()  # general saver object
# Keep the 4 latest checkpoints, plus one checkpoint for every 2 hours of training
saver = tf.train.Saver(max_to_keep=4, keep_checkpoint_every_n_hours=2)
saver = tf.train.Saver([w1, w2])  # save only a given list of variables
saver.save(sess, 'my-test-model')                    # save once
saver.save(sess, 'my_test_model', global_step=1000)  # suffix the checkpoint with the step number
saver.save(sess, 'my-model', global_step=step, write_meta_graph=False)  # don't re-save the graph
$ sudo mkdir -p /var/www/test.com/public_html
$ sudo mkdir -p /var/www/example.com/public_html
$ sudo chown -R $USER:$USER /var/www/example.com/public_html
$ sudo chown -R $USER:$USER /var/www/test.com/public_html
$ sudo chmod -R 755 /var/www
Create the virtual hosts configuration files for our two sites:
$ sudo cp /etc/apache2/sites-available/000-default.conf /etc/apache2/sites-available/test.com.conf
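After copying, the new file would typically be edited along these lines (a sketch: the domain and DocumentRoot follow the directories created above, the other values are illustrative):

```apache
# /etc/apache2/sites-available/test.com.conf
<VirtualHost *:80>
    ServerAdmin admin@test.com
    ServerName test.com
    ServerAlias www.test.com
    DocumentRoot /var/www/test.com/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
```

Enable the site with `sudo a2ensite test.com.conf` and reload Apache with `sudo systemctl reload apache2`.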
import cv2  # still used to save images out
import os
import numpy as np
from decord import VideoReader
from decord import cpu, gpu

def extract_frames(video_path, frames_dir, overwrite=False, start=-1, end=-1, every=1):
    """Extract frames from a video using decord's VideoReader (minimal body sketch)."""
    vr = VideoReader(video_path, ctx=cpu(0))  # decode on the CPU
    start = max(start, 0)
    end = len(vr) if end < 0 else end
    os.makedirs(frames_dir, exist_ok=True)
    for index in range(start, end, every):
        frame = vr[index].asnumpy()  # decord returns frames in RGB order
        save_path = os.path.join(frames_dir, "{:010d}.jpg".format(index))
        if overwrite or not os.path.exists(save_path):
            cv2.imwrite(save_path, cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
from concurrent.futures import ProcessPoolExecutor, as_completed
import cv2
import multiprocessing
import os
import sys

def print_progress(iteration, total, prefix='', suffix='', decimals=3, bar_length=100):
    """Call in a loop to create a standard-out progress bar (minimal body sketch)."""
    percents = '{0:.{1}f}'.format(100 * (iteration / float(total)), decimals)
    filled = int(round(bar_length * iteration / float(total)))
    bar = '#' * filled + '-' * (bar_length - filled)
    sys.stdout.write('\r%s |%s| %s%% %s' % (prefix, bar, percents, suffix))
    if iteration == total:
        sys.stdout.write('\n')
    sys.stdout.flush()
# Which attributes/features to choose?
# Which model to use?
# Tune/optimize the chosen model for the best performance
# Ensure the trained model will generalize to unseen data
# Estimate performance of the trained model on unseen data

# imports
import sklearn
import IPython.display
import matplotlib.pyplot as plt
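The checklist above can be illustrated end-to-end with a minimal scikit-learn workflow (the dataset and model here are illustrative choices, not from the original notes):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Hold out a test set to estimate performance on unseen data
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit the chosen model on the training split only
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Estimate generalization with accuracy on the held-out split
accuracy = model.score(X_test, y_test)
```

Tuning would then wrap the fit in e.g. `sklearn.model_selection.GridSearchCV`, keeping the test split untouched until the final evaluation.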
""" | |
Training | |
Validation on a holdout set generated from the original training data | |
Evaluation on the test data | |
- correct and test batch generation | |
- Normalize input by 255? | |
- add batchnorm layers? use model(x, training=False) then | |
- tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(batch_size) ? dataset = dataset.cache()? | |
- get_compiled_model() | |
- test last batch having non-dividing batch-size aka residual batch issue |
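The residual-batch note above can be checked directly: when the dataset size is not divisible by the batch size, `tf.data` emits a smaller final batch (the 10-sample / batch-size-4 numbers below are illustrative):

```python
import numpy as np
import tensorflow as tf

# 10 samples with batch size 4 -> batches of 4, 4, and a residual batch of 2
x_train = np.arange(10, dtype="float32").reshape(10, 1)
y_train = np.arange(10)

dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train)).batch(4)
dataset = dataset.cache()  # cache after batching so later epochs reuse the same batches

batch_sizes = [int(bx.shape[0]) for bx, _ in dataset]
# batch_sizes == [4, 4, 2]
```

Passing `drop_remainder=True` to `.batch()` discards the residual batch instead, which matters for code that assumes a fixed batch dimension.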