Egor Beliaev hcl14

@hcl14
hcl14 / peopledetect.cpp
Last active October 10, 2016 20:51 — forked from foundry/peopledetect.cpp
OpenCV HOG Descriptor trial
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include <stdio.h>
#include <string.h>
#include <ctype.h>
//#include "openCVUtils.h"
using namespace cv;
@hcl14
hcl14 / newton_tensorflow.py
Last active January 5, 2022 09:09
Simple example of second-order optimization using Newton's method in Tensorflow
# Newton's method in Tensorflow
# 'Vanilla' Newton's method is intended to work when the loss function being optimized is convex.
# A one-layer linear network without activation is convex.
# If the activation function is monotonic, the error surface associated with a single-layer model is convex.
# Otherwise, the Hessian will have negative eigenvalues at saddle points and other non-convex regions of the surface.
# To fix that, you can try different methods. One approach is to eigendecompose H and flip the sign of negative eigenvalues,
# making H "push out" in those directions, as described in the paper "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization" (https://papers.nips.cc/paper/5486-identifying-and-attacking-the-saddle-point-problem-in-high-dimensional-non-convex-optimization.pdf)
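The eigenvalue fix described above can be sketched in a few lines. This is a toy NumPy illustration, not the gist's TensorFlow code: the function f(x, y) = x² − y² has a saddle at the origin, where a plain Newton step would stall or move toward the saddle; replacing each Hessian eigenvalue with its absolute value before inverting pushes the iterate out along the negative-curvature direction.

```python
# Toy illustration of the "saddle-free" Newton fix: eigendecompose H
# and replace each eigenvalue with its absolute value before inverting.
import numpy as np

def f(p):                    # f(x, y) = x^2 - y^2, saddle point at the origin
    x, y = p
    return x**2 - y**2

def grad(p):
    x, y = p
    return np.array([2 * x, -2 * y])

def hess(p):                 # constant Hessian with one negative eigenvalue
    return np.array([[2.0, 0.0], [0.0, -2.0]])

def saddle_free_newton_step(p):
    g, H = grad(p), hess(p)
    lam, V = np.linalg.eigh(H)                 # eigendecomposition of H
    H_fixed = V @ np.diag(np.abs(lam)) @ V.T   # |lambda|: positive definite
    return p - np.linalg.solve(H_fixed, g)     # Newton step with fixed H

p = np.array([0.5, 0.5])
for _ in range(5):
    p = saddle_free_newton_step(p)
```

After a few steps the x-coordinate (positive curvature) is driven to the minimizer at 0, while the y-coordinate (negative curvature) escapes the saddle, so f keeps decreasing instead of converging to the saddle point.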
@hcl14
hcl14 / newton_tensorflow_iris.py
Created September 20, 2018 10:50
Simple example of second-order optimization via Newton's method in Tensorflow on Iris dataset
# Newton's method in Tensorflow
# 'Vanilla' Newton's method is intended to work when the loss function being optimized is convex.
# A one-layer linear network without activation is convex.
# If the activation function is monotonic, the error surface associated with a single-layer model is convex.
# Otherwise, the Hessian will have negative eigenvalues at saddle points and other non-convex regions of the surface.
# To fix that, you can try different methods. One approach is to eigendecompose H and flip the sign of negative eigenvalues,
# making H "push out" in those directions, as described in the paper "Identifying and attacking the saddle point problem in high-dimensional non-convex optimization" (https://papers.nips.cc/paper/5486-identifying-and-attacking-the-saddle-point-problem-in-high-dimensional-non-convex-optimization.pdf)
@hcl14
hcl14 / newton_tensorflow_sklearn_digits.py
Last active September 20, 2018 11:56
Simple example of Newton's method for second-order optimization in Tensorflow on the sklearn digits dataset
# Newton's method in Tensorflow
# WARNING! This code is memory- and compute-intensive; it is better run on a GPU.
# Higher dimensionality increases computing time significantly.
# The original dataset is passable on a GTX 1050 GPU, but if you run into time/memory problems, uncomment the PCA compression.
# Also, you can probably remove line 159 (Hessian fixing) if you use PCA.
# 'Vanilla' Newton's method is intended to work when the loss function being optimized is convex.
# A one-layer linear network without activation is convex.
# If the activation function is monotonic, the error surface associated with a single-layer model is convex.
# Layer-wise training of a neural network with second-order (Newton) optimization:
# a new layer is added on each iteration and optimized with Newton's method.
# Also an example of Tensorflow eager execution.
# Combines gradients, Hessian, and a call to Optimizer.
# Might contain logical errors, so review carefully when adapting this code.
@hcl14
hcl14 / database.txt
Created October 10, 2018 14:32
database.txt
Which color is normally a cat?;Black
How tall was the longest man on earth?;272 cm
Is the earth round?;Yes
Which color is normally a cat?;Black
How tall was the longest man on earth?;272 cm
Is the earth round?;Yes
Which color is normally a cat?;Black
How tall was the longest man on earth?;272 cm
Is the earth round?;Yes
Which color is normally a cat?;Black
@hcl14
hcl14 / qlearn_simple.py
Last active October 16, 2018 08:53
Q-learning behavior
# Simple example of Q-learning's inability to go in loops.
# Looping is strictly forbidden by the code (line 101),
# but you can comment out that logic and see that the algorithm just becomes less stable.
# The reason is that a loop is impossible in this setup,
# as only a single Q-value exists for each position on the map.
import numpy as np
np.random.seed(0)
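The single-Q-value-per-position idea can be shown with a minimal tabular sketch. This is a hypothetical 1-D grid, not the gist's map: one Q-value per (state, action) pair, updated with the standard rule Q(s,a) += alpha * (r + gamma * max Q(s',·) − Q(s,a)). Because each state has a single fixed value estimate, the greedy policy that emerges cannot profitably revisit a state.

```python
# Minimal tabular Q-learning on a 1-D grid (hypothetical toy setup):
# reach the rightmost cell for a reward of 1, actions are left/right.
import numpy as np
np.random.seed(0)

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1

def choose(s):
    if np.random.rand() < eps:      # epsilon-greedy exploration
        return np.random.randint(n_actions)
    best = np.flatnonzero(Q[s] == Q[s].max())
    return int(np.random.choice(best))   # break ties randomly

for episode in range(200):
    s = 0
    for step in range(100):         # cap episode length
        a = choose(s)
        s2 = min(max(s + (1 if a == 1 else -1), 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
        if s == n_states - 1:       # goal reached, end the episode
            break
```

After training, the greedy action in every non-goal state is "right", because moving left only delays the discounted reward; there is no pair of Q-values that would send the agent back and forth in a loop.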
@hcl14
hcl14 / flask_app.py
Created October 16, 2018 09:37
Logging into separate files for multiprocessing and tornado/flask in python
# create flask app to be run by tornado process
# process-specific globals
import global_vars
from flask import Flask, request, Response, json, abort, jsonify
# ordinary (non-flask) json if needed
import json as json2
@hcl14
hcl14 / 1_tkinter_separate.py
Last active October 16, 2018 10:20
Running tkinter (package python3-tk on Linux) as a separate process to display images generated in the main program
# My answer here: https://stackoverflow.com/questions/52793096/reload-and-zoom-image/52818151#52818151
from PIL import ImageTk, Image
from scipy.ndimage import rotate
from scipy.misc import imresize
import numpy as np
import time
## Author: Victor Dibia
## Load hand tracking model, spin up web socket and web application.
from utils import detector_utils as detector_utils
from utils import object_id_utils as id_utils
import cv2
import tensorflow as tf
import multiprocessing
from multiprocessing import Queue, Pool