@wolterlw
wolterlw / hand_ssh_anchors.cpp
Created September 20, 2019 19:10
C++ program to generate the anchors needed for hand detection
/*
parameter JSON
{
  "num_layers": 5,
  "min_scale": 0.1171875,
  "max_scale": 0.75,
  "input_size_height": 256,
  "input_size_width": 256,
  "anchor_offset_x": 0.5,
  "anchor_offset_y": 0.5,
@JohanAR
JohanAR / PolynomialRegression.h
Last active January 20, 2020 14:46 — forked from chrisengelsma/PolynomialRegression.h
Polynomial regression in C++
#ifndef _POLYNOMIAL_REGRESSION_H
#define _POLYNOMIAL_REGRESSION_H __POLYNOMIAL_REGRESSION_H
/**
* PURPOSE:
*
* Polynomial Regression aims to fit a non-linear relationship to a set of
* points. It approximates this by solving a series of linear equations using
* a least-squares approach.
*
* We can model the expected value y as an nth degree polynomial, yielding the
* general polynomial regression model: y = a0 + a1*x + a2*x^2 + ... + an*x^n.
@sonots
sonots / preprocessor.pyx
Created October 13, 2017 15:54
A trick to embed C preprocessor directives in Cython
# A trick to embed C preprocessor directives in Cython: each "function"'s
# C name is a verbatim directive string, so calling it emits that directive
# into the generated C source (the trailing // comments out the "();").
cdef extern from *:
    cdef void EMIT_IF_PYTHON_VERSION_HEX_LT_37 "#if PY_VERSION_HEX < 0x03070000 //" ()
    cdef void EMIT_ELSE "#else //" ()
    cdef void EMIT_ENDIF "#endif //" ()

EMIT_IF_PYTHON_VERSION_HEX_LT_37()
# ... code compiled only when PY_VERSION_HEX < 0x03070000 ...
EMIT_ELSE()
# ... code compiled for Python >= 3.7 ...
EMIT_ENDIF()
@wassname
wassname / jaccard_coef_loss.py
Last active January 30, 2024 15:45
jaccard_coef_loss for Keras. This loss is useful when you have unbalanced classes within a sample, such as segmenting each pixel of an image. For example, you are trying to predict if each pixel is cat, dog, or background. You may have 80% background, 10% dog, and 10% cat. Should a model that predicts 100% background be 80% right, or 30%? Categor…
from keras import backend as K
def jaccard_distance_loss(y_true, y_pred, smooth=100):
    """
    Jaccard = |X & Y| / (|X| + |Y| - |X & Y|)
            = sum(|A*B|) / (sum(|A|) + sum(|B|) - sum(|A*B|))

    The Jaccard distance loss is useful for unbalanced datasets. It has been
    shifted so it converges on 0 and smoothed to avoid exploding or vanishing
    gradients.
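
The preview cuts off inside the docstring. A minimal completion of the body, following the formula above (a sketch consistent with the docstring, not necessarily the author's exact code):

from keras import backend as K

def jaccard_distance_loss(y_true, y_pred, smooth=100):
    # Soft intersection and union, summed over the class axis.
    intersection = K.sum(K.abs(y_true * y_pred), axis=-1)
    union = K.sum(K.abs(y_true) + K.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    # Shift so a perfect prediction scores 0, and scale by the smoothing term.
    return (1 - jac) * smooth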
@chrisengelsma
chrisengelsma / PolynomialRegression.h
Last active February 28, 2024 14:07
Polynomial Regression (Quadratic Fit) in C++
#ifndef _POLYNOMIAL_REGRESSION_H
#define _POLYNOMIAL_REGRESSION_H __POLYNOMIAL_REGRESSION_H
/**
* PURPOSE:
*
* Polynomial Regression aims to fit a non-linear relationship to a set of
* points. It approximates this by solving a series of linear equations using
* a least-squares approach.
*
* We can model the expected value y as an nth degree polynomial, yielding the
* general polynomial regression model: y = a0 + a1*x + a2*x^2 + ... + an*x^n.
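
For comparison, the same least-squares fit is a few lines of numpy. This sketch is not part of either gist; it just restates the method (a Vandermonde design matrix solved by linear least squares):

import numpy as np

def polyfit_least_squares(x, y, degree):
    # Design matrix: column j holds x**j, so X @ a evaluates the polynomial.
    X = np.vander(np.asarray(x, float), degree + 1, increasing=True)
    # Solve min ||X a - y||^2; lstsq is more stable than the normal equations.
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coeffs

# Recover a quadratic from noisy samples.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(scale=0.1, size=x.size)
print(polyfit_least_squares(x, y, 2))  # close to [1.0, 2.0, -0.5]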
@wsargent
wsargent / win10-dev.md
Last active March 21, 2024 04:27
Windows Development Environment for Scala

Windows 10 Development Environment for Scala

This is a guide to Scala and Java development on Windows, using the Windows Subsystem for Linux, although much of it also applies to a VirtualBox / Vagrant / Docker environment. It is not complete, but it aims to be as step-by-step as possible.

Harden Windows 10

Read the entire Decent Security guide, and follow the instructions, especially:

@wassname
wassname / dice_loss_for_keras.py
Created September 26, 2016 08:32
dice_loss_for_keras
"""
Here is a Dice loss for Keras which is smoothed to approximate a linear (L1) loss.
It ranges from 1 down to 0 (no error), and returns results similar to binary crossentropy.
"""
# define custom loss and metric functions
from keras import backend as K
def dice_coef(y_true, y_pred, smooth=1):
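
The preview ends at the signature. A body consistent with the docstring above (a sketch, not necessarily the original gist's exact code):

def dice_coef(y_true, y_pred, smooth=1):
    # Flatten so the overlap is computed over every pixel in the batch.
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def dice_coef_loss(y_true, y_pred):
    # 0 at a perfect match, approaching 1 when there is no overlap.
    return 1 - dice_coef(y_true, y_pred)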
@shagunsodhani
shagunsodhani / Batch Normalization.md
Last active July 25, 2023 18:07
Notes for "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift" paper

The Batch Normalization paper describes a method for addressing the various issues that arise when training deep neural networks. It makes normalization a part of the architecture itself and reports significant improvements in the number of iterations required to train the network.

Issues With Training Deep Neural Networks

Internal Covariate Shift

Covariate shift refers to a change in the input distribution to a learning system. In a deep network, the input to each layer is affected by the parameters of all the preceding layers, so even small changes to those parameters get amplified as they propagate through the network. The resulting change in the input distribution to the internal layers is known as internal covariate shift.

It is well established that networks converge faster if their inputs are whitened (i.e. zero mean, unit variance) and uncorrelated; internal covariate shift leads to just the opposite.
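
For reference, the core transform the paper proposes, as a minimal numpy sketch (training-mode batch statistics only; the paper also keeps running estimates for use at inference):

import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features). Normalize each feature over the mini-batch.
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Learned scale (gamma) and shift (beta) restore representational power.
    return gamma * x_hat + beta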

@kingjr
kingjr / interactive_mri.py
Created January 30, 2016 01:10
Plot slices of an MRI volume interactively
import numpy as np
import matplotlib.pyplot as plt
from nilearn.plotting.img_plotting import _load_anat
fname = '/home/jrking/nilearn_data/haxby2001/subj1/anat.nii.gz'
class MRI_viewer():
def __init__(self, fname):
# setup figure
fig, axes = plt.subplots(1, 3)
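
The preview stops at the figure setup. Below is a self-contained sketch of the same idea: three orthogonal slices of a volume, with scrolling to step through slices. The class name and behavior here are assumptions, not the original gist:

import numpy as np
import matplotlib.pyplot as plt

class VolumeViewer:
    """Show three orthogonal slices; scroll to step through the volume."""
    def __init__(self, vol):
        self.vol = vol
        self.idx = [s // 2 for s in vol.shape]
        self.fig, self.axes = plt.subplots(1, 3)
        self.fig.canvas.mpl_connect('scroll_event', self.on_scroll)
        self.draw()

    def draw(self):
        i, j, k = self.idx
        slices = (self.vol[i], self.vol[:, j], self.vol[:, :, k])
        for ax, sl in zip(self.axes, slices):
            ax.clear()
            ax.imshow(sl.T, cmap='gray', origin='lower')
        self.fig.canvas.draw_idle()

    def on_scroll(self, event):
        step = 1 if event.button == 'up' else -1
        self.idx = [min(max(n + step, 0), s - 1)
                    for n, s in zip(self.idx, self.vol.shape)]
        self.draw()

# VolumeViewer(np.random.rand(64, 64, 64)); plt.show()

With nibabel, the gist's anat.nii.gz could be loaded via nib.load(fname).get_fdata() and passed in as the volume.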
@baraldilorenzo
baraldilorenzo / readme.md
Last active June 13, 2024 03:07
VGG-16 pre-trained model for Keras

## VGG16 model for Keras

This is the Keras model of the 16-layer network used by the VGG team in the ILSVRC-2014 competition.

It has been obtained by directly converting the Caffe model provided by the authors.

Details about the network architecture can be found in the following arXiv paper:

Very Deep Convolutional Networks for Large-Scale Image Recognition

K. Simonyan, A. Zisserman
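
An equivalent pre-trained VGG16 now ships with keras.applications; a minimal way to load and run it (not part of this gist):

import numpy as np
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

model = VGG16(weights='imagenet')            # downloads the pre-trained weights
x = np.random.rand(1, 224, 224, 3) * 255.0   # stand-in for a real RGB image batch
preds = model.predict(preprocess_input(x))
print(decode_predictions(preds, top=3)[0])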