Alrecenk /
Last active Aug 29, 2015
A logistic regression algorithm for binary classification implemented using Newton's method and a Wolfe condition based inexact line-search.
/* A logistic regression algorithm for binary classification implemented using Newton's method and
 * a Wolfe condition based inexact line-search.
 * Created by Alrecenk, May 2014. */
public class LogisticRegressionSimple {
    double w[]; // the weights for the logistic regression
    int degree; // degree of the polynomial used for preprocessing
    // preprocessed list of inputs/outputs used for calculating the error and its gradients
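The snippet above cuts off before the model is used. The prediction step of any logistic regression, the sigmoid applied to a weighted sum, can be sketched as follows; the class and method names here are illustrative and not from the gist, and the bias is assumed to sit in the last slot of `w`, matching a convention the later initial-guess gist mentions:

```java
public class LogisticSketch {
    // logistic (sigmoid) function mapping any real value into (0, 1)
    static double sigmoid(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // predicted probability of the positive class for input x,
    // assuming the bias term is stored in the last slot of w
    static double predict(double[] w, double[] x) {
        double z = w[w.length - 1]; // bias
        for (int i = 0; i < x.length; i++) {
            z += w[i] * x[i];
        }
        return sigmoid(z);
    }

    public static void main(String[] args) {
        double[] w = {1.0, -2.0, 0.5}; // two weights plus a bias
        double[] x = {2.0, 1.0};
        System.out.println(predict(w, x)); // probability in (0, 1)
    }
}
```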
Alrecenk /
Created May 17, 2014
A simple initial guess for a logistic regression. P and N are the average inputs of the positives and negatives respectively.
for(int k = 0; k < pos.length; k++){
    pp += pos[k] * pos[k];
    pn += pos[k] * neg[k]; // calculate the relevant dot products
    nn += neg[k] * neg[k];
}
// assuming w = alpha * (pos - neg) with b in the last w slot,
// solve for alpha and b so that pos returns 0.75 and neg returns 0.25
double alphab[] = lineIntersection(pp - pn, 1, sinv(0.75), pn - nn, 1, sinv(0.25));
w = new double[input[0].length];
for(int k = 0; k < w.length - 1; k++){
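The snippet calls two helpers that aren't shown: `sinv` and `lineIntersection`. Assuming `sinv` is the inverse of the sigmoid (the logit) and `lineIntersection` intersects two lines given in the form a·x + b·y = c, they might look like this; both implementations are my reconstruction, not the gist's code:

```java
public class InitialGuessHelpers {
    // inverse of the logistic sigmoid (the logit):
    // if s(z) = 1/(1+e^-z), then sinv(p) = ln(p/(1-p))
    static double sinv(double p) {
        return Math.log(p / (1.0 - p));
    }

    // intersection of the lines a1*x + b1*y = c1 and a2*x + b2*y = c2,
    // solved by Cramer's rule; returns {x, y}
    static double[] lineIntersection(double a1, double b1, double c1,
                                     double a2, double b2, double c2) {
        double det = a1 * b2 - a2 * b1;
        return new double[]{(c1 * b2 - c2 * b1) / det,
                            (a1 * c2 - a2 * c1) / det};
    }
}
```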
Alrecenk /
Last active Aug 29, 2015
Performs an inexact line-search for optimization. Finds a step-size satisfying the Wolfe conditions by binary search.
//Performs a binary search to satisfy the Wolfe conditions
//returns alpha where the next x should be x0 + alpha*d
//guarantees convergence as long as search direction is bounded away from being orthogonal with gradient
//x0 is starting point, d is search direction, alpha is starting step size, maxit is max iterations
//c1 and c2 are the constants of the Wolfe conditions (0.1 and 0.9 can work)
public static double stepSize(OptimizationProblem problem, double[] x0, double[] d, double alpha, int maxit, double c1, double c2){
    //get error and gradient at the starting point
    double fx0 = problem.error(x0);
    double gx0 = dot(problem.gradient(x0), d);
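The gist stops after evaluating the starting point. A self-contained sketch of the binary-search body the description promises is below; the nested `OptimizationProblem` interface and `dot` helper are assumptions matching the names the snippet uses, and the bracketing details are my reconstruction, not the gist's exact code:

```java
public class WolfeSearch {
    interface OptimizationProblem {
        double error(double[] x);
        double[] gradient(double[] x);
    }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // binary search for a step size alpha satisfying the Wolfe conditions
    static double stepSize(OptimizationProblem problem, double[] x0, double[] d,
                           double alpha, int maxit, double c1, double c2) {
        double fx0 = problem.error(x0);
        double gx0 = dot(problem.gradient(x0), d); // directional derivative at x0
        double low = 0, high = -1; // high < 0 means no upper bound found yet
        for (int it = 0; it < maxit; it++) {
            double[] x = new double[x0.length];
            for (int i = 0; i < x.length; i++) x[i] = x0[i] + alpha * d[i];
            if (problem.error(x) > fx0 + c1 * alpha * gx0) {
                // sufficient-decrease (Armijo) condition failed: step too long
                high = alpha;
                alpha = 0.5 * (low + high);
            } else if (dot(problem.gradient(x), d) < c2 * gx0) {
                // curvature condition failed: step too short
                low = alpha;
                alpha = high < 0 ? 2 * alpha : 0.5 * (low + high);
            } else {
                return alpha; // both Wolfe conditions hold
            }
        }
        return alpha;
    }
}
```

On a 1-D quadratic f(x) = x² starting at x = 1 with d = -f'(1) = {-2}, the first trial step overshoots, the bracket halves, and the search returns alpha = 0.5, landing exactly on the minimum.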
Alrecenk /
Created May 20, 2014
Functions required for gradient descent to fit a Logistic Regression model.
//starting from w0 searches for a weight vector using gradient descent
//and Wolfe condition line-search until the gradient magnitude is below tolerance
//or a maximum number of iterations is reached.
public double[] gradientDescent(double w0[], double tolerance, int maxiter){
    double w[] = w0;
    double gradient[] = gradient(w0);
    int iteration = 0;
    while(Math.sqrt(dot(gradient, gradient)) > tolerance && iteration < maxiter){
        iteration++;
        //calculate step-size in the direction of the negative gradient
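The loop body is cut off here. To show the overall shape, here is a self-contained sketch of the same descent loop with a stand-in quadratic objective and a fixed step size where the gist calls its Wolfe line search; everything in this block is illustrative, not the gist's code:

```java
public class DescentSketch {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // gradient of a stand-in objective f(w) = sum_i (w_i - 1)^2;
    // the gist's gradient comes from the logistic error instead
    static double[] gradient(double[] w) {
        double[] g = new double[w.length];
        for (int i = 0; i < w.length; i++) g[i] = 2 * (w[i] - 1);
        return g;
    }

    // descend until the gradient magnitude falls below tolerance
    static double[] gradientDescent(double[] w0, double tolerance, int maxiter) {
        double[] w = w0.clone();
        double[] gradient = gradient(w);
        int iteration = 0;
        while (Math.sqrt(dot(gradient, gradient)) > tolerance && iteration < maxiter) {
            iteration++;
            double alpha = 0.25; // fixed step for the sketch; the gist uses the Wolfe line search here
            for (int i = 0; i < w.length; i++) w[i] -= alpha * gradient[i];
            gradient = gradient(w); // recompute for the loop condition
        }
        return w;
    }
}
```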
Alrecenk /
Created May 21, 2014
Hessian (second derivative) of error calculation for a logistic regression model.
//returns the Hessian (gradient of the gradient) of the error with respect to the weights
//for a logistic regression with weights w on the given input and output
//output should be in the form 0 for negative, 1 for positive
public double[][] hessian(double w[]){
    heval++; //keep track of how many times this has been called
    double h[][] = new double[w.length][];
    //second derivative matrices are always symmetric, so we only need the triangular portion
    for(int j = 0; j < h.length; j++){
        h[j] = new double[j+1];
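For logistic regression the Hessian of the error has the closed form H = Σₖ pₖ(1−pₖ) xₖxₖᵀ, where pₖ is the model's predicted probability for example k. A self-contained sketch of that formula, keeping the gist's lower-triangular storage, is below; passing the inputs as a parameter (the gist holds them as a field) and the sigmoid helper are my assumptions:

```java
public class HessianSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // lower-triangular Hessian of the logistic error: H = sum_k p_k (1 - p_k) x_k x_k^T
    static double[][] hessian(double[] w, double[][] input) {
        double[][] h = new double[w.length][];
        for (int j = 0; j < h.length; j++) {
            h[j] = new double[j + 1]; // symmetric, so only the triangular portion
        }
        for (double[] x : input) {
            double z = 0;
            for (int i = 0; i < w.length; i++) z += w[i] * x[i];
            double p = sigmoid(z);
            double s = p * (1 - p); // curvature weight contributed by this example
            for (int j = 0; j < w.length; j++) {
                for (int i = 0; i <= j; i++) {
                    h[j][i] += s * x[j] * x[i];
                }
            }
        }
        return h;
    }
}
```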
Alrecenk /
Last active Aug 29, 2015
Scales down an image by an area weighted averaging of all overlapping pixels. Equivalent to infinite supersampling.
import java.awt.image.BufferedImage;

//scales an image, maintaining aspect ratio, to fit within the desired width and height
//averages color over squares weighted by overlap area to get "perfect" results when scaling down
//does not interpolate and is not designed for scaling up, only down (hence thumbnail)
public static BufferedImage makethumbnail(BufferedImage img, double desiredwidth, double desiredheight){
    //System.out.println("Original Image size: " + img.getWidth(null) + " x " + img.getHeight(null));
    if(img == null || img.getWidth(null) < 1 || img.getHeight(null) < 1){
        return null; // something is wrong with the image
    }
    byte image[][][] = convertimage(img); // convert to a byte array indexed by x, then y, then {r,g,b}
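The core idea, that each output pixel averages the source pixels it overlaps, weighted by overlap area, can be illustrated in one dimension. This simplified grayscale routine is not from the gist; it is a minimal sketch of the same averaging scheme:

```java
public class AreaAverageSketch {
    // downscale a 1-D row of samples to newLength by area-weighted averaging;
    // each output cell covers a span of scale = src.length / newLength input cells
    static double[] downscale(double[] src, int newLength) {
        double scale = (double) src.length / newLength;
        double[] out = new double[newLength];
        for (int o = 0; o < newLength; o++) {
            double start = o * scale, end = (o + 1) * scale;
            double sum = 0;
            for (int i = (int) start; i < end && i < src.length; i++) {
                // overlap of source cell [i, i+1) with the output span [start, end)
                double overlap = Math.min(i + 1, end) - Math.max(i, start);
                sum += src[i] * overlap;
            }
            out[o] = sum / scale; // the overlaps always sum to scale
        }
        return out;
    }
}
```

Because every source sample's full area is accounted for exactly once, no detail is dropped or double-counted, which is what the gist means by being equivalent to infinite supersampling.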
Alrecenk /
Created Oct 21, 2015
LRU Cache using java generics.
/* This is an efficient implementation of a fixed-size cache using Java generics.
 * It uses a least-recently-used eviction policy implemented via a combined hashtable/linked-list data structure.
 * Written by Alrecenk, October 2015, because I felt like playing with generics. */
import java.util.HashMap;
import java.util.Random;

public class Cache<KeyType, RecordType>{
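The gist builds its own hashtable-plus-linked-list structure; as a reference point, the standard library offers the same eviction policy in a few lines via `LinkedHashMap` in access-order mode, so a minimal equivalent (not the gist's implementation) looks like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        // accessOrder = true keeps iteration order least-recently-used first
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the LRU entry once over capacity
    }
}
```

The hand-rolled version in the gist trades this brevity for explicit control over the nodes, but both give O(1) get, put, and eviction.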
Alrecenk /
Created Sep 8, 2013
Naive Bayes Example Calculation
//probability density of the input under the positive class's independent Gaussians
public double ProbabilityOfInputIfPositive(double in[]){
    double prob = 1;
    for(int j = 0; j < in.length; j++){
        //each feature contributes a univariate Gaussian density; note the per-feature 1/sqrt(2*pi*variance) normalization
        prob *= Math.exp(-(in[j]-posmean[j])*(in[j]-posmean[j]) / (2*posvariance[j])) / Math.sqrt(2 * Math.PI * posvariance[j]);
    }
    return prob;
}
Alrecenk /
Last active Dec 22, 2015
Naive Bayes Classifier Construction
//constructs a naive Bayes binary classifier
public NaiveBayes(double in[][], boolean out[]){
    int inputs = in[0].length;
    //initialize sums and sums of squares for each class
    double[] poss = new double[inputs], poss2 = new double[inputs];
    double[] negs = new double[inputs], negs2 = new double[inputs];
    //calculate the amount of each class, the sums, and the sums of squares
    for(int k = 0; k < in.length; k++){ //for each data point
        if(out[k]){ //positive example
            positives++; //keep track of total positives
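The constructor is cut off before the statistics are finished. The remainder presumably turns those sums and sums of squares into per-feature means and variances via var = E[x²] − E[x]². A self-contained sketch of that computation for one class, under that assumption and with illustrative names, is:

```java
public class GaussianStats {
    // per-feature mean and variance of the rows of `in` flagged positive in `out`
    static double[][] meanAndVariance(double[][] in, boolean[] out) {
        int inputs = in[0].length;
        double[] sum = new double[inputs], sum2 = new double[inputs];
        int count = 0;
        for (int k = 0; k < in.length; k++) {
            if (!out[k]) continue;
            count++;
            for (int j = 0; j < inputs; j++) {
                sum[j] += in[k][j];
                sum2[j] += in[k][j] * in[k][j];
            }
        }
        double[] mean = new double[inputs], variance = new double[inputs];
        for (int j = 0; j < inputs; j++) {
            mean[j] = sum[j] / count;
            variance[j] = sum2[j] / count - mean[j] * mean[j]; // E[x^2] - E[x]^2
        }
        return new double[][]{mean, variance};
    }
}
```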
Alrecenk /
Created Sep 8, 2013
Calculate the output for a Naive Bayes classifier.
//Calculate the probability that the given input is in the positive class
public double probability(double in[]){
    double relativepositive = 0, relativenegative = 0;
    for(int j = 0; j < in.length; j++){
        //accumulate the squared distance scaled by variance, plus a log-variance term from the Gaussian normalization
        relativepositive += (in[j]-posmean[j])*(in[j]-posmean[j]) / posvariance[j] + Math.log(posvariance[j]);
        relativenegative += (in[j]-negmean[j])*(in[j]-negmean[j]) / negvariance[j] + Math.log(negvariance[j]);
    }
    //note the negative sign in the exponent: larger distance means lower likelihood
    relativepositive = positives * Math.exp(-0.5*relativepositive);
    relativenegative = negatives * Math.exp(-0.5*relativenegative);
    return relativepositive / (relativepositive + relativenegative);
}
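Reduced to a single feature, the same posterior can be checked numerically. This standalone helper mirrors the calculation above with class counts acting as priors; the names and the one-feature restriction are mine, for illustration only:

```java
public class PosteriorSketch {
    // posterior probability of the positive class for one Gaussian feature per class,
    // with class counts positives/negatives acting as unnormalized priors
    static double probability(double x, double posMean, double posVar, int positives,
                              double negMean, double negVar, int negatives) {
        double dp = (x - posMean) * (x - posMean) / posVar + Math.log(posVar);
        double dn = (x - negMean) * (x - negMean) / negVar + Math.log(negVar);
        double rp = positives * Math.exp(-0.5 * dp);
        double rn = negatives * Math.exp(-0.5 * dn);
        return rp / (rp + rn);
    }
}
```

With equal variances, equal class counts, and an input exactly midway between the two class means, the posterior comes out to 0.5, as symmetry demands.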