
# Alrecenk

Last active Aug 29, 2015
A logistic regression algorithm for binary classification implemented using Newton's method and a Wolfe condition based inexact line-search.
View LogisticRegressionSimple.java
/* A logistic regression algorithm for binary classification implemented using Newton's method and
 * a Wolfe condition based inexact line-search.
 * Created by Alrecenk for inductivebias.com May 2014 */
public class LogisticRegressionSimple {
    double w[]; // the weights for the logistic regression
    int degree; // degree of polynomial used for preprocessing
    // preprocessed list of input/output used for calculating error and its gradients
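The Newton update this class is built around can be sketched in one dimension: w ← w − f′(w)/f″(w). The toy objective below is hypothetical, not the gist's error function; on a quadratic, the step lands on the minimum exactly.

```java
public class NewtonSketch {
    // One-dimensional Newton's method on the toy objective f(w) = (w - 2)^2.
    public static double newton(double w, int iters) {
        for (int i = 0; i < iters; i++) {
            double g = 2 * (w - 2); // first derivative of (w-2)^2
            double h = 2;           // second derivative (constant for a quadratic)
            w -= g / h;             // Newton step: w <- w - g/h
        }
        return w;
    }
}
```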
Created May 17, 2014
A simple initial guess for a logistic regression. P and N are the average inputs of the positives and negatives respectively.
View LogisticRegressionInitialGuess.java
 for(int k=0;k
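The P-and-N initial guess described above can be sketched as follows (a minimal standalone version; the class and method names are hypothetical, not the gist's code). The guess points from the negative-class mean toward the positive-class mean, so positives score higher than negatives.

```java
public class InitialGuessSketch {
    // Returns an initial weight vector proportional to P - N, where P and N are
    // the mean input vectors of the positive and negative examples respectively.
    public static double[] initialGuess(double[][] in, boolean[] out) {
        int dim = in[0].length;
        double[] p = new double[dim], n = new double[dim];
        int pc = 0, nc = 0;
        for (int k = 0; k < in.length; k++) {
            if (out[k]) { pc++; for (int j = 0; j < dim; j++) p[j] += in[k][j]; }
            else        { nc++; for (int j = 0; j < dim; j++) n[j] += in[k][j]; }
        }
        double[] w = new double[dim];
        for (int j = 0; j < dim; j++) w[j] = p[j] / pc - n[j] / nc;
        return w;
    }
}
```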
Last active Aug 29, 2015
Performs an inexact line-search for optimization. Finds a step size satisfying the Wolfe conditions by binary search.
View WolfeConditionInexactLineSearch.java
//Performs a binary search to satisfy the Wolfe conditions
//returns alpha where the next x should be x0 + alpha*d
//guarantees convergence as long as the search direction is bounded away from being orthogonal to the gradient
//x0 is the starting point, d is the search direction, alpha is the starting step size, maxit is max iterations
//c1 and c2 are the constants of the Wolfe conditions (0.1 and 0.9 can work)
public static double stepSize(OptimizationProblem problem, double[] x0, double[] d, double alpha, int maxit, double c1, double c2){
    //get error and gradient at starting point
    double fx0 = problem.error(x0);
    double gx0 = dot(problem.gradient(x0), d);
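A self-contained sketch of the binary search described above (the `Problem` interface here stands in for the gist's `OptimizationProblem`, which may differ): grow the step while the curvature condition fails, shrink it while sufficient decrease fails, and stop when both Wolfe conditions hold.

```java
public class WolfeSketch {
    interface Problem { double error(double[] x); double[] gradient(double[] x); }

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Binary search for a step size alpha satisfying the Wolfe conditions along d.
    public static double stepSize(Problem p, double[] x0, double[] d,
                                  double alpha, int maxit, double c1, double c2) {
        double fx0 = p.error(x0);
        double gx0 = dot(p.gradient(x0), d); // directional derivative at x0
        double min = 0, max = Double.POSITIVE_INFINITY;
        for (int it = 0; it < maxit; it++) {
            double[] x = new double[x0.length];
            for (int i = 0; i < x0.length; i++) x[i] = x0[i] + alpha * d[i];
            if (p.error(x) > fx0 + c1 * alpha * gx0) {
                max = alpha; // sufficient-decrease condition failed: step too long
            } else if (dot(p.gradient(x), d) < c2 * gx0) {
                min = alpha; // curvature condition failed: step too short
            } else {
                return alpha; // both Wolfe conditions hold
            }
            alpha = (max == Double.POSITIVE_INFINITY) ? 2 * alpha : 0.5 * (min + max);
        }
        return alpha;
    }
}
```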
Created May 20, 2014
Functions required for gradient descent to fit a Logistic Regression model.
//starting from w0, searches for a weight vector using gradient descent
//and Wolfe condition line-search until the gradient magnitude is below tolerance
//or a maximum number of iterations is reached
public double[] gradientDescent(double w0[], double tolerance, int maxiter){
    double w[] = w0;
    double gradient[] = gradient(w0);
    int iteration = 0;
    while(Math.sqrt(dot(gradient, gradient)) > tolerance && iteration < maxiter){
        iteration++;
        //calculate step-size in direction of negative gradient
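The descent loop above can be sketched end to end on a toy quadratic. This version uses a fixed step size in place of the Wolfe line search so it stays self-contained; the gradient function is a hypothetical stand-in for the logistic regression gradient.

```java
public class DescentSketch {
    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    // Gradient of the toy objective f(w) = (w0-3)^2 + (w1+1)^2, minimized at (3, -1).
    static double[] gradient(double[] w) {
        return new double[]{2 * (w[0] - 3), 2 * (w[1] + 1)};
    }

    // Descend until the gradient magnitude drops below tolerance or maxiter is hit.
    public static double[] gradientDescent(double[] w0, double tolerance, int maxiter) {
        double[] w = w0.clone();
        double[] g = gradient(w);
        int iteration = 0;
        while (Math.sqrt(dot(g, g)) > tolerance && iteration < maxiter) {
            iteration++;
            double alpha = 0.25; // fixed step; the gist computes this with the line search
            for (int i = 0; i < w.length; i++) w[i] -= alpha * g[i];
            g = gradient(w);
        }
        return w;
    }
}
```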
Created May 21, 2014
Hessian (second derivative) of error calculation for a logistic regression model.
View LogisticRegressionHessian.java
//returns the Hessian (gradient of gradient) of error with respect to weights
//for a logistic regression with weights w on the given input and output
//output should be in the form 0 for negative, 1 for positive
public double[][] hessian(double w[]){
    heval++; //keep track of how many times this has been called
    double h[][] = new double[w.length][];
    //second derivative matrices are always symmetric so we only need the triangular portion
    for(int j=0;j
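For reference, a standalone sketch of the Hessian for the standard logistic regression log-likelihood error, H = Σₖ pₖ(1−pₖ) xₖ xₖᵀ; the gist's error function may differ. As the gist's comment notes, only one triangle needs computing, with the other mirrored.

```java
public class HessianSketch {
    static double sigmoid(double z) { return 1.0 / (1.0 + Math.exp(-z)); }

    // Hessian of the negative log-likelihood: sum over examples of p*(1-p)*x*x^T.
    // Accumulates only the lower triangle and mirrors it, since the Hessian is symmetric.
    public static double[][] hessian(double[] w, double[][] in) {
        int n = w.length;
        double[][] h = new double[n][n];
        for (double[] x : in) {
            double wx = 0;
            for (int j = 0; j < n; j++) wx += w[j] * x[j];
            double p = sigmoid(wx);
            for (int j = 0; j < n; j++) {
                for (int k = 0; k <= j; k++) {
                    h[j][k] += p * (1 - p) * x[j] * x[k];
                    h[k][j] = h[j][k]; // mirror into the upper triangle
                }
            }
        }
        return h;
    }
}
```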
Last active Aug 29, 2015
Scales down an image by area-weighted averaging of all overlapping pixels. Equivalent to infinite supersampling.
View PerfectThumbnail.java
//scales an image, maintaining aspect ratio, to fit within the desired width and height
//averages color over squares weighted by overlap area to get "perfect" results when scaling down
//does not interpolate and is not designed for scaling up, only down (hence thumbnail)
public static BufferedImage makethumbnail(BufferedImage img, double desiredwidth, double desiredheight){
    //System.out.println("Original Image size: " + img.getWidth(null) + " x " + img.getHeight(null));
    if(img == null || img.getWidth(null) < 1 || img.getHeight(null) < 1){
        return null; //something wrong with image
    }else{
        byte image[][][] = convertimage(img); //convert to byte array, first index is x, then y, then {r,g,b}
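The area-weighted averaging idea can be sketched in one dimension (a hypothetical simplification of the gist, which works on 2-D color images): each output pixel averages the input pixels it overlaps, weighted by overlap length, which is exact box filtering.

```java
public class BoxScaleSketch {
    // Downscales a row of samples; each output sample is the overlap-weighted
    // average of the input samples its interval covers.
    public static double[] scaleRow(double[] src, int outLen) {
        double[] out = new double[outLen];
        double scale = (double) src.length / outLen; // input samples per output sample
        for (int o = 0; o < outLen; o++) {
            double start = o * scale, end = (o + 1) * scale, sum = 0;
            for (int i = (int) start; i < Math.ceil(end); i++) {
                double overlap = Math.min(end, i + 1) - Math.max(start, i);
                sum += src[i] * overlap;
            }
            out[o] = sum / scale; // the overlap weights sum to scale
        }
        return out;
    }
}
```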
Created Oct 21, 2015
LRU cache using Java generics.
View Cache.java
/* This is an efficient implementation of a fixed size cache using Java generics.
 * It uses a least recently used eviction policy implemented via a combination hashtable/linked list data structure.
 * Written by Alrecenk October 2015 because I felt like playing with generics. */
import java.util.HashMap;
import java.util.Random;

public class Cache{
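The same hashtable-plus-linked-list structure is what the standard library's `LinkedHashMap` provides out of the box; a fixed-size LRU cache can be sketched with it (the gist builds the combination by hand instead).

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A fixed-size LRU cache using LinkedHashMap in access order:
// every get/put moves the entry to the back, so the front is least recently used.
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        super(16, 0.75f, true); // accessOrder = true gives LRU iteration order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry when over capacity
    }
}
```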
Created Sep 8, 2013
Naive Bayes Example Calculation
View NaiveBayesExample.java
public double ProbabilityOfInputIfPositive(double in[]){
    double prob = 1 / Math.sqrt(2 * Math.PI);
    for(int j=0; j
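The quantity the method above computes is a class-conditional likelihood: under the naive (independence) assumption, a product of one normal density per input dimension. A standalone sketch, with means and standard deviations passed in as hypothetical placeholders:

```java
public class GaussianLikelihoodSketch {
    // Product of independent normal densities N(mean[j], std[j]^2), one per dimension.
    public static double probabilityOfInput(double[] in, double[] mean, double[] std) {
        double prob = 1;
        for (int j = 0; j < in.length; j++) {
            double z = (in[j] - mean[j]) / std[j];
            prob *= Math.exp(-0.5 * z * z) / (std[j] * Math.sqrt(2 * Math.PI));
        }
        return prob;
    }
}
```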
Last active Dec 22, 2015
Naive Bayes Classifier Construction
View NaiveBayesConstructor.java
//constructs a naive Bayes binary classifier
public NaiveBayes(double in[][], boolean out[]){
    int inputs = in.length;
    //initialize sums and sums of squares for each class
    double[] poss = new double[inputs], poss2 = new double[inputs];
    double[] negs = new double[inputs], negs2 = new double[inputs];
    //calculate amount of each class, sums, and sums of squares
    for(int k=0;k
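The sums and sums of squares accumulated above are enough to recover the per-dimension mean and standard deviation that Gaussian naive Bayes needs, via mean = Σx/n and variance = Σx²/n − mean². A sketch of that final step (names are hypothetical):

```java
public class MomentSketch {
    // Converts a running sum and sum of squares over n samples
    // into {mean, standard deviation} using population variance.
    public static double[] meanAndStd(double sum, double sum2, int n) {
        double mean = sum / n;
        double var = sum2 / n - mean * mean;
        return new double[]{mean, Math.sqrt(var)};
    }
}
```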
Created Sep 8, 2013
Calculate the output for a Naive Bayes classifier.
View NaiveBayesApplication.java
//Calculate the probability that the given input is in the positive class
public double probability(double in[]){
    double relativepositive = 0, relativenegative = 0;
    for(int j=0; j
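The final step the method above builds toward is Bayes' rule: combine each class's likelihood with its prior and normalize. A minimal sketch of that combination (standalone, with the likelihoods and priors passed in):

```java
public class NaiveBayesPosteriorSketch {
    // P(positive | input) = likePos*priorPos / (likePos*priorPos + likeNeg*priorNeg)
    public static double probabilityPositive(double likePos, double priorPos,
                                             double likeNeg, double priorNeg) {
        double relativepositive = likePos * priorPos;
        double relativenegative = likeNeg * priorNeg;
        return relativepositive / (relativepositive + relativenegative);
    }
}
```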