@marcelcaraciolo
Logistic prediction
from numpy import e, log, reshape, transpose, zeros

def sigmoid(X):
    '''Compute the sigmoid function'''
    den = 1.0 + e ** (-1.0 * X)
    d = 1.0 / den
    return d

def compute_cost(theta, X, y):
    '''Compute the logistic regression cost for the given parameters and training data'''
    m = X.shape[0]  # number of training examples
    theta = reshape(theta, (len(theta), 1))
    J = (1. / m) * (-transpose(y).dot(log(sigmoid(X.dot(theta))))
                    - transpose(1 - y).dot(log(1 - sigmoid(X.dot(theta)))))
    grad = transpose((1. / m) * transpose(sigmoid(X.dot(theta)) - y).dot(X))
    # optimize.fmin expects a single scalar, so grad cannot be returned as well
    return J[0][0]  # , grad
def compute_grad(theta, X, y):
    '''Compute the gradient of the cost with respect to theta'''
    m = X.shape[0]  # number of training examples
    theta.shape = (1, 3)
    grad = zeros(3)
    h = sigmoid(X.dot(theta.T))
    delta = h - y
    for i in range(grad.size):
        # average of (h - y) * x_i over the training examples
        sumdelta = delta.T.dot(X[:, i])
        grad[i] = (1.0 / m) * sumdelta
    theta.shape = (3,)
    return grad
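
A minimal usage sketch (my own addition, not part of the gist): it assumes X is an (m, 2) feature matrix and y an (m, 1) column of 0/1 labels already loaded; the names X_train and initial_theta are illustrative.

from numpy import append, ones, zeros
from scipy import optimize

# X assumed to be an (m, 2) feature matrix, y an (m, 1) column of 0/1 labels
X_train = append(ones((X.shape[0], 1)), X, axis=1)  # prepend the intercept column
initial_theta = zeros(3)                            # one parameter per column of X_train
# fmin only needs the scalar cost returned by compute_cost
theta_opt = optimize.fmin(compute_cost, initial_theta, args=(X_train, y))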
@waylonflinn

Here are some slightly simplified versions. I modified grad to be a little more vectorized, and I took out the negatives in the cost function and gradient.

import numpy

def sigmoid(X):
    return 1 / (1 + numpy.exp(-X))

def cost(theta, X, y):
    p_1 = sigmoid(numpy.dot(X, theta))  # predicted probability of label 1
    log_l = (-y) * numpy.log(p_1) - (1 - y) * numpy.log(1 - p_1)  # negative log-likelihood for each example

    return log_l.mean()

def grad(theta, X, y):
    p_1 = sigmoid(numpy.dot(X, theta))
    error = p_1 - y  # difference between prediction and label
    grad = numpy.dot(error, X) / y.size  # gradient vector, averaged over examples

    return grad

Here's how I ran them:

import scipy.optimize as opt

# prefix an extra column of ones to the feature matrix (for the intercept term)
X_1 = numpy.append(numpy.ones((X.shape[0], 1)), X, axis=1)
theta = 0.1 * numpy.random.randn(3)

theta_1 = opt.fmin_bfgs(cost, theta, fprime=grad, args=(X_1, y))

Some initial values of theta cause it to fail to converge. Just run it again.
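
If that happens, a simple workaround (my own sketch, not part of the original comment) is to retry with a fresh random theta until the optimizer reports success; it reuses cost, grad, X_1, y and the opt alias from above.

# retry with a new random starting point until fmin_bfgs converges (warnflag == 0)
for attempt in range(10):
    theta = 0.1 * numpy.random.randn(3)
    result = opt.fmin_bfgs(cost, theta, fprime=grad, args=(X_1, y),
                           full_output=True, disp=False)
    theta_1, warnflag = result[0], result[6]
    if warnflag == 0:  # 0 means the optimizer converged
        break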

@marcelcaraciolo
Author

Hi @waylonflinn, I solved the problem by updating the cost function.

@cipri-tom

Hi all,

Why does the gradient have to be scaled by y.size?
Thank you!

@graffaner


@cipri-tom I think it is scaled by y.size to take the average, since the error is summed over all training examples.
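
To make that concrete, here is a small sketch of my own (not from the thread), reusing error = p_1 - y and X as in waylonflinn's grad above:

# two equivalent ways of writing the averaged gradient, given error = p_1 - y
grad_mean = numpy.dot(error, X) / y.size   # vectorized: sum over examples, then divide by m
grad_loop = numpy.zeros(X.shape[1])
for i in range(y.size):                    # explicit per-example sum
    grad_loop += error[i] * X[i, :]
grad_loop /= y.size                        # same result as grad_mean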

@vinipachecov


I think your code is not converging to the minimum; when I tested it, it did not converge at all. Check the tutorial linked below and I think it will make sense. Your full code (not this gist) converges to the initial parameter vector, which is (0, 0, 0). Try using fmin_tnc instead of fmin_bfgs.
Btw, excellent work.

http://www.johnwittenauer.net/machine-learning-exercises-in-python-part-3/
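
A minimal sketch of that fmin_tnc call (my own addition, assuming the cost, grad, X_1 and y defined in waylonflinn's comment above):

import numpy
import scipy.optimize as opt

theta0 = numpy.zeros(X_1.shape[1])  # start from the zero vector
result = opt.fmin_tnc(func=cost, x0=theta0, fprime=grad, args=(X_1, y))
theta_tnc = result[0]  # fitted parameter vector; result[2] is the solver's return code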
