@egonSchiele
Created March 10, 2016 04:10
Non-linear classification example
% my training data.
% so if x > 3 && x <= 7, y = 1, otherwise y = 0.
x = 1:100;
y = [0, 0, 0, 1, 1, 1, 1, zeros(1, 93)];
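% quick sanity check on the labels (illustrative): the indices where
% y == 1 should be exactly 4 5 6 7, matching the rule above.
disp(find(y == 1));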
% instead of theta' * x, I'm trying to create
% a non-linear decision boundary.
% So instead of z = theta_0 + theta_1 * x inside the sigmoid, I use:
function result = h(x, theta)
  result = sigmoid(theta(1) + theta(2) * x + theta(3) * ((x - theta(4))^2));
end
function result = sigmoid(z)
  result = 1 / (1 + exp(-z));
end
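% sanity check (illustrative): sigmoid(0) is exactly 0.5, so h with
% all-zero parameters should return 0.5 for any x.
disp(sigmoid(0));
disp(h(42, [0, 0, 0, 0]));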
% cost function, works correctly
function distance = cost(theta)
  distance = 0;
  % fminunc only passes theta in, so the training data
  % is redeclared here instead of being shared globally.
  x = 1:100;
  y = [0, 0, 0, 1, 1, 1, 1, zeros(1, 93)];
  for i = 1:length(x) % arrays in octave are indexed starting at 1
    if (y(i) == 1)
      distance += -log(h(x(i), theta));
    else
      distance += -log(1 - h(x(i), theta));
    end
  end
  % get how far off we were on average
  distance = distance / length(x);
end
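% sanity check (illustrative): with theta all zeros, every h(x, theta)
% is 0.5, so the average cost should be -log(0.5) = log(2) ~= 0.6931.
disp(cost([0, 0, 0, 0]));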
% alpha, iters, and m would be used by hand-rolled gradient descent;
% fminunc below ignores them.
alpha = 1;
iters = 500;
m = length(x);
% initial values
theta = [0, 0, 0, 0];
% I'm not using gradient descent.
% Instead I use Octave's built-in function
% fminunc to find the optimal values of theta for me.
opt = fminunc(@cost, theta);
disp(opt);
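% For reference, this is roughly what the hand-rolled gradient descent
% that alpha and iters were presumably meant for would look like.
% It's a sketch only: with alpha = 1 and the unscaled quadratic feature,
% it would likely diverge, which is a good reason to prefer fminunc.
% The partials come from differentiating
% z = theta(1) + theta(2)*x + theta(3)*(x - theta(4))^2:
%
% theta_gd = [0, 0, 0, 0];
% for iter = 1:iters
%   grad = [0, 0, 0, 0];
%   for i = 1:m
%     err = h(x(i), theta_gd) - y(i);
%     grad(1) += err;
%     grad(2) += err * x(i);
%     grad(3) += err * (x(i) - theta_gd(4))^2;
%     grad(4) += err * (-2 * theta_gd(3) * (x(i) - theta_gd(4)));
%   end
%   theta_gd -= alpha * (grad / m);
% end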
% for each number between 1 and 100,
% print out the probability that y(i) = 1
for i = 1:100
disp([i, h(i, opt) * 100]);
end
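% to eyeball the fit (illustrative), plot the fitted probability curve
% over the raw labels; if the fit worked, the curve should spike over
% x = 4..7. arrayfun is needed because h isn't vectorized over x.
plot(x, arrayfun(@(v) h(v, opt), x), 'b-', x, y, 'ro');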