@songron
Last active August 25, 2018 08:11
Codes for Machine Learning Foundations (NTU): code and programming assignments for National Taiwan University's "Machine Learning Foundations" course (Coursera edition).
#!/usr/bin/env python
#coding=utf8
""" DATA: https://d396qusza40orc.cloudfront.net/ntumlone%2Fhw1%2Fhw1_15_train.dat
Each line of the data set contains one (xn,yn) with xn∈R4. T
he first 4 numbers of the line contains the components of xn orderly, the last number is yn.
Please initialize your algorithm with w=0 and take sign(0) as −1
Question 15:
Implement a version of PLA by visiting examples in the naive cycle using the
order of examples in the data set. Run the algorithm on the data set.
What is the number of updates before the algorithm halts?
Question 16:
Implement a version of PLA by visiting examples in fixed, pre-determined random
cycles throughout the algorithm. Run the algorithm on the data set. Please repeat
your experiment for 2000 times, each with a different random seed. What is the average
number of updates before the algorithm halts?
Question 17:
Implement a version of PLA by visiting examples in fixed, pre-determined random cycles
throughout the algorithm, while changing the update rule to be:
w_{t+1} = w_t + alpha * y_{n(t)} * x_{n(t)}
with alpha = 0.5. Note that your PLA in the previous question corresponds to alpha = 1.
Please repeat your experiment for 2000 times, each with a different random seed.
What is the average number of updates before the algorithm halts?
"""
import random

from numpy import array, inner, zeros

DATA_FILE = 'ntumlone_hw1_hw1_15_train.dat'


def sign(x):
    # Per the problem statement, sign(0) is taken as -1.
    if x <= 0:
        return -1
    return 1


def load_data(infile):
    X = []
    Y = []
    with open(infile) as f:
        for line in f:
            recs = line.split()
            # Prepend the constant 1 as the bias coordinate x0.
            x = [1] + [float(v) for v in recs[:-1]]
            X.append(tuple(x))
            Y.append(int(recs[-1]))
    return array(X), array(Y)


def train(X, Y, rand=False, alpha=1):
    n = len(Y)
    d = len(X[0])
    W = zeros(d)
    idx = list(range(n))
    if rand:
        idx = random.sample(idx, n)  # a fixed, pre-determined random cycle
    t = 0        # number of updates so far
    k = 0        # position within the current cycle
    flag = True  # True while the current pass has seen no mistakes
    while True:
        if k == n:
            if flag:
                break  # a full pass with no mistakes: PLA has halted
            k = 0
            flag = True
        i = idx[k]
        if sign(inner(X[i], W)) != Y[i]:
            flag = False
            t += 1
            W = W + alpha * Y[i] * X[i]  # PLA update: w <- w + alpha * y * x
        k += 1
    return t


def naive_cycle():
    X, Y = load_data(DATA_FILE)
    t = train(X, Y)
    print(t)


def predefined_random(n, alpha=1):
    X, Y = load_data(DATA_FILE)
    count = 0
    for i in range(n):
        print(i)  # progress
        t = train(X, Y, rand=True, alpha=alpha)
        count += t
    print(count / n)


def main():
    # question 15
    # naive_cycle()  # 45
    # question 16
    # predefined_random(2000)  # 40
    # question 17
    predefined_random(2000, alpha=0.5)  # 40


if __name__ == '__main__':
    main()
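The PLA update rule used by the script can be seen on a single toy point. The numbers below are invented for illustration and are not from the homework data; only the update rule and the sign(0) = -1 convention come from the problem statement.

```python
import numpy as np

def sign(v):
    return -1 if v <= 0 else 1  # sign(0) taken as -1, as the homework requires

# Toy point; x0 = 1 is the bias coordinate prepended by load_data.
x = np.array([1.0, 0.5, -0.2, 0.3, 0.1])
y = 1                # true label
w = np.zeros(5)      # PLA starts from w = 0
alpha = 0.5          # scaled update of Question 17 (alpha = 1 is plain PLA)

if sign(np.inner(w, x)) != y:   # sign(0) = -1 != 1: the point is misclassified
    w = w + alpha * y * x       # update rule: w <- w + alpha * y * x

print(w)                         # w is now 0.5 * x
print(sign(np.inner(w, x)) == y) # True: the point is now classified correctly
```

One update already fixes this single point; on the full data set the cycle repeats until an entire pass produces no mistakes.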
#!/usr/bin/env python
#coding=utf8
"""
Train DATA: https://d396qusza40orc.cloudfront.net/ntumlone%2Fhw1%2Fhw1_18_train.dat
Test DATA: https://d396qusza40orc.cloudfront.net/ntumlone%2Fhw1%2Fhw1_18_test.dat
Question 18:
Use the test data above as the test set for "verifying" the g returned by your algorithm
(see Lecture 4 about verifying). The sets are of the same format as the previous one.
Run the pocket algorithm with a total of 50 updates on D, and verify the performance of w using the test set.
Please repeat your experiment for 2000 times, each with a different random seed.
What is the average error rate on the test set?
Question 19:
Modify your algorithm in Question 18 to return w50 (the PLA vector after 50 updates) instead of
Wg (the pocket vector) after 50 updates. Run the modified algorithm on D, and verify the performance
using the test set. Please repeat your experiment for 2000 times, each with a different random seed.
What is the average error rate on the test set?
Question 20:
Modify your algorithm in Question 18 to run for 100 updates instead of 50, and verify the performance
of wPOCKET using the test set. Please repeat your experiment for 2000 times, each with a different random seed.
What is the average error rate on the test set?
"""
import random

from numpy import array, inner, zeros

TRAIN_FILE = 'pocket_hw1_18_train.dat'
TEST_FILE = 'pocket_hw1_18_test.dat'


def sign(x):
    # Per the problem statement, sign(0) is taken as -1.
    if x <= 0:
        return -1
    return 1


def load_data(infile):
    X = []
    Y = []
    with open(infile) as f:
        for line in f:
            recs = line.split()
            # Prepend the constant 1 as the bias coordinate x0.
            x = [1] + [float(v) for v in recs[:-1]]
            X.append(tuple(x))
            Y.append(int(recs[-1]))
    return array(X), array(Y)


def test(X, Y, W):
    n = len(Y)
    ne = sum(1 for i in range(n) if sign(inner(X[i], W)) != Y[i])
    error = ne / float(n)
    return error


def train(X, Y, updates=50, pocket=True):
    n = len(Y)
    d = len(X[0])
    W = zeros(d)
    Wg = W  # the "pocket" vector: best weights seen so far
    error = test(X, Y, Wg)
    for j in range(updates):
        idx = random.sample(range(n), n)
        for i in idx:
            if sign(inner(X[i], W)) != Y[i]:
                W = W + Y[i] * X[i]
                e = test(X, Y, W)
                if e < error:  # pocket W if it beats the best error so far
                    error = e
                    Wg = W
                break  # one PLA update per pass over a fresh random order
    if pocket:
        return Wg
    else:
        return W


def main():
    X, Y = load_data(TRAIN_FILE)
    TX, TY = load_data(TEST_FILE)
    error = 0
    n = 200  # the questions ask for 2000 repetitions; 200 keeps runtime down
    for i in range(n):
        # question 18, output: 0.1377
        # W = train(X, Y, updates=50)
        # question 19, output: 0.3559
        # W = train(X, Y, updates=50, pocket=False)
        # question 20, output: 0.1141
        W = train(X, Y, updates=100, pocket=True)
        error += test(TX, TY, W)
    print(error / n)


if __name__ == '__main__':
    main()
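The pocket step in `train` above reduces to a single comparison: keep the candidate weights only if their in-sample error beats the best seen so far. A minimal sketch on toy data (the two points and the `error_rate` helper here are invented for illustration):

```python
import numpy as np

def error_rate(X, Y, w):
    """Fraction of points misclassified by w (sign(0) treated as -1)."""
    preds = np.where(np.inner(X, w) > 0, 1, -1)
    return float(np.mean(preds != Y))

# Toy data: two points with a leading bias coordinate, labels +1 / -1.
X = np.array([[1.0, 2.0], [1.0, -1.0]])
Y = np.array([1, -1])

w_pocket = np.zeros(2)             # current best ("pocket") vector
best = error_rate(X, Y, w_pocket)  # zeros misclassify the +1 point: 0.5

w_new = np.array([0.0, 1.0])       # a candidate after some PLA update
e = error_rate(X, Y, w_new)        # classifies both points correctly: 0.0
if e < best:                       # the pocket comparison
    best, w_pocket = e, w_new

print(best)  # 0.0
```

Question 19 skips this comparison and simply returns the final PLA vector, which is why its average test error is much worse.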
Course URL: https://class.coursera.org/ntumlone-001/