@lixingcong
Last active November 19, 2016 06:09
《统计学习方法》例2.1 (Example 2.1 from Li Hang's "Statistical Learning Methods": the perceptron learning algorithm)
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Time-stamp: < 1.py 2016-11-19 14:09:19 >
"""
对应《统计学习方法》李航课本的第2章“感知机”的例题2.1
"""
# An example from the book; the training set and the parameter sizes are fixed.
training_set = [[(3, 3), 1], [(4, 3), 1], [(1, 1), -1]]  # each sample: (features, label); the appended +1/-1 is the sample's correct output
w = [0, 0]
b = 0
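# The perceptron model is f(x) = sign(w . x + b). A sample (x, y) is
# misclassified when y * (w . x + b) <= 0; stochastic gradient descent then
# updates w <- w + eta * y * x and b <- b + eta * y (eta = 1 throughout).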
# update parameters using stochastic gradient descent (learning rate eta = 1)
def update(item):
    global w, b
    w[0] = w[0] + 1 * item[1] * item[0][0]
    w[1] = w[1] + 1 * item[1] * item[0][1]
    b = b + 1 * item[1]
    print(w, b)  # prints each step of stochastic gradient descent; comment out to silence
# calculate the functional margin between 'item' and the decision surface
def cal(item):
    res = 0
    for i in range(len(item[0])):
        res += item[0][i] * w[i]
    res += b
    res *= item[1]  # multiply by the label: positive iff 'item' is classified correctly
    return res
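# For example, with the final w = [1, 1] and b = -3, cal([(3, 3), 1]) returns
# (1*3 + 1*3 - 3) * 1 = 3 > 0, so that sample is classified correctly.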
# check whether the hyperplane classifies all the examples correctly
def check():
    is_need_to_check_next_time = False
    for item in training_set:
        if cal(item) <= 0:
            is_need_to_check_next_time = True
            update(item)
    if not is_need_to_check_next_time:
        print("RESULT: w: " + str(w) + " b: " + str(b))
    return is_need_to_check_next_time
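# Each call to check() is one full pass (epoch) over training_set, updating on
# every misclassified sample it meets; it returns True while updates are still
# being made, i.e. while another pass is needed.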
if __name__ == "__main__":
    check_flag = True
    for i in range(1000):  # at most 1000 passes over the data
        print(i)
        check_flag = check()
        if not check_flag:
            break
    # if we never converged, the data are not linearly separable
    if check_flag:
        print("The training_set is not linearly separable.")