While learning SciPy's leastsq extension, I noticed that a 'u' prefix is used before Chinese strings. What is it for?
For example, these two lines of code:
print u'真实参数', [A, k, theta]
pl.plot(x, y1, label=u'噪声数据')
Why add the u prefix? Without it, the Chinese text raises an error!
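A minimal sketch of what the prefix does: in Python 2, a bare '...' literal is a byte string in the source file's encoding, while u'...' is a unicode object made of code points, so Chinese text must be unicode to print and render correctly. In Python 3 every string literal is already unicode, and the u prefix is accepted but redundant.

```python
# -*- coding: utf-8 -*-
# In Python 2, u'...' creates a unicode object (code points), while '...'
# is a raw byte string; mixing the two on Chinese text causes decode errors.
# In Python 3 all string literals are unicode, so the u prefix is redundant.
s = u'噪声数据'
print(type(s).__name__)   # 'unicode' on Python 2, 'str' on Python 3
print(len(s))             # 4 characters (code points), not a byte count
```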
Some of this code is quite interesting, so I just pushed it to GitHub to share with friends.

#include <stdio.h>

int main(int argc, char* argv[]) {
    int i = 0; /* loop counter */
    for (i = 0; i < 10000; i++) {
        printf("just try more...\n");
    }
    return 0;
}
Most of the time, I think you must think it through first, then just do it! That's what I really need... come on, man...
Try your best!
1. Use it when you know that your measurements of X are uncertain,
or when you don't want to weight the errors of one variable over the other.
2. What ODR is:
Orthogonal Distance Regression (ODR) is a method that can handle this ("orthogonal" in this context means perpendicular,
so it measures errors perpendicular to the fitted line, rather than just 'vertically').
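The two points above can be sketched with scipy.odr. This is a minimal toy fit, not from the original text: the line y = 2x + 1, noise added to both axes, and both uncertainties (sx, sy) passed to RealData so neither variable's errors dominate.

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

# Toy data: y = 2x + 1, with noise in BOTH x and y (the case ODR handles)
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
x_noisy = x + rng.normal(scale=0.1, size=x.size)
y_noisy = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

def linear(beta, x):
    # ODR model functions take the parameter vector first
    return beta[0] * x + beta[1]

# sx/sy give the uncertainties on each axis; ODR minimizes the
# perpendicular (orthogonal) distances to the fitted line
data = RealData(x_noisy, y_noisy, sx=0.1, sy=0.1)
result = ODR(data, Model(linear), beta0=[1.0, 0.0]).run()
print(result.beta)  # fitted [slope, intercept], close to [2, 1]
```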
The TODO items below are quoted from a Zhihu user (anonymous):
1. Take computing an ROC curve: a hand-written ROC script is very slow, while sklearn's is remarkably fast. Comparing the two shows that sklearn makes good use of numpy's vectorized features instead of iterating over lists.
2. In sklearn, KNN is implemented with a kdtree written directly in C (calling C/C++ from Python?).
3. Although quite a few common algorithms use Cython (e.g. decision tree), and SVM directly wraps libsvm, the guiding principle for later code is: implement in Python first, and only reach for Cython when Python is really too slow.
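To illustrate point 1, here is a small hypothetical example (the data and the TPR computation are my own, not sklearn's code): the same true-positive rates computed with a Python list comprehension versus a single numpy broadcast over all thresholds at once.

```python
import numpy as np

# Toy scores and binary labels (illustrative, not from the original text)
scores = np.array([0.1, 0.4, 0.35, 0.8])
labels = np.array([0, 0, 1, 1])
thresholds = sorted(set(scores.tolist()), reverse=True)

# List-based approach: loop over thresholds, loop over samples
tpr_loop = [sum(1 for s, l in zip(scores, labels) if s >= t and l == 1) / labels.sum()
            for t in thresholds]

# Vectorized approach: broadcast scores against every threshold at once
t = np.array(thresholds)
tpr_vec = ((scores >= t[:, None]) & (labels == 1)).sum(axis=1) / labels.sum()

print(np.allclose(tpr_loop, tpr_vec))  # → True: same result, no Python loop
```

On real-sized arrays the broadcast version avoids the per-element Python interpreter overhead, which is the effect the Zhihu note describes.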
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import make_circles

plt.xlabel("$x_1$")
plt.ylabel("$x_2$")
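Given the imports above, a minimal sketch of what they are presumably for (the sample count and gamma value are my assumptions): project the concentric-circles dataset with both linear PCA and RBF-kernel PCA.

```python
from sklearn.decomposition import PCA, KernelPCA
from sklearn.datasets import make_circles

# Two concentric circles: not linearly separable in the original space
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Linear PCA just rotates the data; an RBF-kernel PCA can unfold the circles
lin_pca = PCA(n_components=2).fit_transform(X)
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

print(lin_pca.shape, kpca.shape)  # both (400, 2)
```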
'''Load the training data'''
from sklearn.datasets import load_iris

iris = load_iris()
X, y = iris.data, iris.target
'''Shuffle the original training data'''
'''Get the row indices'''