@Vikrant79
Created January 31, 2016 09:34
All Models
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"------------------------------------------------------Coding Challenge - 1 -----------------------------------------------------\n",
"\n",
"----------------------------------------------------------All Models------------------------------------------------------------\n",
"\n",
"--------------------------------------------------(Prepared by Vikrant Kumar)---------------------------------------------------\n",
"\n",
"About the challenge - This data science challenge is a regression problem: a \"target\" value must be predicted for each of the 1,000 records in the test data set. The training set consists of 5,000 records with 254 features plus 1 target.\n",
"\n",
"Data files -\n",
"Training file (codetest_train.txt) - 5,000 records x 254 features + 1 target\n",
"Test file (codetest_test.txt) - 1,000 records x 254 features\n",
"\n",
"Observations - No feature descriptions are given, so feature importance cannot be estimated just by inspecting the data. There are a few missing values in both the training and test data sets. Four categorical features appear in both data sets -\n",
"f_61 = single lowercase character (a, b, c, d, e)\n",
"f_121 = single uppercase character (A, B, C, D, E, F)\n",
"f_215 = color (red, orange, yellow, blue)\n",
"f_237 = country (USA, Mexico, Canada)\n",
"\n",
"Language and tools used - The script is written primarily in Python (scikit-learn); gretl (an open-source tool) was also used moderately to inspect feature importance and to build models."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 1 - Import important libraries\n",
"import pandas as pd\n",
"import numpy as np\n",
"from sklearn import ensemble, preprocessing, cross_validation\n",
"from sklearn.preprocessing import Imputer\n",
"from pandas import Series, DataFrame\n",
"from sklearn import linear_model\n",
"from sklearn.ensemble import RandomForestRegressor\n",
"from sklearn import svm\n",
"from sklearn.linear_model import Ridge\n",
"from sklearn.svm import LinearSVC\n",
"from sklearn.cross_validation import cross_val_score, ShuffleSplit\n",
"from sklearn.metrics import mean_squared_error"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 2 - The text files were converted to CSV files (drag and drop the text file into Excel). The step below creates a pandas \n",
"#data frame for both the training and test data sets from their CSV files.\n",
"train = pd.read_csv('codetest_train.csv')\n",
"test = pd.read_csv('codetest_test.csv')"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 3 - Convert all 4 categorical features to dummy variables. This creates an extra column in the data frame for each \n",
"#categorical value in each of the 4 features. At the same time the original feature column is dropped to avoid \n",
"#multicollinearity.\n",
"train = pd.concat([train, pd.get_dummies(train['f_61'])], axis=1)\n",
"train = train.drop('f_61', 1)\n",
"test = pd.concat([test, pd.get_dummies(test['f_61'])], axis=1)\n",
"test = test.drop('f_61', 1)\n",
"train = pd.concat([train, pd.get_dummies(train['f_121'])], axis=1)\n",
"train = train.drop('f_121', 1)\n",
"test = pd.concat([test, pd.get_dummies(test['f_121'])], axis=1)\n",
"test = test.drop('f_121', 1)\n",
"train = pd.concat([train, pd.get_dummies(train['f_215'])], axis=1)\n",
"train = train.drop('f_215', 1)\n",
"test = pd.concat([test, pd.get_dummies(test['f_215'])], axis=1)\n",
"test = test.drop('f_215', 1)\n",
"train = pd.concat([train, pd.get_dummies(train['f_237'])], axis=1)\n",
"train = train.drop('f_237', 1)\n",
"test = pd.concat([test, pd.get_dummies(test['f_237'])], axis=1)\n",
"test = test.drop('f_237', 1)"
]
},
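{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note - The four nearly identical dummy-encoding blocks above can be collapsed into one loop. The sketch below is left commented out (like the optional steps elsewhere in this notebook) because Step 3 has already been run; it performs the same transformation over the four categorical features listed in the observations."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Sketch - the same dummy encoding as Step 3, written as a loop over the four categorical features\n",
"#def encode_dummies(df, columns):\n",
"#    for col in columns:\n",
"#        df = pd.concat([df, pd.get_dummies(df[col])], axis=1).drop(col, axis=1)\n",
"#    return df\n",
"#train = encode_dummies(train, ['f_61', 'f_121', 'f_215', 'f_237'])\n",
"#test = encode_dummies(test, ['f_61', 'f_121', 'f_215', 'f_237'])"
]
},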
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Dropping one dummy variable from each categorical feature usually improves model accuracy because it reduces multicollinearity.\n",
"#In this case, however, dropping these columns did not improve accuracy, so this step was skipped and the \n",
"#columns were kept.\n",
"\n",
"#train = train.drop(['a','A','red','Canada'],axis=1)\n",
"#test = test.drop(['a','A','red','Canada'],axis=1)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 4\n",
"#This step replaces all null values with the median of each column. I chose \"median\", but \"mean\" could also be \n",
"#used; the choice has very little impact on model accuracy in this case.\n",
"imp = Imputer(missing_values='NaN', strategy='median', axis=0)\n",
"train_imputed = imp.fit_transform(train)\n",
"test_imputed = imp.fit_transform(test)\n",
"train = pd.DataFrame(train_imputed, columns = train.columns)\n",
"test = pd.DataFrame(test_imputed, columns = test.columns)"
]
},
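{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note - In Step 4 above, imp.fit_transform(test) fills the test set's gaps with the test set's own medians. A common alternative is to reuse the medians learned from the training data; the sketch below (not what was run here) shows that variant. fillna matches columns by name, so the extra target column is simply ignored."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Sketch - fill test-set gaps with the training-set medians instead of the test set's own medians\n",
"#medians = train.drop('target', axis=1).median()\n",
"#test = test.fillna(medians)"
]
},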
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Check point - Saving the resulting data frames into CSV files. These files were used in gretl (an open-source tool) for \n",
"#feature selection and model building. This step is not required by this script.\n",
"\n",
"#train.to_csv('train.csv')\n",
"#test.to_csv('test.csv')"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 5\n",
"#Separating the predictor columns (X) from the target column (Y)\n",
"X = train.drop('target',1); Y = train.target"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#Step 6\n",
"#Breaking the training data set into two parts for cross validation\n",
"X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, Y, test_size=0.3, random_state=4)"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.601505341549\n",
"Test Score(R2) 0.595468262972\n",
"Training Score(mse) 10.854581573\n",
"Test Score(mse) 11.5586547661\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -12.2032 (+/- 2.2822)\n",
"Accuracy (r2) 10 fold CV: 0.5570 (+/- 0.0545)\n"
]
}
],
"source": [
"#Step 7\n",
"# Model - 1\n",
"#Using Linear Regression to cross validate the model\n",
"clf = linear_model.LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=12)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
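{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note - The metric block in the cell above is repeated verbatim for every model that follows. It can be factored into a single helper; the sketch below returns the four holdout scores as a dict, and the 10-fold CV lines could be folded in the same way."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Sketch - one helper for the holdout metrics that every model cell below recomputes\n",
"def evaluate(clf, X_train, X_test, y_train, y_test):\n",
"    clf.fit(X_train, y_train)\n",
"    return {'train_r2': clf.score(X_train, y_train),\n",
"            'test_r2': clf.score(X_test, y_test),\n",
"            'train_mse': mean_squared_error(y_train, clf.predict(X_train)),\n",
"            'test_mse': mean_squared_error(y_test, clf.predict(X_test))}"
]
},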
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.583660659004\n",
"Test Score(R2) 0.575746530755\n",
"Training Score(mse) 11.3406522347\n",
"Test Score(mse) 12.1221623311\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -12.7333 (+/- 2.2217)\n",
"Accuracy (r2) 10 fold CV: 0.5452 (+/- 0.0435)\n"
]
}
],
"source": [
"#Step 7\n",
"# Model - 2\n",
"#Using SGD (Stochastic gradient descent) Regression to cross validate the model.\n",
"clf = linear_model.SGDRegressor(loss='squared_loss', penalty='l2', alpha=0.001, l1_ratio=0.15, fit_intercept=True, n_iter=10, \n",
" shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, \n",
" power_t=0.25, warm_start=False, average=False)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.93088628402\n",
"Test Score(R2) 0.53680512941\n",
"Training Score(mse) 1.88258600714\n",
"Test Score(mse) 13.2348320503\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -12.2734 (+/- 3.2455)\n",
"Accuracy (r2) 10 fold CV: 0.5561 (+/- 0.0652)\n"
]
}
],
"source": [
"#Step 7\n",
"#Model - 3\n",
"#Using Random Forest Regression to cross validate the model. The model has a good training score but a poor test score; \n",
"#it clearly seems to be overfitting.\n",
"clf = RandomForestRegressor(random_state=8, n_estimators=28,max_depth=36,n_jobs=5) \n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.525796851027\n",
"Test Score(R2) 0.421659460018\n",
"Training Score(mse) 12.9168024051\n",
"Test Score(mse) 16.5248805644\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -15.3351 (+/- 3.0895)\n",
"Accuracy (r2) 10 fold CV: 0.4446 (+/- 0.0378)\n"
]
}
],
"source": [
"#Step 7\n",
"#Model - 4\n",
"#Using SVM (Support Vector Machine) Regression to cross validate the model.\n",
"clf = svm.SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='auto',\n",
" kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
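{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note - RBF-kernel SVR is sensitive to feature scale, and the features here are used unscaled. Standardizing first may improve the score of Model 4; the commented-out sketch below (not run in the original) uses the preprocessing module already imported in Step 1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Sketch - standardize features before RBF-kernel SVR (scale is learned on the training part only)\n",
"#scaler = preprocessing.StandardScaler().fit(X_train)\n",
"#clf = svm.SVR(kernel='rbf', C=1.0)\n",
"#clf.fit(scaler.transform(X_train), y_train)\n",
"#print \"Scaled Test Score(R2)\", clf.score(scaler.transform(X_test), y_test)"
]
},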
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.599762698333\n",
"Test Score(R2) 0.60487905\n",
"Training Score(mse) 10.9020493684\n",
"Test Score(mse) 11.2897610592\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -11.9677 (+/- 2.2751)\n",
"Accuracy (r2) 10 fold CV: 0.5656 (+/- 0.0533)\n"
]
}
],
"source": [
"#Step 7\n",
"# Model - 5\n",
"#Using Lasso (uses L1 regularization and automatically eliminates unimportant features) to cross validate the model\n",
"clf = linear_model.Lasso(alpha=0.01, copy_X=True, fit_intercept=True, max_iter=1000,\n",
" normalize=False, positive=False, precompute=False, random_state=5,\n",
" selection='cyclic', tol=0.0001, warm_start=False)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.592000213275\n",
"Test Score(R2) 0.616223986858\n",
"Training Score(mse) 11.1134914179\n",
"Test Score(mse) 10.9656030353\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -11.6176 (+/- 2.3170)\n",
"Accuracy (r2) 10 fold CV: 0.5786 (+/- 0.0509)\n"
]
}
],
"source": [
"#Step 7\n",
"#Model - 6\n",
"#Using LassoLars (uses L1 regularization and automatically eliminates unimportant features) to cross validate the model\n",
"clf = linear_model.LassoLars(alpha=0.0006, copy_X=True, fit_intercept=True,\n",
" fit_path=True, max_iter=500, normalize=True, precompute='auto',\n",
" verbose=False)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.60150358588\n",
"Test Score(R2) 0.595582970919\n",
"Training Score(mse) 10.8546293956\n",
"Test Score(mse) 11.5553772246\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -12.2016 (+/- 2.2817)\n",
"Accuracy (r2) 10 fold CV: 0.5571 (+/- 0.0545)\n"
]
}
],
"source": [
"#Step 7\n",
"#Model - 7\n",
"#Using Ridge (uses L2 regularization, which shrinks coefficients but does not eliminate features outright) to cross validate the model\n",
"clf = Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None, normalize=False, solver='auto', tol=0.001)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.597156630535\n",
"Test Score(R2) 0.594162685502\n",
"Training Score(mse) 10.9730359548\n",
"Test Score(mse) 11.5959589326\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -12.1935 (+/- 2.3299)\n",
"Accuracy (r2) 10 fold CV: 0.5576 (+/- 0.0512)\n"
]
}
],
"source": [
"#Step 7\n",
"# Model - 8\n",
"#Using BayesianRidge (L2-style regularization with the regularization strength estimated from the data) to cross validate the model\n",
"clf = linear_model.BayesianRidge(alpha_1=1e-06, alpha_2=1e-06, compute_score=False, copy_X=True,\n",
" fit_intercept=True, lambda_1=1e-06, lambda_2=1e-06, n_iter=300,\n",
" normalize=False, tol=0.001, verbose=False)\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy Metrics for holdout cross validation\n",
"Training Score(R2) 0.594623029698\n",
"Test Score(R2) 0.60889585419\n",
"Training Score(mse) 11.0420486163\n",
"Test Score(mse) 11.1749892165\n",
"Accuracy Metrics for 10 fold cross validation\n",
"Accuracy (mse) 10 fold CV: -11.8419 (+/- 2.3486)\n",
"Accuracy (r2) 10 fold CV: 0.5705 (+/- 0.0506)\n"
]
}
],
"source": [
"#Step 7\n",
"# Model - 9\n",
"#Using ElasticNet (combines L1 and L2 regularization; the L1 part automatically eliminates unimportant features) to cross validate the model\n",
"clf = linear_model.ElasticNet(alpha=0.04, l1_ratio=0.5, fit_intercept=True, normalize=False, precompute=False, max_iter=1000, \n",
" copy_X=True, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')\n",
"\n",
"#Fit model to the training part\n",
"clf.fit(X_train, y_train)\n",
"\n",
"#Generate accuracy metrics (R2) for holdout cross validation\n",
"print \"Accuracy Metrics for holdout cross validation\"\n",
"print \"Training Score(R2)\", clf.score(X_train, y_train)\n",
"print \"Test Score(R2)\", clf.score(X_test, y_test)\n",
"\n",
"#Generate accuracy metrics (mse) for simple cross validation\n",
"y_pred_train = clf.predict(X_train)\n",
"y_pred_validation = clf.predict(X_test)\n",
"print \"Training Score(mse)\", mean_squared_error(y_train, y_pred_train)\n",
"print \"Test Score(mse)\", mean_squared_error(y_test, y_pred_validation)\n",
"\n",
"#Generate accuracy metrics (r 2 and mse) for 10 fold cross validation\n",
"print \"Accuracy Metrics for 10 fold cross validation\"\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='mean_squared_error')\n",
"print(\"Accuracy (mse) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))\n",
"scores = cross_validation.cross_val_score(clf, X, Y, cv=10, scoring='r2')\n",
"print(\"Accuracy (r2) 10 fold CV: %0.4f (+/- %0.4f)\" % (scores.mean(), scores.std() * 2))"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 8\n",
"#Apply the chosen model (clf still holds the last model fitted above) to the whole training data\n",
"clf.fit(X,Y)\n",
"\n",
"#Step 9\n",
"#Making predictions on the test data\n",
"target_prediction = clf.predict(test)"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"#Step 10\n",
"#Creating CSV file for submission\n",
"np.savetxt(\"Test_Result_Vikrant_Kumar.csv\", target_prediction, delimiter=\",\")"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"#Step 11\n",
"#Creating TXT file for submission\n",
"np.savetxt(\"Test_Result_Vikrant_Kumar.txt\", target_prediction, delimiter=\",\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.11"
}
},
"nbformat": 4,
"nbformat_minor": 0
}