{
"cells": [
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "<center>\n <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/Logos/organization_logo/organization_logo.png\" width=\"300\" alt=\"cognitiveclass.ai logo\" />\n</center>\n\n<h1 align=\"center\"><font size=\"5\">Classification with Python</font></h1>\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "In this notebook we try to practice all the classification algorithms that we learned in this course.\n\nWe load a dataset using Pandas library, and apply the following algorithms, and find the best one for this specific dataset by accuracy evaluation methods.\n\nLets first load required libraries:\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "import itertools\nimport pylab as pl\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.ticker import NullFormatter\nimport scipy.optimize as opt\nimport pandas as pd\nimport numpy as np\nimport matplotlib.ticker as ticker\nfrom sklearn import preprocessing\n%matplotlib inline",
"execution_count": 1,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### About dataset\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "This dataset is about past loans. The **Loan_train.csv** data set includes details of 346 customers whose loan are already paid off or defaulted. It includes following fields:\n\n| Field | Description |\n| -------------- | ------------------------------------------------------------------------------------- |\n| Loan_status | Whether a loan is paid off on in collection |\n| Principal | Basic principal loan amount at the |\n| Terms | Origination terms which can be weekly (7 days), biweekly, and monthly payoff schedule |\n| Effective_date | When the loan got originated and took effects |\n| Due_date | Since it\u2019s one-time payoff schedule, each loan has one single due date |\n| Age | Age of applicant |\n| Education | Education of applicant |\n| Gender | The gender of applicant |\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Lets download the dataset\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "!wget -O loan_train.csv https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-ML0101EN-SkillsNetwork/labs/FinalModule_Coursera/data/loan_train.csv",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### Load Data From CSV File\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df = pd.read_csv('loan_train.csv')\ndf.head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "df.shape",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### Convert to date time object\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df['due_date'] = pd.to_datetime(df['due_date'])\ndf['effective_date'] = pd.to_datetime(df['effective_date'])\ndf.head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "# Data visualization and pre-processing\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Let\u2019s see how many of each class is in our data set \n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df['loan_status'].value_counts()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "260 people have paid off the loan on time while 86 have gone into collection \n"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "Lets plot some columns to underestand data better:\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# notice: installing seaborn might takes a few minutes\n!conda install -c anaconda seaborn -y",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "import seaborn as sns\n\nbins = np.linspace(df.Principal.min(), df.Principal.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'Principal', bins=bins, ec=\"k\")\n\ng.axes[-1].legend()\nplt.show()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "bins = np.linspace(df.age.min(), df.age.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'age', bins=bins, ec=\"k\")\n\ng.axes[-1].legend()\nplt.show()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "# Pre-processing: Feature selection/extraction\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### Lets look at the day of the week people get the loan\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df['dayofweek'] = df['effective_date'].dt.dayofweek\nbins = np.linspace(df.dayofweek.min(), df.dayofweek.max(), 10)\ng = sns.FacetGrid(df, col=\"Gender\", hue=\"loan_status\", palette=\"Set1\", col_wrap=2)\ng.map(plt.hist, 'dayofweek', bins=bins, ec=\"k\")\ng.axes[-1].legend()\nplt.show()\n",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "We see that people who get the loan at the end of the week dont pay it off, so lets use Feature binarization to set a threshold values less then day 4 \n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df['weekend'] = df['dayofweek'].apply(lambda x: 1 if (x>3) else 0)\ndf.head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "## Convert Categorical features to numerical values\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Lets look at gender:\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df.groupby(['Gender'])['loan_status'].value_counts(normalize=True)",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "86 % of female pay there loans while only 73 % of males pay there loan\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Lets convert male to 0 and female to 1:\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df['Gender'].replace(to_replace=['male','female'], value=[0,1],inplace=True)\ndf.head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "## One Hot Encoding\n\n#### How about education?\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df.groupby(['education'])['loan_status'].value_counts(normalize=True)",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "#### Feature befor One Hot Encoding\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "df[['Principal','terms','age','Gender','education']].head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "#### Use one hot encoding technique to conver categorical varables to binary variables and append them to the feature Data Frame\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "Feature = df[['Principal','terms','age','Gender','weekend']]\nFeature = pd.concat([Feature,pd.get_dummies(df['education'])], axis=1)\nFeature.drop(['Master or Above'], axis = 1,inplace=True)\nFeature.head()\n",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### Feature selection\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Lets defind feature sets, X:\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "X = Feature\nX[0:5]",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "What are our lables?\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "y = df['loan_status'].values\ny[0:5]",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "## Normalize Data\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Data Standardization give data zero mean and unit variance (technically should be done after train test split )\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "code",
"source": "X= preprocessing.StandardScaler().fit(X).transform(X)\nX[0:5]",
"execution_count": null,
"outputs": []
},
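{
"metadata": {},
"cell_type": "markdown",
"source": "A minimal sketch of the leakage-free ordering mentioned above: split first, fit the scaler on the training portion only, then apply the same transform to both splits. (The split parameters here are illustrative and match the ones used later in this notebook.)\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# Sketch: fit the scaler on the training split only (split parameters are illustrative)\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.preprocessing import StandardScaler\n\nX_tr, X_te, y_tr, y_te = train_test_split(Feature, y, test_size=0.2, random_state=4)\nscaler = StandardScaler().fit(X_tr)   # learn mean/variance from training data only\nX_tr = scaler.transform(X_tr)\nX_te = scaler.transform(X_te)         # apply the same transform to the held-out split",
"execution_count": null,
"outputs": []
},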
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "# Classification\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "Now, it is your turn, use the training set to build an accurate model. Then use the test set to report the accuracy of the model\nYou should use the following algorithm:\n\n- K Nearest Neighbor(KNN)\n- Decision Tree\n- Support Vector Machine\n- Logistic Regression\n\n** Notice:** \n\n- You can go above and change the pre-processing, feature selection, feature-extraction, and so on, to make a better model.\n- You should use either scikit-learn, Scipy or Numpy libraries for developing the classification algorithms.\n- You should include the code of the algorithm in the following cells.\n"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# K Nearest Neighbor(KNN)\n\nNotice: You should find the best k to build the model with the best accuracy. \n**warning:** You should not use the **loan_test.csv** for finding the best k, however, you can split your train_loan.csv into train and test to find the best **k**.\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "#Getting the Data\ndf=pd.read.csv('loan_train.csv')\ndf.head()\ndf.shape\n\n#Convert to Data time object\ndf['due_date']=pd.to_datetime(df['due_date'])\ndf['effective_date']=pd.to_datetime(df['effective_date'])\nf.head()\n\n#Data visualization/processing\ndf[loan_status].value_counts()\n\n#preprocessing\ndf['weekend']=df['day of week'].apply(lamda x:1 if (x>3) else 0)\n\n#Convert categorical features to numerical values\ndf.groupby['Gender']['loan_status'].value_counts(normalize=True)\n\n#Convert male 0 female 1\ndf['Gender'].replace(to_replace)=['Male', 'female'],value=[0,1], inplace=True)\ndf.head()\n\n#Convert education\ndf.groupby['education']['loan_status'].value_counts(normalize=True)\n\n#Encoder\ndf[['principal', 'terms', 'age', 'Gender', 'weekend']].head()\n\n#Convert category variables to binary\n\nFeature=df[['principal', 'terms', 'age', 'Gender', 'weekend']]\nFeature=pd.concat[Feature,dp.get_dummies(df['education'])], axis=1)\nfeature=drop(['Master' or 'above'] axis=1, inplace=True)\nFeature.head()\n\n#Feature selection\nx=Feature\nx[0:5]\n\n#labels\ny=df['loan_status'].values\ny[0:5]\n\n#standardization\nx=process.standardScaler().fit(x).transform(x)\nx[0:5]\n",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "#train-test-split\nfrom sklearn.model_selection import train_test_split\nx_train,X_test, y_train, y_test=train_test_split(x, y, test_size=0.2, random_state=4 )\nprint('Train set:', x_train.shape, y_train.shape)\nprint('Test set:', x_test.shape, y_test.shape)",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "#CLASSIFICATION\nfrom sklearn.neighbors import KNeighborsClassifier\nK=4\nneigh=KNeighborsClassifier(n_neighbors=k.fit(x_train, y_train))\nneigh\n\n#predict\nyhat=neigh.predict(x_test)\nyhat[0:5]\n",
"execution_count": 15,
"outputs": [
{
"output_type": "error",
"ename": "NameError",
"evalue": "name 'k' is not defined",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-15-edeec85ec240>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 2\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mneighbors\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mKNeighborsClassifier\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0mK\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;36m4\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 4\u001b[0;31m \u001b[0mneigh\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mKNeighborsClassifier\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mn_neighbors\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mk\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx_train\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my_train\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 5\u001b[0m \u001b[0mneigh\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 6\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'k' is not defined"
]
}
]
},
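{
"metadata": {},
"cell_type": "markdown",
"source": "To address the best-k requirement above, one simple approach is to sweep k over a small range and keep the value with the highest accuracy on the held-out split. A sketch (the range 1-9 is an arbitrary choice):\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# Sketch: search for the best k on the held-out split (k range is illustrative)\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn import metrics\n\nKs = 10\nmean_acc = np.zeros(Ks - 1)\nfor n in range(1, Ks):\n    model = KNeighborsClassifier(n_neighbors=n).fit(x_train, y_train)\n    mean_acc[n - 1] = metrics.accuracy_score(y_test, model.predict(x_test))\n\nprint('Best accuracy', mean_acc.max(), 'at k =', mean_acc.argmax() + 1)",
"execution_count": null,
"outputs": []
},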
{
"metadata": {},
"cell_type": "markdown",
"source": "# Decision Tree\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "#Get Data\ndf=pd.read.csv('loan_train.csv')\ndf.head()\ndf.shape\n\n#convert to data time object\ndf['due_date']=pd.to_datetime(df['due_date'])\ndf['effective_date']=pd.to_datetime(['effective_date'])\ndf.head()\n\n#Data visualization\ndf['loan_status'].value_counts()\n\n#processing\ndf['weekend']=df['day of week'].apply(lamda x:1 if (x>3) else 0)\n\n#convert category fetures to numerical\ndf.groupby['Gender']['loan_status'].value_counts(normalize=True)\n\n#convert male 0 Female 1\ndf['Gender'].replace(to_replace=['male', 'female'], value=[0,1], inplace=True)\ndf.head()\n\n#convert education\ndf.groupby(['education'])['loan_status'].value_counts(normalize=True)\n\n#Encoder\ndf[['principal', 'terms', 'age', 'Gender', 'weekend']].head()\n\n# convert category variables to binary\nFeature=df[['principal', 'terms', 'age', 'Gender', 'weekend']]\nFeature=pd.concat[Feature,dp.get_dummies(df['education'])], axis=1)\nFeature.drop(['Master' or 'above'], axis=1, inplace=True)\nFeature.head()\n\n#Feature selection\nx=Feature\nx[0:5]\n\n#labels\ny=df['loan_status'].values\ny[0:5]\n\n# standardization\nx=processing standardScaler.fit(x).transform(x)\nx[0:5]\n",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "#train/split\nfrom sklearn.model_selection import train_test_split\nfrom sklearn import preprocessing\nx_train, x_test, y_train,y_test= train_test_split(x, y, test_size=0.3\n , random_state=3\nx= processing.StandardScaler().fit(x).transform(x) ",
"execution_count": 20,
"outputs": [
{
"output_type": "error",
"ename": "SyntaxError",
"evalue": "invalid syntax (<ipython-input-20-ca62d01419f7>, line 6)",
"traceback": [
"\u001b[0;36m File \u001b[0;32m\"<ipython-input-20-ca62d01419f7>\"\u001b[0;36m, line \u001b[0;32m6\u001b[0m\n\u001b[0;31m x= processing.StandardScaler().fit(x).transform(x)\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n"
]
}
]
},
{
"metadata": {},
"cell_type": "code",
"source": "#modeling\nfrom sklearn.tree import DecisionTreeClassifier\nLoan_status=DecisionTreeClassifier(criterion=\"entropy\", max_depth=4)\n\n#predict\npredTree=Loan_status.predict(x)\nprint(predTree[0:5])\nprint(y[0:5])",
"execution_count": 21,
"outputs": [
{
"output_type": "error",
"ename": "NameError",
"evalue": "name 'x' is not defined",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-21-b5c4c30a760a>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;31m#predict\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 6\u001b[0;31m \u001b[0mpredTree\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mLoan_status\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mpredict\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 7\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mpredTree\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 8\u001b[0m \u001b[0mprint\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;36m5\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'x' is not defined"
]
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Support Vector Machine\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "#get data\ndf=pd.read.csv('loan_train.csv')\ndf.head()\ndf.shape\n\n#convert to data time object\ndf['due_date']=pd.to_datetime(['due_date'])\ndf['effective_date']=pd.to_datetime(['effective_date'])\ndf.head()\n\n#Data visualization\ndf['loan_status']. value_counts()\n\n#preprocessing\ndf['weekend']=df['day of week'].apply(lamda x:1 if (x>3) or else 0)\n\n#convert category features to numerical\ndf.groupby['Gender']['Loan_status'].value_counts(normalize=True)\n\n#convert male 1 female 1\ndf['Gender']. replace(to_replace=['Male', 'Female'], value=[0,1], inplace=True)\ndf.head()\n\n#convert education\ndf.groupby(['education'])['loan_status'].value_counts(normalize=True)\n\n#Encoder\ndf[['principal', 'terms', 'age', 'Gender', 'weekend']]. head()\n\n#Convert category variables to binary\nFeature=df[['principal', 'terms', 'age', 'Gender', 'weekend']]\nFeature=pd.concat[Feature, dp.get_dummies(df['education'])], axis=1)\nFeature.drop(['Master'or 'above'], axis=1, inplace=True)\nFeature.head()\n\n# Feature selection\nx=Feature\nx[0:5]\n\n#labels\ny=df['loan_status'].values\ny[0:5]\n\n#standardization\nx=processing.standardScaler().fit(x).transform(x)\nx[0:5]\n \n\n",
"execution_count": 9,
"outputs": [
{
"output_type": "error",
"ename": "SyntaxError",
"evalue": "invalid syntax (<ipython-input-9-94135fd239c5>, line 15)",
"traceback": [
"\u001b[0;36m File \u001b[0;32m\"<ipython-input-9-94135fd239c5>\"\u001b[0;36m, line \u001b[0;32m15\u001b[0m\n\u001b[0;31m df['weekend']=df['day of week'].apply(lamda x:1 if (x>3) or else 0)\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n"
]
}
]
},
{
"metadata": {},
"cell_type": "code",
"source": "# TRAIN/test\nfrom sklearn.model_selection import train_test_split\nx_train, x_test, y_train, Y_test=train_test_split(x, y, test_size=0.2, random_state=4)\nprint('Train set:', x_train.shape, y_train.shape)\nprint('Test set:', x_test.shape, y_test.shape) ",
"execution_count": 10,
"outputs": [
{
"output_type": "error",
"ename": "SyntaxError",
"evalue": "invalid syntax (<ipython-input-10-72396a380b7f>, line 3)",
"traceback": [
"\u001b[0;36m File \u001b[0;32m\"<ipython-input-10-72396a380b7f>\"\u001b[0;36m, line \u001b[0;32m3\u001b[0m\n\u001b[0;31m x_train, x_test, y_train, , Y_test=train_test_split(x, y, test_size=0.2, random_state=4)\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n"
]
}
]
},
{
"metadata": {},
"cell_type": "code",
"source": "# Modeling\nfrom sklearn import svm\nfrom sklearn import preprocessing\nclf=svm.SVC(kernel=\"rbf\")\nclf.fit(x, y)\nx =processing.StandardScaler().fit(x).transform(x)\n#predict\nyhat=clf.predict(x)\nyhat[0:5]",
"execution_count": 23,
"outputs": [
{
"output_type": "error",
"ename": "NameError",
"evalue": "name 'x' is not defined",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mNameError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-23-412acdb03a59>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 3\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mpreprocessing\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0mclf\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0msvm\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSVC\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mkernel\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"rbf\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 5\u001b[0;31m \u001b[0mclf\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0my\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 6\u001b[0m \u001b[0mx\u001b[0m \u001b[0;34m=\u001b[0m\u001b[0mprocessing\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mStandardScaler\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mfit\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mtransform\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mx\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 7\u001b[0m \u001b[0;31m#predict\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mNameError\u001b[0m: name 'x' is not defined"
]
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Logistic Regression\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# get data \ndf=pd.read.csv('loan_train.csv')\ndf.head()\ndf.shape\n\n# convert to data time object\n\ndf['due_date']=pd. to_datetime(['due_date'])\ndf['effctive_date']=pd. to_datetime(df['effective_date'])\ndf.head()\n\n# data visualization\ndf['loan_status'].values_counts()\n\n#preprocessing\ndf['weekend']=df['day of week']. apply(lamda x: 1 if (x>3) or else 0)\n\n#convert category features to numerical\ndf.groupby['Gender']['loan_status'].value_counts(normalize=True)\n\n#convert male 0 female 1\ndf['Gender']. replace(to_replace=['Male', 'Female'], value=[0,1]), inplace=True)\ndf.head()\n\n#convert education\ndf.groupby(['education'])['loan_status'].value_counts(normalize=True)\n\n#encoder\ndf[['principal', 'terms', 'age', 'Gender', 'weekend']].head()\n\n#Convert category variables to binary\nFeature=df[['principal', 'terms', 'age', 'Gender', 'weekend']]\nFeature=pd.concat[Feature, dp.get_dummies(df['education'])], axis=1)\nFeature.drop(['Master' or 'above']), axis=1, inplace=True\nFeature.head()\n\n#Feature selection\nx=Feature\nx[0:5]\n\n#labels\ny=df['loan_status'].values\ny[0:5]\n\n#standardization\nx=processing.standardScaler().fit(x).transform(x)\nx[0:5]\n\n",
"execution_count": 7,
"outputs": [
{
"output_type": "error",
"ename": "SyntaxError",
"evalue": "invalid syntax (<ipython-input-7-6c8f0a4d84b9>, line 16)",
"traceback": [
"\u001b[0;36m File \u001b[0;32m\"<ipython-input-7-6c8f0a4d84b9>\"\u001b[0;36m, line \u001b[0;32m16\u001b[0m\n\u001b[0;31m df['weekend']=df['day of week']. apply(lamda x: 1 if (x>3) or else 0)\u001b[0m\n\u001b[0m ^\u001b[0m\n\u001b[0;31mSyntaxError\u001b[0m\u001b[0;31m:\u001b[0m invalid syntax\n"
]
}
]
},
{
"metadata": {},
"cell_type": "code",
"source": "#train/split\nfrom sklearn.model_selection import train_test_split\nx_train, x_test, y_train, y_test=train_test_split(x, y, test_size=0.2, random_state=4)\nprint('Train set:', x_train.shape, y_train.shape)\nprint('Test set:', x_test.shape, y_test.shape)",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "#Modeling\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.metrics import confusion_matrix\nLR=LogisticRegression(C=0.01, solver='liblinear').fit(x_train, y_train)\nLR\n\n#Predict\nyhat=LR.predict(x_test)\nyhat\n\n#predict prob\nyhat_prob=LR.predict_prob(x_test)\nyhat_prob",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Model Evaluation using Test set\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "from sklearn.metrics import jaccard_score\nfrom sklearn.metrics import f1_score\nfrom sklearn.metrics import log_loss\n",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "markdown",
"source": "First, download and load the test set:\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "!wget -O loan_test.csv https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/loan_test.csv",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "### Load Test set for evaluation\n"
},
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
},
"scrolled": true
},
"cell_type": "code",
"source": "test_df = pd.read_csv('loan_test.csv')\ntest_df.head()",
"execution_count": 22,
"outputs": [
{
"output_type": "error",
"ename": "FileNotFoundError",
"evalue": "[Errno 2] File loan_test.csv does not exist: 'loan_test.csv'",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-22-5998c8396b46>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[0;32m----> 1\u001b[0;31m \u001b[0mtest_df\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'loan_test.csv'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 2\u001b[0m \u001b[0mtest_df\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhead\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36mparser_f\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)\u001b[0m\n\u001b[1;32m 674\u001b[0m )\n\u001b[1;32m 675\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 676\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0m_read\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 677\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 678\u001b[0m \u001b[0mparser_f\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 446\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 447\u001b[0m \u001b[0;31m# Create the parser.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 448\u001b[0;31m \u001b[0mparser\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mTextFileReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfp_or_buf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 449\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 450\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mchunksize\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0miterator\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 878\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 879\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 880\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mengine\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 881\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 882\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mclose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_make_engine\u001b[0;34m(self, engine)\u001b[0m\n\u001b[1;32m 1112\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mengine\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"c\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1113\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"c\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1114\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_engine\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mCParserWrapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1115\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1116\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"python\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, src, **kwds)\u001b[0m\n\u001b[1;32m 1889\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"usecols\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0musecols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1890\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1891\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mparsers\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTextReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1892\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1893\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader.__cinit__\u001b[0;34m()\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader._setup_parser_source\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mFileNotFoundError\u001b[0m: [Errno 2] File loan_test.csv does not exist: 'loan_test.csv'"
]
}
]
},
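{
"metadata": {},
"cell_type": "markdown",
"source": "Before scoring the models, the test DataFrame needs the same feature engineering as the training data. A minimal sketch repeating the steps above (it assumes loan_test.csv has the same columns and label values as loan_train.csv; for simplicity the scaler is refit here, mirroring the training cell, though reusing the scaler fit on the training data would be stricter):\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# Sketch: apply the same pre-processing to the test set as was done for training\ntest_df['due_date'] = pd.to_datetime(test_df['due_date'])\ntest_df['effective_date'] = pd.to_datetime(test_df['effective_date'])\ntest_df['dayofweek'] = test_df['effective_date'].dt.dayofweek\ntest_df['weekend'] = test_df['dayofweek'].apply(lambda d: 1 if (d > 3) else 0)\ntest_df['Gender'].replace(to_replace=['male', 'female'], value=[0, 1], inplace=True)\n\ntest_Feature = test_df[['Principal', 'terms', 'age', 'Gender', 'weekend']]\ntest_Feature = pd.concat([test_Feature, pd.get_dummies(test_df['education'])], axis=1)\n# errors='ignore' in case this category is absent from the smaller test file\ntest_Feature.drop(['Master or Above'], axis=1, inplace=True, errors='ignore')\n\ntest_x = preprocessing.StandardScaler().fit(test_Feature).transform(test_Feature)\ntest_y = test_df['loan_status'].values",
"execution_count": null,
"outputs": []
},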
{
"metadata": {},
"cell_type": "code",
"source": "#jaccard score\ntest_df=pd.read_csv('loan_test.csv')\ntest_df.head()\n\nfrom sklearn.metrics import jaccard_similarity_score\njaccard_simlarity_score(y_test, yhat)\ny_test\nyhat",
"execution_count": null,
"outputs": []
},
{
"metadata": {},
"cell_type": "code",
"source": "#f 1 score (confusion matrix)\ntest_df=pd.read_csv('loan_test.csv')\ntest_df.head()\nfrom sklearn.metrics import classification_report, confusion_matrix\nimport itertools\ncnf_matrix=confusion_matrix(y_test, yhat, labels=[1,0])\nnp.set_printoptions(precision=2)\nprint(classification_report(y_test, yhat))\n",
"execution_count": 13,
"outputs": [
{
"output_type": "error",
"ename": "FileNotFoundError",
"evalue": "[Errno 2] File loan_test.csv does not exist: 'loan_test.csv'",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-13-b3ab49bf9103>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m#f 1 score (confusion matrix)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mtest_df\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'loan_test.csv'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0mtest_df\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhead\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmetrics\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mclassification_report\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mconfusion_matrix\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mitertools\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36mparser_f\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)\u001b[0m\n\u001b[1;32m 674\u001b[0m )\n\u001b[1;32m 675\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 676\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0m_read\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 677\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 678\u001b[0m \u001b[0mparser_f\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 446\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 447\u001b[0m \u001b[0;31m# Create the parser.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 448\u001b[0;31m \u001b[0mparser\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mTextFileReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfp_or_buf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 449\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 450\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mchunksize\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0miterator\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 878\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 879\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 880\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mengine\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 881\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 882\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mclose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_make_engine\u001b[0;34m(self, engine)\u001b[0m\n\u001b[1;32m 1112\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mengine\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"c\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1113\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"c\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1114\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_engine\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mCParserWrapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1115\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1116\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"python\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, src, **kwds)\u001b[0m\n\u001b[1;32m 1889\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"usecols\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0musecols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1890\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1891\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mparsers\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTextReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1892\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1893\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader.__cinit__\u001b[0;34m()\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader._setup_parser_source\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mFileNotFoundError\u001b[0m: [Errno 2] File loan_test.csv does not exist: 'loan_test.csv'"
]
}
]
},
{
"metadata": {},
"cell_type": "code",
"source": "#Log loss\ntest_df=pd.read_csv('loan_test.csv')\ntest_df.head()\nfrom sklearn.metrics import log_loss\nlog_loss(y_test, yhat_prob)\ny_test\nyhat_prob",
"execution_count": 12,
"outputs": [
{
"output_type": "error",
"ename": "FileNotFoundError",
"evalue": "[Errno 2] File loan_test.csv does not exist: 'loan_test.csv'",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mFileNotFoundError\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-12-bf21316ee4a2>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m#Log loss\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 2\u001b[0;31m \u001b[0mtest_df\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0mpd\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mread_csv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m'loan_test.csv'\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 3\u001b[0m \u001b[0mtest_df\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mhead\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 4\u001b[0m \u001b[0;32mfrom\u001b[0m \u001b[0msklearn\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmetrics\u001b[0m \u001b[0;32mimport\u001b[0m \u001b[0mlog_loss\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 5\u001b[0m \u001b[0mlog_loss\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0my_test\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0myhat_prob\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36mparser_f\u001b[0;34m(filepath_or_buffer, sep, delimiter, header, names, index_col, usecols, squeeze, prefix, mangle_dupe_cols, dtype, engine, converters, true_values, false_values, skipinitialspace, skiprows, skipfooter, nrows, na_values, keep_default_na, na_filter, verbose, skip_blank_lines, parse_dates, infer_datetime_format, keep_date_col, date_parser, dayfirst, cache_dates, iterator, chunksize, compression, thousands, decimal, lineterminator, quotechar, quoting, doublequote, escapechar, comment, encoding, dialect, error_bad_lines, warn_bad_lines, delim_whitespace, low_memory, memory_map, float_precision)\u001b[0m\n\u001b[1;32m 674\u001b[0m )\n\u001b[1;32m 675\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 676\u001b[0;31m \u001b[0;32mreturn\u001b[0m \u001b[0m_read\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfilepath_or_buffer\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 677\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 678\u001b[0m \u001b[0mparser_f\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m__name__\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mname\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_read\u001b[0;34m(filepath_or_buffer, kwds)\u001b[0m\n\u001b[1;32m 446\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 447\u001b[0m \u001b[0;31m# Create the parser.\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 448\u001b[0;31m \u001b[0mparser\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mTextFileReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mfp_or_buf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 449\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 450\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mchunksize\u001b[0m \u001b[0;32mor\u001b[0m \u001b[0miterator\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, f, engine, **kwds)\u001b[0m\n\u001b[1;32m 878\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"has_index_names\"\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 879\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 880\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mengine\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 881\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 882\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0mclose\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m_make_engine\u001b[0;34m(self, engine)\u001b[0m\n\u001b[1;32m 1112\u001b[0m \u001b[0;32mdef\u001b[0m \u001b[0m_make_engine\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mengine\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m\"c\"\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1113\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"c\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1114\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_engine\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mCParserWrapper\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mf\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0moptions\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1115\u001b[0m \u001b[0;32melse\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1116\u001b[0m \u001b[0;32mif\u001b[0m \u001b[0mengine\u001b[0m \u001b[0;34m==\u001b[0m \u001b[0;34m\"python\"\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m/opt/conda/envs/Python-3.7-main/lib/python3.7/site-packages/pandas/io/parsers.py\u001b[0m in \u001b[0;36m__init__\u001b[0;34m(self, src, **kwds)\u001b[0m\n\u001b[1;32m 1889\u001b[0m \u001b[0mkwds\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m\"usecols\"\u001b[0m\u001b[0;34m]\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0musecols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1890\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m-> 1891\u001b[0;31m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mparsers\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mTextReader\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msrc\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m**\u001b[0m\u001b[0mkwds\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 1892\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0m_reader\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0munnamed_cols\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 1893\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader.__cinit__\u001b[0;34m()\u001b[0m\n",
"\u001b[0;32mpandas/_libs/parsers.pyx\u001b[0m in \u001b[0;36mpandas._libs.parsers.TextReader._setup_parser_source\u001b[0;34m()\u001b[0m\n",
"\u001b[0;31mFileNotFoundError\u001b[0m: [Errno 2] File loan_test.csv does not exist: 'loan_test.csv'"
]
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "# Report\n\nYou should be able to report the accuracy of the built model using different evaluation metrics:\n\n"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "| Algorithm | Jaccard | F1-score | LogLoss |\n| ------------------ | ------- | -------- | ------- |\n| KNN | ? | ? | NA |\n| Decision Tree | ? | ? | NA |\n| SVM | ? | ? | NA |\n| LogisticRegression | ? | ? | ? |\n"
},
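{
"metadata": {},
"cell_type": "markdown",
"source": "One way to fill in the table above: score each fitted model on the prepared test features. A sketch, assuming neigh, loanTree, clf, and LR are the fitted models from the cells above, test_x/test_y were built as in the earlier sketch, and the positive class label is 'PAIDOFF' (an assumption about this dataset's label values):\n"
},
{
"metadata": {},
"cell_type": "code",
"source": "# Sketch: compute Jaccard and F1 for each model, plus LogLoss for Logistic Regression\nfrom sklearn.metrics import jaccard_score, f1_score, log_loss\n\nmodels = [('KNN', neigh), ('Decision Tree', loanTree), ('SVM', clf), ('LogisticRegression', LR)]\nfor name, model in models:\n    pred = model.predict(test_x)\n    print(name,\n          'Jaccard:', round(jaccard_score(test_y, pred, pos_label='PAIDOFF'), 2),\n          'F1-score:', round(f1_score(test_y, pred, average='weighted'), 2))\n\nprint('LogLoss:', round(log_loss(test_y, LR.predict_proba(test_x)), 2))",
"execution_count": null,
"outputs": []
},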
{
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"cell_type": "markdown",
"source": "<h2>Want to learn more?</h2>\n\nIBM SPSS Modeler is a comprehensive analytics platform that has many machine learning algorithms. It has been designed to bring predictive intelligence to decisions made by individuals, by groups, by systems \u2013 by your enterprise as a whole. A free trial is available through this course, available here: <a href=\"http://cocl.us/ML0101EN-SPSSModeler\">SPSS Modeler</a>\n\nAlso, you can use Watson Studio to run these notebooks faster with bigger datasets. Watson Studio is IBM's leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, Watson Studio enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of Watson Studio users today with a free account at <a href=\"https://cocl.us/ML0101EN_DSX\">Watson Studio</a>\n\n<h3>Thanks for completing this lesson!</h3>\n\n<h4>Author: <a href=\"https://ca.linkedin.com/in/saeedaghabozorgi\">Saeed Aghabozorgi</a></h4>\n<p><a href=\"https://ca.linkedin.com/in/saeedaghabozorgi\">Saeed Aghabozorgi</a>, PhD is a Data Scientist in IBM with a track record of developing enterprise level applications that substantially increases clients\u2019 ability to turn data into actionable knowledge. He is a researcher in data mining field and expert in developing advanced analytic methods like machine learning and statistical modelling on large datasets.</p>\n\n<hr>\n\n## Change Log\n\n| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n| ----------------- | ------- | ------------- | ------------------------------------------------------------------------------ |\n| 2020-10-27 | 2.1 | Lakshmi Holla | Made changes in import statement due to updates in version of sklearn library |\n| 2020-08-27 | 2.0 | Malika Singla | Added lab to GitLab |\n\n<hr>\n\n## <h3 align=\"center\"> \u00a9 IBM Corporation 2020. All rights reserved. <h3/>\n\n<p>\n"
}
],
"metadata": {
"kernelspec": {
"name": "python3",
"display_name": "Python 3.7",
"language": "python"
},
"language_info": {
"name": "python",
"version": "3.7.9",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
}
},
"nbformat": 4,
"nbformat_minor": 2
}