{
"cells": [
{
"metadata": {},
"cell_type": "markdown",
"source": "# Machine Learning Engineer Nanodegree - Tarun Parmar\n## Supervised Learning\n## Project: Building a Student Intervention System"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "Welcome to the second project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!\n\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\n>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode."
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Question 1 - Classification vs. Regression\n*Your goal for this project is to identify students who might need early intervention before they fail to graduate. Which type of supervised learning problem is this, classification or regression? Why?*"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "**Answer: ** Since the goal of the project is to identify whether a student is going to need early intervention or not before he/she fails to graducate. This output variable here is a class variable hence the supervised learning problem is of classification type. The problem is of regression type if the output variable takes continuous values."
},
{
"metadata": {},
"cell_type": "markdown",
"source": "## Exploring the Data\nRun the code cell below to load necessary Python libraries and load the student data. Note that the last column from this dataset, `'passed'`, will be our target label (whether the student graduated or didn't graduate). All other columns are features about each student."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Import libraries\nimport numpy as np\nimport pandas as pd\nfrom time import time\nfrom sklearn.metrics import f1_score\n\n# Read student data\nstudent_data = pd.read_csv(\"student-data.csv\")\nprint(\"Student data read successfully!\")",
"execution_count": 1,
"outputs": [
{
"output_type": "stream",
"text": "Student data read successfully!\n",
"name": "stdout"
}
]
},
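{
"metadata": {},
"cell_type": "markdown",
"source": "As a quick sanity check for Question 1, the cell below (a minimal illustrative sketch) confirms that the target `'passed'` takes only two discrete values, `'yes'` and `'no'`, which is exactly what makes this a classification problem."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Sanity check (illustrative): the target is binary ('yes'/'no'),\n# confirming this is a classification rather than a regression problem\nprint(student_data['passed'].unique())",
"execution_count": null,
"outputs": []
},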
{
"metadata": {},
"cell_type": "markdown",
"source": "### Implementation: Data Exploration\nLet's begin by investigating the dataset to determine how many students we have information on, and learn about the graduation rate among these students. In the code cell below, you will need to compute the following:\n- The total number of students, `n_students`.\n- The total number of features for each student, `n_features`.\n- The number of those students who passed, `n_passed`.\n- The number of those students who failed, `n_failed`.\n- The graduation rate of the class, `grad_rate`, in percent (%).\n"
},
{
"metadata": {
"collapsed": false,
"scrolled": true,
"trusted": true
},
"cell_type": "code",
"source": "student_data.head()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "student_data[student_data['passed'] == 'no']['passed'].count()",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"collapsed": true,
"trusted": true
},
"cell_type": "code",
"source": "len(student_data)",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# TODO: Calculate number of students\nn_students = len(student_data)\n\n# TODO: Calculate number of features\nn_features = len(student_data) - 1 # 30 Feature cols and 1 target col \n\n# TODO: Calculate passing students\nn_passed = len(student_data[student_data['passed'] == 'yes'])\n\n# TODO: Calculate failing students\nn_failed = len(student_data[student_data['passed'] == 'no'])\n\n# TODO: Calculate graduation rate\ngrad_rate = n_passed/(n_students)*100\n\n# Print the results\nprint(\"Total number of students: {}\".format(n_students))\nprint(\"Number of features: {}\".format(n_features))\nprint(\"Number of students who passed: {}\".format(n_passed))\nprint(\"Number of students who failed: {}\".format(n_failed))\nprint(\"Graduation rate of the class: {:.2f}%\".format(grad_rate))",
"execution_count": 2,
"outputs": [
{
"output_type": "stream",
"text": "Total number of students: 395\nNumber of features: 394\nNumber of students who passed: 265\nNumber of students who failed: 130\nGraduation rate of the class: 67.09%\n",
"name": "stdout"
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "## Preparing the Data\nIn this section, we will prepare the data for modeling, training and testing.\n\n### Identify feature and target columns\nIt is often the case that the data you obtain contains non-numeric features. This can be a problem, as most machine learning algorithms expect numeric data to perform computations with.\n\nRun the code cell below to separate the student data into feature and target columns to see if any features are non-numeric."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Extract feature columns\nfeature_cols = list(student_data.columns[:-1])\n\n# Extract target column 'passed'\ntarget_col = student_data.columns[-1] \n\n# Show the list of columns\nprint(\"Feature columns:\\n{}\".format(feature_cols))\nprint(\"\\nTarget column: {}\".format(target_col))\n\n# Separate the data into feature data and target data (X_all and y_all, respectively)\nX_all = student_data[feature_cols]\ny_all = student_data[target_col]\n\n# Show the feature information by printing the first five rows\nprint(\"\\nFeature values:\")\nprint(X_all.head())",
"execution_count": 3,
"outputs": [
{
"output_type": "stream",
"text": "Feature columns:\n['school', 'sex', 'age', 'address', 'famsize', 'Pstatus', 'Medu', 'Fedu', 'Mjob', 'Fjob', 'reason', 'guardian', 'traveltime', 'studytime', 'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'nursery', 'higher', 'internet', 'romantic', 'famrel', 'freetime', 'goout', 'Dalc', 'Walc', 'health', 'absences']\n\nTarget column: passed\n\nFeature values:\n school sex age address famsize Pstatus Medu Fedu Mjob Fjob \\\n0 GP F 18 U GT3 A 4 4 at_home teacher \n1 GP F 17 U GT3 T 1 1 at_home other \n2 GP F 15 U LE3 T 1 1 at_home other \n3 GP F 15 U GT3 T 4 2 health services \n4 GP F 16 U GT3 T 3 3 other other \n\n ... higher internet romantic famrel freetime goout Dalc Walc health \\\n0 ... yes no no 4 3 4 1 1 3 \n1 ... yes yes no 5 3 3 1 1 3 \n2 ... yes yes no 4 3 2 2 3 3 \n3 ... yes yes yes 3 2 2 1 1 5 \n4 ... yes no no 4 3 2 1 2 5 \n\n absences \n0 6 \n1 4 \n2 10 \n3 2 \n4 4 \n\n[5 rows x 30 columns]\n",
"name": "stdout"
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Preprocess Feature Columns\n\nAs you can see, there are several non-numeric columns that need to be converted! Many of them are simply `yes`/`no`, e.g. `internet`. These can be reasonably converted into `1`/`0` (binary) values.\n\nOther columns, like `Mjob` and `Fjob`, have more than two values, and are known as _categorical variables_. The recommended way to handle such a column is to create as many columns as possible values (e.g. `Fjob_teacher`, `Fjob_other`, `Fjob_services`, etc.), and assign a `1` to one of them and `0` to all others.\n\nThese generated columns are sometimes called _dummy variables_, and we will use the [`pandas.get_dummies()`](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html?highlight=get_dummies#pandas.get_dummies) function to perform this transformation. Run the code cell below to perform the preprocessing routine discussed in this section."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "def preprocess_features(X):\n ''' Preprocesses the student data and converts non-numeric binary variables into\n binary (0/1) variables. Converts categorical variables into dummy variables. '''\n \n # Initialize new output DataFrame\n output = pd.DataFrame(index = X.index)\n\n # Investigate each feature column for the data\n for col, col_data in X.iteritems():\n \n # If data type is non-numeric, replace all yes/no values with 1/0\n if col_data.dtype == object:\n col_data = col_data.replace(['yes', 'no'], [1, 0])\n\n # If data type is categorical, convert to dummy variables\n if col_data.dtype == object:\n # Example: 'school' => 'school_GP' and 'school_MS'\n col_data = pd.get_dummies(col_data, prefix = col) \n \n # Collect the revised columns\n output = output.join(col_data)\n \n return output\n\nX_all = preprocess_features(X_all)\nprint(\"Processed feature columns ({} total features):\\n{}\".format(len(X_all.columns), list(X_all.columns)))",
"execution_count": 4,
"outputs": [
{
"output_type": "stream",
"text": "Processed feature columns (48 total features):\n['school_GP', 'school_MS', 'sex_F', 'sex_M', 'age', 'address_R', 'address_U', 'famsize_GT3', 'famsize_LE3', 'Pstatus_A', 'Pstatus_T', 'Medu', 'Fedu', 'Mjob_at_home', 'Mjob_health', 'Mjob_other', 'Mjob_services', 'Mjob_teacher', 'Fjob_at_home', 'Fjob_health', 'Fjob_other', 'Fjob_services', 'Fjob_teacher', 'reason_course', 'reason_home', 'reason_other', 'reason_reputation', 'guardian_father', 'guardian_mother', 'guardian_other', 'traveltime', 'studytime', 'failures', 'schoolsup', 'famsup', 'paid', 'activities', 'nursery', 'higher', 'internet', 'romantic', 'famrel', 'freetime', 'goout', 'Dalc', 'Walc', 'health', 'absences']\n",
"name": "stdout"
}
]
},
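{
"metadata": {},
"cell_type": "markdown",
"source": "To make the dummy-variable transformation concrete, the cell below is a minimal illustrative sketch (on a made-up toy column, not the student data) of what `pandas.get_dummies()` does to a categorical variable."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Illustrative example: get_dummies expands one categorical column into\n# one binary (0/1) column per category value\ntoy = pd.Series(['teacher', 'health', 'other'], name = 'Mjob')\nprint(pd.get_dummies(toy, prefix = 'Mjob'))",
"execution_count": null,
"outputs": []
},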
{
"metadata": {},
"cell_type": "markdown",
"source": "### Implementation: Training and Testing Data Split\nSo far, we have converted all _categorical_ features into numeric values. For the next step, we split the data (both features and corresponding labels) into training and test sets. In the following code cell below, you will need to implement the following:\n- Randomly shuffle and split the data (`X_all`, `y_all`) into training and testing subsets.\n - Use 300 training points (approximately 75%) and 95 testing points (approximately 25%).\n - Set a `random_state` for the function(s) you use, if provided.\n - Store the results in `X_train`, `X_test`, `y_train`, and `y_test`."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "type(X_all)",
"execution_count": null,
"outputs": []
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# TODO: Import any additional functionality you may need here\n#from sklearn.utils import shuffle\nfrom sklearn.cross_validation import train_test_split\n\n\n# TODO: Set the number of training points\nnum_train = 300\n\n# Set the number of testing points\nnum_test = X_all.shape[0] - num_train\n\n#shuffle(X_all)\n#shuffle(y_all)\n\n# TODO: Shuffle and split the dataset into the number of training and testing points above\n#X_train = X_all[:num_train]\n#X_test = X_all[:num_test]\n#y_train = y_all[:num_train]\n#y_test = y_all[:num_test]\n\nX_train, X_test, y_train, y_test = train_test_split(X_all, y_all, train_size = num_train, random_state = 123)\n\n# Show the results of the split\nprint(\"Training set has {} samples.\".format(X_train.shape[0]))\nprint(\"Testing set has {} samples.\".format(X_test.shape[0]))",
"execution_count": 5,
"outputs": [
{
"output_type": "stream",
"text": "Training set has 300 samples.\nTesting set has 95 samples.\n",
"name": "stdout"
}
]
},
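{
"metadata": {},
"cell_type": "markdown",
"source": "Because roughly 67% of students passed, it is worth checking that the shuffled split keeps a similar class balance in both subsets. The cell below is a minimal sketch using `value_counts()`; note that this call to `train_test_split` does not stratify, so the proportions can differ slightly between the two sets."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Check the class balance of the split (illustrative)\nprint(\"Training labels:\\n{}\".format(y_train.value_counts()))\nprint(\"\\nTesting labels:\\n{}\".format(y_test.value_counts()))",
"execution_count": null,
"outputs": []
},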
{
"metadata": {},
"cell_type": "markdown",
"source": "## Training and Evaluating Models\nIn this section, you will choose 3 supervised learning models that are appropriate for this problem and available in `scikit-learn`. You will first discuss the reasoning behind choosing these three models by considering what you know about the data and each model's strengths and weaknesses. You will then fit the model to varying sizes of training data (100 data points, 200 data points, and 300 data points) and measure the F<sub>1</sub> score. You will need to produce three tables (one for each model) that shows the training set size, training time, prediction time, F<sub>1</sub> score on the training set, and F<sub>1</sub> score on the testing set.\n\n**The following supervised learning models are currently available in** [`scikit-learn`](http://scikit-learn.org/stable/supervised_learning.html) **that you may choose from:**\n- Gaussian Naive Bayes (GaussianNB)\n- Decision Trees\n- Ensemble Methods (Bagging, AdaBoost, Random Forest, Gradient Boosting)\n- K-Nearest Neighbors (KNeighbors)\n- Stochastic Gradient Descent (SGDC)\n- Support Vector Machines (SVM)\n- Logistic Regression"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Question 2 - Model Application\n*List three supervised learning models that are appropriate for this problem. For each model chosen*\n- Describe one real-world application in industry where the model can be applied. *(You may need to do a small bit of research for this — give references!)* \n- What are the strengths of the model; when does it perform well? \n- What are the weaknesses of the model; when does it perform poorly?\n- What makes this model a good candidate for the problem, given what you know about the data?"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "**Answer: ** The issue described in the project is a case of Classification Model, which is why I could choose models which map the input data into pre-defined classes. For instance, classifiers can be used to classify mortgage consumers as good (fully payback the mortgage on time) and bad (delayed payback). From the given list I would pick below 3 models:\n - Gaussian Naive Bayes (GaussianNB)\n - Support Vector Machines (SVM) &\n - Decision Trees\n"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "#### For Gaussian NB\n - Strengths:\n - Weaknesses: \n - Why a good candidate?: \n \n#### For Support Vector Machines\n - Strengths: Work well in complicated domain where there is clear margin of separation,\n - Weaknesses: Don't perform so well in large datasets and data with lots of noise and may be prone to overfitting\n - Why a good candidate?: The dataset is not very large and also not does not have too many null values so SVM may perform well\n \n#### For Decision Trees\n - Strengths: Easy to use, easy to grow, allows easy interpretation of data, can build classifiers out of classifier \n - Weaknesses: Prone to overfitting for data with lots of features, \n - Why a good candidate?: As it is easy to use, it will be good to try the data on this model with other 2 models"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Setup\nRun the code cell below to initialize three helper functions which you can use for training and testing the three supervised learning models you've chosen above. The functions are as follows:\n- `train_classifier` - takes as input a classifier and training data and fits the classifier to the data.\n- `predict_labels` - takes as input a fit classifier, features, and a target labeling and makes predictions using the F<sub>1</sub> score.\n- `train_predict` - takes as input a classifier, and the training and testing data, and performs `train_clasifier` and `predict_labels`.\n - This function will report the F<sub>1</sub> score for both the training and testing data separately."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "def train_classifier(clf, X_train, y_train):\n ''' Fits a classifier to the training data. '''\n \n # Start the clock, train the classifier, then stop the clock\n start = time()\n clf.fit(X_train, y_train)\n end = time()\n \n # Print the results\n print(\"Trained model in {:.4f} seconds\".format(end - start))\n\n \ndef predict_labels(clf, features, target):\n ''' Makes predictions using a fit classifier based on F1 score. '''\n \n # Start the clock, make predictions, then stop the clock\n start = time()\n y_pred = clf.predict(features)\n end = time()\n \n # Print and return results\n print(\"Made predictions in {:.4f} seconds.\".format(end - start))\n return f1_score(target.values, y_pred, pos_label='yes')\n\n\ndef train_predict(clf, X_train, y_train, X_test, y_test):\n ''' Train and predict using a classifer based on F1 score. '''\n \n # Indicate the classifier and the training set size\n print(\"Training a {} using a training set size of {}. . .\".format(clf.__class__.__name__, len(X_train)))\n \n # Train the classifier\n train_classifier(clf, X_train, y_train)\n \n # Print the results of prediction for both training and testing\n print(\"F1 score for training set: {:.4f}.\".format(predict_labels(clf, X_train, y_train)))\n print(\"F1 score for test set: {:.4f}.\".format(predict_labels(clf, X_test, y_test)))",
"execution_count": 6,
"outputs": []
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Implementation: Model Performance Metrics\nWith the predefined functions above, you will now import the three supervised learning models of your choice and run the `train_predict` function for each one. Remember that you will need to train and predict on each classifier for three different training set sizes: 100, 200, and 300. Hence, you should expect to have 9 different outputs below — 3 for each model using the varying training set sizes. In the following code cell, you will need to implement the following:\n- Import the three supervised learning models you've discussed in the previous section.\n- Initialize the three models and store them in `clf_A`, `clf_B`, and `clf_C`.\n - Use a `random_state` for each model you use, if provided.\n - **Note:** Use the default settings for each model — you will tune one specific model in a later section.\n- Create the different training set sizes to be used to train each model.\n - *Do not reshuffle and resplit the data! The new training points should be drawn from `X_train` and `y_train`.*\n- Fit each model with each training set size and make predictions on the test set (9 in total). \n**Note:** Three tables are provided after the following code cell which can be used to store your results."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# TODO: Import the three supervised learning models from sklearn\n# from sklearn import model_A\n# from sklearn import model_B\n# from sklearn import model_C\n\n# Model_A\nfrom sklearn.naive_bayes import GaussianNB\n\n# Model_B\nfrom sklearn.tree import DecisionTreeClassifier\n\n# Model_C\nfrom sklearn.svm import SVC\n\nfrom sklearn.utils import shuffle\n\n\n# TODO: Initialize the three models\nclf_A = GaussianNB()\nclf_B = DecisionTreeClassifier()\nclf_C = SVC()\n\n# TODO: Set up the training set sizes\nX_train_100 = X_train[:100]\ny_train_100 = y_train[:100]\n\nX_train_200 = X_train[:200]\ny_train_200 = y_train[:200]\n\nX_train_300 = X_train[:300]\ny_train_300 = y_train[:300]\n\n# TODO: Execute the 'train_predict' function for each classifier and each training set size\n\ntrain_predict(clf_A, X_train_100, y_train_100, X_test, y_test)\ntrain_predict(clf_A, X_train_200, y_train_200, X_test, y_test)\ntrain_predict(clf_A, X_train_300, y_train_300, X_test, y_test)\n\ntrain_predict(clf_B, X_train_100, y_train_100, X_test, y_test)\ntrain_predict(clf_B, X_train_200, y_train_200, X_test, y_test)\ntrain_predict(clf_B, X_train_300, y_train_300, X_test, y_test)\n\ntrain_predict(clf_C, X_train_100, y_train_100, X_test, y_test)\ntrain_predict(clf_C, X_train_200, y_train_200, X_test, y_test)\ntrain_predict(clf_C, X_train_300, y_train_300, X_test, y_test)",
"execution_count": 7,
"outputs": [
{
"output_type": "stream",
"text": "Training a GaussianNB using a training set size of 100. . .\nTrained model in 0.0156 seconds\nMade predictions in 0.0000 seconds.\nF1 score for training set: 0.7879.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8000.\nTraining a GaussianNB using a training set size of 200. . .\nTrained model in 0.0000 seconds\nMade predictions in 0.0156 seconds.\nF1 score for training set: 0.7656.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8261.\nTraining a GaussianNB using a training set size of 300. . .\nTrained model in 0.0000 seconds\nMade predictions in 0.0000 seconds.\nF1 score for training set: 0.7666.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8286.\nTraining a DecisionTreeClassifier using a training set size of 100. . .\nTrained model in 0.0000 seconds\nMade predictions in 0.0000 seconds.\nF1 score for training set: 1.0000.\nMade predictions in 0.0010 seconds.\nF1 score for test set: 0.7023.\nTraining a DecisionTreeClassifier using a training set size of 200. . .\nTrained model in 0.0080 seconds\nMade predictions in 0.0010 seconds.\nF1 score for training set: 1.0000.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8058.\nTraining a DecisionTreeClassifier using a training set size of 300. . .\nTrained model in 0.0110 seconds\nMade predictions in 0.0010 seconds.\nF1 score for training set: 1.0000.\nMade predictions in 0.0010 seconds.\nF1 score for test set: 0.7761.\nTraining a SVC using a training set size of 100. . .\nTrained model in 0.0050 seconds\nMade predictions in 0.0040 seconds.\nF1 score for training set: 0.8228.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8625.\nTraining a SVC using a training set size of 200. . .\nTrained model in 0.0156 seconds\nMade predictions in 0.0156 seconds.\nF1 score for training set: 0.8182.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8734.\nTraining a SVC using a training set size of 300. . .\nTrained model in 0.0060 seconds\nMade predictions in 0.0372 seconds.\nF1 score for training set: 0.8515.\nMade predictions in 0.0000 seconds.\nF1 score for test set: 0.8790.\n",
"name": "stdout"
}
]
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Tabular Results\nEdit the cell below to see how a table can be designed in [Markdown](https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet#tables). You can record your results from above in the tables provided."
},
{
"metadata": {},
"cell_type": "markdown",
"source": "** Classifer 1 - GaussianNB** \n\n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0050 | 0.0000 | 0.7879 | 0.8000 |\n| 200 | 0.0100 | 0.0020 | 0.7656 | 0.8261 |\n| 300 | 0.0125 | 0.0020 | 0.7666 | 0.8286 |\n\n** Classifer 2 - DecisionTree** \n\n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0090 | 0.0010 | 1.0000 | 0.7023 |\n| 200 | 0.0110 | 0.0010 | 1.0000 | 0.8058 |\n| 300 | 0.0230 | 0.0010 | 1.0000 | 0.7761 |\n\n** Classifer 3 - SVM** \n\n| Training Set Size | Training Time | Prediction Time (test) | F1 Score (train) | F1 Score (test) |\n| :---------------: | :---------------------: | :--------------------: | :--------------: | :-------------: |\n| 100 | 0.0050 | 0.0050 | 0.8228 | 0.8625 |\n| 200 | 0.0225 | 0.0225 | 0.8182 | 0.8374 |\n| 300 | 0.0450 | 0.0075 | 0.8515 | 0.8790 |"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "## Choosing the Best Model\nIn this final section, you will choose from the three supervised learning models the *best* model to use on the student data. You will then perform a grid search optimization for the model over the entire training set (`X_train` and `y_train`) by tuning at least one parameter to improve upon the untuned model's F<sub>1</sub> score. "
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Question 3 - Choosing the Best Model\n*Based on the experiments you performed earlier, in one to two paragraphs, explain to the board of supervisors what single model you chose as the best model. Which model is generally the most appropriate based on the available data, limited resources, cost, and performance?*"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "**Answer: ** We tested 3 models out of which we pick Support Vector Machine (SVM) as it seems to be just right on predictions as per the F1 scores. Whereas GaussianNB model is underfitting, DecisionTree model appears to be overfitting. Based on the available data, it seems "
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Question 4 - Model in Layman's Terms\n*In one to two paragraphs, explain to the board of directors in layman's terms how the final model chosen is supposed to work. Be sure that you are describing the major qualities of the model, such as how the model is trained and how the model makes a prediction. Avoid using advanced mathematical or technical jargon, such as describing equations or discussing the algorithm implementation.*"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "**Answer: **"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "### Implementation: Model Tuning\nFine tune the chosen model. Use grid search (`GridSearchCV`) with at least one important parameter tuned with at least 3 different values. You will need to use the entire training set for this. In the code cell below, you will need to implement the following:\n- Import [`sklearn.grid_search.GridSearchCV`](http://scikit-learn.org/0.17/modules/generated/sklearn.grid_search.GridSearchCV.html) and [`sklearn.metrics.make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html).\n- Create a dictionary of parameters you wish to tune for the chosen model.\n - Example: `parameters = {'parameter' : [list of values]}`.\n- Initialize the classifier you've chosen and store it in `clf`.\n- Create the F<sub>1</sub> scoring function using `make_scorer` and store it in `f1_scorer`.\n - Set the `pos_label` parameter to the correct value!\n- Perform grid search on the classifier `clf` using `f1_scorer` as the scoring method, and store it in `grid_obj`.\n- Fit the grid search object to the training data (`X_train`, `y_train`), and store it in `grid_obj`."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# TODO: Import 'GridSearchCV' and 'make_scorer'\nfrom sklearn.grid_search import GridSearchCV \nfrom sklearn.metrics import fbeta_score, make_scorer\nfrom sklearn.svm import SVC\n\n# TODO: Create the parameters list you wish to tune\nparameters = [{'C' : [1, 10, 100], 'kernel':['linear']}]\n#parameters = {'max_depth':(1,2,3,4,5,6,7,8,9,10), 'min_samples_leaf':(1,2,4,8,16,32,64)}\n\n# TODO: Initialize the classifier\nclf = SVC()\n#clf = DecisionTreeClassifier()\n\n# TODO: Make an f1 scoring function using 'make_scorer' \nf1_scorer = make_scorer(f1_score, pos_label = \"yes\")\n\n# TODO: Perform grid search on the classifier using the f1_scorer as the scoring method\ngrid_obj = GridSearchCV(estimator = clf, param_grid = parameters, scoring = f1_scorer)\n\n# TODO: Fit the grid search object to the training data and find the optimal parameters\ngrid_obj.fit(X_train, y_train)\n\n# Get the estimator\nclf = grid_obj.best_estimator_\n\n# Report the final F1 score for training and testing after parameter tuning\nprint(\"Tuned model has a training F1 score of {:.4f}.\".format(predict_labels(clf, X_train, y_train)))\nprint(\"Tuned model has a testing F1 score of {:.4f}.\".format(predict_labels(clf, X_test, y_test)))",
"execution_count": 8,
"outputs": [
{
"output_type": "stream",
"text": "Made predictions in 0.0000 seconds.\nTuned model has a training F1 score of 0.8241.\nMade predictions in 0.0010 seconds.\nTuned model has a testing F1 score of 0.8472.\n",
"name": "stdout"
}
]
},
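{
"metadata": {},
"cell_type": "markdown",
"source": "As a follow-up sketch, we can inspect which parameter combination the grid search selected and its cross-validated F1 score, which helps when deciding whether to widen the parameter grid."
},
{
"metadata": {
"collapsed": false,
"trusted": true
},
"cell_type": "code",
"source": "# Inspect the parameters chosen by grid search (illustrative)\nprint(\"Best parameters: {}\".format(grid_obj.best_params_))\nprint(\"Best cross-validated F1 score: {:.4f}\".format(grid_obj.best_score_))",
"execution_count": null,
"outputs": []
},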
{
"metadata": {},
"cell_type": "markdown",
"source": "### Question 5 - Final F<sub>1</sub> Score\n*What is the final model's F<sub>1</sub> score for training and testing? How does that score compare to the untuned model?*"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "**Answer: **"
},
{
"metadata": {},
"cell_type": "markdown",
"source": "> **Note**: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission."
}
],
"metadata": {
"gist_id": "42e21e1c566a0f69d386cb0902b9b757",
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"name": "python",
"version": "3.5.1",
"pygments_lexer": "ipython3",
"mimetype": "text/x-python",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3",
"language": "python"
}
},
"nbformat": 4,
"nbformat_minor": 0
}