{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Naive Bayes Classifier"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Perform LED digit classification using Naive bayes classifier in python."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We will represent each digit as a 7-dimensional binary vector where a 1 in the representation implies the corresponding segment to be on. The representations for all ten digits, 0-9, is shown below.Furthermore, we assume the display to be faulty in the sense that with probability p a segment doesn’t turn on(off) when it is supposed to be on(off). Thus, we want to design a naive Bayes classifier that accepts a 7-dimensional binary vector as an input and predicts the digit that was meant to be displayed."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"|s1|s2|s3|s4|s5|s6|s7|Digit-ID|\n",
"|--|--|--|--|--|--|--|--------|\n",
"|0 |1 |1 |0 |0 |0 |0 |1 |\n",
"|1 |1 |0 |1 |1 |0 |1 |2 |\n",
"|1 |1 |1 |1 |0 |0 |1 |3 |\n",
"|0 |1 |1 |0 |0 |1 |1 |4 |\n",
"|1 |0 |1 |1 |0 |1 |1 |5 |\n",
"|0 |0 |1 |1 |1 |1 |1 |6 |\n",
"|1 |1 |1 |0 |0 |0 |0 |7 |\n",
"|1 |1 |1 |1 |1 |1 |1 |8 |\n",
"|1 |1 |1 |0 |0 |1 |1 |9 |\n",
"|1 |1 |1 |1 |1 |1 |0 |0 |"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import numpy as np\n",
"import random\n",
"random.seed(3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Lets consider some noise level \n",
"Noiselevel in the formula refers to a cell where we store the probabilty p of a segment being faulty."
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"noise_level = 0.1"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets store the segment values of each digit in an array for later computation purpose."
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"dig1 = [0,1,1,0,0,0,0]\n",
"dig2 = [1,1,0,1,1,0,1]\n",
"dig3 = [1,1,1,1,0,0,1]\n",
"dig4 = [0,1,1,0,0,1,1]\n",
"dig5 = [1,0,1,1,0,1,1]\n",
"dig6 = [0,0,1,1,1,1,1]\n",
"dig7 = [1,1,1,0,0,0,0]\n",
"dig8 = [1,1,1,1,1,1,1]\n",
"dig9 = [1,1,1,0,0,1,1]\n",
"dig0 = [1,1,1,1,1,1,0]\n",
"\n",
"digs = [dig0, dig1, dig2, dig3, dig4, dig5, dig6,dig7, dig8, dig9]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Generate training dataset"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Lets use python rand function to generate a random float between 0 and 1, and if that number is greater than the given noise level we keep the segment as same, but if not we flip the segment, i.e., 0 to 1 or 1 to 0. The below function generates segment representation of digits based on the noise level and the input digit original segment representation."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"def generateDigit(digit, noise_level):\n",
" arr = []\n",
" for segment in digit:\n",
" rand = random.random()\n",
" if(rand > noise_level):\n",
" arr.append(segment)\n",
" else:\n",
" arr.append(1 - segment)\n",
" return arr"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now lets generate the training data set with each digit containting 100 examples each."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"examples = 100"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"training_set = []\n",
"for dig in digs:\n",
" arr = []\n",
" for example in range(examples):\n",
" arr.append(generateDigit(dig, noise_level))\n",
" training_set.append(arr)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For visualisation lets print the training data set with lets say starting 5 examples each"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0</th>\n",
" <th>1</th>\n",
" <th>2</th>\n",
" <th>3</th>\n",
" <th>4</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>[1, 1, 1, 1, 1, 0, 1]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 0]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 0]</td>\n",
" <td>[0, 1, 1, 1, 0, 1, 0]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>[1, 1, 1, 0, 0, 0, 1]</td>\n",
" <td>[0, 1, 0, 0, 0, 0, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[0, 1, 1, 0, 1, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 1, 0, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>[1, 1, 0, 1, 1, 0, 1]</td>\n",
" <td>[1, 0, 0, 0, 1, 0, 1]</td>\n",
" <td>[0, 1, 0, 1, 1, 0, 1]</td>\n",
" <td>[1, 0, 0, 0, 1, 0, 0]</td>\n",
" <td>[1, 1, 0, 1, 1, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>[1, 1, 1, 1, 0, 0, 1]</td>\n",
" <td>[1, 1, 1, 1, 0, 0, 1]</td>\n",
" <td>[1, 0, 1, 1, 1, 0, 1]</td>\n",
" <td>[0, 1, 0, 1, 0, 0, 1]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[0, 1, 1, 1, 0, 1, 1]</td>\n",
" <td>[0, 0, 1, 0, 0, 1, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>[1, 1, 0, 1, 0, 1, 0]</td>\n",
" <td>[1, 0, 1, 1, 0, 1, 1]</td>\n",
" <td>[1, 0, 1, 1, 0, 1, 1]</td>\n",
" <td>[1, 0, 0, 1, 0, 1, 1]</td>\n",
" <td>[1, 0, 1, 1, 0, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>[0, 0, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 1, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 0, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 0, 1, 0, 1, 1, 1]</td>\n",
" <td>[0, 0, 1, 1, 1, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 0, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 0, 1, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>[1, 1, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 1, 1, 1, 1, 1, 1]</td>\n",
" <td>[1, 1, 1, 1, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 0]</td>\n",
" <td>[1, 1, 1, 1, 1, 0, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>[1, 0, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 0, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 0, 0, 1, 0]</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0 1 2 \\\n",
"0 [1, 1, 1, 1, 1, 0, 1] [1, 1, 1, 1, 1, 1, 0] [1, 1, 1, 1, 1, 1, 0] \n",
"1 [1, 1, 1, 0, 0, 0, 1] [0, 1, 0, 0, 0, 0, 1] [0, 1, 1, 0, 0, 0, 0] \n",
"2 [1, 1, 0, 1, 1, 0, 1] [1, 0, 0, 0, 1, 0, 1] [0, 1, 0, 1, 1, 0, 1] \n",
"3 [1, 1, 1, 1, 0, 0, 1] [1, 1, 1, 1, 0, 0, 1] [1, 0, 1, 1, 1, 0, 1] \n",
"4 [0, 1, 1, 0, 0, 1, 1] [0, 1, 1, 0, 0, 1, 1] [0, 1, 1, 1, 0, 1, 1] \n",
"5 [1, 1, 0, 1, 0, 1, 0] [1, 0, 1, 1, 0, 1, 1] [1, 0, 1, 1, 0, 1, 1] \n",
"6 [0, 0, 1, 1, 1, 1, 1] [0, 1, 1, 1, 1, 1, 1] [0, 0, 1, 1, 1, 1, 1] \n",
"7 [1, 1, 1, 0, 0, 0, 0] [1, 1, 1, 0, 0, 0, 0] [1, 1, 0, 0, 0, 0, 0] \n",
"8 [1, 1, 1, 1, 1, 1, 1] [0, 1, 1, 1, 1, 1, 1] [1, 1, 1, 1, 0, 1, 1] \n",
"9 [1, 0, 1, 0, 0, 1, 1] [1, 0, 1, 0, 0, 1, 1] [1, 1, 1, 0, 0, 1, 1] \n",
"\n",
" 3 4 \n",
"0 [0, 1, 1, 1, 0, 1, 0] [1, 1, 1, 1, 1, 1, 0] \n",
"1 [0, 1, 1, 0, 1, 0, 0] [1, 1, 1, 0, 1, 0, 0] \n",
"2 [1, 0, 0, 0, 1, 0, 0] [1, 1, 0, 1, 1, 1, 1] \n",
"3 [0, 1, 0, 1, 0, 0, 1] [1, 1, 1, 1, 1, 1, 1] \n",
"4 [0, 0, 1, 0, 0, 1, 1] [0, 1, 1, 0, 0, 1, 1] \n",
"5 [1, 0, 0, 1, 0, 1, 1] [1, 0, 1, 1, 0, 1, 1] \n",
"6 [0, 0, 1, 0, 1, 1, 1] [0, 0, 1, 1, 1, 1, 1] \n",
"7 [1, 1, 1, 0, 0, 0, 0] [1, 1, 1, 0, 0, 1, 0] \n",
"8 [1, 1, 1, 1, 1, 1, 0] [1, 1, 1, 1, 1, 0, 1] \n",
"9 [1, 1, 1, 0, 0, 1, 1] [1, 1, 1, 0, 0, 1, 0] "
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pd.DataFrame(training_set).iloc[:, :5]"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Naive Bayes Classification"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A Naive Bayes (NB) classifier uses Bayes’ theorem and independent features assumption to perform classification. Although the feature independence assumption may not hold true, the resulting simplicity and performance close to complex classifiers offer complelling reasons to treat features to be independent. Suppose we have $d$ features, $x_1,\\cdots, x_d$, and two classes $c_1\\text{ and } c_2$. According to Bayes’ theorem, the probability that the observation vector ${X} = [x_1,\\cdots,x_d]^T$ belongs to class $c_j$ is given by the following relationship:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$P(C_j|x) = \\frac{P(x_1,...,x_d|c_j)P(c_j)}{P(x_1,...,x_d)}, j=1,2$$ \n",
"Assuming features to be independent, the above expression reduces to: \n",
"$$P(C_j|x) = \\frac{P(c_j)\\prod_{i=1}^d P(x_i|c_j)}{P(x_1,...,x_d)}, j=1,2$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The denominator in above expression is constant for a given input. Thus, the classification rule for a given observation vector can be expressed as:\n",
"\n",
"Assign\n",
"\n",
"$x\\rightarrow c_1\\text{ if } P(c_1)\\prod_{i=1}^d P(x_i|c_1) \\geq P(c_2)\\prod_{i=1}^d P(x_i|c_2) $\n",
"\n",
"Otherwise\n",
"\n",
"$x\\rightarrow c_2$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For classification problems with C classes, we can write the classification rule as:\n",
"\n",
"$x\\rightarrow c_j \\text{ where } P(c_j)\\prod_{i=1}^d P(x_i|c_j) > P(c_k)\\prod_{i=1}^d P(x_i|c_k), k=1,..,C \\text{ and } k\\neq j $"
]
},
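{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an aside (not part of the original walkthrough), the same decision rule is often evaluated with log-probabilities to avoid numerical underflow when many per-feature probabilities are multiplied. A minimal sketch for binary features is shown below; `priors` and `cond` are hypothetical placeholders for $P(c)$ and $P(x_i=1|c)$."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import math\n",
"\n",
"def nb_predict(x, priors, cond):\n",
"    # x: binary feature vector; priors[c] = P(c); cond[c][i] = P(x_i = 1 | c)\n",
"    best_class, best_score = None, -math.inf\n",
"    for c in range(len(priors)):\n",
"        score = math.log(priors[c])\n",
"        for i, xi in enumerate(x):\n",
"            p1 = cond[c][i]\n",
"            score += math.log(p1 if xi == 1 else 1 - p1)\n",
"        if score > best_score:\n",
"            best_class, best_score = c, score\n",
"    return best_class"
]
},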
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Naive bayes design for faulty digit recognition"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Designing NB classifier means we need to compute/estimate class priors and conditional probabilities. Class priors are taken as the fraction of examples from each class in the training set. In the present case, all class priors are equal. This means that class priors do not play any role in arriving at the class membership decision in our present example. Thus, we need to estimate only conditional probabilities. The conditional probabilities are the frequencies of each attribute value for each class in our training set. The following relationship provides us with the probability of segment 1 being equal to 1 conditioned on that the digit being displayed is digit 1."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$P(s_1 = 1|digit1) = \\frac{\\text{ count of 1's for segment 1 in digit 1 training examples }}{\\text{ number of digit 1 training examples}}$$"
]
},
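{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check (not part of the original notebook), this raw frequency can be read directly off the generated training set. Since segment 1 is off in the true representation of digit 1, the estimate should land close to the noise level of 0.1."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Un-smoothed frequency estimate of P(s1 = 1 | digit 1):\n",
"# training_set[1] holds the noisy examples of digit 1; segment 1 is index 0\n",
"count_s1_on = sum(example[0] for example in training_set[1])\n",
"count_s1_on / examples"
]
},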
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since only two possible states, 1 and 0, are possible for each segment, we can calculate the probability of segment 1 being equal to 0 conditioned on that the digit being displayed is digit 1 by the following relationship:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$P(s_1 = 0|digit1) = 1 - P(s_1 = 1|digit1)$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In practice, however, a correction is applied to conditional probabilities calculations to ensure that none of the probabilities is 0. This correction, known as Laplace smoothing, is given by the following relationship:"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"$$P(s_1 = 1|digit1) = \\frac{\\text{1 + count of 1's for segment 1 in digit 1 training examples }}{\\text{2 + number of digit 1 training examples}}$$"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Adding 1 to the numerator count ensures probability value doesnot become 0. Adding 2 to the denominator reflects the number of states that are possible for the attribute under consideration. In this case we have binary attributes."
]
},
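{
"cell_type": "markdown",
"metadata": {},
"source": [
"For concreteness (an illustrative sketch, not part of the original notebook), here is the smoothing arithmetic for one hypothetical segment/digit pair."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Hypothetical example: 9 ones observed for a segment among 100 training examples\n",
"count_ones = 9\n",
"n_examples = 100\n",
"p_raw = count_ones / n_examples                    # 0.09; could be 0 for a rare event\n",
"p_smoothed = (1 + count_ones) / (2 + n_examples)   # Laplace-smoothed, never 0\n",
"p_raw, p_smoothed"
]
},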
{
"cell_type": "markdown",
"metadata": {},
"source": [
"> **So for our case lets calculate the probabilities of each segment 1 with their respective digits producing $10\\;digits\\times7\\;segments$ matrix**"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>s1</th>\n",
" <th>s2</th>\n",
" <th>s3</th>\n",
" <th>s4</th>\n",
" <th>s5</th>\n",
" <th>s6</th>\n",
" <th>s7</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>0.892157</td>\n",
" <td>0.921569</td>\n",
" <td>0.862745</td>\n",
" <td>0.931373</td>\n",
" <td>0.862745</td>\n",
" <td>0.843137</td>\n",
" <td>0.088235</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>0.107843</td>\n",
" <td>0.960784</td>\n",
" <td>0.901961</td>\n",
" <td>0.117647</td>\n",
" <td>0.127451</td>\n",
" <td>0.127451</td>\n",
" <td>0.117647</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>0.882353</td>\n",
" <td>0.921569</td>\n",
" <td>0.078431</td>\n",
" <td>0.882353</td>\n",
" <td>0.882353</td>\n",
" <td>0.117647</td>\n",
" <td>0.911765</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>0.882353</td>\n",
" <td>0.892157</td>\n",
" <td>0.921569</td>\n",
" <td>0.872549</td>\n",
" <td>0.088235</td>\n",
" <td>0.039216</td>\n",
" <td>0.892157</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>0.127451</td>\n",
" <td>0.882353</td>\n",
" <td>0.882353</td>\n",
" <td>0.098039</td>\n",
" <td>0.107843</td>\n",
" <td>0.911765</td>\n",
" <td>0.911765</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>0.882353</td>\n",
" <td>0.147059</td>\n",
" <td>0.862745</td>\n",
" <td>0.911765</td>\n",
" <td>0.088235</td>\n",
" <td>0.882353</td>\n",
" <td>0.852941</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>0.088235</td>\n",
" <td>0.107843</td>\n",
" <td>0.843137</td>\n",
" <td>0.872549</td>\n",
" <td>0.911765</td>\n",
" <td>0.901961</td>\n",
" <td>0.921569</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>0.911765</td>\n",
" <td>0.892157</td>\n",
" <td>0.882353</td>\n",
" <td>0.098039</td>\n",
" <td>0.068627</td>\n",
" <td>0.127451</td>\n",
" <td>0.098039</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>0.823529</td>\n",
" <td>0.901961</td>\n",
" <td>0.852941</td>\n",
" <td>0.901961</td>\n",
" <td>0.931373</td>\n",
" <td>0.901961</td>\n",
" <td>0.901961</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>0.862745</td>\n",
" <td>0.882353</td>\n",
" <td>0.950980</td>\n",
" <td>0.078431</td>\n",
" <td>0.078431</td>\n",
" <td>0.852941</td>\n",
" <td>0.794118</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" s1 s2 s3 s4 s5 s6 s7\n",
"0 0.892157 0.921569 0.862745 0.931373 0.862745 0.843137 0.088235\n",
"1 0.107843 0.960784 0.901961 0.117647 0.127451 0.127451 0.117647\n",
"2 0.882353 0.921569 0.078431 0.882353 0.882353 0.117647 0.911765\n",
"3 0.882353 0.892157 0.921569 0.872549 0.088235 0.039216 0.892157\n",
"4 0.127451 0.882353 0.882353 0.098039 0.107843 0.911765 0.911765\n",
"5 0.882353 0.147059 0.862745 0.911765 0.088235 0.882353 0.852941\n",
"6 0.088235 0.107843 0.843137 0.872549 0.911765 0.901961 0.921569\n",
"7 0.911765 0.892157 0.882353 0.098039 0.068627 0.127451 0.098039\n",
"8 0.823529 0.901961 0.852941 0.901961 0.931373 0.901961 0.901961\n",
"9 0.862745 0.882353 0.950980 0.078431 0.078431 0.852941 0.794118"
]
},
"execution_count": 8,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prob = []\n",
"for dig in range(10):\n",
" arr = [0]*7\n",
" t_set = training_set[:][dig]\n",
" for row in t_set:\n",
" for index, segment in enumerate(row):\n",
" if(segment == 1):\n",
" arr[index] += 1\n",
" for index, value in enumerate(arr):\n",
" arr[index] = (value+1)/(2+examples)\n",
" prob.append(arr)\n",
"df = pd.DataFrame(prob)\n",
"df.rename(columns={0: \"s1\", 1: \"s2\",2: \"s3\", 3:\"s4\", 4: \"s5\", 5: \"s6\", 6: \"s7\"}, inplace=True)\n",
"df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Testing the model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Having calculated conditional probabilities, we are now ready to see how well our classifier will work on test examples. For this, we first generate three test examples for each digit following the steps outlined earlier"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"examples_test = 3"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>0</th>\n",
" <th>1</th>\n",
" <th>2</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>[1, 1, 1, 0, 1, 1, 0]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 0]</td>\n",
" <td>[0, 1, 1, 1, 1, 1, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>[0, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[0, 1, 1, 0, 0, 0, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 0, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>[1, 1, 0, 1, 1, 0, 1]</td>\n",
" <td>[1, 1, 0, 1, 0, 0, 1]</td>\n",
" <td>[0, 1, 0, 1, 1, 0, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>[1, 1, 1, 1, 0, 0, 1]</td>\n",
" <td>[1, 1, 1, 1, 0, 0, 1]</td>\n",
" <td>[0, 0, 0, 1, 0, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>4</th>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[0, 1, 1, 0, 0, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>5</th>\n",
" <td>[0, 0, 1, 1, 0, 1, 1]</td>\n",
" <td>[1, 0, 1, 1, 0, 1, 1]</td>\n",
" <td>[1, 0, 1, 1, 1, 0, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>6</th>\n",
" <td>[0, 0, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 0, 1, 1, 1, 1, 1]</td>\n",
" <td>[0, 0, 0, 1, 1, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>7</th>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" <td>[1, 1, 1, 0, 0, 0, 0]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>8</th>\n",
" <td>[0, 1, 1, 1, 1, 1, 1]</td>\n",
" <td>[1, 1, 1, 1, 1, 1, 1]</td>\n",
" <td>[1, 1, 0, 1, 1, 1, 1]</td>\n",
" </tr>\n",
" <tr>\n",
" <th>9</th>\n",
" <td>[1, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 0, 0, 1, 1]</td>\n",
" <td>[1, 1, 1, 0, 0, 0, 1]</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" 0 1 2\n",
"0 [1, 1, 1, 0, 1, 1, 0] [1, 1, 1, 1, 1, 1, 0] [0, 1, 1, 1, 1, 1, 0]\n",
"1 [0, 1, 1, 0, 0, 0, 0] [0, 1, 1, 0, 0, 0, 1] [0, 1, 1, 0, 0, 0, 0]\n",
"2 [1, 1, 0, 1, 1, 0, 1] [1, 1, 0, 1, 0, 0, 1] [0, 1, 0, 1, 1, 0, 1]\n",
"3 [1, 1, 1, 1, 0, 0, 1] [1, 1, 1, 1, 0, 0, 1] [0, 0, 0, 1, 0, 1, 1]\n",
"4 [0, 1, 1, 0, 0, 1, 1] [0, 1, 1, 0, 0, 1, 1] [0, 1, 1, 0, 0, 1, 1]\n",
"5 [0, 0, 1, 1, 0, 1, 1] [1, 0, 1, 1, 0, 1, 1] [1, 0, 1, 1, 1, 0, 1]\n",
"6 [0, 0, 1, 1, 1, 1, 1] [0, 0, 1, 1, 1, 1, 1] [0, 0, 0, 1, 1, 1, 1]\n",
"7 [1, 1, 1, 0, 0, 0, 0] [1, 1, 1, 0, 0, 0, 0] [1, 1, 1, 0, 0, 0, 0]\n",
"8 [0, 1, 1, 1, 1, 1, 1] [1, 1, 1, 1, 1, 1, 1] [1, 1, 0, 1, 1, 1, 1]\n",
"9 [1, 1, 1, 0, 0, 1, 1] [1, 1, 1, 0, 0, 1, 1] [1, 1, 1, 0, 0, 0, 1]"
]
},
"execution_count": 10,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"test_set = []\n",
"test_labels = []\n",
"for index, dig in enumerate(digs):\n",
" arr = []\n",
" labels = [index]*examples_test\n",
" for example in range(examples_test):\n",
" arr.append(generateDigit(dig, noise_level))\n",
" test_set.append(arr)\n",
" test_labels.append(labels) \n",
"pd.DataFrame(test_set)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since we got the test data set, lets calculate the probabilities of each example using naive bayes. Here using above computed probability matrix we perform conditional probability for each digit on set of 7 segments in each example and pick the max probable class as class of our example segment set. (Here class means digit)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[[0, 0, 0],\n",
" [1, 1, 1],\n",
" [2, 2, 2],\n",
" [3, 3, 5],\n",
" [4, 4, 4],\n",
" [5, 5, 3],\n",
" [6, 6, 6],\n",
" [7, 7, 7],\n",
" [8, 8, 8],\n",
" [9, 9, 3]]"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"pred_labels = []\n",
"for index_dig, dig in enumerate(test_set):\n",
" l = []\n",
" for example in dig:\n",
" prob_arr = []\n",
" for num in range(10):\n",
" p = 1\n",
" for index_seg, seg in enumerate(example):\n",
" if (seg == 1):\n",
" p *= prob[num][index_seg]\n",
" else:\n",
" p *= (1-prob[num][index_seg])\n",
" prob_arr.append(p)\n",
" l.append(prob_arr.index(max(prob_arr)))\n",
" pred_labels.append(l)\n",
"pred_labels"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Accuracy"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As we got out predicted and test labels, lets just find the simple accuracy score by comparing both of them and dividing it by the total number of example test cases. This gives us the accuracy of our model"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Accuracy score of our model is: 0.9\n"
]
}
],
"source": [
"A = np.array(test_labels)\n",
"B = np.array(pred_labels)\n",
"accuracy = (A == B).sum()/(examples_test*10)\n",
"print(f\"Accuracy score of our model is: {accuracy}\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.12"
},
"vscode": {
"interpreter": {
"hash": "6863e040e8981097dbf29d10dc72ab85d80010d98b5512de1073715d1caccbff"
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}