@NiharG15
Last active February 27, 2024 14:39
Iris Classification using a Neural Network

A Simple Neural Network in Keras + TensorFlow to classify the Iris Dataset

The following Python packages are required to run this file:

    pip install tensorflow
    pip install scikit-learn
    pip install keras

Then run with:

    $ KERAS_BACKEND=tensorflow python3 iris-keras-nn.py
"""
A simple neural network written in Keras (TensorFlow backend) to classify the Iris dataset
"""
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
iris_data = load_iris() # load the iris dataset
print('Example data: ')
print(iris_data.data[:5])
print('Example labels: ')
print(iris_data.target[:5])
x = iris_data.data
y_ = iris_data.target.reshape(-1, 1) # Convert data to a single column
# One-hot encode the class labels
encoder = OneHotEncoder(sparse=False)  # note: scikit-learn >= 1.2 renames this argument to sparse_output
y = encoder.fit_transform(y_)
#print(y)
# Split the data for training and testing
train_x, test_x, train_y, test_y = train_test_split(x, y, test_size=0.20)
# Build the model
model = Sequential()
model.add(Dense(10, input_shape=(4,), activation='relu', name='fc1'))
model.add(Dense(10, activation='relu', name='fc2'))
model.add(Dense(3, activation='softmax', name='output'))
# Adam optimizer with a learning rate of 0.001
optimizer = Adam(lr=0.001)  # note: newer Keras versions use learning_rate instead of lr
model.compile(optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
print('Neural Network Model Summary: ')
print(model.summary())
# Train the model
model.fit(train_x, train_y, verbose=2, batch_size=5, epochs=200)
# Test on unseen data
results = model.evaluate(test_x, test_y)
print('Final test set loss: {:4f}'.format(results[0]))
print('Final test set accuracy: {:4f}'.format(results[1]))
Example data:
[[ 5.1 3.5 1.4 0.2]
[ 4.9 3. 1.4 0.2]
[ 4.7 3.2 1.3 0.2]
[ 4.6 3.1 1.5 0.2]
[ 5. 3.6 1.4 0.2]]
Example labels:
[0 0 0 0 0]
Neural Network Model Summary:
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
fc1 (Dense)                  (None, 10)                50
_________________________________________________________________
fc2 (Dense)                  (None, 10)                110
_________________________________________________________________
output (Dense)               (None, 3)                 33
=================================================================
Total params: 193
Trainable params: 193
Non-trainable params: 0
_________________________________________________________________
None
Epoch 1/200
0s - loss: 1.0914 - acc: 0.5083
Epoch 2/200
0s - loss: 1.0029 - acc: 0.6000
Epoch 3/200
0s - loss: 0.9409 - acc: 0.6167
Epoch 4/200
0s - loss: 0.8829 - acc: 0.7000
Epoch 5/200
0s - loss: 0.8323 - acc: 0.9000
Epoch 6/200
0s - loss: 0.7901 - acc: 0.8333
Epoch 7/200
0s - loss: 0.7419 - acc: 0.9583
Epoch 8/200
0s - loss: 0.6958 - acc: 0.9500
Epoch 9/200
0s - loss: 0.6546 - acc: 0.9083
Epoch 10/200
0s - loss: 0.6163 - acc: 0.9583
Epoch 11/200
0s - loss: 0.5821 - acc: 0.9333
Epoch 12/200
0s - loss: 0.5443 - acc: 0.9583
Epoch 13/200
0s - loss: 0.5146 - acc: 0.9417
Epoch 14/200
0s - loss: 0.4817 - acc: 0.9500
Epoch 15/200
0s - loss: 0.4560 - acc: 0.9583
Epoch 16/200
0s - loss: 0.4284 - acc: 0.9750
Epoch 17/200
0s - loss: 0.4065 - acc: 0.9750
Epoch 18/200
0s - loss: 0.3842 - acc: 0.9667
Epoch 19/200
0s - loss: 0.3667 - acc: 0.9917
Epoch 20/200
0s - loss: 0.3502 - acc: 0.9417
Epoch 21/200
0s - loss: 0.3316 - acc: 0.9667
Epoch 22/200
0s - loss: 0.3221 - acc: 0.9583
Epoch 23/200
0s - loss: 0.3063 - acc: 0.9750
Epoch 24/200
0s - loss: 0.2895 - acc: 0.9667
Epoch 25/200
0s - loss: 0.2759 - acc: 0.9667
Epoch 26/200
0s - loss: 0.2640 - acc: 0.9750
Epoch 27/200
0s - loss: 0.2520 - acc: 0.9833
Epoch 28/200
0s - loss: 0.2419 - acc: 0.9833
Epoch 29/200
0s - loss: 0.2336 - acc: 0.9833
Epoch 30/200
0s - loss: 0.2241 - acc: 0.9750
Epoch 31/200
0s - loss: 0.2164 - acc: 0.9750
Epoch 32/200
0s - loss: 0.2137 - acc: 0.9667
Epoch 33/200
0s - loss: 0.2034 - acc: 0.9583
Epoch 34/200
0s - loss: 0.1955 - acc: 0.9750
Epoch 35/200
0s - loss: 0.1880 - acc: 0.9750
Epoch 36/200
0s - loss: 0.1803 - acc: 0.9750
Epoch 37/200
0s - loss: 0.1769 - acc: 0.9750
Epoch 38/200
0s - loss: 0.1717 - acc: 0.9750
Epoch 39/200
0s - loss: 0.1663 - acc: 0.9750
Epoch 40/200
0s - loss: 0.1608 - acc: 0.9750
Epoch 41/200
0s - loss: 0.1577 - acc: 0.9750
Epoch 42/200
0s - loss: 0.1524 - acc: 0.9750
Epoch 43/200
0s - loss: 0.1489 - acc: 0.9750
Epoch 44/200
0s - loss: 0.1429 - acc: 0.9750
Epoch 45/200
0s - loss: 0.1419 - acc: 0.9750
Epoch 46/200
0s - loss: 0.1396 - acc: 0.9750
Epoch 47/200
0s - loss: 0.1360 - acc: 0.9583
Epoch 48/200
0s - loss: 0.1317 - acc: 0.9833
Epoch 49/200
0s - loss: 0.1299 - acc: 0.9750
Epoch 50/200
0s - loss: 0.1263 - acc: 0.9750
Epoch 51/200
0s - loss: 0.1256 - acc: 0.9750
Epoch 52/200
0s - loss: 0.1209 - acc: 0.9750
Epoch 53/200
0s - loss: 0.1201 - acc: 0.9833
Epoch 54/200
0s - loss: 0.1159 - acc: 0.9750
Epoch 55/200
0s - loss: 0.1149 - acc: 0.9750
Epoch 56/200
0s - loss: 0.1109 - acc: 0.9750
Epoch 57/200
0s - loss: 0.1088 - acc: 0.9750
Epoch 58/200
0s - loss: 0.1092 - acc: 0.9750
Epoch 59/200
0s - loss: 0.1077 - acc: 0.9750
Epoch 60/200
0s - loss: 0.1049 - acc: 0.9750
Epoch 61/200
0s - loss: 0.1031 - acc: 0.9750
Epoch 62/200
0s - loss: 0.1039 - acc: 0.9667
Epoch 63/200
0s - loss: 0.1004 - acc: 0.9750
Epoch 64/200
0s - loss: 0.0988 - acc: 0.9750
Epoch 65/200
0s - loss: 0.0959 - acc: 0.9750
Epoch 66/200
0s - loss: 0.0964 - acc: 0.9750
Epoch 67/200
0s - loss: 0.0962 - acc: 0.9750
Epoch 68/200
0s - loss: 0.0944 - acc: 0.9750
Epoch 69/200
0s - loss: 0.0935 - acc: 0.9750
Epoch 70/200
0s - loss: 0.0956 - acc: 0.9750
Epoch 71/200
0s - loss: 0.0958 - acc: 0.9667
Epoch 72/200
0s - loss: 0.0900 - acc: 0.9750
Epoch 73/200
0s - loss: 0.0889 - acc: 0.9750
Epoch 74/200
0s - loss: 0.0883 - acc: 0.9750
Epoch 75/200
0s - loss: 0.0883 - acc: 0.9750
Epoch 76/200
0s - loss: 0.0940 - acc: 0.9833
Epoch 77/200
0s - loss: 0.0924 - acc: 0.9833
Epoch 78/200
0s - loss: 0.0851 - acc: 0.9750
Epoch 79/200
0s - loss: 0.0823 - acc: 0.9750
Epoch 80/200
0s - loss: 0.0821 - acc: 0.9750
Epoch 81/200
0s - loss: 0.0819 - acc: 0.9750
Epoch 82/200
0s - loss: 0.0813 - acc: 0.9750
Epoch 83/200
0s - loss: 0.0847 - acc: 0.9750
Epoch 84/200
0s - loss: 0.0830 - acc: 0.9750
Epoch 85/200
0s - loss: 0.0802 - acc: 0.9750
Epoch 86/200
0s - loss: 0.0828 - acc: 0.9750
Epoch 87/200
0s - loss: 0.0796 - acc: 0.9833
Epoch 88/200
0s - loss: 0.0763 - acc: 0.9833
Epoch 89/200
0s - loss: 0.0769 - acc: 0.9750
Epoch 90/200
0s - loss: 0.0768 - acc: 0.9750
Epoch 91/200
0s - loss: 0.0763 - acc: 0.9750
Epoch 92/200
0s - loss: 0.0754 - acc: 0.9750
Epoch 93/200
0s - loss: 0.0749 - acc: 0.9750
Epoch 94/200
0s - loss: 0.0748 - acc: 0.9750
Epoch 95/200
0s - loss: 0.0752 - acc: 0.9750
Epoch 96/200
0s - loss: 0.0771 - acc: 0.9750
Epoch 97/200
0s - loss: 0.0755 - acc: 0.9750
Epoch 98/200
0s - loss: 0.0792 - acc: 0.9667
Epoch 99/200
0s - loss: 0.0824 - acc: 0.9750
Epoch 100/200
0s - loss: 0.0758 - acc: 0.9750
Epoch 101/200
0s - loss: 0.0727 - acc: 0.9750
Epoch 102/200
0s - loss: 0.0712 - acc: 0.9750
Epoch 103/200
0s - loss: 0.0713 - acc: 0.9750
Epoch 104/200
0s - loss: 0.0704 - acc: 0.9750
Epoch 105/200
0s - loss: 0.0743 - acc: 0.9750
Epoch 106/200
0s - loss: 0.0751 - acc: 0.9833
Epoch 107/200
0s - loss: 0.0773 - acc: 0.9750
Epoch 108/200
0s - loss: 0.0710 - acc: 0.9750
Epoch 109/200
0s - loss: 0.0696 - acc: 0.9750
Epoch 110/200
0s - loss: 0.0748 - acc: 0.9750
Epoch 111/200
0s - loss: 0.0685 - acc: 0.9833
Epoch 112/200
0s - loss: 0.0661 - acc: 0.9750
Epoch 113/200
0s - loss: 0.0690 - acc: 0.9750
Epoch 114/200
0s - loss: 0.0728 - acc: 0.9667
Epoch 115/200
0s - loss: 0.0705 - acc: 0.9750
Epoch 116/200
0s - loss: 0.0695 - acc: 0.9833
Epoch 117/200
0s - loss: 0.0693 - acc: 0.9750
Epoch 118/200
0s - loss: 0.0666 - acc: 0.9750
Epoch 119/200
0s - loss: 0.0677 - acc: 0.9750
Epoch 120/200
0s - loss: 0.0671 - acc: 0.9750
Epoch 121/200
0s - loss: 0.0646 - acc: 0.9750
Epoch 122/200
0s - loss: 0.0652 - acc: 0.9833
Epoch 123/200
0s - loss: 0.0658 - acc: 0.9833
Epoch 124/200
0s - loss: 0.0661 - acc: 0.9750
Epoch 125/200
0s - loss: 0.0688 - acc: 0.9667
Epoch 126/200
0s - loss: 0.0693 - acc: 0.9833
Epoch 127/200
0s - loss: 0.0619 - acc: 0.9750
Epoch 128/200
0s - loss: 0.0641 - acc: 0.9833
Epoch 129/200
0s - loss: 0.0736 - acc: 0.9750
Epoch 130/200
0s - loss: 0.0640 - acc: 0.9833
Epoch 131/200
0s - loss: 0.0642 - acc: 0.9750
Epoch 132/200
0s - loss: 0.0660 - acc: 0.9750
Epoch 133/200
0s - loss: 0.0621 - acc: 0.9750
Epoch 134/200
0s - loss: 0.0663 - acc: 0.9750
Epoch 135/200
0s - loss: 0.0628 - acc: 0.9750
Epoch 136/200
0s - loss: 0.0660 - acc: 0.9750
Epoch 137/200
0s - loss: 0.0602 - acc: 0.9750
Epoch 138/200
0s - loss: 0.0624 - acc: 0.9667
Epoch 139/200
0s - loss: 0.0642 - acc: 0.9833
Epoch 140/200
0s - loss: 0.0638 - acc: 0.9750
Epoch 141/200
0s - loss: 0.0718 - acc: 0.9583
Epoch 142/200
0s - loss: 0.0636 - acc: 0.9750
Epoch 143/200
0s - loss: 0.0629 - acc: 0.9750
Epoch 144/200
0s - loss: 0.0629 - acc: 0.9750
Epoch 145/200
0s - loss: 0.0633 - acc: 0.9667
Epoch 146/200
0s - loss: 0.0641 - acc: 0.9750
Epoch 147/200
0s - loss: 0.0636 - acc: 0.9750
Epoch 148/200
0s - loss: 0.0623 - acc: 0.9833
Epoch 149/200
0s - loss: 0.0614 - acc: 0.9833
Epoch 150/200
0s - loss: 0.0687 - acc: 0.9750
Epoch 151/200
0s - loss: 0.0695 - acc: 0.9667
Epoch 152/200
0s - loss: 0.0634 - acc: 0.9750
Epoch 153/200
0s - loss: 0.0604 - acc: 0.9750
Epoch 154/200
0s - loss: 0.0594 - acc: 0.9750
Epoch 155/200
0s - loss: 0.0622 - acc: 0.9750
Epoch 156/200
0s - loss: 0.0641 - acc: 0.9667
Epoch 157/200
0s - loss: 0.0582 - acc: 0.9750
Epoch 158/200
0s - loss: 0.0588 - acc: 0.9750
Epoch 159/200
0s - loss: 0.0659 - acc: 0.9667
Epoch 160/200
0s - loss: 0.0571 - acc: 0.9833
Epoch 161/200
0s - loss: 0.0642 - acc: 0.9833
Epoch 162/200
0s - loss: 0.0604 - acc: 0.9833
Epoch 163/200
0s - loss: 0.0592 - acc: 0.9750
Epoch 164/200
0s - loss: 0.0589 - acc: 0.9750
Epoch 165/200
0s - loss: 0.0591 - acc: 0.9750
Epoch 166/200
0s - loss: 0.0590 - acc: 0.9833
Epoch 167/200
0s - loss: 0.0605 - acc: 0.9833
Epoch 168/200
0s - loss: 0.0597 - acc: 0.9750
Epoch 169/200
0s - loss: 0.0636 - acc: 0.9833
Epoch 170/200
0s - loss: 0.0597 - acc: 0.9667
Epoch 171/200
0s - loss: 0.0590 - acc: 0.9667
Epoch 172/200
0s - loss: 0.0646 - acc: 0.9833
Epoch 173/200
0s - loss: 0.0654 - acc: 0.9750
Epoch 174/200
0s - loss: 0.0661 - acc: 0.9833
Epoch 175/200
0s - loss: 0.0607 - acc: 0.9750
Epoch 176/200
0s - loss: 0.0707 - acc: 0.9833
Epoch 177/200
0s - loss: 0.0651 - acc: 0.9750
Epoch 178/200
0s - loss: 0.0585 - acc: 0.9750
Epoch 179/200
0s - loss: 0.0575 - acc: 0.9750
Epoch 180/200
0s - loss: 0.0560 - acc: 0.9750
Epoch 181/200
0s - loss: 0.0576 - acc: 0.9750
Epoch 182/200
0s - loss: 0.0574 - acc: 0.9750
Epoch 183/200
0s - loss: 0.0635 - acc: 0.9750
Epoch 184/200
0s - loss: 0.0570 - acc: 0.9750
Epoch 185/200
0s - loss: 0.0559 - acc: 0.9833
Epoch 186/200
0s - loss: 0.0577 - acc: 0.9667
Epoch 187/200
0s - loss: 0.0569 - acc: 0.9750
Epoch 188/200
0s - loss: 0.0575 - acc: 0.9833
Epoch 189/200
0s - loss: 0.0592 - acc: 0.9667
Epoch 190/200
0s - loss: 0.0564 - acc: 0.9750
Epoch 191/200
0s - loss: 0.0560 - acc: 0.9833
Epoch 192/200
0s - loss: 0.0619 - acc: 0.9667
Epoch 193/200
0s - loss: 0.0655 - acc: 0.9750
Epoch 194/200
0s - loss: 0.0654 - acc: 0.9750
Epoch 195/200
0s - loss: 0.0570 - acc: 0.9750
Epoch 196/200
0s - loss: 0.0578 - acc: 0.9750
Epoch 197/200
0s - loss: 0.0574 - acc: 0.9750
Epoch 198/200
0s - loss: 0.0556 - acc: 0.9750
Epoch 199/200
0s - loss: 0.0563 - acc: 0.9833
Epoch 200/200
0s - loss: 0.0632 - acc: 0.9750
30/30 [==============================] - 0s
Final test set loss: 0.072233
Final test set accuracy: 0.966667
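
Once trained, the model can also be used to classify new, unseen measurements. Below is a minimal sketch (not part of the original gist) that assumes the `model` and `iris_data` variables from the script above are still in scope:

    import numpy as np

    # A hypothetical new flower: sepal length, sepal width, petal length, petal width (cm)
    new_sample = np.array([[5.9, 3.0, 5.1, 1.8]])

    # model.predict returns the softmax probabilities for the 3 classes
    probabilities = model.predict(new_sample)
    predicted_class = np.argmax(probabilities, axis=1)[0]

    print('Class probabilities:', probabilities[0])
    print('Predicted species:', iris_data.target_names[predicted_class])
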
@niranjandasMM

Can you tell me, for a simple classification (e.g. where the outputs are 0, 1, 1, 0), why only a percentage comes out? I mean, there are four outputs. Here is the code:

import numpy as np
from numpy import exp, array, random, dot


class NeuralNetwork():
    def __init__(self):
        # Seed the random number generator, so it generates the same numbers
        # every time the program runs.
        random.seed(1)

        # We model a single neuron, with 3 input connections and 1 output connection.
        # We assign random weights to a 3 x 1 matrix, with values in the range -1 to 1
        # and mean 0.
        self.synaptic_weights = 2 * random.random((3, 1)) - 1

        # self.synaptic_weights = 2, 4, 6  # commented out: assigning a tuple here would break training

    # The Sigmoid function, which describes an S-shaped curve.
    # We pass the weighted sum of the inputs through this function to
    # normalise them between 0 and 1.
    def __sigmoid(self, x):
        return 1 / (1 + exp(-x))

    # The derivative of the Sigmoid function.
    # This is the gradient of the Sigmoid curve.
    # It indicates how confident we are about the existing weight.
    def __sigmoid_derivative(self, x):
        return x * (1 - x)

    # We train the neural network through a process of trial and error,
    # adjusting the synaptic weights each time.
    def train(self, training_set_inputs, training_set_outputs, number_of_training_iterations):
        for iteration in range(number_of_training_iterations):
            # Pass the training set through our neural network (a single neuron).
            output = self.think(training_set_inputs)

            # Calculate the error (the difference between the desired output
            # and the predicted output).
            error = training_set_outputs - output

            # Multiply the error by the input and again by the gradient of the Sigmoid curve.
            # This means less confident weights are adjusted more.
            # Inputs which are zero do not cause changes to the weights.
            adjustment = dot(training_set_inputs.T, error * self.__sigmoid_derivative(output))

            # Adjust the weights.
            self.synaptic_weights += adjustment

    # The neural network thinks.
    def think(self, inputs):
        # Pass inputs through our neural network (our single neuron).
        print(f"the think is : {self.__sigmoid(dot(inputs, self.synaptic_weights))}")
        return self.__sigmoid(dot(inputs, self.synaptic_weights))


if __name__ == "__main__":
    # Initialise a single-neuron neural network.
    neural_network = NeuralNetwork()

    print("Random starting synaptic weights: ")
    print(neural_network.synaptic_weights)

    # The training set. We have 4 examples, each consisting of 3 input values
    # and 1 output value.
    training_set_inputs = array([[0, 0, 1], [1, 1, 1], [1, 0, 1], [0, 1, 1]])
    training_set_outputs = array([[0, 1, 1, 0]]).T

    # Train the neural network using the training set.
    # Do it 10,000 times and make small adjustments each time.
    neural_network.train(training_set_inputs, training_set_outputs, 10000)

    print("New synaptic weights after training: ")
    print(neural_network.synaptic_weights)

    # Test the neural network with a new situation.
    print("Considering new situation [1, 0, 1] -> ?: ")
    y_pred = neural_network.think(array([1, 0, 1]))
    print(y_pred)
    print(np.round(y_pred))

The output is some percentage that rounds to 0 or 1, which I get, but what exactly is that percentage? Why is it not showing the percentage for the other outputs? (New to ANNs)
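
In this single-neuron setup the output is one number per example: an estimate of the probability that the example belongs to class 1, with the probability of class 0 being its complement. A small illustrative sketch with made-up weights (not values produced by the code above):

    import numpy as np

    def sigmoid(x):
        return 1 / (1 + np.exp(-x))

    # Hypothetical trained weights for the 3 inputs, and one example
    weights = np.array([9.67, -0.21, -4.63])
    example = np.array([1, 0, 1])

    p_class_1 = sigmoid(np.dot(example, weights))  # probability of class 1
    p_class_0 = 1 - p_class_1                      # probability of class 0
    print(p_class_1, p_class_0)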

@souravabhishek

What about a softmax classifier with 3 neurons in the output layer and no hidden layer?
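
For reference, such a model is just a single softmax layer mapping the 4 input features directly to the 3 class probabilities, which amounts to multinomial logistic regression. A minimal Keras sketch of that idea (an interpretation of the question, not code from the gist):

    from keras.models import Sequential
    from keras.layers import Dense
    from keras.optimizers import Adam

    # Softmax classifier with no hidden layer: 4 inputs -> 3 class probabilities
    shallow_model = Sequential()
    shallow_model.add(Dense(3, input_shape=(4,), activation='softmax', name='output'))
    shallow_model.compile(Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])

Such a model can only learn linear decision boundaries in the input space, but the Iris classes are close to linearly separable, so it may still perform reasonably well.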

@Elijah57

@niranjandasMM, what is the idea/concept you are trying to explain in your code?

@Elijah57

What about a softmax classifier with 3 neurons in the output layer and no hidden layer?

Softmax is used in the output layer; the softmax activation function is generally used when you are dealing with a multi-class classification problem.
Since we have 3 classes, the output layer has 3 neuron units, and the softmax activation produces 3 outputs, which are the probabilities of the input belonging to each of the classes. Using np.argmax() we can see which class an input feature vector most likely belongs to.
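
A tiny illustration of that point, using made-up probabilities rather than real model output:

    import numpy as np

    # Hypothetical softmax output for one Iris sample: [setosa, versicolor, virginica]
    probs = np.array([0.02, 0.91, 0.07])

    print(probs.sum())       # the three probabilities sum to 1.0
    print(np.argmax(probs))  # 1 -> the most likely class for this sample (versicolor)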
