@jamesloyys
Created May 14, 2018 08:54
import numpy as np

# helper functions assumed by the class below
def sigmoid(t):
    return 1/(1+np.exp(-t))

def sigmoid_derivative(p):
    # p is the sigmoid's output, so sigmoid'(t) = p*(1-p)
    return p * (1 - p)

class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1],4)
        self.weights2 = np.random.rand(4,1)
        self.y = y
        self.output = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))

    def backprop(self):
        # application of the chain rule to find the derivative of the loss
        # function with respect to weights2 and weights1
        d_weights2 = np.dot(self.layer1.T, (2*(self.y - self.output) * sigmoid_derivative(self.output)))
        d_weights1 = np.dot(self.input.T, (np.dot(2*(self.y - self.output) * sigmoid_derivative(self.output), self.weights2.T) * sigmoid_derivative(self.layer1)))

        # update the weights with the derivative (slope) of the loss function
        self.weights1 += d_weights1
        self.weights2 += d_weights2
@AndyM10

AndyM10 commented Dec 9, 2019

So how would you modify this design for 2 hidden layers instead of just the 1?

@SibuleleMboyi

If I change this

self.weights1 = np.random.rand(self.input.shape[1],4) # considering we have 4 nodes in the hidden layer
self.weights2 = np.random.rand(4,1)

to this

self.weights1 = np.random.rand(self.input.shape[1],100) # considering we have 100 nodes in the hidden layer
self.weights2 = np.random.rand(100,1)

then I'm only getting 1 in the output everywhere from the start (without training).

Why would adding more hidden nodes cause this?

I am also stuck here, did you figure it out?
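
A likely explanation: np.random.rand draws every weight uniformly from [0, 1), so all weights are positive. With 100 hidden nodes, each output pre-activation is a sum of roughly 100 positive terms, which pushes the sigmoid deep into saturation, so the untrained output is ~1 everywhere, and since the gradient there is nearly zero, training stalls too. With only 4 hidden nodes the sums stay small enough to avoid this. A minimal sketch demonstrating the effect; the zero-centered re-initialization at the end is one common remedy, not part of the original gist:

import numpy as np

def sigmoid(t):
    return 1/(1+np.exp(-t))

X = np.array(([0,0,1],[0,1,1],[1,0,1],[1,1,1]), dtype=float)

# every weight is positive, drawn from [0, 1)
weights1 = np.random.rand(X.shape[1], 100)
weights2 = np.random.rand(100, 1)

layer1 = sigmoid(np.dot(X, weights1))     # entries well above 0.5
print(sigmoid(np.dot(layer1, weights2)))  # ~1.0 everywhere: 100 positive terms saturate the sigmoid

# one common remedy: small, zero-centered initial weights
weights1 = np.random.randn(X.shape[1], 100) * 0.1
weights2 = np.random.randn(100, 1) * 0.1
layer1 = sigmoid(np.dot(X, weights1))
print(sigmoid(np.dot(layer1, weights2)))  # near 0.5, so gradients can flow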

@djamelherbadji

I want the Python code for a neural network where the input layer is composed of two neurons, the hidden layer consists of two sub-layers of 20 and 10 neurons (first and second sub-layer respectively), and the output layer is composed of 5 neurons.
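
Not the original author, but here is a minimal sketch of that architecture (2 input neurons, hidden sub-layers of 20 and 10, 5 output neurons) following the same pattern as the gist, assuming numpy and the sigmoid helpers above are in scope. The layer sizes come from your description; the class name and everything else are assumptions. The matching backprop is sketched under CoolCat467's comment below.

class TwoHiddenLayerNetwork:
    def __init__(self, x, y):
        self.input = x  # x should have 2 feature columns
        self.weights1 = np.random.rand(self.input.shape[1], 20)  # input -> 20
        self.weights2 = np.random.rand(20, 10)                   # 20 -> 10
        self.weights3 = np.random.rand(10, 5)                    # 10 -> 5 outputs
        self.y = y      # y should have 5 columns
        self.output = np.zeros(self.y.shape)

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.layer2 = sigmoid(np.dot(self.layer1, self.weights2))
        self.output = sigmoid(np.dot(self.layer2, self.weights3))
        return self.output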

@CoolCat467

This system will only work if the network has exactly 2 layers of weights, no more; that is why your tests with more layers fail. The backprop function only updates the weights of the last layer and the second-to-last layer.
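
Exactly. Extending backprop to the two-hidden-layer sketch above takes one more chain-rule step: the output error has to be propagated back through weights3 and then weights2 before it reaches weights1. A sketch using the same loss and update conventions as the gist (the delta names are assumptions):

    def backprop(self):
        # error signal at the output layer
        delta3 = 2*(self.y - self.output) * sigmoid_derivative(self.output)
        d_weights3 = np.dot(self.layer2.T, delta3)

        # propagate the error back through weights3 ...
        delta2 = np.dot(delta3, self.weights3.T) * sigmoid_derivative(self.layer2)
        d_weights2 = np.dot(self.layer1.T, delta2)

        # ... and then through weights2
        delta1 = np.dot(delta2, self.weights2.T) * sigmoid_derivative(self.layer1)
        d_weights1 = np.dot(self.input.T, delta1)

        self.weights1 += d_weights1
        self.weights2 += d_weights2
        self.weights3 += d_weights3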

@ClackHack

Does anyone know how to use more than one hidden layer? I have most of it figured out, but I am mostly stuck on the backpropagation.

@ClackHack

Sorta figured it out, but how do I backpropagate with variable sizes?
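
Not the original author, but one way to handle an arbitrary number of layers is to keep the weight matrices in a list and run the same delta recurrence in a backward loop. A minimal sketch under the gist's conventions, assuming numpy and the sigmoid helpers above are in scope; the class name, the hidden_sizes parameter, and the other names are assumptions:

class DeepNetwork:
    def __init__(self, x, y, hidden_sizes):
        self.input = x
        self.y = y
        # e.g. hidden_sizes=[20, 10] builds input -> 20 -> 10 -> output
        sizes = [x.shape[1]] + list(hidden_sizes) + [y.shape[1]]
        self.weights = [np.random.rand(a, b) for a, b in zip(sizes[:-1], sizes[1:])]

    def feedforward(self):
        # keep every activation around; backprop needs them
        self.layers = [self.input]
        for w in self.weights:
            self.layers.append(sigmoid(np.dot(self.layers[-1], w)))
        self.output = self.layers[-1]
        return self.output

    def backprop(self):
        # same recurrence as the fixed-depth version, run from the back
        delta = 2*(self.y - self.output) * sigmoid_derivative(self.output)
        for i in reversed(range(len(self.weights))):
            d_w = np.dot(self.layers[i].T, delta)
            if i > 0:  # propagate the error before updating this layer's weights
                delta = np.dot(delta, self.weights[i].T) * sigmoid_derivative(self.layers[i])
            self.weights[i] += d_w

With hidden_sizes=[4] this reproduces the original gist's network, and with hidden_sizes=[20, 10] plus a 5-column y it gives the 2-20-10-5 shape requested above.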

@playermarker

Could you tell me how to add inputs?

Hi Mustafa, here is the code that I wrote which lets you give inputs, train the network, and keep track of the loss. Best, Madhuri

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tuesday Oct 2, 2018
@author: Madhuri Suthar, PhD Candidate in Electrical and Computer Engineering, UCLA
"""

# Imports
import numpy as np 
      
# Each row is a training example, each column is a feature  [X1, X2, X3]
X=np.array(([0,0,1],[0,1,1],[1,0,1],[1,1,1]), dtype=float)
y=np.array(([0],[1],[1],[0]), dtype=float)

# Define useful functions    

# Activation function
def sigmoid(t):
    return 1/(1+np.exp(-t))

# Derivative of sigmoid; p is the sigmoid's output, not the raw input
def sigmoid_derivative(p):
    return p * (1 - p)

# Class definition
class NeuralNetwork:
    def __init__(self, x, y):
        self.input = x
        self.weights1 = np.random.rand(self.input.shape[1],4) # considering we have 4 nodes in the hidden layer
        self.weights2 = np.random.rand(4,1)
        self.y = y
        self.output = np.zeros(y.shape)
        
    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.layer2 = sigmoid(np.dot(self.layer1, self.weights2))
        return self.layer2  # train() stores this in self.output before backprop runs
        
    def backprop(self):
        d_weights2 = np.dot(self.layer1.T, 2*(self.y - self.output)*sigmoid_derivative(self.output))
        d_weights1 = np.dot(self.input.T, np.dot(2*(self.y - self.output)*sigmoid_derivative(self.output), self.weights2.T)*sigmoid_derivative(self.layer1))
    
        self.weights1 += d_weights1
        self.weights2 += d_weights2

    def train(self, X, y):
        self.output = self.feedforward()
        self.backprop()
        

NN = NeuralNetwork(X,y)
for i in range(1500): # trains the NN 1,500 times
    if i % 100 == 0:
        print ("for iteration # " + str(i) + "\n")
        print ("Input : \n" + str(X))
        print ("Actual Output: \n" + str(y))
        print ("Predicted Output: \n" + str(NN.feedforward()))
        print ("Loss: \n" + str(np.mean(np.square(y - NN.feedforward())))) # mean sum squared loss
        print ("\n")
  
    NN.train(X, y)

Hi, I have an error after running this code. It says "unsupported operand type(s) for -: 'float' and 'NoneType'". How could I solve it?

@Mrezakhodashenas

I have the same issue.
Any tips, guys?
It seems NN.feedforward() is returning None!
Thanks in advance.
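
A likely cause, judging from the traceback: in the original gist, feedforward() assigns self.output but has no return statement, so it returns None. If that version is combined with the train() method above, self.output = self.feedforward() stores None, and y - self.output raises exactly this TypeError. The fix, assuming that is what happened, is to make feedforward() return its result, as Madhuri's version does:

    def feedforward(self):
        self.layer1 = sigmoid(np.dot(self.input, self.weights1))
        self.output = sigmoid(np.dot(self.layer1, self.weights2))
        return self.output  # without this return, train() stores None in self.output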
