abhinishetye / CapsNet.md
Last active September 3, 2018 05:18
Introduction to Capsule Theory

Why do CNNs need millions of images to get trained? Neural networks are often described as replications of the human brain, yet our brain does not need millions of examples before it can recognize an object. So why do ConvNets have such a requirement?

The answer lies in how a CNN performs image recognition. A CNN relies almost entirely on pixel intensities: it detects which features are present in an image, largely irrespective of where those features sit relative to one another. Because this spatial relationship is never recorded, a CNN struggles to recognize the same object in a different pose or from a different viewing angle. For example, if you train a CNN on your front profile and then feed it your side profile, it will probably fail to recognize the face, because the spatial coordinate information was never captured.

To solve this problem, the concept of Capsule theory has been suggested. This theory works on the spatial orientation, or pose, of the detected features, so that the relative positions of parts are preserved.

Understanding Deep Neural Networks using Keras

For implementing a deep neural network in Python, we will be using the Keras library, since we need to perform many computationally heavy numerical tasks. Before getting to the implementation, let's understand the libraries that Keras builds on.

Theano:

Theano is an open-source numerical computation library for Python. It is very efficient at fast numerical computation expressed in Python syntax, and it can run on both the CPU and the GPU. (Using the GPU pays off when you have many heavy, parallelizable computations.)
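
As a rough illustration, here is a minimal sketch of Theano's symbolic style: declare symbolic variables, build an expression, and compile it into a callable function (the variable names below are just illustrative).

```python
import theano
import theano.tensor as T

# Declare symbolic scalar variables; no computation happens yet.
a = T.dscalar('a')
b = T.dscalar('b')

# Build a symbolic expression and compile it into a callable function.
expr = a * b + b ** 2
f = theano.function([a, b], expr)

print(f(2.0, 3.0))  # -> 15.0
```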

TensorFlow:

This is another open-source numerical computation library, and it too runs on both the CPU and the GPU. It was originally developed by researchers and engineers from the Google Brain team within Google's AI organization.
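
For comparison, here is a minimal sketch of the same kind of numerical computation in TensorFlow; it assumes TensorFlow 2.x, where operations execute eagerly, so no session setup is needed.

```python
import tensorflow as tf

# Two small constant matrices (values are made up).
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[5.0, 6.0], [7.0, 8.0]])

# Matrix multiplication; TensorFlow places it on the GPU automatically if one is available.
c = tf.matmul(a, b)
print(c.numpy())
```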

Understanding Artificial Neural Networks

If we take the word Artificial out of ANN, we are left with 'Neural Networks', which points to our brain. An ANN tries to mimic the human brain, hence the word Artificial. So, to understand ANNs, let's start with the basics of a neuron. Simply speaking, a neuron is a single unit that takes some input, computes a result and gives an output. Where does it take its input from? It takes input from several other neurons and passes its output on to other neurons.

Here the dendrites are the inputs, the cell body is responsible for computing the result, and the output is sent out through the axon. Coming back to the ANN, the network comprises a node --> cell nucleus; the node receives inputs initialized with some weights --> dendrites and synapses; and it gives an output --> axon.
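
To make the analogy concrete, here is a toy sketch (in plain NumPy, with made-up numbers) of what a single artificial neuron computes: a weighted sum of its inputs plus a bias, passed through an activation function.

```python
import numpy as np

def sigmoid(z):
    # Activation function: squashes the weighted sum into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron(inputs, weights, bias):
    # Dendrites/synapses -> weighted inputs; cell body -> weighted sum; axon -> activated output.
    z = np.dot(inputs, weights) + bias
    return sigmoid(z)

x = np.array([0.5, -1.0, 2.0])   # signals arriving from other neurons
w = np.array([0.4, 0.3, -0.2])   # synaptic weights
print(neuron(x, w, bias=0.1))
```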

Now that the basic structure is understood, let's move forward with the training process.
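
A minimal, illustrative Keras sketch of what that end-to-end training process can look like follows; the layer sizes, dummy data and hyperparameters are assumptions for illustration, not taken from any particular dataset.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Dummy data: 100 samples with 8 features each, and binary labels.
X = np.random.rand(100, 8)
y = np.random.randint(0, 2, size=(100,))

# Build the network: one hidden layer and a sigmoid output for binary classification.
model = Sequential()
model.add(Dense(6, activation='relu', input_dim=8))   # hidden layer
model.add(Dense(1, activation='sigmoid'))             # output layer

# Choose an optimizer and a cost (loss) function, then train with mini-batches.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X, y, batch_size=10, epochs=5)
```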

abhinishetye / 100daysLog.md
Last active November 9, 2019 17:34
100DaysofMLCode

100 Days Of ML Code

Hi! I am Abhini, a Machine Learning Enthusiast, and this is my log for the 100DaysOfMLCode Challenge.

Day 1: July 08, 2018

Today's Progress: Understood the basics of Neural Networks and how to build an ANN. Also practiced Python on HackerRank.

Thoughts: Cleared up ANN concepts that I had earlier found confusing, like activation and cost functions, batch and stochastic gradient descent, and backpropagation.
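
For reference, here is a toy sketch (with made-up data and learning rate) contrasting the two flavors of gradient descent mentioned above: batch gradient descent makes one update per pass over the whole dataset, while stochastic gradient descent updates after every single example.

```python
import numpy as np

# Synthetic data for a one-parameter linear model y = w * x (true slope is 3).
X = np.linspace(0, 1, 50)
y = 3.0 * X + np.random.normal(0, 0.1, 50)
lr = 0.1

# Batch gradient descent: gradient averaged over all samples, one update per epoch.
w_batch = 0.0
for epoch in range(100):
    grad = np.mean(2 * (w_batch * X - y) * X)
    w_batch -= lr * grad

# Stochastic gradient descent: one update per training example.
w_sgd = 0.0
for epoch in range(100):
    for xi, yi in zip(X, y):
        grad = 2 * (w_sgd * xi - yi) * xi
        w_sgd -= lr * grad

print(w_batch, w_sgd)   # both estimates should approach 3
```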