Forked from bigsnarfdude/gist:61516f62f2da8ccd20ab491c7506d461
Created September 15, 2019 20:48
[deep_learning_study] book table of contents
Table of Contents
http://neuralnetworksanddeeplearning.com/about.html
http://neuralnetworksanddeeplearning.com/index.html
http://neuralnetworksanddeeplearning.com/exercises_and_problems.html
http://neuralnetworksanddeeplearning.com/chap1.html
http://neuralnetworksanddeeplearning.com/chap2.html
http://neuralnetworksanddeeplearning.com/chap3.html
http://neuralnetworksanddeeplearning.com/chap4.html
http://neuralnetworksanddeeplearning.com/chap5.html
http://neuralnetworksanddeeplearning.com/chap6.html
http://neuralnetworksanddeeplearning.com/sai.html
http://neuralnetworksanddeeplearning.com/acknowledgements.html
http://neuralnetworksanddeeplearning.com/faq.html
http://neuralnetworksanddeeplearning.com/bugfinder.html
http://www.deeplearningbook.org/
http://michaelnielsen.us7.list-manage2.com/subscribe?u=3f1842d0c9988acecde24dbee&id=085bc4de14
Chapter 1: Using neural nets to recognize handwritten digits
Perceptrons
Sigmoid neurons
The architecture of neural networks
A simple network to classify handwritten digits
Learning with gradient descent
Implementing our network to classify digits
Toward deep learning
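The "Learning with gradient descent" section above can be sketched in a few lines. This toy example (the input, target, and learning rate are invented here, not taken from the book) trains a single sigmoid neuron by repeatedly stepping its weights down the gradient of a quadratic cost:

```python
import numpy as np

# A minimal sketch of gradient descent on one sigmoid neuron,
# fitting a single (x, y) pair; values below are illustrative.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.0])   # toy input
y = 1.0                     # toy target
w = np.zeros(2)             # weights
b = 0.0                     # bias
eta = 0.5                   # learning rate

for _ in range(1000):
    a = sigmoid(w @ x + b)          # neuron output
    # gradient of the quadratic cost C = (a - y)^2 / 2 wrt z
    delta = (a - y) * a * (1 - a)
    w -= eta * delta * x            # step down the gradient
    b -= eta * delta

print(sigmoid(w @ x + b))           # output drifts toward the target
```

After enough steps the output approaches the target 1.0; the shrinking factor a*(1 - a) is the learning slowdown the book discusses in Chapter 3.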
Chapter 2: How the backpropagation algorithm works
Warm up: a fast matrix-based approach to computing the output from a neural network
The two assumptions we need about the cost function
The Hadamard product
The four fundamental equations behind backpropagation
Proof of the four fundamental equations (optional)
The backpropagation algorithm
The code for backpropagation
In what sense is backpropagation a fast algorithm?
Backpropagation: the big picture
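Two of the Chapter 2 ingredients, the Hadamard product and the first fundamental equation (the output-layer error delta^L = grad_a C ⊙ sigma'(z^L)), can be sketched together; the activations, targets, and weighted inputs below are illustrative, not from the book:

```python
import numpy as np

# Sketch of equation BP1: delta^L = grad_a C (Hadamard) sigma'(z^L),
# for a quadratic cost on a 2-neuron output layer (toy values).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1 - s)

a = np.array([0.8, 0.2])        # output activations (illustrative)
y = np.array([1.0, 0.0])        # target
z = np.array([1.4, -1.4])       # weighted inputs (illustrative)

grad_a_C = a - y                # gradient of quadratic cost wrt a
# In NumPy, '*' on arrays IS the elementwise (Hadamard) product.
delta_L = grad_a_C * sigmoid_prime(z)
print(delta_L)
```

The first component is negative (that output must rise) and the second positive (it must fall), which is exactly the error signal backpropagation then pushes through the earlier layers.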
Chapter 3: Improving the way neural networks learn
The cross-entropy cost function
Introducing the cross-entropy cost function
Using the cross-entropy to classify MNIST digits
What does the cross-entropy mean? Where does it come from?
Softmax
Overfitting and regularization
Regularization
Why does regularization help reduce overfitting?
Other techniques for regularization
Weight initialization
Handwriting recognition revisited: the code
How to choose a neural network's hyper-parameters?
Other techniques
Variations on stochastic gradient descent
Other models of artificial neuron
On stories in neural networks
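The softmax and cross-entropy sections of Chapter 3 combine naturally: softmax turns raw weighted inputs into a probability distribution, and cross-entropy scores that distribution against a one-hot target. A minimal sketch with invented values:

```python
import numpy as np

# Toy softmax layer plus cross-entropy cost for one example.
def softmax(z):
    e = np.exp(z - z.max())     # subtract max for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])   # weighted inputs (illustrative)
a = softmax(z)                  # activations sum to 1
y = np.array([1.0, 0.0, 0.0])   # one-hot target

# Cross-entropy cost: -sum_j y_j * ln(a_j)
cross_entropy = -np.sum(y * np.log(a))
print(a, cross_entropy)
```

Because the activations form a probability distribution, the cost reduces to -ln(a_correct), so confident correct answers cost near zero and confident wrong answers cost a lot; this is the fix for the learning slowdown of the quadratic cost.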
Chapter 4: A visual proof that neural nets can compute any function
Two caveats
Universality with one input and one output
Many input variables
Extension beyond sigmoid neurons
Fixing up the step functions
Conclusion
Chapter 5: Why are deep neural networks hard to train?
The vanishing gradient problem
What's causing the vanishing gradient problem? Unstable gradients in deep neural nets
Unstable gradients in more complex networks
Other obstacles to deep learning
Chapter 6: Deep learning
Introducing convolutional networks
Convolutional neural networks in practice
The code for our convolutional networks
Recent progress in image recognition
Other approaches to deep neural nets
On the future of neural networks
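The core operation behind "Introducing convolutional networks" is a small kernel slid over the input, with a shared set of weights at every position. A bare-bones sketch of one "valid" convolution pass (the image and kernel values here are invented, and real conv layers in the book use Theano rather than explicit loops):

```python
import numpy as np

# Sketch of a single-channel "valid" convolution: slide a kernel
# over the image and take a weighted sum at each position.
def conv2d_valid(image, kernel):
    H, W = image.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # same shared weights applied at every window position
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(16.0).reshape(4, 4)   # toy 4x4 "image"
kernel = np.ones((2, 2)) / 4.0          # 2x2 averaging filter
out = conv2d_valid(image, kernel)
print(out.shape)                        # a 4x4 input and 2x2 kernel give a 3x3 map
```

Weight sharing is why a conv layer has so few parameters: one 2x2 kernel covers every position of the feature map, instead of a separate weight per input pixel.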
Find the code
git clone https://github.com/mnielsen/neural-networks-and-deep-learning.git
https://github.com/mnielsen/neural-networks-and-deep-learning
https://github.com/mnielsen/neural-networks-and-deep-learning/tree/master/src
Code by Chapter
Chapter 1: Using neural nets to recognize handwritten digits
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/mnist_loader.py
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/mnist_svm.py
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network.py
Chapter 2: How the backpropagation algorithm works
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network.py
Chapter 3: Improving the way neural networks learn
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
Chapter 4: A visual proof that neural nets can compute any function
No code for this chapter; it is a mathematical/visual proof.
Chapter 5: Why are deep neural networks hard to train?
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network2.py
Chapter 6: Deep learning
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/conv.py
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/expand_mnist.py
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/mnist_average_darkness.py
https://github.com/mnielsen/neural-networks-and-deep-learning/blob/master/src/network3.py
Supplemental Video Learning
https://classroom.udacity.com/courses/ud730/lessons/6379031992/concepts/64370585660923
https://www.youtube.com/watch?v=g-PvXUjD6qg&list=PLlJy-eBtNFt6EuMxFYRiNRS07MCWN5UIA