Summary of "Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning" course on Coursera.Org

The "Machine Learning" course and "Deep Learning" Specialization from Andrew Ng teach the most important and foundational principles of Machine Learning and Deep Learning. This new deeplearning.ai TensorFlow Specialization teaches you how to use TensorFlow to implement those principles so that you can start building and applying scalable models to real-world problems.

Taught by

Laurence Moroney, AI Advocate, Google Brain

Week 1: A New Programming Paradigm

In week 1 you'll get a soft introduction to what Machine Learning and Deep Learning are, and how they offer you a new programming paradigm, giving you a new set of tools to open previously unexplored scenarios ... To get started, check out the first video, a conversation between Andrew and Laurence that sets the theme for what you'll study...

  • A Primer in Machine Learning

    In traditional programming, rules and data go in, and answers come out. Rules are expressed in a programming language, and data can come from a variety of sources, from local variables all the way up to databases.

    Machine learning rearranges this diagram: answers and data go in, and rules come out.

  • The Hello-World of neural networks

    Y = 2 * X + 1 (github) (colab)
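
    A minimal sketch of that hello-world model, assuming the standard single-neuron Keras setup; the training pairs below are generated from y = 2x + 1:

    import numpy as np
    import tensorflow as tf

    # Six (x, y) pairs that follow y = 2x + 1
    xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
    ys = np.array([-1.0, 1.0, 3.0, 5.0, 7.0, 9.0], dtype=float)

    # A single dense neuron is enough to learn a linear relationship
    model = tf.keras.Sequential([tf.keras.layers.Dense(units=1, input_shape=[1])])
    model.compile(optimizer='sgd', loss='mean_squared_error')
    model.fit(xs, ys, epochs=500, verbose=0)

    print(model.predict(np.array([10.0])))   # should be close to 21.0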

Week 2: Introduction to Computer Vision

This week you’re going to take that to the next level by beginning to solve problems of computer vision with just a few lines of code!

  • Fashion-MNIST data
    • Train data and Test data
    • Data is a list of pairs of image and label
  • 3-Layer neural network
    • Flatten: 28 x 28 -> 1D array
    • Dense: 128 neurons (think of each neuron as a variable whose value the training adjusts)
    • Dense: 10 outputs, because there are 10 labels
model = tf.keras.models.Sequential([
  tf.keras.layers.Flatten(input_shape=(28, 28)),        # 28x28 image -> 784-element vector
  tf.keras.layers.Dense(128, activation=tf.nn.relu),    # hidden layer of 128 neurons
  tf.keras.layers.Dense(10, activation=tf.nn.softmax)   # one probability per label
])
  • Workbook (colab)
    • Import tensorflow module
    • Normalize the data
    • Build the model
    • Compile the model
    • Tweak the NN
    • Use callbacks (the steps are sketched together below)
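
A sketch putting those workbook steps together, assuming Fashion-MNIST is loaded through tf.keras.datasets and TF 2.x metric names; the 90% accuracy threshold in the callback is an illustrative value:

import tensorflow as tf

# Callback that stops training once accuracy passes a chosen threshold
class AccuracyCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        if logs and logs.get('accuracy', 0) > 0.9:   # illustrative threshold
            print('\nReached 90% accuracy, stopping training')
            self.model.stop_training = True

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0    # normalize 0-255 pixels to 0-1

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=10, callbacks=[AccuracyCallback()])
model.evaluate(x_test, y_test)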

Exercises

  • The output of the NN is a list of probabilities whose length matches the number of labels
  • Increasing the number of neurons in the hidden layer improves accuracy somewhat, but training takes longer
  • The dimension of the first layer must match the dimension of the input data
  • The range of training labels must match the number of output neurons
  • Increasing the number of hidden layers improves accuracy but makes training take longer
  • More training epochs improve accuracy, but only up to a point; sometimes the error increases due to overfitting
  • Skipping the normalization reduces the accuracy dramatically

Week 3: Enhancing Vision with Convolutional Neural Networks

In week 2 you saw a basic Neural Network for Computer Vision. It did the job nicely, but it was a little naive in its approach. This week we’ll see how to make it better, as discussed by Laurence and Andrew here.

  • Convolution is a way to condense the image down to the important features, for example Conv2D
  • Pooling is a way of compressing an image, for example MaxPooling2D
model = tf.keras.models.Sequential([
  tf.keras.layers.Conv2D(64, (3,3), activation='relu', input_shape=(28, 28, 1)),  # 64 filters of 3x3
  tf.keras.layers.MaxPooling2D(2, 2),                   # keep the max of every 2x2 block
  tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
  tf.keras.layers.MaxPooling2D(2, 2),
  tf.keras.layers.Flatten(),
  tf.keras.layers.Dense(128, activation='relu'),
  tf.keras.layers.Dense(10, activation='softmax')       # one probability per label
])
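
To see how the convolutions and pooling shrink the image, print each layer's output shape: a 28x28 input becomes 26x26 after the first 3x3 convolution (the filter trims a 1-pixel border), 13x13 after pooling, 11x11 after the second convolution, and 5x5 before flattening.

model.summary()   # prints the per-layer output shapes described above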

Week 4: Using Real-world Images

Last week you saw how to improve the results from your deep neural network using convolutions. It was a good start, but the data you used was very basic. What happens when your images are larger, or if the features aren’t always in the same place? Andrew and Laurence discuss this to prepare you for what you’ll learn this week: handling complex images!

  • What happens when you use larger images, with millions of colors, where the features might be in different locations of the image
  • ImageDataGenerator generates labels from the names of the subdirectories that contain the images; it also resizes the images at load time
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)

# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
        '/tmp/horse-or-human/',  # This is the source directory for training images
        target_size=(300, 300),  # All images will be resized to 300x300
        batch_size=128,
        # Since we use binary_crossentropy loss, we need binary labels
        class_mode='binary')
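
The generator can then be passed straight to model.fit (in TF 2.x, fit accepts generators directly). A sketch, assuming a model ending in a single sigmoid unit has already been built to match the binary labels; the optimizer and step counts here are illustrative:

import tensorflow as tf

# Train on the generator; labels come from the subdirectory names
model.compile(loss='binary_crossentropy',
              optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001),
              metrics=['accuracy'])

history = model.fit(
    train_generator,
    steps_per_epoch=8,   # illustrative: batches drawn per epoch
    epochs=15,
    verbose=1)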