{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://www.skills.network/\"><img src=\"https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DL0120ENedX/labs/Template%20for%20Instructional%20Hands-on%20Labs/images/IDSNlogo.png\" width=\"400px\" align=\"center\"></a>\n",
"\n",
"<h1 align=\"center\"><font size=\"5\">RECURRENT NETWORKS IN DEEP LEARNING</font></h1>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Hello and welcome to this notebook. In this notebook, we will go over concepts of the Long Short-Term Memory (LSTM) model, a refinement of the original Recurrent Neural Network model. By the end of this notebook, you should be able to understand the Long Short-Term Memory model, the benefits and problems it solves, and its inner workings and calculations.\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2>RECURRENT NETWORKS IN DEEP LEARNING</h2>\n",
"\n",
"<h3>Objective for this Notebook<h3> \n",
"<h5> 1. Learn Long Short-Term Memory Model</h5>\n",
"<h5> 2. Stacked LTSM </h5>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<br>\n",
"<h2>Table of Contents</h2>\n",
"<ol>\n",
" <li><a href=\"#intro\">Introduction</a></li>\n",
" <li><a href=\"#long_short_term_memory_model\">Long Short-Term Memory Model</a></li>\n",
" <li><a href=\"#ltsm\">LTSM</a></li>\n",
" <li><a href=\"#stacked_ltsm\">Stacked LTSM</a></li>\n",
"</ol>\n",
"<p></p>\n",
"</div>\n",
"<br>\n",
"\n",
"<hr>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"intro\"><a/> \n",
"\n",
"<h2>Introduction</h2>\n",
"\n",
"Recurrent Neural Networks are Deep Learning models with simple structures and a feedback mechanism built-in, or in different words, the output of a layer is added to the next input and fed back to the same layer.\n",
"\n",
"The Recurrent Neural Network is a specialized type of Neural Network that solves the issue of **maintaining context for Sequential data** -- such as Weather data, Stocks, Genes, etc. At each iterative step, the processing unit takes in an input and the current state of the network, and produces an output and a new state that is <b>re-fed into the network</b>.\n",
"\n",
"<img src=\"https://ibm.box.com/shared/static/v7p90neiaqghmpwawpiecmz9n7080m59.png\">\n",
"\n",
"<center><i>Representation of a Recurrent Neural Network</i></center>\n",
"<br><br>\n",
"However, <b>this model has some problems</b>. It's very computationally expensive to maintain the state for a large amount of units, even more so over a long amount of time. Additionally, Recurrent Networks are very sensitive to changes in their parameters. As such, they are prone to different problems with their Gradient Descent optimizer -- they either grow exponentially (Exploding Gradient) or drop down to near zero and stabilize (Vanishing Gradient), both problems that greatly harm a model's learning capability.\n",
"\n",
"To solve these problems, Hochreiter and Schmidhuber published a paper in 1997 describing a way to keep information over long periods of time and additionally solve the oversensitivity to parameter changes, i.e., make backpropagating through the Recurrent Networks more viable. This proposed method is called Long Short-Term Memory (LSTM).\n",
"\n",
"(In this notebook, we will cover only LSTM and its implementation using TensorFlow)\n",
"\n",
"<hr>\n"
]
},
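{
"cell_type": "markdown",
"metadata": {},
"source": [
"To make the recurrence concrete, here is a minimal NumPy sketch (not part of the original lab) of a single vanilla RNN step: the new state is computed from the current input and the previous state, and is then re-fed at the next step. The weight names (<code>W_x</code>, <code>W_s</code>, <code>b</code>) and the sizes used are illustrative assumptions, not a library API.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"# illustrative sizes (assumptions): 6 input features, 4 hidden units\n",
"n_features, n_units = 6, 4\n",
"rng = np.random.default_rng(0)\n",
"W_x = rng.normal(size=(n_features, n_units))  # input-to-state weights\n",
"W_s = rng.normal(size=(n_units, n_units))     # state-to-state (recurrent) weights\n",
"b = np.zeros(n_units)\n",
"\n",
"def rnn_step(x_t, s_prev):\n",
"    # one vanilla RNN step: the output is also the new state that gets re-fed\n",
"    return np.tanh(x_t @ W_x + s_prev @ W_s + b)\n",
"\n",
"state = np.zeros(n_units)\n",
"for x_t in rng.normal(size=(3, n_features)):  # 3 time steps of dummy input\n",
"    state = rnn_step(x_t, state)\n",
"print(state)"
]
},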
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"<a id=\"long_short_term_memory_model\"></a>\n",
"\n",
"<h2>Long Short-Term Memory Model</h2>\n",
"\n",
"The Long Short-Term Memory, as it was called, was an abstraction of how computer memory works. It is \"bundled\" with whatever processing unit is implemented in the Recurrent Network, although outside of its flow, and is responsible for keeping, reading, and outputting information for the model. The way it works is simple: you have a linear unit, which is the information cell itself, surrounded by three logistic gates responsible for maintaining the data. One gate is for inputting data into the information cell, one is for outputting data from the input cell, and the last one is to keep or forget data depending on the needs of the network.\n",
"\n",
"Thanks to that, it not only solves the problem of keeping states, because the network can choose to forget data whenever information is not needed, it also solves the gradient problems, since the Logistic Gates have a very nice derivative.\n",
"\n",
"<h3>Long Short-Term Memory Architecture</h3>\n",
"\n",
"The Long Short-Term Memory is composed of a linear unit surrounded by three logistic gates. The name for these gates vary from place to place, but the most usual names for them are:\n",
"\n",
"<ul>\n",
" <li>the \"Input\" or \"Write\" Gate, which handles the writing of data into the information cell</li>\n",
" <li>the \"Output\" or \"Read\" Gate, which handles the sending of data back onto the Recurrent Network</li>\n",
" <li>the \"Keep\" or \"Forget\" Gate, which handles the maintaining and modification of the data stored in the information cell</li>\n",
"</ul>\n",
"<br>\n",
"<img src=\"https://ibm.box.com/shared/static/zx10duv5egw0baw6gh2hzsgr8ex45gsg.png\" width=\"720\"/>\n",
"<center><i>Diagram of the Long Short-Term Memory Unit</i></center>\n",
"<br><br>\n",
"The three gates are the centerpiece of the LSTM unit. The gates, when activated by the network, perform their respective functions. For example, the Input Gate will write whatever data it is passed into the information cell, the Output Gate will return whatever data is in the information cell, and the Keep Gate will maintain the data in the information cell. These gates are analog and multiplicative, and as such, can modify the data based on the signal they are sent.\n",
"\n",
"<hr>\n",
"\n",
"For example, an usual flow of operations for the LSTM unit is as such: First off, the Keep Gate has to decide whether to keep or forget the data currently stored in memory. It receives both the input and the state of the Recurrent Network, and passes it through its Sigmoid activation. If $K\n",
"_t$ has value of 1 means that the LSTM unit should keep the data stored perfectly and if $K_t$ a value of 0 means that it should forget it entirely. Consider $S_{t-1}$ as the incoming (previous) state, $x_t$ as the incoming input, and $W_k$, $B_k$ as the weight and bias for the Keep Gate. Additionally, consider $Old_{t-1}$ as the data previously in memory. What happens can be summarized by this equation:\n",
"\n",
"<br>\n",
"\n",
"<font size=\"4\"><strong>\n",
"$$K_t = \\\\sigma(W_k \\\\times \\[S_{t-1}, x_t] + B_k)$$\n",
"\n",
"$$Old_t = K_t \\\\times Old_{t-1}$$\n",
"</strong></font>\n",
"\n",
"<br>\n",
"\n",
"As you can see, $Old\\_{t-1}$ was multiplied by value was returned by the Keep Gate($K_t$) -- this value is written in the memory cell.\n",
"\n",
"<br>\n",
"Then, the input and state are passed on to the Input Gate, in which there is another Sigmoid activation applied. Concurrently, the input is processed as normal by whatever processing unit is implemented in the network, and then multiplied by the Sigmoid activation's result $I_t$, much like the Keep Gate. Consider $W_i$ and $B_i$ as the weight and bias for the Input Gate, and $C_t$ the result of the processing of the inputs by the Recurrent Network.\n",
"<br><br>\n",
"\n",
"<font size=\"4\"><strong>\n",
"$$I_t = \\\\sigma(W_i\\\\times\\[S_{t-1},x_t]+B_i)$$\n",
"\n",
"$$New_t = I_t \\\\times C_t$$\n",
"</strong></font>\n",
"\n",
"<br>\n",
"$New_t$ is the new data to be input into the memory cell. This is then <b>added</b> to whatever value is still stored in memory.\n",
"<br><br>\n",
"\n",
"<font size=\"4\"><strong>\n",
"$$Cell_t = Old_t + New_t$$\n",
"</strong></font>\n",
"\n",
"<br>\n",
"We now have the <i>candidate data</i> which is to be kept in the memory cell. The conjunction of the Keep and Input gates work in an analog manner, making it so that it is possible to keep part of the old data and add only part of the new data. Consider however, what would happen if the Forget Gate was set to 0 and the Input Gate was set to 1:\n",
"<br><br>\n",
"\n",
"<font size=\"4\"><strong>\n",
"$$Old_t = 0 \\\\times Old_{t-1}$$\n",
"\n",
"$$New_t = 1 \\\\times C_t$$\n",
"\n",
"$$Cell_t = C_t$$\n",
"</strong></font>\n",
"\n",
"<br>\n",
"The old data would be totally forgotten and the new data would overwrite it completely.\n",
"\n",
"The Output Gate functions in a similar manner. To decide what we should output, we take the input data and state and pass it through a Sigmoid function as usual. The contents of our memory cell, however, are pushed onto a <i>Tanh</i> function to bind them between a value of -1 to 1. Consider $W_o$ and $B_o$ as the weight and bias for the Output Gate.\n",
"<br>\n",
"<font size=\"4\"><strong>\n",
"$$O_t = \\\\sigma(W_o \\\\times \\[S_{t-1},x_t] + B_o)$$\n",
"\n",
"$$Output_t = O_t \\\\times tanh(Cell_t)$$\n",
"</strong></font>\n",
"<br>\n",
"\n",
"And that $Output_t$ is what is output into the Recurrent Network.\n",
"\n",
"<br>\n",
"<img width=\"384\" src=\"https://ibm.box.com/shared/static/rkr60528r3mz2fmtlpah8lqpg7mcsy0g.png\">\n",
"<center><i>The Logistic Function plotted</i></center>\n",
"<br><br>\n",
"As mentioned many times, all three gates are logistic. The reason for this is because it is very easy to backpropagate through them, and as such, it is possible for the model to learn exactly _how_ it is supposed to use this structure. This is one of the reasons for which LSTM is a very strong structure. Additionally, this solves the gradient problems by being able to manipulate values through the gates themselves -- by passing the inputs and outputs through the gates, we have now a easily derivable function modifying our inputs.\n",
"\n",
"In regards to the problem of storing many states over a long period of time, LSTM handles this perfectly by only keeping whatever information is necessary and forgetting it whenever it is not needed anymore. Therefore, LSTMs are a very elegant solution to both problems.\n",
"\n",
"<hr>\n"
]
},
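{
"cell_type": "markdown",
"metadata": {},
"source": [
"To tie the equations together, the following is a minimal NumPy sketch (not part of the original lab) of a single LSTM step using the notebook's notation: $K_t$, $I_t$, $O_t$, the candidate $C_t$, and the memory $Cell_t$. The helper names, the weight layout, and the use of tanh for the candidate are assumptions made here for illustration only.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"def sigmoid(z):\n",
"    return 1.0 / (1.0 + np.exp(-z))\n",
"\n",
"def lstm_step(x_t, S_prev, Old_prev, W, B):\n",
"    z = np.concatenate([S_prev, x_t])      # [S_{t-1}, x_t]\n",
"    K_t = sigmoid(W['k'] @ z + B['k'])     # Keep/Forget gate\n",
"    I_t = sigmoid(W['i'] @ z + B['i'])     # Input/Write gate\n",
"    O_t = sigmoid(W['o'] @ z + B['o'])     # Output/Read gate\n",
"    C_t = np.tanh(W['c'] @ z + B['c'])     # candidate data from the processing unit\n",
"    Cell_t = K_t * Old_prev + I_t * C_t    # Old_t + New_t\n",
"    Output_t = O_t * np.tanh(Cell_t)       # what is sent back into the network\n",
"    return Output_t, Cell_t\n",
"\n",
"# illustrative sizes (assumptions): 6 input features, 4 hidden units\n",
"n_in, n_units = 6, 4\n",
"rng = np.random.default_rng(0)\n",
"W = {g: rng.normal(size=(n_units, n_units + n_in)) for g in 'kioc'}\n",
"B = {g: np.zeros(n_units) for g in 'kioc'}\n",
"\n",
"S_prev, Old_prev = np.zeros(n_units), np.zeros(n_units)\n",
"x_t = rng.normal(size=n_in)\n",
"Output_t, Cell_t = lstm_step(x_t, S_prev, Old_prev, W, B)\n",
"print(Output_t, Cell_t)"
]
},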
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"instructions\"><a/> \n",
"\n",
"<h2>Instructions</h2>\n",
" \n",
"We start by installing everything we need for this exercise:\n"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"#!pip install grpcio==1.24.3\n",
"#!pip install tensorflow==2.2.0rc0"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"<a id=\"ltsm\"></a>\n",
"\n",
"<h2>LSTM</h2>\n",
"Lets first create a tiny LSTM network sample to understand the architecture of LSTM networks.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"We need to import the necessary modules for our code. We need <b><code>numpy</code></b> and <b><code>tensorflow</code></b>, obviously. Additionally, we can import directly the <b><code>tensorflow.keras.layers</code></b> , which includes the function for building RNNs.\n"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"import tensorflow as tf\n",
"if not tf.__version__ == '2.2.0-rc0':\n",
" print(tf.__version__)\n",
" raise ValueError('please upgrade to TensorFlow 2.2.0-rc0, or restart your Kernel (Kernel->Restart & Clear Output)')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"IMPORTANT! => Please restart the kernel by clicking on \"Kernel\"->\"Restart and Clear Outout\" and wait until all output disapears. Then your changes are beeing picked up\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"We want to create a network that has only one LSTM cell. We have to pass 2 elements to LSTM, the <b>prv_output</b> and <b>prv_state</b>, so called, <b>h</b> and <b>c</b>. Therefore, we initialize a state vector, <b>state</b>. Here, <b>state</b> is a tuple with 2 elements, each one is of size [1 x 4], one for passing prv_output to next time step, and another for passing the prv_state to next time stamp.\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"(<tf.Tensor: shape=(1, 4), dtype=float32, numpy=array([[0., 0., 0., 0.]], dtype=float32)>,\n",
" <tf.Tensor: shape=(1, 4), dtype=float32, numpy=array([[0., 0., 0., 0.]], dtype=float32)>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"LSTM_CELL_SIZE = 4 # output size (dimension), which is same as hidden size in the cell\n",
"\n",
"state = (tf.zeros([1,LSTM_CELL_SIZE]),)*2\n",
"state"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"button": false,
"collapsed": false,
"deletable": true,
"jupyter": {
"outputs_hidden": false
},
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(<tf.Tensor: shape=(1, 4), dtype=float32, numpy=array([[0., 0., 0., 0.]], dtype=float32)>, <tf.Tensor: shape=(1, 4), dtype=float32, numpy=array([[0., 0., 0., 0.]], dtype=float32)>)\n"
]
}
],
"source": [
"lstm = tf.keras.layers.LSTM(LSTM_CELL_SIZE, return_sequences=True, return_state=True)\n",
"\n",
"lstm.states=state\n",
"\n",
"print(lstm.states)\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"As we can see, the states has 2 parts, the new state c, and also the output h. Lets check the output again:\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"Let define a sample input. In this example, batch_size = 1, and features = 6:\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [],
"source": [
"#Batch size x time steps x features.\n",
"sample_input = tf.constant([[3,2,2,2,2,2]],dtype=tf.float32)\n",
"\n",
"batch_size = 1\n",
"sentence_max_length = 1\n",
"n_features = 6\n",
"\n",
"new_shape = (batch_size, sentence_max_length, n_features)\n",
"\n",
"inputs = tf.constant(np.reshape(sample_input, new_shape), dtype = tf.float32)"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"Now, we can pass the input to lstm_cell, and check the new state:\n"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"button": false,
"collapsed": false,
"deletable": true,
"jupyter": {
"outputs_hidden": false
},
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [],
"source": [
"output, final_memory_state, final_carry_state = lstm(inputs)\n"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"button": false,
"collapsed": false,
"deletable": true,
"jupyter": {
"outputs_hidden": false
},
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Output : tf.Tensor([1 1 4], shape=(3,), dtype=int32)\n",
"Memory : tf.Tensor([1 4], shape=(2,), dtype=int32)\n",
"Carry state : tf.Tensor([1 4], shape=(2,), dtype=int32)\n"
]
}
],
"source": [
"print('Output : ', tf.shape(output))\n",
"\n",
"print('Memory : ',tf.shape(final_memory_state))\n",
"\n",
"print('Carry state : ',tf.shape(final_carry_state))"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"<hr>\n",
"<a id=\"stacked_ltsm\"></a>\n",
"<h2>Stacked LSTM</h2>\n",
"What about if we want to have a RNN with stacked LSTM? For example, a 2-layer LSTM. In this case, the output of the first layer will become the input of the second.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"Lets create the stacked LSTM cell:\n"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"cells = []"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating the first layer LTSM cell.\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"LSTM_CELL_SIZE_1 = 4 #4 hidden nodes\n",
"cell1 = tf.keras.layers.LSTMCell(LSTM_CELL_SIZE_1)\n",
"cells.append(cell1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Creating the second layer LTSM cell.\n"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"LSTM_CELL_SIZE_2 = 5 #5 hidden nodes\n",
"cell2 = tf.keras.layers.LSTMCell(LSTM_CELL_SIZE_2)\n",
"cells.append(cell2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To create a multi-layer LTSM we use the <b>tf.keras.layers.StackedRNNCells</b> function, it takes in multiple single layer LTSM cells to create a multilayer stacked LTSM model.\n"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"stacked_lstm = tf.keras.layers.StackedRNNCells(cells)"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"Now we can create the RNN from <b>stacked_lstm</b>:\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [],
"source": [
"lstm_layer= tf.keras.layers.RNN(stacked_lstm ,return_sequences=True, return_state=True)"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"Lets say the input sequence length is 3, and the dimensionality of the inputs is 6. The input should be a Tensor of shape: [batch_size, max_time, dimension], in our case it would be (2, 3, 6)\n"
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"button": false,
"collapsed": false,
"deletable": true,
"jupyter": {
"outputs_hidden": false
},
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"outputs": [],
"source": [
"#Batch size x time steps x features.\n",
"sample_input = [[[1,2,3,4,3,2], [1,2,1,1,1,2],[1,2,2,2,2,2]],[[1,2,3,4,3,2],[3,2,2,1,1,2],[0,0,0,0,3,2]]]\n",
"sample_input\n",
"\n",
"batch_size = 2\n",
"time_steps = 3\n",
"features = 6\n",
"new_shape = (batch_size, time_steps, features)\n",
"\n",
"x = tf.constant(np.reshape(sample_input, new_shape), dtype = tf.float32)"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"we can now send our input to network, and check the output:\n"
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {},
"outputs": [],
"source": [
"output, final_memory_state, final_carry_state = lstm_layer(x)"
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Output : tf.Tensor([2 3 5], shape=(3,), dtype=int32)\n",
"Memory : tf.Tensor([2 2 4], shape=(3,), dtype=int32)\n",
"Carry state : tf.Tensor([2 2 5], shape=(3,), dtype=int32)\n"
]
}
],
"source": [
"print('Output : ', tf.shape(output))\n",
"\n",
"print('Memory : ',tf.shape(final_memory_state))\n",
"\n",
"print('Carry state : ',tf.shape(final_carry_state))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you see, the output is of shape (2, 3, 5), which corresponds to our 2 batches, 3 elements in our sequence, and the dimensionality of the output which is 5.\n",
"\n",
"<hr>\n"
]
},
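{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a side note (not part of the original lab), the same two-layer stack is often written with the higher-level Keras API by chaining two <b>tf.keras.layers.LSTM</b> layers, with <b>return_sequences=True</b> on the first layer so the second receives the full sequence. The sketch below reuses the input <b>x</b> and the layer sizes from the cells above; the variable name <b>stacked_model</b> is only for this sketch.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# a sketch of an alternative way to stack LSTMs, reusing x from the cells above\n",
"stacked_model = tf.keras.Sequential([\n",
"    tf.keras.layers.LSTM(4, return_sequences=True, input_shape=(3, 6)),  # first layer: 4 units\n",
"    tf.keras.layers.LSTM(5, return_sequences=True)                       # second layer: 5 units\n",
"])\n",
"print(stacked_model(x).shape)  # expected: (2, 3, 5), same as the stacked-cell RNN above"
]
},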
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"## Want to learn more?\n",
"\n",
"Running deep learning programs usually needs a high performance platform. **PowerAI** speeds up deep learning and AI. Built on IBM’s Power Systems, **PowerAI** is a scalable software platform that accelerates deep learning and AI with blazing performance for individual users or enterprises. The **PowerAI** platform supports popular machine learning libraries and dependencies including TensorFlow, Caffe, Torch, and Theano. You can use [PowerAI on IMB Cloud](https://cocl.us/ML0120EN_PAI).\n",
"\n",
"Also, you can use **Watson Studio** to run these notebooks faster with bigger datasets.**Watson Studio** is IBM’s leading cloud solution for data scientists, built by data scientists. With Jupyter notebooks, RStudio, Apache Spark and popular libraries pre-packaged in the cloud, **Watson Studio** enables data scientists to collaborate on their projects without having to install anything. Join the fast-growing community of **Watson Studio** users today with a free account at [Watson Studio](https://cocl.us/ML0120EN_DSX).This is the end of this lesson. Thank you for reading this notebook, and good luck on your studies.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"### Thanks for completing this lesson!\n",
"\n",
"Notebook created by: <a href = \"https://linkedin.com/in/saeedaghabozorgi\"> Saeed Aghabozorgi </a>, <a href=\"https://br.linkedin.com/in/walter-gomes-de-amorim-junior-624726121\">Walter Gomes de Amorim Junior</a>\n",
"\n",
"Updated to TF 2.X by <a href=\"https://linkedin.com/in/romeo-kienzler-089b4557\"> Romeo Kienzler </a>, <a href=\"https://www.linkedin.com/in/samaya-madhavan\"> Samaya Madhavan </a>\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Change Log\n",
"\n",
"| Date (YYYY-MM-DD) | Version | Changed By | Change Description |\n",
"| ----------------- | ------- | ---------- | ----------------------------------------------------------- |\n",
"| 2020-09-21 | 2.0 | Srishti | Migrated Lab to Markdown and added to course repo in GitLab |\n",
"\n",
"<hr>\n",
"\n",
"## <h3 align=\"center\"> © IBM Corporation 2020. All rights reserved. <h3/>\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"button": false,
"deletable": true,
"new_sheet": false,
"run_control": {
"read_only": false
}
},
"source": [
"<hr>\n",
"\n",
"Copyright © 2018 [Cognitive Class](https://cocl.us/DX0108EN_CC). This notebook and its source code are released under the terms of the [MIT License](https://bigdatauniversity.com/mit-license?cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork-20629446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork-20629446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork-20629446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ&cm_mmc=Email_Newsletter-_-Developer_Ed%2BTech-_-WW_WW-_-SkillsNetwork-Courses-IBMDeveloperSkillsNetwork-DL0120EN-SkillsNetwork-20629446&cm_mmca1=000026UJ&cm_mmca2=10006555&cm_mmca3=M12345678&cvosrc=email.Newsletter.M12345678&cvo_campaign=000026UJ).\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python",
"language": "python",
"name": "conda-env-python-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.11"
},
"widgets": {
"state": {},
"version": "1.1.2"
}
},
"nbformat": 4,
"nbformat_minor": 4
}