{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Sentiment analysis\n",
"\n",
"This is a recipe to analyze positive vs. negative sentiments in text using [NLTK](https://www.nltk.org) and [TensorFlow v2](http://tensorflow.org).\n",
"\n",
"One can find datasets in many places, especially on [Kaggle](https://www.kaggle.com/c/movie-review-sentiment-analysis-kernels-only/data). The datasets usually come as a 'positive' and a 'negative' part, but sometimes as a single set with a column denoting the sentiment. If you don't have two such sets, NLTK has it all, but you need to assemble things a bit because the reviews come as separate files.\n",
"\n",
"Easy enough: use something like this to compile the separate files into just two files:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"import sklearn\n",
"from sklearn.datasets import load_files\n",
"\n",
"moviedir = r'/Users/You/nltk_data/corpora/movie_reviews'\n",
"movie_train = load_files(moviedir, shuffle=True)\n",
"\n",
"pos = \"\"\n",
"neg = \"\"\n",
"for i, item in enumerate(movie_train.data):\n",
"    if movie_train.target[i] == 0:  # target 0 is the 'neg' folder\n",
"        neg += item.decode(\"utf-8\").replace(\"\\n\", \" \") + \"\\n\"\n",
"    else:\n",
"        pos += item.decode(\"utf-8\").replace(\"\\n\", \" \") + \"\\n\"\n",
"with open('/Users/You/desktop/positive.txt', 'wt') as f:\n",
"    f.write(pos)\n",
"with open('/Users/You/desktop/negative.txt', 'wt') as f:\n",
"    f.write(neg)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"Of course, you need some packages. Note that this example is based on **TensorFlow 2.0 RC0**."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from nltk.tokenize import word_tokenize\n",
"from nltk.stem import WordNetLemmatizer\n",
"import random\n",
"import pickle\n",
"from collections import Counter\n",
"\n",
"# TensorFlow and tf.keras\n",
"import tensorflow as tf\n",
"from tensorflow import keras\n",
"\n",
"# Helper libraries\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"print(tf.__version__)"
],
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"2.0.0-rc0\n"
]
}
],
"execution_count": 2,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"Normally you would use an embedding layer (with something like [GloVe](https://nlp.stanford.edu/projects/glove/)), but let's approach things in a simplistic fashion here. For every review we create one big vector with an entry for every word that appears more than 50 times (but fewer than 1000 times) across the reviews.\n",
"\n",
"Lemmatization is the process of reducing a word to its base form; this is where we need NLTK."
],
"metadata": {}
},
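{
"cell_type": "markdown",
"source": [
"As a quick illustration (this small demo is not part of the original recipe and assumes the WordNet data has been downloaded, e.g. via `nltk.download('wordnet')`), here is what the lemmatizer does to a few words:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative only: reduce a few words to their base form.\n",
"from nltk.stem import WordNetLemmatizer\n",
"\n",
"demo_lemmatizer = WordNetLemmatizer()\n",
"print(demo_lemmatizer.lemmatize('movies'))            # -> movie\n",
"print(demo_lemmatizer.lemmatize('better', pos='a'))   # -> good\n",
"print(demo_lemmatizer.lemmatize('running', pos='v'))  # -> run"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},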
{
"cell_type": "code",
"source": [
"lemmatizer = WordNetLemmatizer()\n",
"max_lines = 10000000\n",
"pos = 'positive.txt'\n",
"neg = 'negative.txt'\n",
"\n",
"\n",
"def create_lexicon(pos, neg):\n",
"    '''\n",
"    Returns the list of relevant words (the lexicon)\n",
"    for the given positive and negative reviews.\n",
"    '''\n",
"    lexicon = []\n",
"    for fi in [pos, neg]:\n",
"        with open(fi, 'r') as f:\n",
"            contents = f.readlines()\n",
"            for l in contents[:max_lines]:\n",
"                all_words = word_tokenize(l.lower())\n",
"                lexicon += list(all_words)\n",
"\n",
"    lexicon = [lemmatizer.lemmatize(i) for i in lexicon]\n",
"    w_counts = Counter(lexicon)\n",
"\n",
"    l2 = []  # words appearing more than 50 but fewer than 1000 times\n",
"    for w in w_counts:\n",
"        if 1000 > w_counts[w] > 50:\n",
"            l2.append(w)\n",
"    return l2"
],
"outputs": [],
"execution_count": 4,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"With the lexicon in hand we turn each review into a vector of the size of the lexicon, counting for each lexicon word how often it appears in the review.\n",
"This is a poor man's way of embedding text in a low-dimensional vector space. The simplification is that we do not capture any affinity between words; there is no statistical distribution or optimization involved in this simple algorithm. A toy example follows the implementation below."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"def create_embedding(sample, lexicon, classification):\n",
"    '''\n",
"    Returns a list of [count-vector, label] pairs,\n",
"    one lexicon-sized vector per review.\n",
"    '''\n",
"    featureset = []\n",
"    with open(sample, 'r') as f:\n",
"        contents = f.readlines()\n",
"        for l in contents[:max_lines]:\n",
"            current_words = word_tokenize(l.lower())\n",
"            current_words = [lemmatizer.lemmatize(i) for i in current_words]\n",
"            features = np.zeros(len(lexicon))\n",
"            for word in current_words:\n",
"                if word.lower() in lexicon:\n",
"                    index_value = lexicon.index(word.lower())\n",
"                    features[index_value] += 1\n",
"\n",
"            features = list(features)\n",
"            featureset.append([features, classification])\n",
"\n",
"    return featureset"
],
"outputs": [],
"execution_count": 73,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
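{
"cell_type": "markdown",
"source": [
"To make the idea concrete, here is a toy example (not from the original notebook; the lexicon and sentence are made up): with a three-word lexicon, the sentence 'good good movie' becomes the count vector [2, 0, 1]."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Toy illustration of the count-vector embedding, for demonstration only.\n",
"toy_lexicon = ['good', 'bad', 'movie']\n",
"toy_words = ['good', 'good', 'movie']\n",
"toy_vector = np.zeros(len(toy_lexicon))\n",
"for w in toy_words:\n",
"    if w in toy_lexicon:\n",
"        toy_vector[toy_lexicon.index(w)] += 1\n",
"print(toy_vector)  # [2. 0. 1.]"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},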
{
"cell_type": "markdown",
"source": [
"Now we apply this to the full dataset."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"lexicon = create_lexicon(pos,neg) \n",
"features = [] \n",
"features += create_embedding(pos, lexicon,[1,0]) \n",
"features += create_embedding(neg, lexicon,[0,1]) \n",
"random.shuffle(features) \n",
"features = np.array(features) \n",
"\n"
],
"outputs": [],
"execution_count": 74,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"We set aside 10% of the data for testing and create numpy arrays, because that is what TensorFlow expects. The back-and-forth between lists and numpy arrays is simply because arrays make it easy to slice and subset the data."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"testing_size = int(0.1*len(features)) \n",
"X_train = np.array(list(features[:,0][:-testing_size])) \n",
"y_train = np.array(list(features[:,1][:-testing_size])) \n",
"X_test = np.array(list(features[:,0][-testing_size:])) \n",
"y_test = np.array(list(features[:,1][-testing_size:]))"
],
"outputs": [],
"execution_count": 97,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"Each vector in the sets has the dimension of the lexicon:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"assert(X_train.shape[1] == len(lexicon))"
],
"outputs": [],
"execution_count": 98,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "markdown",
"source": [
"From this point on you recognize that AI is as much an art as it is a science. The way you assemble the network is where experience and insight show. For playing purposes, almost anything works."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"model = keras.Sequential([\n",
"    keras.layers.Dense(13, activation='relu', input_dim=len(lexicon)),\n",
"    keras.layers.Dense(10, activation='relu'),\n",
"    keras.layers.Dense(2, activation='sigmoid')\n",
"])\n",
"model.summary()"
],
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Model: \"sequential_15\"\n",
"_________________________________________________________________\n",
"Layer (type) Output Shape Param # \n",
"=================================================================\n",
"dense_35 (Dense) (None, 13) 30121 \n",
"_________________________________________________________________\n",
"dense_36 (Dense) (None, 10) 140 \n",
"_________________________________________________________________\n",
"dense_37 (Dense) (None, 2) 22 \n",
"=================================================================\n",
"Total params: 30,283\n",
"Trainable params: 30,283\n",
"Non-trainable params: 0\n",
"_________________________________________________________________\n"
]
}
],
"execution_count": 102,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "code",
"source": [
"model.compile(optimizer='adam', \n",
" loss='binary_crossentropy',\n",
" metrics=['accuracy'])"
],
"outputs": [],
"execution_count": 103,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "code",
"source": [
"history = model.fit(X_train, y_train, epochs=5)"
],
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"Train on 1800 samples\n",
"Epoch 1/5\n",
"1800/1800 [==============================] - 0s 65us/sample - loss: 0.0374 - accuracy: 0.9967\n",
"Epoch 2/5\n",
"1800/1800 [==============================] - 0s 55us/sample - loss: 0.0222 - accuracy: 0.9994\n",
"Epoch 3/5\n",
"1800/1800 [==============================] - 0s 51us/sample - loss: 0.0147 - accuracy: 1.0000\n",
"Epoch 4/5\n",
"1800/1800 [==============================] - 0s 50us/sample - loss: 0.0102 - accuracy: 1.0000\n",
"Epoch 5/5\n",
"1800/1800 [==============================] - 0s 50us/sample - loss: 0.0076 - accuracy: 1.0000\n"
]
}
],
"execution_count": 106,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "code",
"source": [
"results = model.evaluate(X_test, y_test, verbose=0)\n",
"print(results)"
],
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": [
"[0.604368144646287, 0.8225]\n"
]
}
],
"execution_count": 117,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
{
"cell_type": "code",
"source": [
"history_dict = history.history\n",
"\n",
"acc = history_dict['accuracy']\n",
"loss = history_dict['loss']\n",
"epochs = range(1, len(acc) + 1)\n",
"\n",
"plt.plot(epochs, loss, 'bo', label='Training loss')\n",
"plt.title('Training loss')\n",
"plt.xlabel('Epochs')\n",
"plt.ylabel('Loss')\n",
"plt.legend()\n",
"\n",
"plt.show()"
],
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
],
"image/png": [
"iVBORw0KGgoAAAANSUhEUgAAAZIAAAEWCAYAAABMoxE0AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4yLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvOIA7rQAAIABJREFUeJzt3X+cHXV97/HXO9n8IBBC3cRCs4RFw0PcgNCwDXqBgvy6oQpRSSWw/PJiF1SkvZTepoDWRvMocHtFwVxrFBBk+VW41Egr0dvgtdQassFACDFlTRPYGCSJEIgRYZPP/WO+G07Ws7tnd3b27G7ez8fjPM7Md75nzufMZvM+853ZGUUEZmZm/TWq2gWYmdnw5iAxM7NcHCRmZpaLg8TMzHJxkJiZWS4OEjMzy8VBYlUnabSkHZKmDWTfapI0XdKAn1sv6XRJG0rm10k6qZK+/Xivb0i6tr+v72G9X5D0zYFer1VPTbULsOFH0o6S2QnAb4Bdaf7yiGjpy/oiYhdwwED33RdExLsGYj2SPg5cGBGnlKz74wOxbhv5HCTWZxGx5z/y9I334xHxf7vrL6kmIjoGozYzG3we2rIBl4Yu7pd0r6TXgAslvU/SjyW9ImmzpFskjUn9aySFpPo0f3da/l1Jr0n6d0mH97VvWn6WpP+QtF3SrZL+TdKl3dRdSY2XS2qT9LKkW0peO1rSzZK2SVoPzO5h+1wn6b4ubYskfTFNf1zS2vR5fpb2FrpbV7ukU9L0BEnfSrWtAY7r0vd6SevTetdIOie1Hw18BTgpDRtuLdm2nyt5/RXps2+T9I+SDqlk2/RG0odTPa9IWibpXSXLrpX0c0mvSvppyWd9r6QnU/svJP3PSt/PChARfvjR7wewATi9S9sXgDeAs8m+rOwH/AFwPNle8DuA/wCuTP1rgADq0/zdwFagERgD3A/c3Y++bwdeA+akZVcDbwKXdvNZKqnx28AkoB74ZednB64E1gB1QC3ww+zXq+z7vAPYAexfsu6XgMY0f3bqI+BU4NfAe9Ky04ENJetqB05J038H/AD4HeAw4NkufT8KHJJ+JhekGn43Lfs48IMudd4NfC5Nn5lqPBYYD/xvYFkl26bM5/8C8M00/e5Ux6npZ3QtsC5NzwA2AgenvocD70jTK4Dz0/RE4Phq/y7syw/vkVhRHo+I70TE7oj4dUSsiIjlEdEREeuBxcDJPbz+wYhojYg3gRay/8D62veDwKqI+HZadjNZ6JRVYY1/GxHbI2ID2X/ane/1UeDmiGiPiG3ADT28z3rgGbKAAzgDeDkiWtPy70TE+sgsA/4FKHtAvYuPAl+IiJcjYiPZXkbp+z4QEZvTz+Qesi8BjRWsF6AJ+EZErIqI14H5wMmS6kr6dLdtejIPWBIRy9LP6AayMDoe6CALrRlpePQ/07aD7AvBEZJqI+K1iFhe4eewAjhIrCgvlM5IOlLSP0l6UdKrwAJgcg+vf7Fkeic9H2Dvru/vldYREUH2Db6sCmus6L3Ivkn35B7g/DR9QZrvrOODkpZL+qWkV8j2BnraVp0O6akGSZdKeioNIb0CHFnheiH7fHvWFxGvAi8DU0v69OVn1t16d5P9jKZGxDrgz8l+Di+lodKDU9ePAQ3AOklPSPqjCj+HFcBBYkXpeurr18i+hU+PiAOBz5IN3RRpM9lQEwCSxN7/8XWVp8bNwKEl872dnvwAcLqkqWR7JvekGvcDHgT+lmzY6SDgexXW8WJ3NUh6B/BV4BNAbVrvT0vW29upyj8nGy7rXN9EsiG0TRXU1Zf1jiL7mW0CiIi7I+IEsmGt0WTbhYhYFxHzyIYv/xfwkKTxOWuxfnKQ2GCZCGwHfiXp3cDlg/CejwAzJZ0tqQb4U2BKQTU+APyZpKmSaoG/7KlzRLwIPA58E1gXEc+lReOAscAWYJekDwKn9aGGayUdpOzvbK4sWXYAWVhsIcvUPyHbI+n0C6Cu8+SCMu4FLpP0HknjyP5D/9eI6HYPrw81nyPplPTef0F2XGu5pHdLen96v1+nx26yD3CRpMlpD2Z7+my7c9Zi/eQgscHy58AlZP9JfI3soHihIuIXwHnAF4FtwDuBn5D93ctA1/hVsmMZq8kOBD9YwWvuITt4vmdYKyJeAf478DDZAeu5ZIFYib8m2zPaAHwXuKtkvU8DtwJPpD7vAkqPK3wfeA74haTSIarO1z9KNsT0cHr9NLLjJrlExBqybf5VspCbDZyTjpeMA24iO671Itke0HXppX8ErFV2VuDfAedFxBt567H+UTZsbDbySRpNNpQyNyL+tdr1mI0U3iOxEU3S7DTUMw74DNnZPk9UuSyzEcVBYiPdicB6smGT/wp8OCK6G9oys37w0JaZmeXiPRIzM8tln7ho4+TJk6O+vr7aZZiZDSsrV67cGhE9nTIP7CNBUl9fT2tra7XLMDMbViT1doUGwENbZmaWk4PEzMxycZCYmVku+8QxEjMbet58803a29t5/fXXq13KPm/8+PHU1dUxZkx3l1rrmYPEzKqivb2diRMnUl9fT3ZhZquGiGDbtm20t7dz+OGH9/6CMjy01Y2WFqivh1GjsueWlmpXZDayvP7669TW1jpEqkwStbW1ufYMvUdSRksLNDfDzp3Z/MaN2TxAU+7rnZpZJ4fI0JD35+A9kjKuu+6tEOm0c2fWbmZme3OQlPH8831rN7PhZ9u2bRx77LEce+yxHHzwwUydOnXP/BtvVHZrk4997GOsW7euxz6LFi2iZYDGxk888URWrVo1IOsaSB7aKmPatGw4q1y7mVVHS0s2KvD889nv4sKF+Yaaa2tr9/yn/LnPfY4DDjiAa665Zq8+EUFEMGpU+e/cd9xxR6/v86lPfar/RQ4T3iMpY+FCmDBh77YJE7J2Mxt8ncctN26EiLeOWxZxEkxbWxsNDQ00NTUxY8YMNm/eTHNzM42NjcyYMYMFCxbs6du5h9DR0cFBBx3E/PnzOeaYY3jf+97HSy+9BMD111/Pl770pT3958+fz6xZs3jXu97Fj370IwB+9atfce6559LQ0MDcuXNpbGzsdc/j7rvv5uijj+aoo47i2muvBaCjo4OLLrpoT/stt9wCwM0330xDQwPvec97uPDCCwd8m3mPpIzObzkD+e3HzPqvp+OWRfxe/vSnP+Wuu+6isbERgBtuuIG3ve1tdHR08P73v5+5c+fS0NCw12u2b9/OySefzA033MDVV1/N7bffzvz5839r3RHBE088wZIlS1iwYAGPPvoot956KwcffDAPPfQQTz31FDNnzuyxvvb2dq6//npaW1uZNGkSp59+Oo888ghTpkxh69atrF69GoBXXnkFgJtuuomNGzcyduzYPW0DyXsk3Whqgg0bYPfu7NkhYlY9g33c8p3vfOeeEAG49957mTlzJjNnzmTt2rU8++yzv/Wa/fbbj7POOguA4447jg0bNpRd90c+8pHf6vP4448zb948AI455hhmzJjRY33Lly/n1FNPZfLkyYwZM4YLLriAH/7wh0yfPp1169
Zx1VVXsXTpUiZNmgTAjBkzuPDCC2lpaen3Hx32xEFiZkNed8cnizpuuf/++++Zfu655/jyl7/MsmXLePrpp5k9e3bZv7kYO3bsnunRo0fT0dFRdt3jxo3rtU9/1dbW8vTTT3PSSSexaNEiLr/8cgCWLl3KFVdcwYoVK5g1axa7du0a0Pd1kJjZkFfN45avvvoqEydO5MADD2Tz5s0sXbp0wN/jhBNO4IEHHgBg9erVZfd4Sh1//PE89thjbNu2jY6ODu677z5OPvlktmzZQkTwx3/8xyxYsIAnn3ySXbt20d7ezqmnnspNN93E1q1b2dl1nDAnHyMxsyGvmsctZ86cSUNDA0ceeSSHHXYYJ5xwwoC/x6c//WkuvvhiGhoa9jw6h6XKqaur4/Of/zynnHIKEcHZZ5/NBz7wAZ588kkuu+wyIgJJ3HjjjXR0dHDBBRfw2muvsXv3bq655homTpw4oPXvE/dsb2xsDN/YymxoWbt2Le9+97urXcaQ0NHRQUdHB+PHj+e5557jzDPP5LnnnqOmZvC+65f7eUhaGRGN3bxkD++RmJlV2Y4dOzjttNPo6OggIvja1742qCGS1/Cp1MxshDrooINYuXJltcvoNx9sN7Oq2ReG1oeDvD8HB4mZVcX48ePZtm2bw6TKOu9HMn78+H6vo9ChLUmzgS8Do4FvRMQNXZaPA+4CjgO2AedFxAZJs4DFnd2Az0XEw+k1G4DXgF1ARyUHgsxs6Kmrq6O9vZ0tW7ZUu5R9XucdEvursCCRNBpYBJwBtAMrJC2JiNITpC8DXo6I6ZLmATcC5wHPAI0R0SHpEOApSd+JiM6/3nl/RGwtqnYzK96YMWP6fUc+G1qKHNqaBbRFxPqIeAO4D5jTpc8c4M40/SBwmiRFxM6S0BgPeN/XzGyIKjJIpgIvlMy3p7ayfVJwbAdqASQdL2kNsBq4oiRYAviepJWSmrt7c0nNkloltXrX2cysOEP2YHtELI+IGcAfAH8lqfNI0IkRMRM4C/iUpD/s5vWLI6IxIhqnTJkySFWbme17igySTcChJfN1qa1sH0k1wCSyg+57RMRaYAdwVJrflJ5fAh4mG0IzM7MqKTJIVgBHSDpc0lhgHrCkS58lwCVpei6wLCIivaYGQNJhwJHABkn7S5qY2vcHziQ7MG9mZlVS2Flb6YyrK4GlZKf/3h4RayQtAFojYglwG/AtSW3AL8nCBuBEYL6kN4HdwCcjYqukdwAPS+qs/Z6IeLSoz2BmZr3zRRvNzKysSi/aOGQPtpuZ2fDgIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCyXQoNE0mxJ6yS1SZpfZvk4Sfen5csl1af2WZJWpcdTkj5c6TrNzGxwFRYkkkYDi4CzgAbgfEkNXbpdBrwcEdOBm4EbU/szQGNEHAvMBr4mqabCdZqZ2SAqco9kFtAWEesj4g3gPmBOlz5zgDvT9IPAaZIUETsjoiO1jweiD+s0M7NBVGSQTAVeKJlvT21l+6Tg2A7UAkg6XtIaYDVwRVpeyTpJr2+W1CqpdcuWLQPwcczMrJwhe7A9IpZHxAzgD4C/kjS+j69fHBGNEdE4ZcqUYoo0M7NCg2QTcGjJfF1qK9tHUg0wCdhW2iEi1gI7gKMqXKeZmQ2iIoNkBXCEpMMljQXmAUu69FkCXJKm5wLLIiLSa2oAJB0GHAlsqHCdZmY2iGqKWnFEdEi6ElgKjAZuj4g1khYArRGxBLgN+JakNuCXZMEAcCIwX9KbwG7gkxGxFaDcOov6DGZm1jtFRO+9hrnGxsZobW2tdhlmZsOKpJUR0dhbvyF7sN3MzIYHB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5FBokkmZLWiepTdL8MsvHSbo/LV8uqT61nyFppaTV6fnUktf8IK1zVXq8vcjPYGZmPaspasWSRgOLgDOAdmCFpCUR8WxJt8uAlyNiuqR5wI3AecBW4OyI+Lmko4ClwNSS1zVFRGtRtZuZWeWK3COZBbRFxPqIeAO4D5jTpc8c4M40/SBwmiRFxE8i4uepfQ2wn6RxBdZqZmb9VGSQTAVeKJlvZ++9ir36REQHsB2o7dLnXODJiPhNSdsdaVjrM5JU7s0lNUtqldS6ZcuWPJ/DzMx6MKQPtkuaQTbcdXlJc1NEHA2clB4XlXttRCyOiMaIaJwyZUrxxZqZ7aOKDJJNwKEl83WprWwfSTXAJGBbmq8DHgYujoifdb4gIjal59eAe8iG0MzMrEqKDJIVwBGSDpc0FpgHLOnSZwlwSZqeCyyLiJB0EPBPwPyI+LfOzpJqJE1O02OADwLPFPgZzMysF4UFSTrmcSXZGVdrgQciYo2kBZLOSd1uA2oltQFXA52nCF8JTAc+2+U033HAUklPA6vI9mi+XtRnMDOz3ikiql1D4RobG6O11WcLm5n1haSVEdHYW78hfbDdzMyGPgeJmZnl4iAxM7NcKgoSSe/s/MtySadIuiqdWWVmZvu4SvdIHgJ2SZoOLCb72497CqvKzMyGjUqDZHc6nffDwK0R8RfAIcWVZWZmw0WlQfKmpPPJ/njwkdQ2ppiSzMxsOKk0SD4GvA9YGBH/Kelw4FvFlWVmZsNFRfcjSfcQuQpA0u8AEyPixiILMzOz4aHSs7Z+IOlASW8DngS+LumLxZZmZmbDQaVDW5Mi4lXgI8BdEXE8cHpxZdlw09IC9fUwalT23NJS7YrMbLBUGiQ1kg4BPspbB9vNgCw0mpth40aIyJ6bmx0mZvuKSoNkAdlVfH8WESskvQN4rriybDi57jrYuXPvtp07s3YzG/kqPdj+D8A/lMyvJ7sFrhnPP9+3djMbWSo92F4n6WFJL6XHQ+kOhmZMm9a3djMbWSod2rqD7G6Gv5ce30ltZixcCBMm7N02YULWbmYjX6VBMiUi7oiIjvT4JjClwLpsGGlqgsWL4bDDQMqeFy/O2s1s5KvoGAmwTdKFwL1p/nxgWzEl2XDU1OTgMNtXVbpH8t/ITv19EdgMzAUuLagmMzMbRioKkojYGBHnRMSUiHh7RHwIn7VlZmbku0Pi1QNWhZmZDVt5gkQDVoWZmQ1beYIkBqwKMzMbtno8a0vSa5QPDAH7FVKRmZkNKz3ukUTExIg4sMxjYkT0euqwpNmS1klqkzS/zPJxku5Py5dLqk/tZ0haKWl1ej615DXHpfY2SbdI8hCbmVkV5Rna6pGk0cAi4CygAThfUkOXbpcBL0fEdOBmoPNmWVuBsyPiaLLb+5bejfGrwJ8AR6TH7KI+g
5mZ9a6wIAFmAW0RsT4i3gDuA+Z06TMHuDNNPwicJkkR8ZOI+HlqXwPsl/ZeDgEOjIgfR0QAdwEfKvAzmJlZL4oMkqnACyXz7amtbJ+I6AC2A7Vd+pwLPBkRv0n923tZp5mZDaJKL5FSFZJmkA13ndmP1zYDzQDTfBlaM7PCFLlHsgk4tGS+LrWV7SOpBphEuoZXukz9w8DFEfGzkv6ll68vt04AImJxRDRGROOUKb6+pJlZUYoMkhXAEZIOlzQWmEd2KfpSS8gOpkN2/a5lERGSDgL+CZgfEf/W2TkiNgOvSnpvOlvrYuDbBX4GMzPrRWFBko55XEl2i961wAMRsUbSAknnpG63AbWS2sguudJ5ivCVwHTgs5JWpcfb07JPAt8A2oCfAd8t6jOYmVnvlJ38NLI1NjZGa2trtcswMxtWJK2MiMbe+hU5tGVmZvsAB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZmeXiIDEzs1wcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCyXQoNE0mxJ6yS1SZpfZvk4Sfen5csl1af2WkmPSdoh6StdXvODtM5V6fH2Ij+DmZn1rKaoFUsaDSwCzgDagRWSlkTEsyXdLgNejojpkuYBNwLnAa8DnwGOSo+umiKitajazcysckXukcwC2iJifUS8AdwHzOnSZw5wZ5p+EDhNkiLiVxHxOFmgmI04LS1QXw+jRmXPLS3Vrsis/4oMkqnACyXz7amtbJ+I6AC2A7UVrPuONKz1GUkq10FSs6RWSa1btmzpe/VmBWlpgeZm2LgRIrLn5maHiQ1fw/Fge1NEHA2clB4XlesUEYsjojEiGqdMmTKoBZr15LrrYOfOvdt27szazYajIoNkE3BoyXxdaivbR1INMAnY1tNKI2JTen4NuIdsCM1s2Hj++b61mw11RQbJCuAISYdLGgvMA5Z06bMEuCRNzwWWRUR0t0JJNZImp+kxwAeBZwa8crMCTZvWt3azoa6wIEnHPK4ElgJrgQciYo2kBZLOSd1uA2oltQFXA3tOEZa0AfgicKmkdkkNwDhgqaSngVVkezRfL+ozmBVh4UKYMGHvtgkTsnaz4Ug97ACMGI2NjdHa6rOFbehoacmOiTz/fLYnsnAhNDVVuyqzvUlaGRGNvfUr7O9IzKx7TU0ODhs5huNZW2ZmNoQ4SMzMLBcHiZmZ5eIgMTOzXBwkZmaWi4PEzMxycZCYmVkuDhIzM8vFQWJmZrk4SMzMLBcHiZmZ5eIgMTOzXBwkZmaWi4PEzMxycZCYmVkuDhIzM8vFQWJmZrk4SMzMLBcHiZmZ5eIgMTOzXBwkZmaWi4PEzMxycZCY2ZDX0gL19TBqVPbc0lLtiqxUoUEiabakdZLaJM0vs3ycpPvT8uWS6lN7raTHJO2Q9JUurzlO0ur0mlskqcjPYGbV1dICzc2wcSNEZM/NzQ6ToaSwIJE0GlgEnAU0AOdLaujS7TLg5YiYDtwM3JjaXwc+A1xTZtVfBf4EOCI9Zg989WY2VFx3HezcuXfbzp1Zuw0NRe6RzALaImJ9RLwB3AfM6dJnDnBnmn4QOE2SIuJXEfE4WaDsIekQ4MCI+HFEBHAX8KECP4OZVdnzz/et3QZfkUEyFXihZL49tZXtExEdwHagtpd1tveyTgAkNUtqldS6ZcuWPpZuZkPFtGl9a7fBN2IPtkfE4ohojIjGKVOmVLscM+unhQthwoS92yZMyNptaCgySDYBh5bM16W2sn0k1QCTgG29rLOul3Wa2QjS1ASLF8Nhh4GUPS9enLXb0FBkkKwAjpB0uKSxwDxgSZc+S4BL0vRcYFk69lFWRGwGXpX03nS21sXAtwe+dDMbSpqaYMMG2L07e3aIDC01Ra04IjokXQksBUYDt0fEGkkLgNaIWALcBnxLUhvwS7KwAUDSBuBAYKykDwFnRsSzwCeBbwL7Ad9NDzMzqxL1sAMwYjQ2NkZra2u1yzAzG1YkrYyIxt76jdiD7WZmNjgcJGZmlouDxMzMcnGQmJlZLg4SMzPLxUFiZma5OEjMzCwXB4mZ2Qgz2DcCK+wv283MbPB13gis8x4unTcCg+IuLeM9EjOzEaQaNwJzkJiZjSDVuBGYg8TMbASpxo3AHCRmZiNINW4E5iAxMxtBqnEjMJ+1ZWY2wjQ1De7Nv7xHYmZmuThIzMwsFweJmZnl4iAxM7NcHCRmZpaLIqLaNRRO0hZgYz9fPhnYOoDlDBTX1Teuq29cV9+M1LoOi4gpvXXaJ4IkD0mtEdFY7Tq6cl1947r6xnX1zb5el4e2zMwsFweJmZnl4iDp3eJqF9AN19U3rqtvXFff7NN1+RiJmZnl4j0SMzPLxUFiZma5OEgASbdLeknSM90sl6RbJLVJelrSzCFS1ymStktalR6fHaS6DpX0mKRnJa2R9Kdl+gz6NquwrkHfZpLGS3pC0lOprr8p02ecpPvT9louqX6I1HWppC0l2+vjRddV8t6jJf1E0iNllg369qqwrqpsL0kbJK1O79laZnmxv48Rsc8/gD8EZgLPdLP8j4DvAgLeCywfInWdAjxShe11CDAzTU8E/gNoqPY2q7CuQd9maRsckKbHAMuB93bp80ng79P0POD+IVLXpcBXBvvfWHrvq4F7yv28qrG9KqyrKtsL2ABM7mF5ob+P3iMBIuKHwC976DIHuCsyPwYOknTIEKirKiJic0Q8maZfA9YCU7t0G/RtVmFdgy5tgx1pdkx6dD3LZQ5wZ5p+EDhNkoZAXVUhqQ74APCNbroM+vaqsK6hqtDfRwdJZaYCL5TMtzME/oNK3peGJr4racZgv3kaUvh9sm+zpaq6zXqoC6qwzdJwyCrgJeD7EdHt9oqIDmA7UDsE6gI4Nw2HPCjp0KJrSr4E/A9gdzfLq7K9KqgLqrO9AviepJWSmsssL/T30UEyvD1Jdi2cY4BbgX8czDeXdADwEPBnEfHqYL53T3qpqyrbLCJ2RcSxQB0wS9JRg/G+vamgru8A9RHxHuD7vLUXUBhJHwReioiVRb9XX1RY16Bvr+TEiJgJnAV8StIfDtL7Ag6SSm0CSr9Z1KW2qoqIVzuHJiLin4ExkiYPxntLGkP2n3VLRPyfMl2qss16q6ua2yy95yvAY8DsLov2bC9JNcAkYFu164qIbRHxmzT7DeC4QSjnBOAcSRuA+4BTJd3dpU81tlevdVVpexERm9LzS8DDwKwuXQr9fXSQVGYJcHE68+G9wPaI2FztoiQd3DkuLGkW2c+z8P980nveBqyNiC92023Qt1kldVVjm0maIumgNL0fcAbw0y7dlgCXpOm5wLJIR0mrWVeXcfRzyI47FSoi/ioi6iKinuxA+rKIuLBLt0HfXpXUVY3tJWl/SRM7p4Ezga5nehb6+1gzUCsaziTdS3Y2z2RJ7cBfkx14JCL+HvhnsrMe2oCdwMeGSF1zgU9I6gB+
Dcwr+pcpOQG4CFidxtcBrgWmldRWjW1WSV3V2GaHAHdKGk0WXA9ExCOSFgCtEbGELAC/JamN7ASLeQXXVGldV0k6B+hIdV06CHWVNQS2VyV1VWN7/S7wcPp+VAPcExGPSroCBuf30ZdIMTOzXDy0ZWZmuThIzMwsFweJmZnl4iAxM7NcHCRmZpaLg8SsnyTtKrnK6ypJ8wdw3fXq5qrPZkON/47ErP9+nS4vYrZP8x6J2QBL94a4Kd0f4glJ01N7vaRl6YJ+/yJpWmr/XUkPpwtJPiXpv6RVjZb0dWX3Cvle+utzJF2l7J4rT0u6r0of02wPB4lZ/+3XZWjrvJJl2yPiaOArZFeMhewikXemC/q1ALek9luA/5cuJDkTWJPajwAWRcQM4BXg3NQ+H/j9tJ4rivpwZpXyX7ab9ZOkHRFxQJn2DcCpEbE+XUTyxYiolbQVOCQVB95bAAABAElEQVQi3kztmyNisqQtQF3Jxf46L4P//Yg4Is3/JTAmIr4g6VFgB9mVi/+x5J4iZlXhPRKzYkQ3033xm5LpXbx1TPMDwCKyvZcV6eq3ZlXjIDErxnklz/+epn/EWxcXbAL+NU3/C/AJ2HOjqUndrVTSKODQiHgM+Euyy6f/1l6R2WDyNxmz/tuv5CrDAI9GROcpwL8j6WmyvYrzU9ungTsk/QWwhbeuwPqnwGJJl5HteXwC6O4S36OBu1PYCLgl3UvErGp8jMRsgKVjJI0RsbXatZgNBg9tmZlZLt4jMTOzXLxHYmZmuThIzMwsFweJmZnl4iAxM7NcHCRmZpbL/we+JuTHnPC2pgAAAABJRU5ErkJggg==\n"
]
},
"metadata": {
"needs_background": "light"
}
}
],
"execution_count": 112,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},
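{
"cell_type": "markdown",
"source": [
"Finally, a minimal sketch (not in the original notebook) of how one could score a new piece of text with the trained model. It assumes `lexicon`, `lemmatizer` and `model` from the cells above are still in scope and reuses the same count-vector embedding; the sample sentence is made up."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Sketch only: embed a new review the same way as the training data and\n",
"# let the model score it.\n",
"def embed_text(text, lexicon):\n",
"    words = [lemmatizer.lemmatize(w) for w in word_tokenize(text.lower())]\n",
"    features = np.zeros(len(lexicon))\n",
"    for w in words:\n",
"        if w in lexicon:\n",
"            features[lexicon.index(w)] += 1\n",
"    return features\n",
"\n",
"sample = 'a wonderful film with great performances'\n",
"scores = model.predict(np.array([embed_text(sample, lexicon)]))\n",
"# Index 0 corresponds to the [1, 0] (positive) label, index 1 to [0, 1] (negative).\n",
"print(scores)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
},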
{
"cell_type": "code",
"source": [],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": false,
"outputHidden": false,
"inputHidden": false
}
}
],
"metadata": {
"kernel_info": {
"name": "python3"
},
"language_info": {
"name": "python",
"version": "3.7.2",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"kernelspec": {
"name": "python3",
"language": "python",
"display_name": "Python 3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}