Using Facebook's deep-learning framework fastText to try to predict Jungian cognitive functions from blog authors' writing style.
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Author: **Mattias Östmar**\n",
"\n",
"Date: **2019-03-14**\n",
"\n",
"Contact: **mattiasostmar at gmail dot com**\n",
"\n",
"Thanks to Mikael Huss for being a good speaking partner.\n",
"\n",
"In this notebook we're going to use the [python version of fasttext](https://pypi.org/project/fasttext/), based on [Facebooks fasttext](https://github.com/facebookresearch/fastText) tool, to try to predict the [Jungian cognitive function](https://en.wikipedia.org/wiki/Jungian_cognitive_functions) of the authors writing style as appearing in blog posts."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"import csv\n",
"import requests\n",
"import pandas as pd\n",
"from sklearn.model_selection import train_test_split\n",
"import fasttext"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": false
},
"source": [
"Download the annotated dataset as semi-colon separated CSV from [https://osf.io/zvw5g/download](https://osf.io/zvw5g/download) (66,1 MB file size)"
]
},
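{
"cell_type": "markdown",
"metadata": {},
"source": [
"Since `requests` is imported above, the cell below is a minimal sketch of fetching the file programmatically and saving it under the file name used in the next cell; you can of course also download it manually in the browser."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Minimal sketch: fetch the dataset (about 66 MB) and save it under the\n",
"# file name that is read in the next cell; a manual download works too.\n",
"url = \"https://osf.io/zvw5g/download\"\n",
"response = requests.get(url)\n",
"response.raise_for_status()\n",
"with open(\"blog_texts_and_cognitive_function.csv\", \"wb\") as f:\n",
"    f.write(response.content)"
]
},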
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>text</th>\n",
" <th>base_function</th>\n",
" <th>directed_function</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>❀*a drop of colour*❀ 1/39 next→ home ask past ...</td>\n",
" <td>f</td>\n",
" <td>fi</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>Neko cool kids can't die home family daveblog ...</td>\n",
" <td>t</td>\n",
" <td>ti</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>Anything... Anything Mass Effect-related Music...</td>\n",
" <td>f</td>\n",
" <td>fe</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" text base_function \\\n",
"1 ❀*a drop of colour*❀ 1/39 next→ home ask past ... f \n",
"2 Neko cool kids can't die home family daveblog ... t \n",
"3 Anything... Anything Mass Effect-related Music... f \n",
"\n",
" directed_function \n",
"1 fi \n",
"2 ti \n",
"3 fe "
]
},
"execution_count": 2,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df = pd.read_csv(\"blog_texts_and_cognitive_function.csv\", sep=\";\", index_col=0)\n",
"df.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"<class 'pandas.core.frame.DataFrame'>\n",
"Int64Index: 22588 entries, 1 to 25437\n",
"Data columns (total 3 columns):\n",
"text 22588 non-null object\n",
"base_function 22588 non-null object\n",
"directed_function 22588 non-null object\n",
"dtypes: object(3)\n",
"memory usage: 705.9+ KB\n"
]
}
],
"source": [
"df.info()"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/plain": [
"n 9380\n",
"f 6063\n",
"t 4502\n",
"s 2643\n",
"Name: base_function, dtype: int64"
]
},
"execution_count": 3,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"df.base_function.value_counts()"
]
},
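{
"cell_type": "markdown",
"metadata": {},
"source": [
"The classes are clearly imbalanced, so a useful reference point is the majority-class baseline: the precision we would get by always predicting the largest class, `n`. A small sketch of that calculation, which we compare against further down:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Majority-class baseline: precision of always predicting the most common class\n",
"baseline = df.base_function.value_counts(normalize=True).max()\n",
"print(\"Majority-class baseline: {:.3f}\".format(baseline))"
]
},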
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see, crudely, if the blog writers of a certain class writes longer or shorter texts in average."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>text_len</th>\n",
" </tr>\n",
" <tr>\n",
" <th>base_function</th>\n",
" <th></th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>f</th>\n",
" <td>476.125869</td>\n",
" </tr>\n",
" <tr>\n",
" <th>n</th>\n",
" <td>489.926113</td>\n",
" </tr>\n",
" <tr>\n",
" <th>s</th>\n",
" <td>488.566448</td>\n",
" </tr>\n",
" <tr>\n",
" <th>t</th>\n",
" <td>508.435853</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" text_len\n",
"base_function \n",
"f 476.125869\n",
"n 489.926113\n",
"s 488.566448\n",
"t 508.435853"
]
},
"execution_count": 11,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"tokens = []\n",
"df.text.apply(lambda x: tokens.append(len(x.split())))\n",
"df[\"text_len\"] = pd.Series(tokens)\n",
"df.groupby(\"base_function\").mean()"
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"Let's try to predict the four base cognitive functions. We need to prepare the labels to suite fasttexts formatting."
]
},
{
"cell_type": "code",
"execution_count": 76,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>text</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>__label__f</td>\n",
" <td>❀*a drop of colour*❀ 1/39 next→ home ask past ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>__label__t</td>\n",
" <td>Neko cool kids can't die home family daveblog ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>__label__f</td>\n",
" <td>Anything... Anything Mass Effect-related Music...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" label text\n",
"1 __label__f ❀*a drop of colour*❀ 1/39 next→ home ask past ...\n",
"2 __label__t Neko cool kids can't die home family daveblog ...\n",
"3 __label__f Anything... Anything Mass Effect-related Music..."
]
},
"execution_count": 76,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset = df[[\"base_function\",\"text\"]]\n",
"dataset[\"label\"] = df.base_function.apply(lambda x: \"__label__\" + x)\n",
"dataset.drop(\"base_function\", axis=1, inplace=True)\n",
"dataset = dataset[[\"label\",\"text\"]]\n",
"dataset.head(3)"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>text</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>25435</th>\n",
" <td>__label__t</td>\n",
" <td>Living in Lit Home Hi there Ask Archive Mobile...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25436</th>\n",
" <td>__label__f</td>\n",
" <td>Love is Art Love is Art message   ·   about   ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>25437</th>\n",
" <td>__label__n</td>\n",
" <td>(Source: taeyeohn , via ninakask ) Posted at 0...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" label text\n",
"25435 __label__t Living in Lit Home Hi there Ask Archive Mobile...\n",
"25436 __label__f Love is Art Love is Art message   ·   about   ...\n",
"25437 __label__n (Source: taeyeohn , via ninakask ) Posted at 0..."
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"dataset.tail(3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now let's separate the dataset into two separate files for 80 per cent training and 20 per cent evaluation respectively."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Rows in training data: 18070\n",
"Rows in test data: 4518\n"
]
}
],
"source": [
"train, test = train_test_split(dataset, test_size=0.2)\n",
"print(\"Rows in training data: {}\".format(len(train)))\n",
"print(\"Rows in test data: {}\".format(len(test)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we create two separate textfiles for the training and evaluation respectively, with each row containing the label and the text according to fasttexts formatting standards."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"train.to_csv(r'jung_training.txt', index=False, sep=' ', header=False, quoting=csv.QUOTE_NONE, quotechar=\"\", escapechar=\" \")\n",
"test.to_csv(r'jung_evaluation.txt', index=False, sep=' ', header=False, quoting=csv.QUOTE_NONE, quotechar=\"\", escapechar=\" \")"
]
},
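{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can look at the first line of the training file to confirm that each row starts with a `__label__` prefix followed by the text:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Sanity check: each row should start with a __label__ prefix followed by the text\n",
"with open(\"jung_training.txt\") as f:\n",
"    print(f.readline()[:120])"
]
},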
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we can train our model with the default settings and no text preprocessing to get an initial setup."
]
},
{
"cell_type": "code",
"execution_count": 81,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"classifier1 = fasttext.supervised(\"jung_training.txt\",\"model_jung_default\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we can evaluate the model using our test data."
]
},
{
"cell_type": "code",
"execution_count": 82,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"P@1: 0.42297476759628155\n",
"R@1: 0.42297476759628155\n",
"Number of examples: 4518\n"
]
}
],
"source": [
"result = classifier1.test(\"jung_evaluation.txt\")\n",
"print('P@1:', result.precision)\n",
"print('R@1:', result.recall)\n",
"print('Number of examples:', result.nexamples)"
]
},
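{
"cell_type": "markdown",
"metadata": {},
"source": [
"The classifier can also be used for individual predictions. A hedged sketch, assuming the `predict` method of this version of the `fasttext` wrapper (the exact call and return format may differ between versions), and using a made-up text snippet:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Hedged sketch: predict the most likely base function for a new snippet.\n",
"# The predict call is assumed from this version of the fasttext wrapper\n",
"# and may differ in other versions; the snippet itself is made up.\n",
"sample = [\"home ask archive music games and other things i like\"]\n",
"print(classifier1.predict(sample))"
]
},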
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The results are slightly better than pure chance (0.415). Let's see if we can improve the model by some crude preprocessing of the texts, removing non-alphanumeric characters and making all words lowercase."
]
},
{
"cell_type": "code",
"execution_count": 75,
"metadata": {
"collapsed": false
},
"outputs": [
{
"data": {
"text/html": [
"<div>\n",
"<style>\n",
" .dataframe thead tr:only-child th {\n",
" text-align: right;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: left;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>label</th>\n",
" <th>text</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>1</th>\n",
" <td>__label__f</td>\n",
" <td>a drop of colour 1 39 next home ask past a dro...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>2</th>\n",
" <td>__label__t</td>\n",
" <td>neko cool kids can t die home family daveblog ...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>3</th>\n",
" <td>__label__f</td>\n",
" <td>anything anything mass effect related music fu...</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" label text\n",
"1 __label__f a drop of colour 1 39 next home ask past a dro...\n",
"2 __label__t neko cool kids can t die home family daveblog ...\n",
"3 __label__f anything anything mass effect related music fu..."
]
},
"execution_count": 75,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"processed = dataset.copy()\n",
"processed[\"text\"] = processed.text.str.replace(r\"[\\W ]\",\" \") # replace all characters that are not a-z, A-Z or 0-9\n",
"processed[\"text\"] = processed.text.str.lower() # make all characters lower case\n",
"processed[\"text\"] = processed.text.str.replace(r' +',' ') # Remove multiple spaces\n",
"processed[\"text\"] = processed.text.str.replace(r'^ +','') # Remove resulting initial spaces\n",
"\n",
"processed.head(3)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And then we create training and evaluation data from the processed dataframe and store them to two new files with the prefix \"processed_\""
]
},
{
"cell_type": "code",
"execution_count": 77,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Rows in training data: 18070\n",
"Rows in test data: 4518\n"
]
}
],
"source": [
"train, test = train_test_split(processed, test_size=0.2)\n",
"print(\"Rows in training data: {}\".format(len(train)))\n",
"print(\"Rows in test data: {}\".format(len(test)))\n",
"\n",
"train.to_csv(r'processed_jung_training.txt', index=False, sep=' ', header=False, quoting=csv.QUOTE_NONE, quotechar=\"\", escapechar=\" \")\n",
"test.to_csv(r'processed_jung_evaluation.txt', index=False, sep=' ', header=False, quoting=csv.QUOTE_NONE, quotechar=\"\", escapechar=\" \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"And re-run the training and evaluation."
]
},
{
"cell_type": "code",
"execution_count": 84,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"P@1: 0.42076139884904823\n",
"R@1: 0.42076139884904823\n",
"Number of examples: 4518\n"
]
}
],
"source": [
"classifier2 = fasttext.supervised(\"processed_jung_training.txt\",\"model_jung_preprocessed\")\n",
"result = classifier2.test(\"processed_jung_evaluation.txt\")\n",
"print('P@1:', result.precision)\n",
"print('R@1:', result.recall)\n",
"print('Number of examples:', result.nexamples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Even worse results now. Apparently capital letters and special characters are features that help distinguish between the different labels, so let's keep the original trainingdata for further training and tuning.\n",
"\n",
"What happens if we increase the number of epochs from the default 5 epochs to 25?"
]
},
{
"cell_type": "code",
"execution_count": 90,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"P@1: 0.35568835768038953\n",
"R@1: 0.35568835768038953\n",
"Number of examples: 4518\n"
]
}
],
"source": [
"classifier3 = fasttext.supervised(\"jung_training.txt\", \"model_jung_default_25epochs\", epoch=25)\n",
"result = classifier3.test(\"jung_evaluation.txt\")\n",
"print('P@1:', result.precision)\n",
"print('R@1:', result.recall)\n",
"print('Number of examples:', result.nexamples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The results actually deteriorates from 0.422 to 0.355."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What happens if we increase the learning rate from default 0.05 to 1?"
]
},
{
"cell_type": "code",
"execution_count": 89,
"metadata": {
"collapsed": false
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"P@1: 0.42363877822045154\n",
"R@1: 0.42363877822045154\n",
"Number of examples: 4518\n"
]
}
],
"source": [
"classifier4 = fasttext.supervised(\"jung_training.txt\", \"model_jung_default_lr0.5\", lr=1)\n",
"result = classifier4.test(\"jung_evaluation.txt\")\n",
"print('P@1:', result.precision)\n",
"print('R@1:', result.recall)\n",
"print('Number of examples:', result.nexamples)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"A miniscule improvement from 0.422 to 0.423."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"What happens if we use word_ngrams of 2?"
]
},
{
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": [
"# This makes my kernel crash in Jupyter Notebook\n",
"classifier5 = fasttext.supervised(\"jung_training.txt\", \"model_jung_default_ngrams2\", word_ngrams=2)\n",
"result = classifier5.test(\"jung_evaluation.txt\")\n",
"print('P@1:', result.precision)\n",
"print('R@1:', result.recall)\n",
"print('Number of examples:', result.nexamples)"
]
},
{
"cell_type": "raw",
"metadata": {
"collapsed": true
},
"source": [
"# Instead I download the compiled fasttext from https://github.com/facebookresearch/fastTex\n",
"# and run in terminal\n",
"\n",
"fastText-0.2.0 $ ./fasttext supervised -input ../jung_training.txt -output ../model_jung_default_ngrams2 -wordNgrams 2\n",
"Read 9M words\n",
"Number of words: 610075\n",
"Number of labels: 4\n",
"Progress: 100.0% words/sec/thread: 1017869 lr: 0.000000 loss: 1.301996 ETA: 0h 0m\n",
"\n",
"fastText-0.2.0 $ ./fasttext test ../model_jung_default_ngrams2.bin ../jung_evaluation.txt\n",
"N\t4518\n",
"P@1\t0.423\n",
"R@1\t0.423"
]
},
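{
"cell_type": "markdown",
"metadata": {},
"source": [
"For reference, the newer official `fasttext` Python bindings shipped with the fastText repository expose the same training call as `train_supervised`. A sketch of the equivalent, assuming that package is installed instead of the wrapper used above:"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"# Hedged sketch: the same experiment with the newer official fastText\n",
"# Python bindings (a different package than the wrapper imported above).\n",
"import fasttext\n",
"model = fasttext.train_supervised(input=\"jung_training.txt\", wordNgrams=2)\n",
"n, precision, recall = model.test(\"jung_evaluation.txt\")\n",
"print(n, precision, recall)"
]
},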
{
"cell_type": "markdown",
"metadata": {},
"source": [
"With ngrams set to 2 we get a similar result of 0.423 as when we increase the learning rate to 1."
]
},
{
"cell_type": "markdown",
"metadata": {
"collapsed": true
},
"source": [
"What if we use pre-trained vectors when building the classifier? They can be downloaded from [fasttext.cc](https://fasttext.cc/docs/en/english-vectors.html).\n",
"\n",
"First we use the smallest vektor-file. Note that we have to increase the number of dimensions used when training from default 100 to 300 to match the vector-file."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"fastText-0.2.0 $ ./fasttext supervised -input ../jung_training.txt -output ../model_jung_default_wiki-news-300d-1M -dim 300 -pretrainedVectors wiki-news-300d-1M.vec\n",
"Read 9M words\n",
"Number of words: 610075\n",
"Number of labels: 4\n",
"Progress: 100.0% words/sec/thread: 648220 lr: 0.000000 loss: 1.290941 ETA: 0h 0m\n",
"\n",
"fastText-0.2.0 $ ./fasttext test ../model_jung_default_wiki-news-300d-1M.bin ../jung_evaluation.txt\n",
"N\t4518\n",
"P@1\t0.417\n",
"R@1\t0.417"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Then we try it with the largest vector-file that also includes subword-information."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"fastText-0.2.0 $ ./fasttext supervised -input ../jung_training.txt -output ../model_jung_default_crawl-300d-2M-subword -dim 300 -pretrainedVectors ./crawl-300d-2M-subword/crawl-300d-2M-subword.vec\n",
"Read 9M words\n",
"Number of words: 610075\n",
"Number of labels: 4\n",
"Progress: 100.0% words/sec/thread: 947173 lr: 0.000000 loss: 1.288895 ETA: 0h 0m\n",
"\n",
"fastText-0.2.0 $ ./fasttext test ../model_jung_default_crawl-300d-2M-subword.bin ../jung_evaluation.txt\n",
"N\t4518\n",
"P@1\t0.419\n",
"R@1\t0.419\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The results improve by a mere 0.002. \n",
"\n",
"Just in case, we also train on the preprocessed texts again using the largest pre-trained vectors."
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"fastText-0.2.0 $ ./fasttext supervised -input ../processed_jung_training.txt -output ../model_jung_processed_crawl-300d-2M-subword -dim 300 -verbose 1 -pretrainedVectors ./crawl-300d-2M-subword/crawl-300d-2M-subword.vec\n",
"Read 8M words\n",
"Number of words: 261935\n",
"Number of labels: 4\n",
"Progress: 100.0% words/sec/thread: 1049889 lr: 0.000000 loss: 1.286178 ETA: 0h 0m\n",
"\n",
"fastText-0.2.0 $ ./fasttext test ../model_jung_processed_crawl-300d-2M-subword.bin ../jung_evaluation.txt\n",
"N\t4518\n",
"P@1\t0.425\n",
"R@1\t0.425\n",
"(nlp) mos@mosmbp fastText-0.2.0 $"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Now we get the best results this far. 0.425 in precision when baseline is 0.415, as n = 9380 in the largest class N from a total of 22588 in the original dataset. But that is only 2.4 % better than chance, so it doesn't say very much about the predictability of blog authors Jungian cognitive function based on their writing style."
]
}
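,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For completeness, a small sketch of the arithmetic behind the relative-improvement figure quoted above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false
},
"outputs": [],
"source": [
"# Relative improvement of the best model over the majority-class baseline\n",
"baseline = 0.415        # rounded share of the largest class (9380 / 22588)\n",
"best_precision = 0.425  # best P@1 obtained above\n",
"print(\"Relative improvement: {:.1%}\".format((best_precision - baseline) / baseline))"
]
}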
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.4.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
}