@Ben-Epstein
Created July 17, 2020 14:47
{"cells":[{"metadata":{},"cell_type":"markdown","source":"# Splice Machine and Spark have a great relationship\n\n## I was thinking maybe an image here of the Splice Logo with a <3 and the Spark Logo\n\n<blockquote><p class='quotation'><b><br><span style='font-size:15px'>Spark is Embedded into the DNA of Splice Machine. It is used in our database for large, analytical queries as well as in our notebooks here for large machine learning data manipulation workloads which we'll cover later. Spark and PySpark come preconfigured on all of our clusters, and getting started is as easy as 2 lines of code. Your Spark Session will automatically connect to your Kubernetes cluster and can scale to meet your demands.<footer>Splice Machine</footer>\n</blockquote>\n\n#### Let's start our Spark Session"},{"metadata":{"trusted":true},"cell_type":"code","source":"from pyspark.sql import SparkSession\nspark = SparkSession.builder.getOrCreate()","execution_count":5,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"# That's it!\n## You now have a powerful Spark Session running on Kubernetes\n<blockquote> \n You can access your Spark Session UI by calling the <code>get_spark_ui</code> function in our <code>splicemachine.notebook</code> module. This function takes either the port of your Spark Session or the Spark Session object itself, and returns both a link to your Spark UI as well as an embedded IFrame you can interact with right here in the notebook.\n<footer>Splice Machine</footer>\n</blockquote>"},{"metadata":{"trusted":true},"cell_type":"code","source":"from splicemachine.notebook import get_spark_ui\n# Get the port of our Spark Session\nport = spark.sparkContext.uiWebUrl.split(':')[-1]\nprint('Spark UI Port: ',port)\nhelp(get_spark_ui)","execution_count":null,"outputs":[]},{"metadata":{"trusted":true},"cell_type":"code","source":"# Get the Spark UI with the port\nget_spark_ui(port=port)","execution_count":null,"outputs":[]},{"metadata":{},"cell_type":"markdown","source":"# Let's talk Database\n<blockquote> After all, Splice Machine is a powerful Scale-Out transactional and analytical database. To make this as useful as possible for Data Scientists, we've created the\n <a href=\"https://www.splicemachine.com/the-splice-machine-native-spark-datasource/\">Native \nSpark Datasource</a>. It allows us to do inserts, selects, upserts, updates and many more functions without serialization all from code. On top of this, we've implemented a wrapper called the <code>PySpliceContext</code> to establish our direct connection in Python. This comes with the same API as the Native Scala implementation, and a few extra Python specific helpers. Check out the entire documentation <a href=\"https://pysplice.readthedocs.io/en/dbaas-4100/splicemachine.spark.html\">here</a>.<br><br>\n You'll see in the docs that there is both the <code>PySpliceContext</code> and the <code>ExtPySpliceContext</code>. The <code>ExtPySpliceContext</code> is used when you are running your code outside of the Kubernetes cluster. The only difference in configuration is that you must manually set both the JDBC_URL (which you can get from your <a href=\"https://cloud.splicemachine.io\">Cloud Manager UI</a>) and your kafkaServer URL. 
#### Let's create our PySpliceContext

```python
from splicemachine.spark import PySpliceContext

splice = PySpliceContext(spark)
help(splice)
```

```
Help on PySpliceContext in module splicemachine.spark.context object:

class PySpliceContext(builtins.object)
 |  PySpliceContext(sparkSession, JDBC_URL=None)
 |
 |  This class implements a SpliceMachineContext object (similar to the SparkContext object)
 |
 |  Methods defined here:
 |
 |  __init__(self, sparkSession, JDBC_URL=None)
 |      :param JDBC_URL: (string) The JDBC URL Connection String for your Splice Machine Cluster
 |      :param sparkSession: (sparkContext) A SparkSession object for talking to Spark
 |
 |  analyzeSchema(self, schema_name)
 |      analyze the schema
 |      :param schema_name: schema name for which stats info will be collected
 |
 |  analyzeTable(self, schema_table_name, estimateStatistics=False, samplePercent=10.0)
 |      collect stats info on a table
 |      :param schema_table_name: full table name in the format of "schema.table"
 |      :param estimateStatistics: will use estimate statistics if True
 |      :param samplePercent: the percentage of rows to be sampled.
 |
 |  createTable(self, dataframe, schema_table_name, primary_keys=(), create_table_options=None, to_upper=False, drop_table=False)
 |      Creates a schema.table from a dataframe
 |      :param dataframe: The Spark DataFrame to base the table off
 |      :param schema_table_name: str The schema.table to create
 |      :param primary_keys: List[str] the primary keys. Default None
 |      :param create_table_options: str The additional table-level SQL options default None
 |      :param to_upper: bool If the dataframe columns should be converted to uppercase before table creation
 |          If False, the table will be created with lower case columns. Default False
 |      :param drop_table: bool whether to drop the table if it exists. Default False. If False and the table exists,
 |          the function will throw an exception.
 |
 |  delete(self, dataframe, schema_table_name)
 |      Delete records in a dataframe based on joining by primary keys from the data frame.
 |      Be careful with column naming and case sensitivity.
 |
 |      :param dataframe: (DF) The dataframe you would like to delete
 |      :param schema_table_name: (string) Splice Machine Table
 |
 |  df(self, sql)
 |      Return a Spark Dataframe from the results of a Splice Machine SQL Query
 |
 |      :param sql: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)
 |      :return: A Spark DataFrame containing the results
 |
 |  dropTable(self, schema_table_name)
 |      Drop a specified table.
 |
 |      :param schema_table_name: (optional) (string) schemaName.tableName
 |
 |  execute(self, query_string)
 |      execute a query
 |      :param query_string: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)
 |
 |  executeUpdate(self, query_string)
 |      execute a dml query: (update, delete, drop, etc)
 |      :param query_string: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)
 |
 |  export(self, dataframe, location, compression=False, replicationCount=1, fileEncoding=None, fieldSeparator=None, quoteCharacter=None)
 |      Export a dataFrame in CSV
 |      :param dataframe:
 |      :param location: Destination directory
 |      :param compression: Whether to compress the output or not
 |      :param replicationCount: Replication used for HDFS write
 |      :param fileEncoding: fileEncoding or null, defaults to UTF-8
 |      :param fieldSeparator: fieldSeparator or null, defaults to ','
 |      :param quoteCharacter: quoteCharacter or null, defaults to '"'
 |
 |  exportBinary(self, dataframe, location, compression, e_format)
 |      Export a dataFrame in binary format
 |      :param dataframe:
 |      :param location: Destination directory
 |      :param compression: Whether to compress the output or not
 |      :param e_format: Binary format to be used, currently only 'parquet' is supported
 |
 |  getConnection(self)
 |      Return a connection to the database
 |
 |  getSchema(self, schema_table_name)
 |      Return the schema via JDBC.
 |
 |      :param schema_table_name: (DF) Table name
 |
 |  insert(self, dataframe, schema_table_name, to_upper=False)
 |      Insert a dataframe into a table (schema.table).
 |
 |      :param dataframe: (DF) The dataframe you would like to insert
 |      :param schema_table_name: (string) The table in which you would like to insert the DF
 |      :param to_upper: bool If the dataframe columns should be converted to uppercase before table creation
 |          If False, the table will be created with lower case columns. Default False
 |
 |  internalDf(self, query_string)
 |      SQL to Dataframe translation. (Lazy)
 |      Runs the query inside Splice Machine and sends the results to the Spark Adapter app
 |      :param query_string: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)
 |      :return: pyspark dataframe containing the result of query_string
 |
 |  replaceDataframeSchema(self, dataframe, schema_table_name)
 |      Returns a dataframe with all column names replaced with the proper string case from the DB table
 |      :param dataframe: A dataframe with column names to convert
 |      :param schema_table_name: The schema.table with the correct column cases to pull from the database
 |
 |  tableExists(self, schema_table_name)
 |      Check whether or not a table exists
 |
 |      :param schema_table_name: (string) Table Name
 |
 |  toUpper(self, dataframe)
 |      Returns a dataframe with all of the columns in uppercase
 |      :param dataframe: The dataframe to convert to uppercase
 |
 |  truncateTable(self, schema_table_name)
 |      truncate a table
 |      :param schema_table_name: the full table name in the format "schema.table_name" which will be truncated
 |
 |  update(self, dataframe, schema_table_name)
 |      Update data from a dataframe for a specified schema_table_name (schema.table).
 |      The keys are required for the update and any other columns provided will be updated
 |      in the rows.
 |
 |      :param dataframe: (DF) The dataframe you would like to update
 |      :param schema_table_name: (string) Splice Machine Table
 |
 |  upsert(self, dataframe, schema_table_name)
 |      Upsert the data from a dataframe into a table (schema.table).
 |
 |      :param dataframe: (DF) The dataframe you would like to upsert
 |      :param schema_table_name: (string) The table in which you would like to upsert the RDD
 |
 |  ----------------------------------------------------------------------
 |  Data descriptors defined here:
 |
 |  __dict__
 |      dictionary for instance variables (if defined)
 |
 |  __weakref__
 |      list of weak references to the object (if defined)
```
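A couple of the methods listed above are handy for defensive scripting before you start moving data. A quick sketch using `tableExists` and `getSchema` — the table name here is purely illustrative:

```python
# Illustrative table name; substitute your own schema.table
table = 'SOMESCHEMA.FOO'

if splice.tableExists(table):
    # getSchema returns the table's schema via JDBC
    print(splice.getSchema(table))
else:
    print(f'{table} does not exist yet')
```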
## Great!
### Let's look at some common functions

> Some of the functions most commonly used by data scientists and engineers are:
>
> - `df`: takes an arbitrary SQL statement and returns the result as a Spark DataFrame. This ensures that no matter the size of the result, it will be distributed among your available Spark executors.
> - `createTable`: takes your DataFrame and the name of a table in the format "schema.table" and creates that table using the structure of your DataFrame. This lets you skip all of the SQL.
> - `insert`: takes your DataFrame and the name of a table in the format "schema.table" and inserts the rows directly into the table. It's important to make sure **the schema of your DataFrame matches the schema of your table**.
> - `dropTableIfExists`: takes the name of a table in the format "schema.table" and drops that table if it exists.
> - `execute`: takes arbitrary SQL and executes it through a raw JDBC connection.
>
> There are many other powerful functions available in our [documentation](https://pysplice.readthedocs.io/en/dbaas-4100/splicemachine.spark.html).
>
> — Splice Machine
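Worth noting from the full API listing above: `df` returns results directly, while `internalDf` is documented as lazy — it runs the query inside Splice Machine and hands the result to the Spark adapter. A small sketch contrasting the two (the query itself is illustrative):

```python
# Direct: results come back through the JDBC/adapter path immediately
eager = splice.df('SELECT * FROM FOO WHERE a > 100')

# Lazy: the query runs inside Splice Machine; evaluation is deferred
lazy = splice.internalDf('SELECT * FROM FOO WHERE a > 100')

print(eager.count(), lazy.count())
```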
#### Let's see an example

```python
print(help(splice.df))
print('-------------------------------------------------------------------------------------')
print(help(splice.createTable))
print('-------------------------------------------------------------------------------------')
print(help(splice.insert))
print('-------------------------------------------------------------------------------------')
print(help(splice.dropTableIfExists))
print('-------------------------------------------------------------------------------------')
print(help(splice.execute))
```

```
Help on method df in module splicemachine.spark.context:

df(sql) method of splicemachine.spark.context.PySpliceContext instance
    Return a Spark Dataframe from the results of a Splice Machine SQL Query

    :param sql: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)
    :return: A Spark DataFrame containing the results

None
-------------------------------------------------------------------------------------
Help on method createTable in module splicemachine.spark.context:

createTable(dataframe, schema_table_name, primary_keys=(), create_table_options=None, to_upper=False, drop_table=False) method of splicemachine.spark.context.PySpliceContext instance
    Creates a schema.table from a dataframe
    :param dataframe: The Spark DataFrame to base the table off
    :param schema_table_name: str The schema.table to create
    :param primary_keys: List[str] the primary keys. Default None
    :param create_table_options: str The additional table-level SQL options default None
    :param to_upper: bool If the dataframe columns should be converted to uppercase before table creation
        If False, the table will be created with lower case columns. Default False
    :param drop_table: bool whether to drop the table if it exists. Default False. If False and the table exists,
        the function will throw an exception.

None
-------------------------------------------------------------------------------------
Help on method insert in module splicemachine.spark.context:

insert(dataframe, schema_table_name, to_upper=False) method of splicemachine.spark.context.PySpliceContext instance
    Insert a dataframe into a table (schema.table).

    :param dataframe: (DF) The dataframe you would like to insert
    :param schema_table_name: (string) The table in which you would like to insert the DF
    :param to_upper: bool If the dataframe columns should be converted to uppercase before table creation
        If False, the table will be created with lower case columns. Default False

None
-------------------------------------------------------------------------------------
Help on method _dropTableIfExists in module splicemachine.spark.context:

_dropTableIfExists(schema_table_name) method of splicemachine.spark.context.PySpliceContext instance
    Drop table if it exists

None
-------------------------------------------------------------------------------------
Help on method execute in module splicemachine.spark.context:

execute(query_string) method of splicemachine.spark.context.PySpliceContext instance
    execute a query
    :param query_string: (string) SQL Query (eg. SELECT * FROM table1 WHERE column2 > 3)

None
```
#### Let's try it out

First, we'll create a SQL table and populate it. Then we'll grab that data as a Spark DataFrame and create a new table with it, inserting our data.

```sql
%%sql
DROP TABLE IF EXISTS FOO;
CREATE TABLE FOO(a INT, b FLOAT, c VARCHAR(25), d TIMESTAMP DEFAULT CURRENT TIMESTAMP);
INSERT INTO FOO (a,b,c) VALUES (240, 84.1189, 'bird');
INSERT INTO FOO (a,b,c) VALUES (207, 1120.7235, 'heal');
INSERT INTO FOO (a,b,c) VALUES (73, 1334.6568, 'scent');
INSERT INTO FOO (a,b,c) VALUES (24, 513.4238, 'toy');
INSERT INTO FOO (a,b,c) VALUES (127, 1030.0719, 'neat');
INSERT INTO FOO (a,b,c) VALUES (91, 694.5587, 'mailbox');
INSERT INTO FOO (a,b,c) VALUES (219, 238.7311, 'animal');
INSERT INTO FOO (a,b,c) VALUES (112, 698.1438, 'watch');
INSERT INTO FOO (a,b,c) VALUES (229, 1034.051, 'sheet');
INSERT INTO FOO (a,b,c) VALUES (246, 782.5559, 'challenge');
INSERT INTO FOO (a,b,c) VALUES (33, 241.8961, 'nutty');
INSERT INTO FOO (a,b,c) VALUES (127, 758.8009, 'python');
INSERT INTO FOO (a,b,c) VALUES (80, 1566.444, 'jumble');
INSERT INTO FOO (a,b,c) VALUES (246, 751.352, 'easy');
INSERT INTO FOO (a,b,c) VALUES (242, 717.3813, 'difficult');
INSERT INTO FOO (a,b,c) VALUES (118, 311.3499, 'answer');
INSERT INTO FOO (a,b,c) VALUES (174, 815.5917, 'xylophone');
INSERT INTO FOO (a,b,c) VALUES (235, 269.0144, 'crash');
INSERT INTO FOO (a,b,c) VALUES (21, 267.1351, 'chocolate');
INSERT INTO FOO (a,b,c) VALUES (82, 1097.7805, 'straw');
```

```
Sql started successfully
```
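The `%%sql` cell magic above runs each statement against the database. If you'd rather stay in Python, the same setup can go through the raw JDBC path with the documented `execute` method; a brief sketch (one statement per call, first three inserts shown):

```python
# Equivalent setup via PySpliceContext's raw JDBC execution
splice.execute('DROP TABLE IF EXISTS FOO')
splice.execute('CREATE TABLE FOO(a INT, b FLOAT, c VARCHAR(25), d TIMESTAMP DEFAULT CURRENT TIMESTAMP)')
splice.execute("INSERT INTO FOO (a,b,c) VALUES (240, 84.1189, 'bird')")
splice.execute("INSERT INTO FOO (a,b,c) VALUES (207, 1120.7235, 'heal')")
splice.execute("INSERT INTO FOO (a,b,c) VALUES (73, 1334.6568, 'scent')")
```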
### Now we'll use the PySpliceContext to

> - Grab our new data from our table directly into a Spark DataFrame
> - Create a new table with our DataFrame
> - Insert our data directly into it
>
> — Splice Machine

```python
from splicemachine.mlflow_support.utilities import get_user

schema = get_user()
# Get our data
df = splice.df(f'select * from {schema}.foo')
df.show()

# Create our new table
print('Dropping table new_foo if exists...', end='')
splice._dropTableIfExists(f"{schema}.new_foo")
print('done.')
print('Creating table new_foo...', end='')
splice.createTable(df, f"{schema}.new_foo")
print('done.')

# Insert our data
print('Inserting data into new_foo...', end='')
splice.insert(df, f"{schema}.new_foo")
print('done.')
```

```
+---+---------+---------+--------------------+
|  A|        B|        C|                   D|
+---+---------+---------+--------------------+
| 80| 1566.444|   jumble|2020-07-17 14:35:...|
|246|  751.352|     easy|2020-07-17 14:35:...|
|242| 717.3813|difficult|2020-07-17 14:35:...|
|118| 311.3499|   answer|2020-07-17 14:35:...|
|174| 815.5917|xylophone|2020-07-17 14:35:...|
|240|  84.1189|     bird|2020-07-17 14:35:...|
|235| 269.0144|    crash|2020-07-17 14:35:...|
|207|1120.7235|     heal|2020-07-17 14:35:...|
| 21| 267.1351|chocolate|2020-07-17 14:35:...|
| 73|1334.6568|    scent|2020-07-17 14:35:...|
| 82|1097.7805|    straw|2020-07-17 14:35:...|
| 24| 513.4238|      toy|2020-07-17 14:35:...|
|127|1030.0719|     neat|2020-07-17 14:35:...|
| 91| 694.5587|  mailbox|2020-07-17 14:35:...|
|219| 238.7311|   animal|2020-07-17 14:35:...|
|112| 698.1438|    watch|2020-07-17 14:35:...|
|229| 1034.051|    sheet|2020-07-17 14:35:...|
|246| 782.5559|challenge|2020-07-17 14:35:...|
| 33| 241.8961|    nutty|2020-07-17 14:35:...|
|127| 758.8009|   python|2020-07-17 14:35:...|
+---+---------+---------+--------------------+

Dropping table new_foo if exists...done.
Creating table new_foo...done.
Inserting data into new_foo...done.
```
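The same round trip extends to the other write paths in the API, such as `upsert` and `update`, which match rows on primary keys. A purely illustrative sketch — note that `new_foo` above was created without `primary_keys`, so you would need to supply them via `createTable` for this to work:

```python
from pyspark.sql import functions as F

# Double column B locally, then push the changes back.
# upsert matches on primary keys, so this assumes new_foo was created
# with primary_keys specified (it wasn't above -- illustrative only).
updated = df.withColumn('B', F.col('B') * 2)
splice.upsert(updated, f'{schema}.new_foo')
```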
Finally, let's confirm the data landed in our new table:

```sql
%%sql
select a, b, varchar(c) c, d from new_foo
```
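When you're done experimenting, the documented `dropTable` and `execute` methods make cleanup easy; a short sketch:

```python
# Tidy up the demo tables when finished
splice.dropTable(f'{schema}.new_foo')
splice.execute('DROP TABLE IF EXISTS FOO')
```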