@hraftery
Last active October 21, 2021 03:06
{
"cells": [
{
"cell_type": "markdown",
"id": "0186682b",
"metadata": {},
"source": [
"# COVIDBlooms\n",
"\n",
"An animated visualisation of COVID-19 cases during the Delta variant outbreak.\n",
"\n",
"- **Why:** https://www.empirical.ee/bringing-data-to-life/\n",
"- **How:** https://www.empirical.ee/bringing-data-to-life-diy/"
]
},
{
"cell_type": "markdown",
"id": "ecc0ea58",
"metadata": {},
"source": [
"## Requirements\n",
"\n",
"This is a [Jupyter Notebook](https://jupyter.org) running a **Python 3** kernel, so the most fundamental requirement is **Jupyter**.\n",
"\n",
"Installing Jupyter typically provides the essentials: `JupyterLab` and `Python3`.\n",
"\n",
"Then the extensive, specific dependencies can all be installed using `pip` with this **one magic command** (and some patience):\n",
"\n",
"---\n",
"\n",
"`$ pip3 install pandas_alive geopandas descartes contextily rtree tqdm`\n",
"\n",
"---\n",
"\n",
"Note that `pip` is not the only way to install these requirements, and many prefer Anaconda. After revisiting this dozens of times over many years, and combing the [many](https://stackoverflow.com/questions/33541876/os-x-deciding-between-anaconda-and-homebrew-python-environments), [detailed](https://stackoverflow.com/questions/42859781/best-practices-with-anaconda-and-brew), [discussions](https://www.datacamp.com/community/tutorials/installing-jupyter-notebook) [and](https://stackoverflow.com/questions/48458033/how-to-install-xeus-cling-without-anaconda) [guides](https://hashrocket.com/blog/posts/keep-anaconda-from-constricting-your-homebrew-installs), I am very satisfied with this strategy for **2021** and **macOS**:\n",
"\n",
"- Avoid `conda` if at all possible.\n",
"- Use `brew` whenever possible (e.g. for Python and Jupyter).\n",
"- Then use `pip3`.\n",
"\n",
"I'm delighted with how well this really simple formula works today. It hasn't in the past, and may not in the future, but for now, finally and fleetingly, working with Jupyter is bliss.\n",
"\n",
"### Modifications\n",
"\n",
"The dependencies, particularly `pandas_alive` by the very clever and generous [Jack McKew](https://jackmckew.dev/), do so much of the heavy lifting. But to eke out just a bit more of what I wanted, some modifications were required. Since `pip` is such a convenient method for installation, I've opted to install the library as provided, and then patch it with my modifications.\n",
"\n",
"All modifications can be found in my [fork](https://github.com/hraftery/pandas_alive) as three commits:\n",
"\n",
"1. [610b3f4: Resolve JackMcKew#37](https://github.com/hraftery/pandas_alive/commit/610b3f4084836f6ebed187c7495d8eb575c2a72a)\n",
"2. [d3ddef6: Resolve JackMcKew#38](https://github.com/hraftery/pandas_alive/commit/d3ddef6b15131fcd11c9c68b6446cf0637058795)\n",
"3. [ddd92bb: Resolve JackMcKew#39](https://github.com/hraftery/pandas_alive/commit/ddd92bbddd5f75af32d1ee2807cc35078f564f9d)\n",
"\n",
"Here's one convenient way to apply each modification:\n",
"\n",
"1. Install `pandas_alive` in the normal way with `pip3 install pandas_alive`.\n",
"1. Add `.patch` to the end of the three commit urls above, and save as [610b3f4.patch](https://github.com/hraftery/pandas_alive/commit/610b3f4084836f6ebed187c7495d8eb575c2a72a.patch), [d3ddef6.patch](https://github.com/hraftery/pandas_alive/commit/d3ddef6b15131fcd11c9c68b6446cf0637058795.patch) and [ddd92bb.patch](https://github.com/hraftery/pandas_alive/commit/ddd92bbddd5f75af32d1ee2807cc35078f564f9d.patch).\n",
"1. Navigate to your installation location (which you can see in error messages, for example) and apply each patch **consecutively**. E.g.:\n",
"\n",
"```\n",
"$ cd /usr/local/lib/python3.9/site-packages/pandas_alive\n",
"$ patch < 610b3f4.patch\n",
"$ patch < d3ddef6.patch\n",
"$ patch < ddd92bb.patch\n",
"```\n",
"\n",
"If you already have Jupyter open you need only *Restart* Python via the `Kernel` menu, and you're good to go."
]
},
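{
"cell_type": "code",
"execution_count": null,
"id": "7f00a1ce",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check (a sketch): confirm `pandas_alive` imports cleanly\n",
"# after patching, and print where it's installed -- which is also the\n",
"# directory to run `patch` in.\n",
"#import pandas_alive\n",
"#print(pandas_alive.__file__)"
]
},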
{
"cell_type": "markdown",
"id": "c81e359a",
"metadata": {},
"source": [
"# On With The Show!\n",
"## 1. Library Imports"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "23739b64",
"metadata": {},
"outputs": [],
"source": [
"import pandas as pd\n",
"import matplotlib.pyplot as plt\n",
"\n",
"import pandas_alive\n",
"from IPython.display import HTML\n",
"\n",
"import urllib.request, json\n",
"\n",
"from datetime import datetime"
]
},
{
"cell_type": "markdown",
"id": "55fd7c53",
"metadata": {},
"source": [
"## 2. Fetch And Process COVID-19 NSW Case Data\n",
"### Download case data from NSW Open Data Portal"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "d1ce381b",
"metadata": {},
"outputs": [],
"source": [
"NSW_COVID_19_CASES_BY_LOCATION_URL = \"https://data.nsw.gov.au/data/api/3/action/package_show?id=aefcde60-3b0c-4bc0-9af1-6fe652944ec2\"\n",
"with urllib.request.urlopen(NSW_COVID_19_CASES_BY_LOCATION_URL) as url:\n",
" data = json.loads(url.read().decode())\n",
"\n",
"# Extract url to csv component\n",
"data_url = data[\"result\"][\"resources\"][0][\"url\"]\n",
"\n",
"# Read csv from data API url\n",
"df = pd.read_csv(data_url)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "5d70d26c",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, inspect data\n",
"#display(df.head())\n",
"#df[df['lga_name19'].isna()].head()\n",
"#df.describe()\n",
"#df.dtypes"
]
},
{
"cell_type": "markdown",
"id": "37c24a6c",
"metadata": {},
"source": [
"### Clean up data to suit plotting"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "193473c7",
"metadata": {},
"outputs": [],
"source": [
"# Snip off the City/Area suffixes, (A) or (C), as well as \"(NSW)\" from Central Coast.\n",
"df['lga_name19'] = df['lga_name19'].str.replace(' \\(.*\\)$', '', regex=True)\n",
"\n",
"# There are hundreds of NAs. Some have postcodes but they're \"weird\"\n",
"# (different state or on ships?), so in general just call them \"Unknown\".\n",
"#df['lga_name19'].fillna(\"Unknown\", inplace=True)\n",
"# Come to think of it, we're only interested in location so drop them altogether.\n",
"df.dropna(subset = [\"lga_name19\"], inplace=True)\n",
"\n",
"# And while we're at it, drop correctional services for the same reason.\n",
"df.drop(df[df['lga_name19'] == 'Correctional settings'].index, inplace=True)\n",
"\n",
"# Convert the date string (eg. \"2020-01-25\") to a datetime object.\n",
"# Bizarrely only the default ns unit works, despite only being a date string.\n",
"df['notification_date'] = pd.to_datetime(df['notification_date'])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "40fe16ad",
"metadata": {},
"outputs": [],
"source": [
"# Group by number of records in each lga, on each day.\n",
"df_grouped = df.groupby([\"notification_date\", \"lga_name19\"]).size()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "35039044",
"metadata": {},
"outputs": [],
"source": [
"# Prepare for pandas-alive\n",
"df_cases = pd.DataFrame(df_grouped).unstack()\n",
"df_cases.columns = df_cases.columns.droplevel().astype(str)\n",
"\n",
"df_cases = df_cases.fillna(0)\n",
"df_cases.index = pd.to_datetime(df_cases.index)"
]
},
{
"cell_type": "markdown",
"id": "566c37a6",
"metadata": {},
"source": [
"### (Optional) Plot the default \"bar chart race\" animation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "60035df4",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"# Come alive!\n",
"#animated_html = df_cases.plot_animated(n_visible=15).get_html5_video()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4a886059",
"metadata": {
"scrolled": true
},
"outputs": [],
"source": [
"#HTML(animated_html)"
]
},
{
"cell_type": "markdown",
"id": "e0848f24",
"metadata": {},
"source": [
"## 3. Fetch And Process NSW Geographical Data\n",
"### Download LGA boundary data so we can create a choropleth"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "beb29190",
"metadata": {},
"outputs": [],
"source": [
"# Now get the geo data.\n",
"#\n",
"# This section heavily inspired by William Ye's work:\n",
"# https://medium.com/@williamye96/covid-19-in-new-south-wales-with-geopandas-and-bokeh-bb737b3b9434\n",
"\n",
"import geopandas\n",
"\n",
"NSW_LGA_BOUNDARIES_URL=\"https://data.gov.au/geoserver/nsw-local-government-areas/wfs?request=GetFeature&typeName=ckan_f6a00643_1842_48cd_9c2f_df23a3a1dc1e&outputFormat=json\"\n",
"gdf = geopandas.read_file(NSW_LGA_BOUNDARIES_URL)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6609e01b",
"metadata": {},
"outputs": [],
"source": [
"# Remove Lord Howe island, because it doesn't fit inside the map of NSW,\n",
"# doesn't have any cases, and the various islets occupy 62 of the 197 gdf entries!\n",
"gdf.drop(gdf[gdf['lga_pid'] == 'NSW153'].index, inplace=True)\n",
"\n",
"# Remove duplicates by keeping the most recent update. Gets rid of another 6 entries.\n",
"gdf = gdf.sort_values(\"dt_create\",ascending=False).drop_duplicates([\"lga_pid\"])"
]
},
{
"cell_type": "markdown",
"id": "800c852f",
"metadata": {},
"source": [
"## 4. Match Case Data With Geo Data And Prepare To Plot\n",
"### Prepare the case data for merging with the geo data"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a239a4cf",
"metadata": {},
"outputs": [],
"source": [
"# Upper-case the lga_name19 field values so they nearly match the nsw_lga__3 field values.\n",
"df[\"lga_name19_upper\"] = df[\"lga_name19\"].str.upper()\n",
"\n",
"# And fix up the names that don't quite match\n",
"lga_replace = {\"GLEN INNES SEVERN\":\"GLEN INNES SEVERN SHIRE\", \"GREATER HUME SHIRE\":\"GREATER HUME\", \"UPPER HUNTER SHIRE\":\"UPPER HUNTER\", \"WARRUMBUNGLE SHIRE\":\"WARRUMBUNGLE\"}\n",
"df = df.replace(lga_replace)"
]
},
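{
"cell_type": "code",
"execution_count": null,
"id": "8a11b2df",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, verify the name fixes (a sketch). Any LGA names in the case\n",
"# data with no counterpart in the geo data would be silently dropped by the\n",
"# merge below; an empty set here means nothing will be lost.\n",
"#set(df['lga_name19_upper'].unique()) - set(gdf['nsw_lga__3'])"
]
},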
{
"cell_type": "code",
"execution_count": null,
"id": "7c0fde0a",
"metadata": {},
"outputs": [],
"source": [
"# Re-do the grouping and prep, using the upper field instead because it's needed for the gdf merge\n",
"df_grouped = df.groupby([\"notification_date\", \"lga_name19_upper\"]).size()\n",
"\n",
"df_cases = pd.DataFrame(df_grouped).unstack()\n",
"df_cases.columns = df_cases.columns.droplevel().astype(str)\n",
"\n",
"df_cases = df_cases.fillna(0)\n",
"df_cases.index = pd.to_datetime(df_cases.index)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "a2629622",
"metadata": {},
"outputs": [],
"source": [
"# Trim down to period of interest.\n",
"# Prior to June there's only one or two cases/day/lga. June doesn't have\n",
"# much to plot but is included because that's when lockdown started.\n",
"df_cases = df_cases['2021-06-01':]\n",
"#df_cases = df_cases['2021-10-01':] # small slice to speed up experimentation"
]
},
{
"cell_type": "markdown",
"id": "1b87b9b9",
"metadata": {},
"source": [
"### Perform the data merge"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "0db9de28",
"metadata": {},
"outputs": [],
"source": [
"# Geospatial charts require the transpose of the layout used by non-geo charts.\n",
"df_cases_t = df_cases.T\n",
"\n",
"# Merge the case data with the geo data, creating a new gdf with the all-important \"geometry\" intact.\n",
"gdf_subset = gdf[[\"nsw_lga__3\", \"geometry\"]]\n",
"gdf_merge = gdf_subset.merge(df_cases_t, left_on=\"nsw_lga__3\", right_on=\"lga_name19_upper\")\n",
"\n",
"# plot_animated() can't handle any additional columns, so drop this one.\n",
"# TODO: turn it (or even better, lga_name19) into an \"index\" so it becomes the \"geometry label\"\n",
"gdf_merge.drop('nsw_lga__3', axis=1, inplace=True)"
]
},
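{
"cell_type": "code",
"execution_count": null,
"id": "9b22c3e0",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, compare row counts (a sketch). pandas merges are inner joins\n",
"# by default, so geo rows without matching case data are silently dropped.\n",
"#print(len(gdf_subset), '->', len(gdf_merge))"
]
},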
{
"cell_type": "markdown",
"id": "ab16e71c",
"metadata": {},
"source": [
"### Clip the geo data to the region of interest"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e8eb71fa",
"metadata": {},
"outputs": [],
"source": [
"from shapely.geometry import box\n",
"\n",
"b = box(150.5, -32.6, 152.2, -34.35)\n",
"box_gdf = geopandas.GeoDataFrame([1], geometry=[b], crs=gdf_merge.crs)\n",
"\n",
"#fig, ax = plt.subplots(figsize=(14,14))\n",
"#gdf_merge.plot(ax=ax)\n",
"#box_gdf.boundary.plot(ax=ax, color=\"red\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3af2c9fc",
"metadata": {},
"outputs": [],
"source": [
"gdf_clipped = gdf_merge.clip(b)"
]
},
{
"cell_type": "markdown",
"id": "4c248606",
"metadata": {},
"source": [
"### (Optional) Show the clipped region in context"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "9a611195",
"metadata": {},
"outputs": [],
"source": [
"#fig, ax = plt.subplots(figsize=(14,14))\n",
"\n",
"#gdf_clipped.plot(ax=ax, color=\"purple\")\n",
"#gdf_merge.boundary.plot(ax=ax)\n",
"#box_gdf.boundary.plot(ax=ax, color=\"red\")"
]
},
{
"cell_type": "markdown",
"id": "bf5e79d3",
"metadata": {},
"source": [
"## 5. Start Plotting!\n",
"### Create the animated choropleth and store for later"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3574670f",
"metadata": {
"scrolled": false
},
"outputs": [],
"source": [
"import contextily\n",
"fig, ax = plt.subplots(figsize=(10,10))\n",
"\n",
"geo_chart = gdf_clipped.plot_animated(fig=fig,\n",
" alpha=0.35,\n",
" cmap='Oranges',\n",
"# facecolor=\"none\", edgecolor=\"black\",\n",
" legend=True,\n",
" vmin=0,\n",
" vmax=50,\n",
" basemap_format={'source':contextily.providers.CartoDB.Positron,\n",
" 'zoom':9},\n",
"# period_label={'x':0.65, 'y':0.01, 'fontsize':36}\n",
" period_label={'x':0.65, 'y':0.01, 'fontsize':27}\n",
" )\n",
"animated_html = geo_chart.get_html5_video() # Note this is necessary whether we use `animated_html` or not..."
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "95e35703",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, play the animation now.\n",
"#HTML(animated_html)"
]
},
{
"cell_type": "markdown",
"id": "e6a7a8e4",
"metadata": {},
"source": [
"### Create the animated Bar Chart and store for later"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "3710fd9a",
"metadata": {},
"outputs": [],
"source": [
"# Group LGA data into regions.\n",
"\n",
"# As per https://www.nsw.gov.au/covid-19/stay-safe/protecting/advice-high-risk-groups/disability/local-councils-greater-sydney\n",
"syd_lgas = ['BAYSIDE', 'BLACKTOWN', 'BLUE MOUNTAINS', 'BURWOOD', 'CAMDEN', 'CAMPBELLTOWN', 'CANADA BAY', 'CANTERBURY-BANKSTOWN', 'CENTRAL COAST', 'CUMBERLAND', 'FAIRFIELD', 'GEORGES RIVER', 'HAWKESBURY', 'HORNSBY', 'HUNTERS HILL', 'INNER WEST', 'KU-RING-GAI', 'LANE COVE', 'LIVERPOOL', 'MOSMAN', 'NORTH SYDNEY', 'NORTHERN BEACHES', 'PARRAMATTA', 'PENRITH', 'RANDWICK', 'RYDE', 'STRATHFIELD', 'SUTHERLAND SHIRE', 'SYDNEY', 'THE HILLS SHIRE', 'WAVERLEY', 'WILLOUGHBY', 'WOLLONDILLY', 'WOLLONGONG', 'WOOLLAHRA']\n",
"\n",
"# Collected from sources such as https://www.dva.gov.au/sites/default/files/files/providers/Veterans%27%20Home%20Care/nsw-hunter.pdf\n",
"hunter_lgas = ['NEWCASTLE', 'LAKE MACQUARIE', 'PORT STEPHENS', 'MAITLAND', 'CESSNOCK', 'DUNGOG', 'SINGLETON', 'MUSWELLBROOK', 'UPPER HUNTER']\n",
"\n",
"df_copy = pd.DataFrame(df_cases)\n",
"\n",
"df_syd = df_copy[syd_lgas].sum(axis=1)\n",
"df_copy.drop(syd_lgas, axis=1, inplace=True)\n",
"\n",
"df_hunter = df_copy[hunter_lgas].sum(axis=1)\n",
"df_copy.drop(hunter_lgas, axis=1, inplace=True)\n",
"\n",
"df_rest = df_copy.sum(axis=1)\n",
"\n",
"df_regions = pd.DataFrame({'Greater Sydney': df_syd, 'Hunter Region': df_hunter, 'Rest of NSW': df_rest})\n",
"#display(df_regions)"
]
},
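{
"cell_type": "code",
"execution_count": null,
"id": "ab33d4f1",
"metadata": {},
"outputs": [],
"source": [
"# Optional sanity check (a sketch): the three regions partition the LGAs,\n",
"# so regrouping shouldn't gain or lose any cases.\n",
"#assert df_regions.sum().sum() == df_cases.sum().sum()"
]
},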
{
"cell_type": "code",
"execution_count": null,
"id": "1b4f718c",
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(figsize=(7,7))\n",
"\n",
"bar_chart = df_regions.plot_animated(fig=fig, orientation='v', period_label=False, dpi=600)\n",
"animated_html = bar_chart.get_html5_video()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "1e67057a",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, play the animation now.\n",
"#HTML(animated_html)"
]
},
{
"cell_type": "markdown",
"id": "2797745a",
"metadata": {},
"source": [
"### Create the animated Line Chart and store for later"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "25982097",
"metadata": {},
"outputs": [],
"source": [
"fig, ax = plt.subplots(figsize=(7,7))\n",
"\n",
"line_chart = df_regions.plot_animated(fig=fig, kind='line', period_label=False,\n",
" label_events={\n",
" 'First case confirmed in \"Bondi cluster\".':datetime.strptime(\"16/06/2021\", \"%d/%m/%Y\"),\n",
" 'Restrictions in place in LGAs of concern.':datetime.strptime(\"23/06/2021\", \"%d/%m/%Y\"),\n",
"# 'LGAs of concern enter lockdown.':datetime.strptime(\"25/06/2021\", \"%d/%m/%Y\"),\n",
" 'Lockdown extended to Greater Sydney.':datetime.strptime(\"26/06/2021\", \"%d/%m/%Y\"),\n",
" 'Lockdown extended to all of NSW.':datetime.strptime(\"14/08/2021\", \"%d/%m/%Y\"),\n",
" 'Lockdown ends.':datetime.strptime(\"11/10/2021\", \"%d/%m/%Y\")\n",
" }\n",
" )\n",
"animated_html = line_chart.get_html5_video()"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6914f50e",
"metadata": {},
"outputs": [],
"source": [
"# Optionally, play the animation now.\n",
"#HTML(animated_html)"
]
},
{
"cell_type": "markdown",
"id": "d45c10a1",
"metadata": {},
"source": [
"### Finally, combine the three charts we've created into a single animation"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "82d50c86",
"metadata": {},
"outputs": [],
"source": [
"from matplotlib import rcParams\n",
"\n",
"rcParams.update({\"figure.autolayout\": False})\n",
"# make sure figures are `Figure()` instances\n",
"figs = plt.Figure(figsize=(20,11))\n",
"gs = figs.add_gridspec(2, 2, hspace=0.05, width_ratios=[10,7])\n",
"f3_ax1 = figs.add_subplot(gs[:, 0])\n",
"f3_ax1.set_title(geo_chart.title)\n",
"geo_chart.ax = f3_ax1\n",
"\n",
"f3_ax2 = figs.add_subplot(gs[0, 1])\n",
"f3_ax2.set_title(bar_chart.title)\n",
"bar_chart.ax = f3_ax2\n",
"\n",
"f3_ax3 = figs.add_subplot(gs[1, 1])\n",
"f3_ax3.set_title(line_chart.title)\n",
"line_chart.ax = f3_ax3\n",
"\n",
"figs.suptitle(\"NSW COVID-19 Confirmed Cases Per Day, Over Time\")\n",
"\n",
"pandas_alive.animate_multiple_plots('multiple_charts.mp4', [geo_chart, bar_chart, line_chart], figs, enable_progress_bar=True)\n"
]
},
{
"cell_type": "markdown",
"id": "a5373110",
"metadata": {},
"source": [
"### Voilà!\n",
"\n",
"Our animated chart is now available in the same folder as the Notebook file, named \"multiple_charts.mp4\".\n",
"\n",
"---"
]
},
{
"cell_type": "markdown",
"id": "f9d40846",
"metadata": {},
"source": [
"# Further Exploration\n",
"\n",
"Explore the impact and usage of some of the options available when colouring choropleths.\n",
"\n",
"This section is largely derived from the [choropleth documentation](https://pysal.org/mapclassify/notebooks/03_choropleth.html) at pysal.org.\n",
"\n",
"To get started on the wormhole that is colouring choropleths, see relevant documentation from [Matplotlib](https://matplotlib.org/stable/tutorials/colors/colormaps.html) and [GeoPandas](https://geopandas.org/gallery/choropleths.html).\n",
"\n",
"\n",
"## Requirements\n",
"\n",
"The additional dependencies specific to this section can all be installed using pip:\n",
"\n",
"```\n",
"$ pip3 install palettable mapclassify libpysal\n",
"```"
]
},
{
"cell_type": "code",
"execution_count": 32,
"id": "099a9099",
"metadata": {},
"outputs": [],
"source": [
"import libpysal\n",
"import mapclassify\n",
"\n",
"pth = libpysal.examples.get_path('sids2.shp')\n",
"gdf_sids = geopandas.read_file(pth)\n",
"\n",
"def replace_legend_items(legend, mapping):\n",
" for txt in legend.texts:\n",
" for k,v in mapping.items():\n",
" if txt.get_text() == str(k):\n",
" txt.set_text(v)"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "593ad822",
"metadata": {},
"outputs": [],
"source": [
"from palettable import colorbrewer\n",
"sequential = colorbrewer.COLOR_MAPS['Sequential']\n",
"diverging = colorbrewer.COLOR_MAPS['Diverging']\n",
"qualitative = colorbrewer.COLOR_MAPS['Qualitative']"
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "84edadb2",
"metadata": {},
"outputs": [
{
"data": {
"application/vnd.jupyter.widget-view+json": {
"model_id": "c2a2084ab5ce436fbbcc5c5cc6732f05",
"version_major": 2,
"version_minor": 0
},
"text/plain": [
"VBox(children=(RadioButtons(options=('Sequential', 'Diverging', 'Qualitative'), value='Sequential'), Output())…"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"from ipywidgets import interact, Dropdown, RadioButtons, IntSlider, VBox, HBox, FloatSlider, Button, Label\n",
"\n",
"k_classifiers = {\n",
" 'equal_interval': mapclassify.EqualInterval,\n",
" 'fisher_jenks': mapclassify.FisherJenks,\n",
" 'jenks_caspall': mapclassify.JenksCaspall,\n",
" 'jenks_caspall_forced': mapclassify.JenksCaspallForced,\n",
" 'maximum_breaks': mapclassify.MaximumBreaks,\n",
" 'natural_breaks': mapclassify.NaturalBreaks,\n",
" 'quantiles': mapclassify.Quantiles,\n",
" }\n",
"\n",
"def k_values(ctype, cmap):\n",
" k = list(colorbrewer.COLOR_MAPS[ctype][cmap].keys())\n",
" return list(map(int, k))\n",
" \n",
"def update_map(method='quantiles', k=5, cmap='Blues'):\n",
" classifier = k_classifiers[method](gdf_sids.SIDR79, k=k)\n",
" mapping = dict([(i,s) for i,s in enumerate(classifier.get_legend_classes())])\n",
" #print(classifier)\n",
" f, ax = plt.subplots(1, figsize=(16, 9))\n",
" gdf_sids.assign(cl=classifier.yb).plot(column='cl', categorical=True, \\\n",
" k=k, cmap=cmap, linewidth=0.1, ax=ax, \\\n",
" edgecolor='grey', legend=True, \\\n",
" legend_kwds={'loc': 'lower right'})\n",
" ax.set_axis_off()\n",
" ax.set_title(\"SIDR79\")\n",
" replace_legend_items(ax.get_legend(), mapping)\n",
"\n",
" plt.show()\n",
" \n",
"\n",
"\n",
"data_type = RadioButtons(options=['Sequential', 'Diverging', 'Qualitative'])\n",
"\n",
"bindings = {'Sequential': range(3,9+1),\n",
" 'Diverging': range(3,11+1),\n",
" 'Qualitative': range(3,12+1)}\n",
"\n",
"cmap_bindings = {'Sequential': list(sequential.keys()),\n",
" 'Diverging': list(diverging.keys()),\n",
" 'Qualitative': list(qualitative.keys())}\n",
"\n",
"class_val = Dropdown(options=bindings[data_type.value], value=5) \n",
"cmap_val = Dropdown(options=cmap_bindings[data_type.value])\n",
"\n",
"def type_change(change):\n",
" class_val.options = bindings[change['new']]\n",
" cmap_val.options = cmap_bindings[change['new']]\n",
"\n",
"def cmap_change(change):\n",
" cmap=change['new']\n",
" ctype = data_type.value\n",
" k = k_values(ctype, cmap)\n",
" class_val.options = k\n",
" \n",
"data_type.observe(type_change, names=['value'])\n",
"cmap_val.observe(cmap_change, names=['value'])\n",
"\n",
"\n",
"from ipywidgets import Output, Tab\n",
"out = Output()\n",
"t = Tab()\n",
"t.children = [out]\n",
"#t\n",
"\n",
"# interact() must be created after the widget observers above are wired up,\n",
"# so that the available k values track the selected colour map type.\n",
"\n",
"with out:\n",
" interact(update_map, method=list(k_classifiers.keys()), cmap=cmap_val, k = class_val)\n",
"\n",
"display(VBox([data_type, out]))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2f957814",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}