{
"cells": [
{
"cell_type": "markdown",
"source": [
"# Assess Fairness, Explore Interpretability, and Mitigate Fairness Issues \n",
"\n",
"This notebook demonstrates how to use [InterpretML](interpret.ml), [Fairlearn](fairlearn.org), and the Responsible AI Widget's Fairness and Interpretability dashboards to understand a model trained on the Census dataset. This dataset is a classification problem - given a range of data about 32,000 individuals, predict whether their annual income is above or below fifty thousand dollars per year.\n",
"\n",
"For the purposes of this notebook, we shall treat this as a loan decision problem. We will pretend that the label indicates whether or not each individual repaid a loan in the past. We will use the data to train a predictor to predict whether previously unseen individuals will repay a loan or not. The assumption is that the model predictions are used to decide whether an individual should be offered a loan.\n",
"\n",
"We will first train a fairness-unaware predictor, load its global and local explanations, and use the interpretability and fairness dashboards to demonstrate how this model leads to unfair decisions (under a specific notion of fairness called *demographic parity*). We then mitigate unfairness by applying the `GridSearch` algorithm from `Fairlearn` package.\n"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"## Install Required Packages"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# %pip install --upgrade fairlearn\n",
"# %pip install --upgrade interpret-community\n",
"# %pip install --upgrade raiwidgets\n",
"\n",
"# %pip install --upgrade lime\n",
"# %pip install --upgrade rai-core-flask\n",
"\n",
"%pip install --upgrade fairlearn==0.6.0\n",
"%pip install --upgrade interpret-community==0.17.2\n",
"%pip install --upgrade raiwidgets==0.3.1.post1\n",
"%pip install --upgrade azureml-interpret==1.26.0\n",
"%pip install --upgrade azureml-core==1.26.0"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803681580
}
}
},
{
"cell_type": "markdown",
"source": [
"After installing packages, you must close and reopen the notebook as well as restarting the kernel."
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"## Load and preprocess the data set\n",
"\n",
"For simplicity, we import the data set from the `fairlearn` package, which contains the data in a cleaned format. We start by importing the various modules we're going to use:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from fairlearn.reductions import GridSearch\n",
"from fairlearn.reductions import DemographicParity, ErrorRate\n",
"from fairlearn.datasets import fetch_adult\n",
"from fairlearn.metrics import MetricFrame, selection_rate\n",
"\n",
"from sklearn import svm, neighbors, tree\n",
"from sklearn.compose import ColumnTransformer, make_column_selector\n",
"from sklearn.preprocessing import LabelEncoder,StandardScaler\n",
"from sklearn.linear_model import LogisticRegression\n",
"from sklearn.pipeline import Pipeline\n",
"from sklearn.impute import SimpleImputer\n",
"from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
"from sklearn.svm import SVC\n",
"from sklearn.metrics import accuracy_score\n",
"\n",
"import pandas as pd\n",
"import numpy as np\n",
"\n",
"# SHAP Tabular Explainer\n",
"from interpret.ext.blackbox import KernelExplainer\n",
"from interpret.ext.blackbox import MimicExplainer\n",
"from interpret.ext.glassbox import LGBMExplainableModel"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803683181
}
}
},
{
"cell_type": "markdown",
"source": [
"We can now load and inspect the data:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"dataset = fetch_adult(as_frame=True)\n",
"data = dataset['data']\n",
"data['target'] = dataset['target']\n",
"data = data.dropna(how='any')\n",
"# X_raw, y = dataset['data'], dataset['target']\n",
"X_raw, y = data.drop('target', axis = 1), data['target']\n",
"X_raw[\"race\"].value_counts().to_dict()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803684495
}
}
},
{
"cell_type": "code",
"source": [
"type(dataset)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803684649
}
}
},
{
"cell_type": "markdown",
"source": [
"We are going to treat the sex of each individual as a protected attribute (where 0 indicates female and 1 indicates male), and in this particular case we are going separate this attribute out and drop it from the main data. We then perform some standard data preprocessing steps to convert the data into a format suitable for the ML algorithms"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"sensitive_features = X_raw[['sex','race']]\n",
"\n",
"le = LabelEncoder()\n",
"y = le.fit_transform(y)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803684738
}
}
},
{
"cell_type": "markdown",
"source": [
"Finally, we split the data into training and test sets:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from sklearn.model_selection import train_test_split\n",
"X_train, X_test, y_train, y_test, sensitive_features_train, sensitive_features_test = \\\n",
" train_test_split(X_raw, y, sensitive_features,\n",
" test_size = 0.2, random_state=0, stratify=y)\n",
"\n",
"# Work around indexing bug\n",
"X_train = X_train.reset_index(drop=True)\n",
"sensitive_features_train = sensitive_features_train.reset_index(drop=True)\n",
"X_test = X_test.reset_index(drop=True)\n",
"sensitive_features_test = sensitive_features_test.reset_index(drop=True)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803684877
}
}
},
{
"cell_type": "markdown",
"source": [
"## Training a fairness-unaware predictor\n",
"\n",
"To show the effect of `Fairlearn` we will first train a standard ML predictor that does not incorporate fairness. For speed of demonstration, we use a simple logistic regression estimator from `sklearn`:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"numeric_transformer = Pipeline(\n",
" steps=[\n",
" (\"impute\", SimpleImputer()),\n",
" (\"scaler\", StandardScaler()),\n",
" ]\n",
")\n",
"categorical_transformer = Pipeline(\n",
" [\n",
" (\"impute\", SimpleImputer(strategy=\"most_frequent\")),\n",
" (\"ohe\", OneHotEncoder(handle_unknown=\"ignore\")),\n",
" ]\n",
")\n",
"preprocessor = ColumnTransformer(\n",
" transformers=[\n",
" (\"num\", numeric_transformer, make_column_selector(dtype_exclude=\"category\")),\n",
" (\"cat\", categorical_transformer, make_column_selector(dtype_include=\"category\")),\n",
" ]\n",
")\n",
"\n",
"model = Pipeline(\n",
" steps=[\n",
" (\"preprocessor\", preprocessor),\n",
" (\n",
" \"classifier\",\n",
" LogisticRegression(solver=\"liblinear\", fit_intercept=True),\n",
" ),\n",
" ]\n",
")\n",
"\n",
"model.fit(X_train, y_train)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"scrolled": true,
"gather": {
"logged": 1617803687189
}
}
},
{
"cell_type": "markdown",
"source": [
"## Generate Model Explanations"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Using SHAP KernelExplainer\n",
"# clf.steps[-1][1] returns the trained classification model\n",
"from interpret.ext.blackbox import TabularExplainer\n",
"# explainer = MimicExplainer(model.steps[-1][1], \n",
"# X_train,\n",
"# LGBMExplainableModel,\n",
"# features=X_raw.columns, \n",
"# classes=['Rejected', 'Approved'],\n",
"# transformations=preprocessor)\n",
"\n",
"explainer = TabularExplainer(model.steps[-1][1], \n",
" initialization_examples=X_train, \n",
" features=X_raw.columns,\n",
" classes=['Rejected', 'Approved'],\n",
" transformations=preprocessor)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803688985
}
}
},
{
"cell_type": "markdown",
"source": [
"### Generate global explanations\n",
"Explain overall model predictions (global explanation)"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"### Note we downsample the test data since visualization dashboard can't handle the full dataset\n",
"# global_explanation = explainer.explain_global(X_test[:1000])\n",
"global_explanation = explainer.explain_global(X_test[:1000])"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689135
}
}
},
{
"cell_type": "code",
"source": [
"global_explanation.get_feature_importance_dict()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689294
}
}
},
{
"cell_type": "markdown",
"source": [
"### Generate local explanations\n",
"Explain local data points (individual instances)"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# You can pass a specific data point or a group of data points to the explain_local function\n",
"# E.g., Explain the first data point in the test set\n",
"instance_num = 1\n",
"local_explanation = explainer.explain_local(X_test[:instance_num])"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689387
}
}
},
{
"cell_type": "code",
"source": [
"# Get the prediction for the first member of the test set and explain why model made that prediction\n",
"prediction_value = model.predict(X_test)[instance_num]\n",
"\n",
"sorted_local_importance_values = local_explanation.get_ranked_local_values()[prediction_value]\n",
"sorted_local_importance_names = local_explanation.get_ranked_local_names()[prediction_value]"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689525
}
}
},
{
"cell_type": "code",
"source": [
"print('local importance values: {}'.format(sorted_local_importance_values))\n",
"print('local importance names: {}'.format(sorted_local_importance_names))"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689668
}
}
},
{
"cell_type": "markdown",
"source": [
"## Visualize model explanations\n",
"Load the interpretability visualization dashboard"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from raiwidgets import ExplanationDashboard"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803689947
}
}
},
{
"cell_type": "code",
"source": [
"# ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])\n",
"ExplanationDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803710516
}
}
},
{
"cell_type": "markdown",
"source": [
"We can load this predictor into the Fairness dashboard, and examine how it is unfair:"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"## Assess model fairness \n",
"Load the fairness visualization dashboard"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from raiwidgets import FairnessDashboard\n",
"\n",
"y_pred = model.predict(X_test)\n",
"\n",
"FairnessDashboard(sensitive_features=sensitive_features_test,\n",
" y_true=y_test,\n",
" y_pred=y_pred)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803720520
}
}
},
{
"cell_type": "markdown",
"source": [
"\n",
"\n",
"Looking at the disparity in accuracy, we see that males have an error rate about three times greater than the females. More interesting is the disparity in opportunitiy - males are offered loans at three times the rate of females.\n",
"\n",
"Despite the fact that we removed the feature from the training data, our predictor still discriminates based on sex. This demonstrates that simply ignoring a protected attribute when fitting a predictor rarely eliminates unfairness. There will generally be enough other features correlated with the removed attribute to lead to disparate impact."
],
"metadata": {}
},
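{
"cell_type": "markdown",
"source": [
"As a supplementary check (an illustrative sketch, not part of the original walkthrough), we can compute the same per-group metrics outside the dashboard with `MetricFrame`, using the `y_pred` computed above. The exact values depend on the train/test split."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative sketch: quantify the disparities shown in the dashboard with MetricFrame.\n",
"# Uses y_pred and sensitive_features_test from the cells above.\n",
"accuracy_by_sex = MetricFrame(accuracy_score, y_test, y_pred,\n",
"                              sensitive_features=sensitive_features_test.sex)\n",
"selection_by_sex = MetricFrame(selection_rate, y_test, y_pred,\n",
"                               sensitive_features=sensitive_features_test.sex)\n",
"\n",
"print('Accuracy by sex:')\n",
"print(accuracy_by_sex.by_group)\n",
"print('Selection rate by sex:')\n",
"print(selection_by_sex.by_group)\n",
"print('Selection rate difference:', selection_by_sex.difference())"
],
"outputs": [],
"execution_count": null,
"metadata": {}
},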
{
"cell_type": "markdown",
"source": [
"## Registering Models"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"from azureml.core import Workspace, Experiment, Model\n",
"import joblib\n",
"import os\n",
"\n",
"ws = Workspace.from_config()\n",
"ws.get_details()\n",
"\n",
"os.makedirs('models', exist_ok=True)\n",
"\n",
"# Function to register models into Azure Machine Learning\n",
"def register_model(name, model):\n",
" print(\"Registering \", name)\n",
" model_path = \"models/{0}.pkl\".format(name)\n",
" joblib.dump(value=model, filename=model_path)\n",
" registered_model = Model.register(model_path=model_path,\n",
" model_name=name,\n",
" workspace=ws)\n",
" print(\"Registered \", registered_model.id)\n",
" return registered_model.id\n",
"\n",
"# Call the register_model function \n",
"lr_reg_id = register_model(\"respnsible_logistic_regression\", model)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803720664
}
}
},
{
"cell_type": "code",
"source": [
"sf = { 'sex': sensitive_features_test.sex, 'race': sensitive_features_test.race }\n",
"ys_pred = { lr_reg_id:model.predict(X_test) }\n",
"from fairlearn.metrics._group_metric_set import _create_group_metric_set\n",
"\n",
"dash_dict = _create_group_metric_set(y_true=y_test,\n",
" predictions=ys_pred,\n",
" sensitive_features=sf,\n",
" prediction_type='binary_classification')"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803720809
}
}
},
{
"cell_type": "code",
"source": [
"from azureml.contrib.fairness import upload_dashboard_dictionary, download_dashboard_by_upload_id"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803720955
}
}
},
{
"cell_type": "markdown",
"source": [
"### Fairness Upload"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"exp = Experiment(ws, \"Responsbile_ML_Demo\")\n",
"print(exp)\n",
"\n",
"run = exp.start_logging()\n",
"\n",
"# Upload the dashboard to Azure Machine Learning\n",
"try:\n",
" dashboard_title = \"Fairness insights of Logistic Regression Classifier\"\n",
" # Set validate_model_ids parameter of upload_dashboard_dictionary to False if you have not registered your model(s)\n",
" upload_id = upload_dashboard_dictionary(run,\n",
" dash_dict,\n",
" dashboard_name=dashboard_title)\n",
" print(\"\\nUploaded to id: {0}\\n\".format(upload_id))\n",
"\n",
" # To test the dashboard, you can download it back and ensure it contains the right information\n",
" downloaded_dict = download_dashboard_by_upload_id(run, upload_id)\n",
"finally:\n",
" run.complete()"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803726557
}
}
},
{
"cell_type": "markdown",
"source": [
"### Explanation Upload"
],
"metadata": {
"nteract": {
"transient": {
"deleting": false
}
}
}
},
{
"cell_type": "code",
"source": [
"from azureml.interpret import ExplanationClient\n",
"\n",
"client = ExplanationClient.from_run(run)\n",
"\n",
"client.upload_model_explanation(global_explanation, comment='global explanation: all features', model_id=lr_reg_id)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803730379
}
}
},
{
"cell_type": "markdown",
"source": [
"## Mitigation with Fairlearn (GridSearch)\n",
"\n",
"The `GridSearch` class in `Fairlearn` implements a simplified version of the exponentiated gradient reduction of [Agarwal et al. 2018](https://arxiv.org/abs/1803.02453). The user supplies a standard ML estimator, which is treated as a blackbox. `GridSearch` works by generating a sequence of relabellings and reweightings, and trains a predictor for each.\n",
"\n",
"For this example, we specify demographic parity (on the protected attribute of sex) as the fairness metric. Demographic parity requires that individuals are offered the opportunity (are approved for a loan in this example) independent of membership in the protected class (i.e., females and males should be offered loans at the same rate). We are using this metric for the sake of simplicity; in general, the appropriate fairness metric will not be obvious."
],
"metadata": {}
},
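{
"cell_type": "markdown",
"source": [
"Before running the mitigation, here is a short illustrative sketch of what demographic parity measures for the unmitigated model: the gap in selection rates between the sex groups. (This cell assumes the `model`, test split, and imports defined above; it is not required for the mitigation itself.)"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative sketch: the selection-rate gap of the unmitigated model is the quantity\n",
"# that the DemographicParity constraint asks GridSearch to shrink.\n",
"unmitigated_selection = MetricFrame(selection_rate, y_test, model.predict(X_test),\n",
"                                    sensitive_features=sensitive_features_test.sex)\n",
"print('Selection rate by sex (unmitigated):')\n",
"print(unmitigated_selection.by_group)\n",
"print('Demographic parity difference:', unmitigated_selection.difference())"
],
"outputs": [],
"execution_count": null,
"metadata": {}
},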
{
"cell_type": "code",
"source": [
"# Fairlearn is not yet fully compatible with Pipelines, so we have to pass the estimator only\n",
"X_train_prep = preprocessor.transform(X_train).toarray()\n",
"X_test_prep = preprocessor.transform(X_test).toarray()\n",
"\n",
"sweep = GridSearch(LogisticRegression(solver=\"liblinear\", fit_intercept=True),\n",
" constraints=DemographicParity(),\n",
" grid_size=70)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803730494
}
}
},
{
"cell_type": "markdown",
"source": [
"Our algorithms provide `fit()` and `predict()` methods, so they behave in a similar manner to other ML packages in Python. We do however have to specify two extra arguments to `fit()` - the column of protected attribute labels, and also the number of predictors to generate in our sweep.\n",
"\n",
"After `fit()` completes, we extract the full set of predictors from the `GridSearch` object."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"sweep.fit(X_train_prep, y_train,\n",
" sensitive_features=sensitive_features_train.sex)\n",
"\n",
"predictors = sweep.predictors_"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803880653
}
}
},
{
"cell_type": "markdown",
"source": [
"We could load these predictors into the Fairness dashboard now. However, the plot would be somewhat confusing due to their number. In this case, we are going to remove the predictors which are dominated in the error-disparity space by others from the sweep (note that the disparity will only be calculated for the sensitive feature). In general, one might not want to do this, since there may be other considerations beyond the strict optimization of error and disparity (of the given protected attribute)."
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"accuracies, disparities = [], []\n",
"\n",
"for predictor in predictors:\n",
" accuracy_metric_frame = MetricFrame(accuracy_score, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)\n",
" selection_rate_metric_frame = MetricFrame(selection_rate, y_train, predictor.predict(X_train_prep), sensitive_features=sensitive_features_train.sex)\n",
" accuracies.append(accuracy_metric_frame.overall)\n",
" disparities.append(selection_rate_metric_frame.difference())\n",
" \n",
"all_results = pd.DataFrame({\"predictor\": predictors, \"accuracy\": accuracies, \"disparity\": disparities})\n",
"\n",
"all_models_dict = {\"unmitigated\": model.steps[-1][1]}\n",
"dominant_models_dict = {\"unmitigated\": model.steps[-1][1]}\n",
"base_name_format = \"grid_{0}\"\n",
"row_id = 0\n",
"for row in all_results.itertuples():\n",
" model_name = base_name_format.format(row_id)\n",
" all_models_dict[model_name] = row.predictor\n",
" accuracy_for_lower_or_eq_disparity = all_results[\"accuracy\"][all_results[\"disparity\"] <= row.disparity]\n",
" if row.accuracy >= accuracy_for_lower_or_eq_disparity.max():\n",
" dominant_models_dict[model_name] = row.predictor\n",
" row_id = row_id + 1"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803908676
}
}
},
{
"cell_type": "markdown",
"source": [
"We can construct predictions for all the models, and also for the dominant models:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"from raiwidgets import FairnessDashboard\n",
"\n",
"dashboard_all = {}\n",
"for name, predictor in all_models_dict.items():\n",
" value = predictor.predict(X_test_prep)\n",
" dashboard_all[name] = value\n",
" \n",
"dominant_all = {}\n",
"for name, predictor in dominant_models_dict.items():\n",
" dominant_all[name] = predictor.predict(X_test_prep)\n",
"\n",
"FairnessDashboard(sensitive_features=sensitive_features_test, \n",
" y_true=y_test,\n",
" y_pred=dominant_all)"
],
"outputs": [],
"execution_count": null,
"metadata": {
"gather": {
"logged": 1617803930143
}
}
},
{
"cell_type": "markdown",
"source": [
"We can look at just the dominant models in the dashboard:"
],
"metadata": {}
},
{
"cell_type": "markdown",
"source": [
"We see a Pareto front forming - the set of predictors which represent optimal tradeoffs between accuracy and disparity i predictions. In the ideal case, we would have a predictor at (1,0) - perfectly accurate and without any unfairness under demographic parity (with respect to the protected attribute \"sex\"). The Pareto front represents the closest we can come to this ideal based on our data and choice of estimator. Note the range of the axes - the disparity axis covers more values than the accuracy, so we can reduce disparity substantially for a small loss in accuracy.\n",
"\n",
"By clicking on individual models on the plot, we can inspect their metrics for disparity and accuracy in greater detail. In a real example, we would then pick the model which represented the best trade-off between accuracy and disparity given the relevant business constraints."
],
"metadata": {}
},
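{
"cell_type": "markdown",
"source": [
"As a static supplement to the interactive dashboard (an illustrative sketch using the `all_results` DataFrame built during the dominance filtering above), we can also plot the sweep in accuracy-disparity space:"
],
"metadata": {}
},
{
"cell_type": "code",
"source": [
"# Illustrative sketch: scatter the grid predictors in accuracy-disparity space.\n",
"# Metrics in all_results were computed on the training data.\n",
"import matplotlib.pyplot as plt\n",
"\n",
"plt.scatter(all_results['disparity'], all_results['accuracy'], label='grid predictors')\n",
"plt.xlabel('Selection rate difference (demographic parity)')\n",
"plt.ylabel('Accuracy')\n",
"plt.title('GridSearch sweep: accuracy vs. disparity (training data)')\n",
"plt.legend()\n",
"plt.show()"
],
"outputs": [],
"execution_count": null,
"metadata": {}
},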
{
"cell_type": "code",
"source": [
"from raiwidgets import ErrorAnalysisDashboard\n",
"# ErrorAnalysisDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])\n",
"ErrorAnalysisDashboard(global_explanation, model, dataset=X_test[:1000], true_y=y_test[:1000])\n"
],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
},
"gather": {
"logged": 1617803943394
}
}
},
{
"cell_type": "code",
"source": [],
"outputs": [],
"execution_count": null,
"metadata": {
"collapsed": true,
"jupyter": {
"source_hidden": false,
"outputs_hidden": false
},
"nteract": {
"transient": {
"deleting": false
}
}
}
}
],
"metadata": {
"kernelspec": {
"name": "python3-azureml",
"language": "python",
"display_name": "Python 3.6 - AzureML"
},
"language_info": {
"name": "python",
"version": "3.6.9",
"mimetype": "text/x-python",
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"pygments_lexer": "ipython3",
"nbconvert_exporter": "python",
"file_extension": ".py"
},
"kernel_info": {
"name": "python3-azureml"
},
"nteract": {
"version": "nteract-front-end@1.0.0"
},
"microsoft": {
"host": {
"AzureML": {
"notebookHasBeenCompleted": true
}
}
}
},
"nbformat": 4,
"nbformat_minor": 2
}