{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": "![image](https://github.com/IBM/watson-machine-learning-samples/raw/master/cloud/notebooks/headers/AutoAI-Banner_Experiment-Notebook.png)\n# Experiment Notebook - AutoAI Notebook v1.15.10\n\n\nThis notebook contains the steps and code to demonstrate support of AutoAI experiments in Watson Machine Learning service. It introduces Python API commands for data retrieval, training experiments, persisting pipelines, testing pipelines, refining pipelines, and scoring the resulting model.\n\n**Note:** Notebook code generated using AutoAI will execute successfully. If code is modified or reordered, there is no guarantee it will successfully execute. For details, see: <a href=\"https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/autoai-notebook.html\">Saving an Auto AI experiment as a notebook</a>\n"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "Some familiarity with Python is helpful. This notebook uses Python 3.8 and `ibm_watson_machine_learning` package.\n\n\n## Notebook goals\n\nThe learning goals of this notebook are:\n- Defining an AutoAI experiment\n- Training AutoAI models \n- Comparing trained models\n- Deploying the model as a web service\n- Scoring the model to generate predictions.\n\n\n\n## Contents\n\nThis notebook contains the following parts:\n\n**[Setup](#setup)**<br>\n&nbsp;&nbsp;[Package installation](#install)<br>\n&nbsp;&nbsp;[Watson Machine Learning connection](#connection)<br>\n**[Experiment configuration](#configuration)**<br>\n&nbsp;&nbsp;[Experiment metadata](#metadata)<br>\n**[Working with completed AutoAI experiment](#work)**<br>\n&nbsp;&nbsp;[Get fitted AutoAI optimizer](#get)<br>\n&nbsp;&nbsp;[Pipelines comparison](#comparison)<br>\n&nbsp;&nbsp;[Get pipeline as scikit-learn pipeline model](#get_pipeline)<br>\n&nbsp;&nbsp;[Inspect pipeline](#inspect_pipeline)<br>\n&nbsp;&nbsp;&nbsp;&nbsp;[Visualize pipeline model](#visualize)<br>\n&nbsp;&nbsp;&nbsp;&nbsp;[Preview pipeline model as python code](#preview)<br>\n**[Deploy and Score](#scoring)**<br>\n&nbsp;&nbsp;[Working with spaces](#working_spaces)<br>\n**[Running AutoAI experiment with Python API](#run)**<br>\n**[Clean up](#cleanup)**<br>\n**[Next steps](#next_steps)**<br>\n**[Copyrights](#copyrights)**"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"setup\"></a>\n# Setup"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"install\"></a>\n## Package installation\nBefore you use the sample code in this notebook, install the following packages:\n - ibm-watson-machine-learning,\n - autoai-libs,\n - lale,\n - scikit-learn,\n - xgboost,\n - lightgbm,\n - snapml.\n"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "!pip install ibm-watson-machine-learning | tail -n 1\n!pip install -U autoai-libs==1.12.15 | tail -n 1\n!pip install -U lale>=0.5.3,<0.6 | tail -n 1\n!pip install -U scikit-learn==0.23.2 | tail -n 1\n!pip install -U xgboost==1.3.3 | tail -n 1\n!pip install -U lightgbm==3.1.1 | tail -n 1\n!pip install -U 'snapml>=1.7.4,<1.8.0' | tail -n 1"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"configuration\"></a>\n# Experiment configuration"
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": "<a id=\"metadata\"></a>\n## Experiment metadata\nThis cell defines the metadata for the experiment, including: training_data_references, training_result_reference, experiment_metadata."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "from ibm_watson_machine_learning.helpers import DataConnection\nfrom ibm_watson_machine_learning.helpers import ContainerLocation\n\ntraining_data_references = [\n DataConnection(\n data_asset_id='c1c128af-fc32-4894-a83f-52a85cb071a8'\n ),\n]\ntraining_result_reference = DataConnection(\n location=ContainerLocation(\n path='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/data/automl',\n model_location='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/data/automl/model.zip',\n training_status='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/training-status.json'\n )\n)"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "experiment_metadata = dict(\n prediction_type='binary',\n prediction_column='y',\n holdout_size=0.1,\n scoring='roc_auc',\n csv_separator=',',\n random_state=33,\n max_number_of_estimators=2,\n training_data_references=training_data_references,\n training_result_reference=training_result_reference,\n deployment_url='https://us-south.ml.cloud.ibm.com',\n project_id='4f1cef8b-d406-4e7d-9735-292b92300369',\n positive_label='yes',\n drop_duplicates=True\n)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"connection\"></a>\n## Watson Machine Learning connection\n\nThis cell defines the credentials required to work with the Watson Machine Learning service.\n\n**Action**: Please provide IBM Cloud apikey following [docs](https://cloud.ibm.com/docs/account?topic=account-userapikey)."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "api_key = 'PUT_YOUR_APIKEY_HERE'"
},
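{
"cell_type": "markdown",
"metadata": {},
"source": "**Tip (optional):** Instead of hard-coding the API key, you can read it from an environment variable or prompt for it at runtime. A minimal sketch, assuming the key is exported under the hypothetical name `IBM_CLOUD_APIKEY`:\n```\nimport os\nfrom getpass import getpass\n\n# Prefer an environment variable; fall back to an interactive prompt.\napi_key = os.environ.get('IBM_CLOUD_APIKEY') or getpass('Enter your IBM Cloud API key: ')\n```"
},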
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "wml_credentials = {\n \"apikey\": api_key,\n \"url\": experiment_metadata['deployment_url']\n}"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"work\"></a>\n\n\n# Working with the completed AutoAI experiment\n\nThis cell imports the pipelines generated for the experiment so they can be compared to find the optimal pipeline to save as a model."
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"get\"></a>\n\n\n## Get fitted AutoAI optimizer"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "from ibm_watson_machine_learning.experiment import AutoAI\n\npipeline_optimizer = AutoAI(wml_credentials, project_id=experiment_metadata['project_id']).runs.get_optimizer(metadata=experiment_metadata)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "Use `get_params()`- to retrieve configuration parameters."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "pipeline_optimizer.get_params()"
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
},
"source": "<a id=\"comparison\"></a>\n## Pipelines comparison\n\nUse the `summary()` method to list trained pipelines and evaluation metrics information in\nthe form of a Pandas DataFrame. You can use the DataFrame to compare all discovered pipelines and select the one you like for further testing."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "summary = pipeline_optimizer.summary()\nbest_pipeline_name = list(summary.index)[0]\nsummary"
},
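{
"cell_type": "markdown",
"metadata": {},
"source": "**Tip (optional):** Because `summary()` returns a regular `pandas.DataFrame`, you can filter and sort it like any other DataFrame. A minimal sketch, assuming a holdout metric column such as `holdout_roc_auc` is present (the exact column names depend on the experiment's scoring metric; check `summary.columns`):\n```\n# Sort pipelines by the holdout metric, best first (hypothetical column name).\nsummary.sort_values(by='holdout_roc_auc', ascending=False)\n```"
},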
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"get_pipeline\"></a>\n### Get pipeline as scikit-learn pipeline model\n\nAfter you compare the pipelines, download and save a scikit-learn pipeline model object from the\nAutoAI training job.\n\n**Tip:** To get a specific pipeline pass the pipeline name in:\n```\npipeline_optimizer.get_pipeline(pipeline_name=pipeline_name)\n```"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "pipeline_model = pipeline_optimizer.get_pipeline()"
},
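{
"cell_type": "markdown",
"metadata": {},
"source": "For example, to download the top-ranked pipeline from the comparison above by name, you can reuse the `best_pipeline_name` computed from `summary`:\n```\npipeline_model = pipeline_optimizer.get_pipeline(pipeline_name=best_pipeline_name)\n```"
},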
{
"cell_type": "markdown",
"metadata": {},
"source": "Next, check features importance for selected pipeline."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "pipeline_optimizer.get_pipeline_details()['features_importance']"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "**Tip:** If you want to check all model evaluation metrics-details, use:\n```\npipeline_optimizer.get_pipeline_details()\n```"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"inspect_pipeline\"></a>\n## Inspect pipeline"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"visualize\"></a>\n### Visualize pipeline model\n\nPreview pipeline model stages as a graph. Each node's name links to a detailed description of the stage.\n"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "pipeline_model.visualize()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"preview\"></a>\n### Preview pipeline model as Python code\nIn the next cell, you can preview the saved pipeline model as Python code. \nYou can review the exact steps used to create the model.\n\n**Note:** If you want to get sklearn representation, add the following parameter to `pretty_print` call: `astype='sklearn'`."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "pipeline_model.pretty_print(combinators=False, ipython_display=True)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "### Calling the `predict` method\nIf you want to get a prediction using pipeline model object, call `pipeline_model.predict()`.\n\n**Note:** If you want to work with pure sklearn model:\n - add the following parameter to `get_pipeline` call: `astype='sklearn'`,\n - or `scikit_learn_pipeline = pipeline_model.export_to_sklearn_pipeline()`"
},
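{
"cell_type": "markdown",
"metadata": {},
"source": "A minimal sketch, assuming `data_df` is a `pandas.DataFrame` you have already loaded with the same feature columns used during training (the name `data_df` is hypothetical):\n```\n# Drop the target column and score the remaining feature values.\nX = data_df.drop(columns=[experiment_metadata['prediction_column']]).values\npredictions = pipeline_model.predict(X)\npredictions\n```"
},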
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"scoring\"></a>\n## Deploy and Score\n\nIn this section you will learn how to deploy and score the model as a web service."
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"working_spaces\"></a>\n### Working with spaces\n\nIn this section you will specify a deployment space for organizing the assets for deploying and scoring the model. If you do not have an existing space, you can use [Deployment Spaces Dashboard](https://dataplatform.cloud.ibm.com/ml-runtime/spaces?context=cpdaas) to create a new space, following these steps:\n\n- Click **New Deployment Space**.\n- Create an empty space.\n- Select Cloud Object Storage.\n- Select Watson Machine Learning instance and press **Create**.\n- Copy `space_id` and paste it below.\n\n**Tip**: You can also use the API to prepare the space for your work. Learn more [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/notebooks/python_sdk/instance-management/Space%20management.ipynb).\n\n**Action**: assign or update space ID below."
},
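{
"cell_type": "markdown",
"metadata": {},
"source": "**Tip (optional):** If a space already exists, you can look up its ID programmatically instead of copying it from the dashboard. A minimal sketch using the `ibm_watson_machine_learning` client:\n```\nfrom ibm_watson_machine_learning import APIClient\n\n# List available deployment spaces and copy the ID of the one you want to use.\nclient = APIClient(wml_credentials)\nclient.spaces.list(limit=10)\n```"
},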
{
"cell_type": "markdown",
"metadata": {},
"source": "### Deployment creation"
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "target_space_id = \"PUT_YOUR_TARGET_SPACE_ID_HERE\"\n\nfrom ibm_watson_machine_learning.deployment import WebService\n\nservice = WebService(\n source_wml_credentials=wml_credentials,\n target_wml_credentials=wml_credentials,\n source_project_id=experiment_metadata['project_id'],\n target_space_id=target_space_id\n)\nservice.create(\n model=best_pipeline_name,\n metadata=experiment_metadata,\n deployment_name='Best_pipeline_webservice'\n)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "Use the `print` method for the deployment object to show basic information about the service: "
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "print(service)"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "To show all available information about the deployment use the `.get_params()` method."
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"outputs": [],
"source": "service.get_params()"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "### Scoring of webservice\nYou can make scoring request by calling `score()` on the deployed pipeline."
},
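{
"cell_type": "markdown",
"metadata": {},
"source": "A minimal sketch, assuming `score_records_df` is a `pandas.DataFrame` whose columns match the training data (the prediction column is not required):\n```\npredictions = service.score(score_records_df)\npredictions\n```"
},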
{
"cell_type": "markdown",
"metadata": {},
"source": "If you want to work with the web service in an external Python application, follow these steps to retrieve the service object:\n\n - Initialize the service by `service = WebService(target_wml_credentials=wml_credentials,target_space_id=experiment_metadata['space_id'])`\n - Get deployment_id by `service.list()` method\n - Get webservice object by `service.get('deployment_id')` method\n\nAfter that you can call `service.score(score_records_df)` method. The `score()` method accepts `pandas.DataFrame` object. "
},
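{
"cell_type": "markdown",
"metadata": {},
"source": "Putting those steps together, a sketch of retrieving an existing deployment from an external Python application might look like this (the deployment ID string is a placeholder):\n```\nfrom ibm_watson_machine_learning.deployment import WebService\n\n# Connect to the deployment space that holds the web service.\nservice = WebService(\n    target_wml_credentials=wml_credentials,\n    target_space_id=target_space_id\n)\nservice.list()               # find the deployment_id of interest\nservice.get('deployment_id') # load the web service into the object\n```"
},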
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"cleanup\"></a>\n### Deleting deployment\nYou can delete the existing deployment by calling the `service.delete()` command.\nTo list the existing web services, use `service.list()`."
},
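{
"cell_type": "markdown",
"metadata": {},
"source": "A quick sketch of those clean-up calls:\n```\nservice.list()    # list the existing web services\nservice.delete()  # delete the deployment created in this notebook\n```"
},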
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"run\"></a>\n\n## Running AutoAI experiment with Python API"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "If you want to run the AutoAI experiment using the Python API, follow these. The experiment settings were generated basing on parameters set in the AutoAI UI.\n"
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%%\n"
}
},
"source": "```\nfrom ibm_watson_machine_learning.experiment import AutoAI\n\nexperiment = AutoAI(wml_credentials, project_id=experiment_metadata['project_id'])\n```"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "```\nOPTIMIZER_NAME = 'custom_name'\n```"
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%% raw\n"
}
},
"source": "```\nfrom ibm_watson_machine_learning.helpers import DataConnection\nfrom ibm_watson_machine_learning.helpers import ContainerLocation\n\ntraining_data_references = [\n DataConnection(\n data_asset_id='c1c128af-fc32-4894-a83f-52a85cb071a8'\n ),\n]\ntraining_result_reference = DataConnection(\n location=ContainerLocation(\n path='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/data/automl',\n model_location='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/data/automl/model.zip',\n training_status='auto_ml/4c813fa2-b5f3-4c63-90ee-d1c984ead67d/wml_data/01223da9-ced4-4bcc-afd4-ce0aa618b145/training-status.json'\n )\n)\n```"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "The new pipeline optimizer will be created and training will be triggered."
},
{
"cell_type": "markdown",
"metadata": {
"pycharm": {
"name": "#%%raw\n"
}
},
"source": "```\npipeline_optimizer = experiment.optimizer(\n name=OPTIMIZER_NAME,\n prediction_type=experiment_metadata['prediction_type'],\n prediction_column=experiment_metadata['prediction_column'],\n scoring=experiment_metadata['scoring'],\n holdout_size=experiment_metadata['holdout_size'],\n csv_separator=experiment_metadata['csv_separator'],\n positive_label=experiment_metadata['positive_label'],\n drop_duplicates=experiment_metadata['drop_duplicates'],\n)\n```"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "```\npipeline_optimizer.fit(\n training_data_references=training_data_references,\n training_results_reference=training_result_reference,\n)\n```"
},
{
"cell_type": "markdown",
"metadata": {},
"source": "\n<a id=\"next_steps\"></a>\n# Next steps\nYou successfully completed this notebook!.\nYou learned how to use watson-machine-learning-client to run and explore AutoAI experiments.\nCheck out our [Online Documentation](https://www.ibm.com/cloud/watson-studio/autoai) for more samples, tutorials, documentation, how-tos, and blog posts."
},
{
"cell_type": "markdown",
"metadata": {},
"source": "<a id=\"copyrights\"></a>\n### Copyrights\n\nLicensed Materials - Copyright \u00a9 2021 IBM. This notebook and its source code are released under the terms of the ILAN License.\nUse, duplication disclosure restricted by GSA ADP Schedule Contract with IBM Corp.\n\n**Note:** The auto-generated notebooks are subject to the International License Agreement for Non-Warranted Programs (or equivalent) and License Information document for Watson Studio Auto-generated Notebook (License Terms), such agreements located in the link below. Specifically, the Source Components and Sample Materials clause included in the License Information document for Watson Studio Auto-generated Notebook applies to the auto-generated notebooks. \n\nBy downloading, copying, accessing, or otherwise using the materials, you agree to the <a href=\"http://www14.software.ibm.com/cgi-bin/weblap/lap.pl?li_formnum=L-AMCU-BYC7LF\">License Terms</a> \n\n___"
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
},
"pycharm": {
"stem_cell": {
"cell_type": "raw",
"metadata": {
"collapsed": false
},
"source": [
"\n"
]
}
}
},
"nbformat": 4,
"nbformat_minor": 1
}