{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://cognitiveclass.ai/\">\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Logo/SNLogo.png\" width=\"200\" align=\"center\">\n",
"</a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1>Lab - Training Custom Classifiers with IBM Watson Visual Recognition in Python</h1>\n",
"\n",
"<h2>Introduction</h2>\n",
"\n",
"<p><b>Welcome!</b> This notebook shows how to use the Watson Visual Recognition API with the Python programming language. The advantage of using the Watson Visual Recognition API over the graphical user interface in the browser, which you used earlier in this course, is that you can automate the training and testing of your Visual Recognition model.</p>\n",
"\n",
"<p>In this lab you will train a Visual Recognition model that classifies different breeds of dogs by running Python code.</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
"<font size=\"3\"><strong>Click on the links to go to the following sections:</strong></font>\n",
"<br>\n",
"<h2>Table of Contents</h2>\n",
"<ol>\n",
" <li><a href=\"#ref1\">IBM Watson Package</a></li>\n",
" <li><a href=\"#ref2\">Setting the API key for IBM Watson Visual Recognition</a></li>\n",
" <li><a href=\"#ref3\">Training the Classifier</a></li>\n",
" <li><a href=\"#ref4\">Testing the Classifier</a></li>\n",
" <li><a href=\"#ref5\">Exercises</a></li>\n",
"</ol> \n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref1\"></a>\n",
"<h2>IBM Watson Package</h2>\n",
"In order to run this lab, we need to install the following package:\n",
"<ul>\n",
"    <li><b>ibm-watson</b>, which provides access to the Watson Visual Recognition API</li>\n",
"</ul>\n",
"The code below will install it.\n",
"\n",
"To run it, click on the code cell below and press \"Shift + Enter\".\n",
"\n",
"<b>NOTE: The Watson Developer Cloud package has been deprecated and replaced by the IBM Watson package.</b>"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already up-to-date: ibm-watson in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (4.7.1)\n",
"Requirement already satisfied, skipping upgrade: ibm-cloud-sdk-core==1.7.3 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-watson) (1.7.3)\n",
"Requirement already satisfied, skipping upgrade: requests<3.0,>=2.0 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-watson) (2.24.0)\n",
"Requirement already satisfied, skipping upgrade: websocket-client==0.48.0 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-watson) (0.48.0)\n",
"Requirement already satisfied, skipping upgrade: python-dateutil>=2.5.3 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-watson) (2.8.1)\n",
"Requirement already satisfied, skipping upgrade: PyJWT>=1.7.1 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-cloud-sdk-core==1.7.3->ibm-watson) (1.7.1)\n",
"Requirement already satisfied, skipping upgrade: idna<3,>=2.5 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm-watson) (2.10)\n",
"Requirement already satisfied, skipping upgrade: certifi>=2017.4.17 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm-watson) (2020.6.20)\n",
"Requirement already satisfied, skipping upgrade: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm-watson) (1.25.10)\n",
"Requirement already satisfied, skipping upgrade: chardet<4,>=3.0.2 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm-watson) (3.0.4)\n",
"Requirement already satisfied, skipping upgrade: six in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from websocket-client==0.48.0->ibm-watson) (1.15.0)\n"
]
}
],
"source": [
"!pip install --upgrade ibm-watson"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Goal of this lab:</h3>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>In this lab, we will be creating a completely new image classifier using training images. We will train a custom classifier to distinguish between three dog breeds (Golden Retriever, Beagle, and Husky).</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/dog-breed.png\" width=\"480\"/>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref2\"></a>\n",
"<h2>Setting the API key for IBM Watson Visual Recognition</h2>\n",
"\n",
"<p>In order for you to use the IBM Watson Visual Recognition API, you will need the API key of the Visual Recognition instance that you have created in the previous sections.</p>\n",
"\n",
"<p>Log into your IBM Cloud Account with the following link.</p> <a href=\"https://cocl.us/CV0101EN_IBM_Cloud_Login\">https://cloud.ibm.com</a>\n",
"<ol>\n",
" <li>Click on <b>Services</b></li>\n",
" <li>Under Services, click on your Watson Visual Recognition Instance</li>\n",
" <li>Copy the <b>API Key</b> and paste it in the code cell below</li>\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/API_Key.png\" width=\"680\">\n",
" <li>Then press \"ctrl + enter\" to run the code cell.</li>\n",
"</ol>"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [],
"source": [
"# Paste your API key for IBM Watson Visual Recognition below:\n",
"my_apikey = '<paste your api key here>'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Initialize Watson Visual Recognition</h4>\n",
"Let's create a Watson Visual Recognition client; it will allow you to make calls to the Watson Visual Recognition API."
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [],
"source": [
"from ibm_watson import VisualRecognitionV3\n",
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n",
"authenticator = IAMAuthenticator(my_apikey)\n",
"\n",
"visrec = VisualRecognitionV3('2018-03-19',\n",
"                             authenticator=authenticator)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We are going to train an image recognition model to classify different breeds of dogs. The dataset consists of the zip files listed below.</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<ul>\n",
" <li>beagle.zip</li>\n",
" <li>husky.zip</li>\n",
" <li>golden-retriever.zip</li>\n",
"</ul>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref3\"></a>\n",
"<h2>Training the Classifier</h2>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Download the different dog breed images as zip files</h4>\n",
"<p>We will use the <b>urlretrieve</b> method from the <b>urllib.request</b> library to download the datasets above.</p> "
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('goldenretriever.zip', <http.client.HTTPMessage at 0x7f3bd62d70b8>)"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import urllib.request\n",
"\n",
"# Downloading Beagle dataset\n",
"urllib.request.urlretrieve(\"http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Beagle.zip\", \n",
" \"beagle.zip\")\n",
"\n",
"# Downloading Husky dataset\n",
"urllib.request.urlretrieve(\"http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Husky.zip\", \n",
" \"husky.zip\")\n",
"\n",
"# Downloading Golden Retriever dataset\n",
"urllib.request.urlretrieve(\"http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/GoldenRetriever.zip\", \n",
" \"goldenretriever.zip\") #note that we should remove any hyphens from the zip file name"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>Let's train our Visual Recognition model to recognize the three dog breeds using the <b>create_classifier</b> method from the Watson Visual Recognition API.</p>"
]
},
{
"cell_type": "code",
"execution_count": 34,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"classifier_id\": \"dogbreedclassifier_1445528489\",\n",
" \"name\": \"dogbreedclassifier\",\n",
" \"status\": \"training\",\n",
" \"owner\": \"d906db0a-4c9f-4f91-b5da-0af88352493d\",\n",
" \"created\": \"2020-09-07T02:00:09.252Z\",\n",
" \"updated\": \"2020-09-07T02:00:09.252Z\",\n",
" \"classes\": [\n",
" {\n",
" \"class\": \"husky\"\n",
" },\n",
" {\n",
" \"class\": \"goldenretriever\"\n",
" },\n",
" {\n",
" \"class\": \"beagle\"\n",
" }\n",
" ],\n",
" \"rscnn_enabled\": false,\n",
" \"core_ml_enabled\": true\n",
"}\n"
]
}
],
"source": [
"import json\n",
"\n",
"with open('beagle.zip', 'rb') as beagle, \\\n",
"     open('goldenretriever.zip', 'rb') as gretriever, \\\n",
"     open('husky.zip', 'rb') as husky:\n",
"    response = visrec.create_classifier(name=\"dogbreedclassifier\",\n",
"                                        positive_examples={'beagle': beagle,\n",
"                                                           'goldenretriever': gretriever,\n",
"                                                           'husky': husky})\n",
"print(json.dumps(response.get_result(), indent=2))"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'dogbreedclassifier_1445528489'"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Let's grab the classifier ID\n",
"classifier_id = response.get_result()[\"classifier_id\"]\n",
"classifier_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div style=\"background-color: #fcf2f2\">\n",
" <h2>Note!</h2> \n",
" <p>If you receive the following error:</p>\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Error-Code.png\" width=\"1080\">\n",
" <p>It means that you have exceeded the number of custom classifiers allowed on your Lite plan, so you will need to delete one of the custom classifiers in your Watson Visual Recognition instance.</p>\n",
" <p>Log into your IBM Cloud Account with the following link.</p> <p><a href=\"https://cocl.us/CV0101EN_IBM_Cloud_Login\">https://cloud.ibm.com</a></p>\n",
" <ol>\n",
" <li>Click on <b>Services</b></li>\n",
" <li>Under Services, click on your Watson Visual Recognition Instance</li>\n",
" <li>Then click on Create a Custom Model</li>\n",
" <li>Then delete one of your Custom Visual Recognition Model</li>\n",
" </ol>\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Delete-Instance.png\" width=\"680\">\n",
"</div>\n",
" "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Is the model still training?</h4>\n",
"Depending on the number of images, it may take <b>several minutes</b> for Watson to build a custom classifier. Please wait until you see <b>Good to go</b>."
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Good to go \n"
]
}
],
"source": [
"Status = visrec.get_classifier(classifier_id=classifier_id, verbose=True).get_result()['status']\n",
"if Status == 'training':\n",
"    print(\"Please wait for training to complete...\")\n",
"else:\n",
"    print(\"Good to go \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>List all (custom) classifiers</h4>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>If the status is still <b>training</b>, rerun the cell above and wait until you see <b>ready</b></h4> "
]
},
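{
"cell_type": "markdown",
"metadata": {},
"source": [
"Instead of rerunning the cell above by hand, you can poll the classifier status in a loop. This is a minimal sketch that reuses the <b>get_classifier</b> method from the cell above together with the standard <b>time</b> library; the 30-second polling interval is an arbitrary choice, not part of the lab."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import time\n",
"\n",
"# Poll the classifier status every 30 seconds until training finishes\n",
"while visrec.get_classifier(classifier_id=classifier_id).get_result()['status'] == 'training':\n",
"    print('Still training, checking again in 30 seconds...')\n",
"    time.sleep(30)\n",
"print('Good to go ')"
]
},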
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'classifiers': [{'classifier_id': 'dogbreedclassifier_1445528489',\n",
" 'name': 'dogbreedclassifier',\n",
" 'status': 'ready',\n",
" 'owner': 'd906db0a-4c9f-4f91-b5da-0af88352493d',\n",
" 'created': '2020-09-07T02:00:09.252Z',\n",
" 'updated': '2020-09-07T02:00:09.252Z',\n",
" 'classes': [{'class': 'husky'},\n",
" {'class': 'goldenretriever'},\n",
" {'class': 'beagle'}],\n",
" 'rscnn_enabled': False,\n",
" 'core_ml_enabled': True},\n",
" {'classifier_id': 'fastfoodclassifier_1603691234',\n",
" 'name': 'fastfoodclassifier',\n",
" 'status': 'ready',\n",
" 'owner': 'd906db0a-4c9f-4f91-b5da-0af88352493d',\n",
" 'created': '2020-09-07T01:59:14.246Z',\n",
" 'updated': '2020-09-07T01:59:14.246Z',\n",
" 'classes': [{'class': 'pizza'}, {'class': 'fries'}, {'class': 'burger'}],\n",
" 'rscnn_enabled': False,\n",
" 'core_ml_enabled': True}]}"
]
},
"execution_count": 41,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"visrec.list_classifiers(verbose=True).get_result()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref4\"></a>\n",
"<h2>Testing the Classifier</h2>\n",
"<p>Let's test the classifier. The function <b>getdf_visrec</b> below uses the <b>classify</b> method from the Watson Visual Recognition API to upload the image to the classifier, which returns the result in JSON (JavaScript Object Notation) format. We then use the <b>json_normalize</b> method from the Pandas library to turn the result into a table, which is more human-readable.</p>"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [],
"source": [
"from pandas import json_normalize\n",
"\n",
"def getdf_visrec(url, classifier_ids):\n",
"    # Classify the image at the given URL with the given custom classifier\n",
"    json_result = visrec.classify(url=url,\n",
"                                  threshold='0.6',\n",
"                                  classifier_ids=classifier_ids).get_result()\n",
"\n",
"    json_classes = json_result['images'][0]['classifiers'][0]['classes']\n",
"\n",
"    # Turn the JSON result into a table, sorted by score (highest first)\n",
"    df = json_normalize(json_classes).sort_values('score', ascending=False).reset_index(drop=True)\n",
"\n",
"    return df"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/GoldenRetriever1_stacked.jpg\">\n",
"<p>Let's test our Visual Recognition model on this picture of a Golden Retriever</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>Please wait for your custom model to finish training before you upload your test image to your custom classifier; <b>you might get an error if your model is still training and you run the function below.</b></p>\n",
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Images/Training.png\" width=\"680\">"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/ipykernel_launcher.py:11: FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead\n",
" # This is added back by InteractiveShellApp.init_path()\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>class</th>\n",
" <th>score</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>goldenretriever</td>\n",
" <td>0.908</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" class score\n",
"0 goldenretriever 0.908"
]
},
"execution_count": 44,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"getdf_visrec(url = 'https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/GoldenRetriever1_stacked.jpg',\n",
" classifier_ids=classifier_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/cat-2083492_960_720.jpg\">\n",
"<p>Let's test our Visual Recognition model on this picture of a cat</p>"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/ipykernel_launcher.py:11: FutureWarning: pandas.io.json.json_normalize is deprecated, use pandas.json_normalize instead\n",
" # This is added back by InteractiveShellApp.init_path()\n"
]
},
{
"data": {
"text/html": [
"<div>\n",
"<style scoped>\n",
" .dataframe tbody tr th:only-of-type {\n",
" vertical-align: middle;\n",
" }\n",
"\n",
" .dataframe tbody tr th {\n",
" vertical-align: top;\n",
" }\n",
"\n",
" .dataframe thead th {\n",
" text-align: right;\n",
" }\n",
"</style>\n",
"<table border=\"1\" class=\"dataframe\">\n",
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>class</th>\n",
" <th>score</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>0</th>\n",
" <td>husky</td>\n",
" <td>0.821</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"</div>"
],
"text/plain": [
" class score\n",
"0 husky 0.821"
]
},
"execution_count": 45,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"getdf_visrec(url = 'http://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/cat-2083492_960_720.jpg',\n",
" classifier_ids=classifier_id)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>Our model misclassifies the cat in this picture because our custom visual recognition model is only trained to recognize different breeds of dogs.</p>"
]
},
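{
"cell_type": "markdown",
"metadata": {},
"source": [
"One way to reduce such misclassifications is to supply <b>negative examples</b> when training: images that belong to none of the classes. The <b>create_classifier</b> method accepts a <b>negative_examples</b> file alongside <b>positive_examples</b>. The sketch below assumes a hypothetical <b>cats.zip</b> of cat images, which is not part of this lab's dataset, so the code is left commented out."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: retrain with negative examples (assumes a hypothetical cats.zip)\n",
"# with open('beagle.zip', 'rb') as beagle, \\\n",
"#      open('goldenretriever.zip', 'rb') as gretriever, \\\n",
"#      open('husky.zip', 'rb') as husky, \\\n",
"#      open('cats.zip', 'rb') as cats:\n",
"#     response = visrec.create_classifier(name=\"dogbreedclassifier\",\n",
"#                                         positive_examples={'beagle': beagle,\n",
"#                                                            'goldenretriever': gretriever,\n",
"#                                                            'husky': husky},\n",
"#                                         negative_examples=cats)"
]
},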
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>Deleting classifiers</h4>\n",
"<p>If you want to delete a classifier, let's first get the classifier ID that you want to delete. The <b>list_classifiers</b> method from the Watson Visual Recognition API lists all the classifiers in your IBM Cloud account.</p>"
]
},
{
"cell_type": "code",
"execution_count": 57,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[\n",
" {\n",
" \"classifier_id\": \"dogbreedclassifier_1445528489\",\n",
" \"name\": \"dogbreedclassifier\",\n",
" \"status\": \"ready\",\n",
" \"owner\": \"d906db0a-4c9f-4f91-b5da-0af88352493d\",\n",
" \"created\": \"2020-09-07T02:00:09.252Z\",\n",
" \"updated\": \"2020-09-07T02:00:09.252Z\",\n",
" \"classes\": [\n",
" {\n",
" \"class\": \"husky\"\n",
" },\n",
" {\n",
" \"class\": \"goldenretriever\"\n",
" },\n",
" {\n",
" \"class\": \"beagle\"\n",
" }\n",
" ],\n",
" \"rscnn_enabled\": false,\n",
" \"core_ml_enabled\": true\n",
" },\n",
" {\n",
" \"classifier_id\": \"fastfoodclassifier_1603691234\",\n",
" \"name\": \"fastfoodclassifier\",\n",
" \"status\": \"ready\",\n",
" \"owner\": \"d906db0a-4c9f-4f91-b5da-0af88352493d\",\n",
" \"created\": \"2020-09-07T01:59:14.246Z\",\n",
" \"updated\": \"2020-09-07T01:59:14.246Z\",\n",
" \"classes\": [\n",
" {\n",
" \"class\": \"pizza\"\n",
" },\n",
" {\n",
" \"class\": \"fries\"\n",
" },\n",
" {\n",
" \"class\": \"burger\"\n",
" }\n",
" ],\n",
" \"rscnn_enabled\": false,\n",
" \"core_ml_enabled\": true\n",
" }\n",
"]\n"
]
}
],
"source": [
"import json\n",
"\n",
"classifiers = visrec.list_classifiers(verbose=True).get_result()['classifiers']\n",
"print(json.dumps(classifiers, indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Paste the classifier ID that you want to delete into the cell below, then run it."
]
},
{
"cell_type": "code",
"execution_count": 60,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<ibm_cloud_sdk_core.detailed_response.DetailedResponse at 0x7f3bcb82b198>"
]
},
"execution_count": 60,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"mycid = 'dogbreedclassifier_1445528489'  # the classifier ID you want to delete\n",
"visrec.delete_classifier(classifier_id=mycid)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a id=\"ref5\"></a>\n",
"<h2>Exercises</h2>\n",
"<p>For the following exercises you are going to train a custom Visual Recognition classifier to recognize fast food items; in particular, it will be able to classify food items into <b>Burger</b>, <b>Fries</b>, or <b>Pizza</b>.</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 1: Optional</h3>\n",
"<p>After training your custom classifier, you might want to delete all the custom classifiers from your account. Write a piece of code that uses the *list_classifiers* and *delete_classifier* methods to delete all the custom classifiers in your account.</p>\n",
"\n",
"<p><b>Note!</b> This question is optional, if there is a classifier that you have trained and do not want to delete, skip this question.</p>"
]
},
{
"cell_type": "code",
"execution_count": 71,
"metadata": {},
"outputs": [],
"source": [
"# Write your code below and press Shift+Enter to execute \n",
"for i in visrec.list_classifiers().get_result()['classifiers']:\n",
" print('Deleting ...' + i['classifier_id'])\n",
" visrec.delete_classifier(classifier_id=i['classifier_id'])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"\n",
"# The code below will delete all classifiers in your account\n",
"for i in visrec.list_classifiers().get_result()['classifiers']:\n",
" print('Deleting ...' + i['classifier_id'])\n",
" visrec.delete_classifier(classifier_id=i['classifier_id'])\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 2</h3>\n",
"<p>The links to the datasets for Burger, Fries, and Pizza are given below.</p> \n",
" \n",
"<p>We will use the <b>urlretrieve</b> method from the <b>urllib.request</b> library to download the dataset below.</p> \n",
"\n",
"\n",
"<ul>\n",
" <li>https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Burger.zip</li>\n",
" <li>https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Fries.zip</li>\n",
" <li>https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Pizza.zip</li>\n",
"</ul>"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"('pizza.zip', <http.client.HTTPMessage at 0x7f3bcbab3a58>)"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Write your code below and press Shift+Enter to execute \n",
"\n",
"import urllib.request\n",
"\n",
"#Downloading Burger dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Burger.zip\", \n",
" \"burger.zip\")\n",
"\n",
"#Downloading Fries dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Fries.zip\", \n",
" \"fries.zip\")\n",
"\n",
"#Downloading Pizza dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Pizza.zip\", \n",
" \"pizza.zip\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"import urllib.request\n",
"\n",
"#Downloading Burger dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Burger.zip\", \n",
" \"burger.zip\")\n",
"\n",
"#Downloading Fries dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Fries.zip\", \n",
" \"fries.zip\")\n",
"\n",
"#Downloading Pizza dataset\n",
"urllib.request.urlretrieve(\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Dataset/Pizza.zip\", \n",
" \"pizza.zip\")\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 3.1</h3>\n",
"<p>Now we have the dataset, use the <b>create_classifier</b> method to create your fast food classifier.</p>"
]
},
{
"cell_type": "code",
"execution_count": 74,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{\n",
" \"classifier_id\": \"fastfoodclassifier_732411393\",\n",
" \"name\": \"fastfoodclassifier\",\n",
" \"status\": \"training\",\n",
" \"owner\": \"d906db0a-4c9f-4f91-b5da-0af88352493d\",\n",
" \"created\": \"2020-09-07T03:10:12.351Z\",\n",
" \"updated\": \"2020-09-07T03:10:12.351Z\",\n",
" \"classes\": [\n",
" {\n",
" \"class\": \"pizza\"\n",
" },\n",
" {\n",
" \"class\": \"fries\"\n",
" },\n",
" {\n",
" \"class\": \"burger\"\n",
" }\n",
" ],\n",
" \"rscnn_enabled\": false,\n",
" \"core_ml_enabled\": true\n",
"}\n"
]
}
],
"source": [
"# Write your code below and press Shift+Enter to execute \n",
"my_apikey = '<paste your api key here>'\n",
"\n",
"from ibm_watson import VisualRecognitionV3\n",
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n",
"authenticator = IAMAuthenticator(my_apikey)\n",
"\n",
"visrec_fast_food = VisualRecognitionV3('2018-03-19', \n",
" authenticator=authenticator)\n",
"\n",
"\n",
"import json\n",
"with open('burger.zip', 'rb') as burger, \\\n",
"     open('fries.zip', 'rb') as fries, \\\n",
"     open('pizza.zip', 'rb') as pizza:\n",
"    new_response = visrec_fast_food.create_classifier(name=\"fastfoodclassifier\",\n",
"                                                      positive_examples={'burger': burger,\n",
"                                                                         'fries': fries,\n",
"                                                                         'pizza': pizza})\n",
"\n",
"print(json.dumps(new_response.get_result(), indent=2))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"my_apikey = '<paste your api key here>'\n",
"\n",
"from ibm_watson import VisualRecognitionV3\n",
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator\n",
"\n",
"authenticator = IAMAuthenticator(my_apikey)\n",
"visrec_fast_food = VisualRecognitionV3('2018-03-19',\n",
"                                       authenticator=authenticator)\n",
"\n",
"import json\n",
"with open('burger.zip', 'rb') as burger, \\\n",
" open('fries.zip', 'rb') as fries, \\\n",
" open('pizza.zip', 'rb') as pizza:\n",
" new_response = visrec_fast_food.create_classifier(name=\"fastfoodclassifier\",\n",
" positive_examples={'burger': burger, \\\n",
" 'fries': fries, \\\n",
" 'pizza': pizza})\n",
"\n",
" print(json.dumps(new_response.get_result(), indent=2))\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 3.2</h3>\n",
"<p>Since we will need the classifier_id, grab the classifier_id from the response of create_classifier and store it into a variable.</p>"
]
},
{
"cell_type": "code",
"execution_count": 96,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'fastfoodclassifier_732411393'"
]
},
"execution_count": 96,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"# Write your code below and press Shift+Enter to execute \n",
"# Let's grab the classifier ID\n",
"\n",
"fast_food_classifier_id = new_response.get_result()['classifier_id']\n",
"fast_food_classifier_id"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"#lets grab the classifier id\n",
"fast_food_classifier_id = new_response.get_result()['classifier_id']\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>Question 4</h3>\n",
"<p>Get the URL of a picture of fast food and use the <b>getdf_visrec</b> function to classify the picture. <b>Before that, please make sure that your model is trained and ready.</b></p>"
]
},
{
"cell_type": "code",
"execution_count": 95,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Good to go \n"
]
}
],
"source": [
"# Write your code below and press Shift+Enter to check if your model is ready to go \n",
"gStatus = visrec.get_classifier(classifier_id=fast_food_classifier_id, verbose=True).get_result()['status']\n",
"if gStatus == 'training':\n",
"    print(\"Please wait for training to complete...\")\n",
"else:\n",
"    print(\"Good to go \")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"gStatus = visrec.get_classifier(classifier_id=fast_food_classifier_id, verbose=True).get_result()['status']\n",
"if gStatus == 'training':\n",
"    print(\"Please wait for training to complete...\")\n",
"else:\n",
"    print(\"Good to go \")\n",
"-->"
]
},
{
"cell_type": "code",
"execution_count": 101,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"ERROR:root:Unknown error\n",
"Traceback (most recent call last):\n",
" File \"/home/jupyterlab/conda/envs/python/lib/python3.6/site-packages/ibm_cloud_sdk_core/base_service.py\", line 225, in send\n",
" response.status_code, http_response=response)\n",
"ibm_cloud_sdk_core.api_exception.ApiException: Error: Unknown error, Code: 404 , X-global-transaction-id: 5bcae68d9aeaf966bc9b706a27ddab79\n"
]
},
{
"ename": "ApiException",
"evalue": "Error: Unknown error, Code: 404 , X-global-transaction-id: 5bcae68d9aeaf966bc9b706a27ddab79",
"output_type": "error",
"traceback": [
"\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
"\u001b[0;31mApiException\u001b[0m Traceback (most recent call last)",
"\u001b[0;32m<ipython-input-101-cabe5f34d848>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[0;31m# Write your code below and press Shift+Enter to execute\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 2\u001b[0m getdf_visrec(url = 'https://images.ctfassets.net/oewsurrg31ok/2BxSmgM43Mgv6ax7Jgp85i/947a6b18053f6de531de508a06983d98/2019_Burger_Banzai.png',\n\u001b[0;32m----> 3\u001b[0;31m classifier_ids=fast_food_classifier_id)\n\u001b[0m",
"\u001b[0;32m<ipython-input-43-e03d89ee12f8>\u001b[0m in \u001b[0;36mgetdf_visrec\u001b[0;34m(url, classifier_ids, apikey)\u001b[0m\n\u001b[1;32m 5\u001b[0m json_result = visrec.classify(url=url,\n\u001b[1;32m 6\u001b[0m \u001b[0mthreshold\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'0.6'\u001b[0m\u001b[0;34m,\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m----> 7\u001b[0;31m classifier_ids=classifier_id).get_result()\n\u001b[0m\u001b[1;32m 8\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 9\u001b[0m \u001b[0mjson_classes\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mjson_result\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'images'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'classifiers'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;36m0\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'classes'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m~/conda/envs/python/lib/python3.6/site-packages/ibm_watson/visual_recognition_v3.py\u001b[0m in \u001b[0;36mclassify\u001b[0;34m(self, images_file, images_filename, images_file_content_type, url, threshold, owners, classifier_ids, accept_language, **kwargs)\u001b[0m\n\u001b[1;32m 181\u001b[0m files=form_data)\n\u001b[1;32m 182\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m--> 183\u001b[0;31m \u001b[0mresponse\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0msend\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mrequest\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m 184\u001b[0m \u001b[0;32mreturn\u001b[0m \u001b[0mresponse\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 185\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;32m~/conda/envs/python/lib/python3.6/site-packages/ibm_cloud_sdk_core/base_service.py\u001b[0m in \u001b[0;36msend\u001b[0;34m(self, request, **kwargs)\u001b[0m\n\u001b[1;32m 223\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 224\u001b[0m raise ApiException(\n\u001b[0;32m--> 225\u001b[0;31m response.status_code, http_response=response)\n\u001b[0m\u001b[1;32m 226\u001b[0m \u001b[0;32mexcept\u001b[0m \u001b[0mrequests\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexceptions\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mSSLError\u001b[0m\u001b[0;34m:\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m 227\u001b[0m \u001b[0mlogging\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mexception\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mself\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mERROR_MSG_DISABLE_SSL\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
"\u001b[0;31mApiException\u001b[0m: Error: Unknown error, Code: 404 , X-global-transaction-id: 5bcae68d9aeaf966bc9b706a27ddab79"
]
}
],
"source": [
"# Write your code below and press Shift+Enter to execute \n",
"getdf_visrec(url = 'https://images.ctfassets.net/oewsurrg31ok/2BxSmgM43Mgv6ax7Jgp85i/947a6b18053f6de531de508a06983d98/2019_Burger_Banzai.png',\n",
" classifier_ids=fast_food_classifier_id)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Double-click <font color=\"red\"><b><u>here</b></u></font> for the solution.\n",
"\n",
"<!-- The answer is below:\n",
"getdf_visrec(url = 'fast_food_image_url',\n",
" classifier_ids=fast_food_classifier_id)\n",
"-->"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h1>Thank you for completing this notebook</h1>\n",
"You can read more about the Watson Visual Recognition APIs at the following link:\n",
"<a href=\"https://cloud.ibm.com/apidocs/visual-recognition?code=python\">https://cloud.ibm.com/apidocs/visual-recognition</a>\n",
"\n",
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
"<h2>Get IBM Watson Studio free of charge!</h2>\n",
" <p><a href=\"https://cocl.us/NotebooksPython101bottom\"><img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/CV0101/Logo/BottomAd.png\" width=\"750\" align=\"center\"></a></p>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h3>About the Authors:</h3>\n",
"<p>This notebook was written by <a href=\"https://www.linkedin.com/in/yi-leng-yao-84451275/\" target=\"_blank\">Yi Yao</a> and revised by Nayef Abou Tayoun.</p>\n",
"<p><a href=\"https://www.linkedin.com/in/yi-leng-yao-84451275/\" target=\"_blank\">Yi Yao</a> is a Data Scientist and Software Engineer at IBM, and holds a Master's degree in Statistics. His research focused on Cloud Computing, Machine Learning, and Computer Vision.</p>\n",
"<p>Nayef Abou Tayoun is a Cognitive Data Scientist at IBM, and is pursuing a Master's degree in Artificial Intelligence.</p>\n",
"<hr>\n",
"<p>Copyright &copy; 2019 IBM Developer Skills Network. This notebook and its source code are released under the terms of the <a href=\"https://cognitiveclass.ai/mit-license/\">MIT License</a>.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python",
"language": "python",
"name": "conda-env-python-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.11"
}
},
"nbformat": 4,
"nbformat_minor": 4
}