@Calibr3-IO
Created June 28, 2020 07:52
Created on Skills Network Labs
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a href=\"https://cognitiveclass.ai/\">\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/CCLog.png\" width=\"200\" align=\"center\">\n",
"</a>\n",
"\n",
"<h1 align=\"center\"><font size=\"5\"><b>Application Programming Interface</b></font></h1>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>In this notebook, you will learn to convert an audio file of an English speaker to text using a Speech to Text API. Then you will translate the English version to a Spanish version using a Language Translator API. <b>Note:</b> You must obtain the API keys and enpoints to complete the lab.</p>"
]
},
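The whole lab boils down to two chained API calls. As an illustrative sketch only (the helper `transcribe_and_translate` and the way the clients are injected are our assumptions, not part of the lab; the response shapes mirror the nested JSON used throughout this notebook):

```python
# Hypothetical helper: chain Speech to Text and Language Translator.
# The client objects are assumed to expose the SDK-style recognize() /
# translate() methods, each returning an object with get_result().
def transcribe_and_translate(s2t_client, translator_client, audio_path,
                             model_id="en-es"):
    """Transcribe an audio file, then translate the first transcript."""
    with open(audio_path, mode="rb") as audio_file:
        stt_result = s2t_client.recognize(
            audio=audio_file, content_type="audio/mp3").get_result()
    transcript = stt_result["results"][0]["alternatives"][0]["transcript"]
    lt_result = translator_client.translate(
        text=transcript, model_id=model_id).get_result()
    return lt_result["translations"][0]["translation"]
```

With real credentials you would pass the `s2t` and `language_translator` objects built later in this notebook.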
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
" <a href=\"https://cocl.us/topNotebooksPython101Coursera\">\n",
" <img src=\"https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Ad/TopAd.png\" width=\"750\" align=\"center\">\n",
" </a>\n",
"</div>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<div class=\"alert alert-block alert-info\" style=\"margin-top: 20px\">\n",
"<h2>Table of Contents</h2>\n",
"<ul>\n",
" <li><a href=\"#ref0\">Speech To Text</a></li>\n",
" <li><a href=\"#ref1\">Language Translator</a></li>\n",
" <li><a href=\"#ref2\">Exercise</a></li>\n",
"</ul>\n",
"<br>\n",
"<p>Estimated Time Needed: <strong>25 min</strong></p>\n",
"</div>"
]
},
{
"cell_type": "code",
"execution_count": 39,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Requirement already satisfied: ibm_watson in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (4.5.0)\n",
"Requirement already satisfied: wget in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (3.2)\n",
"Requirement already satisfied: websocket-client==0.48.0 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm_watson) (0.48.0)\n",
"Requirement already satisfied: python-dateutil>=2.5.3 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm_watson) (2.8.1)\n",
"Requirement already satisfied: requests<3.0,>=2.0 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm_watson) (2.23.0)\n",
"Requirement already satisfied: ibm-cloud-sdk-core==1.5.1 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm_watson) (1.5.1)\n",
"Requirement already satisfied: six in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from websocket-client==0.48.0->ibm_watson) (1.15.0)\n",
"Requirement already satisfied: chardet<4,>=3.0.2 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm_watson) (3.0.4)\n",
"Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm_watson) (1.25.9)\n",
"Requirement already satisfied: certifi>=2017.4.17 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm_watson) (2020.4.5.2)\n",
"Requirement already satisfied: idna<3,>=2.5 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from requests<3.0,>=2.0->ibm_watson) (2.9)\n",
"Requirement already satisfied: PyJWT>=1.7.1 in /home/jupyterlab/conda/envs/python/lib/python3.6/site-packages (from ibm-cloud-sdk-core==1.5.1->ibm_watson) (1.7.1)\n"
]
}
],
"source": [
"#you will need the following library \n",
"!pip install ibm_watson wget"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"ref0\">Speech to Text</h2>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>First we import <code>SpeechToTextV1</code> from <code>ibm_watson</code>.For more information on the API, please click on this <a href=\"https://cloud.ibm.com/apidocs/speech-to-text?code=python\">link</a></p>"
]
},
{
"cell_type": "code",
"execution_count": 40,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from ibm_watson import SpeechToTextV1 \n",
"import json\n",
"from ibm_cloud_sdk_core.authenticators import IAMAuthenticator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>The service endpoint is based on the location of the service instance, we store the information in the variable URL. To find out which URL to use, view the service credentials.</p>"
]
},
{
"cell_type": "code",
"execution_count": 41,
"metadata": {},
"outputs": [],
"source": [
"url_s2t = \"https://api.eu-gb.speech-to-text.watson.cloud.ibm.com/instances/f514f4bc-c512-4cc8-9d28-dda64040b039\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>You require an API key, and you can obtain the key on the <a href=\"https://cloud.ibm.com/resources\">Dashboard </a>.</p>"
]
},
{
"cell_type": "code",
"execution_count": 42,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"iam_apikey_s2t = \"flTjEJKQI6pbAvKNBLac9QHCYX13KOIDOFd_J2JFsLLO\""
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>You create a <a href=\"http://watson-developer-cloud.github.io/python-sdk/v0.25.0/apis/watson_developer_cloud.speech_to_text_v1.html\">Speech To Text Adapter object</a> the parameters are the endpoint and API key.</p>"
]
},
{
"cell_type": "code",
"execution_count": 43,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"<ibm_watson.speech_to_text_v1_adapter.SpeechToTextV1Adapter at 0x7f5068f10da0>"
]
},
"execution_count": 43,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"authenticator = IAMAuthenticator(iam_apikey_s2t)\n",
"s2t = SpeechToTextV1(authenticator=authenticator)\n",
"s2t.set_service_url(url_s2t)\n",
"s2t"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>Lets download the audio file that we will use to convert into text.</p>"
]
},
{
"cell_type": "code",
"execution_count": 44,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2020-06-28 07:49:12-- https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/PolynomialRegressionandPipelines.mp3\n",
"Resolving s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)... 67.228.254.196\n",
"Connecting to s3-api.us-geo.objectstorage.softlayer.net (s3-api.us-geo.objectstorage.softlayer.net)|67.228.254.196|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 4234179 (4.0M) [audio/mpeg]\n",
"Saving to: ‘PolynomialRegressionandPipelines.mp3’\n",
"\n",
"PolynomialRegressio 100%[===================>] 4.04M 5.32MB/s in 0.8s \n",
"\n",
"2020-06-28 07:49:13 (5.32 MB/s) - ‘PolynomialRegressionandPipelines.mp3’ saved [4234179/4234179]\n",
"\n"
]
}
],
"source": [
"!wget -O PolynomialRegressionandPipelines.mp3 https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/labs/PolynomialRegressionandPipelines.mp3\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We have the path of the wav file we would like to convert to text</p>"
]
},
{
"cell_type": "code",
"execution_count": 45,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"filename='PolynomialRegressionandPipelines.mp3'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We create the file object <code>wav</code> with the wav file using <code>open</code> ; we set the <code>mode</code> to \"rb\" , this is similar to read mode, but it ensures the file is in binary mode.We use the method <code>recognize</code> to return the recognized text. The parameter audio is the file object <code>wav</code>, the parameter <code>content_type</code> is the format of the audio file.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"with open(filename, mode=\"rb\") as wav:\n",
" response = s2t.recognize(audio=wav, content_type='audio/mp3')"
]
},
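The `result` attribute is plain JSON; the cells below pull out only the first alternative of the first result. As a sketch (the helper name is ours; the `results`/`alternatives` nesting is the shape used in this notebook), the full transcript can be assembled by joining every recognized chunk:

```python
# Sketch: join every recognized chunk into one transcript. Assumes the
# Speech to Text JSON shape:
#   {"results": [{"alternatives": [{"transcript": "..."}]}, ...]}
def full_transcript(stt_json):
    """Concatenate the top alternative of each result, trimming whitespace."""
    return " ".join(
        result["alternatives"][0]["transcript"].strip()
        for result in stt_json.get("results", []))
```

For example, `full_transcript(response.result)` would return the whole recording as one string.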
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>The attribute result contains a dictionary that includes the translation:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response.result"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from pandas.io.json import json_normalize\n",
"\n",
"json_normalize(response.result['results'],\"alternatives\")"
]
},
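Note that `pandas.io.json.json_normalize` is deprecated in newer pandas releases; the same call is available as `pandas.json_normalize`. A minimal sketch with sample data shaped like the Speech to Text response (the values are illustrative, not real output):

```python
import pandas as pd

# Sample shaped like response.result["results"] (values are illustrative)
sample_results = [
    {"alternatives": [{"confidence": 0.94, "transcript": "in this video"}],
     "final": True},
]

# pd.json_normalize replaces the deprecated pandas.io.json.json_normalize;
# "alternatives" is the record_path to flatten into rows
df = pd.json_normalize(sample_results, "alternatives")
print(df.columns.tolist())
```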
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can obtain the recognized text and assign it to the variable <code>recognized_text</code>:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"recognized_text=response.result['results'][0][\"alternatives\"][0][\"transcript\"]\n",
"type(recognized_text)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h2 id=\"ref1\">Language Translator</h2>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>First we import <code>LanguageTranslatorV3</code> from ibm_watson. For more information on the API click <a href=\"https://cloud.ibm.com/apidocs/speech-to-text?code=python\"> here</a></p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from ibm_watson import LanguageTranslatorV3"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>The service endpoint is based on the location of the service instance, we store the information in the variable URL. To find out which URL to use, view the service credentials.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"url_lt='https://gateway.watsonplatform.net/language-translator/api'"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>You require an API key, and you can obtain the key on the <a href=\"https://cloud.ibm.com/resources\">Dashboard</a>.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"apikey_lt=''"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>API requests require a version parameter that takes a date in the format version=YYYY-MM-DD. This lab describes the current version of Language Translator, 2018-05-01</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"version_lt='2018-05-01'"
]
},
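Under the hood, the SDK sends `version` as a query-string parameter on every request. A sketch of how the raw REST URL would be assembled (`/v3/translate` is the documented Language Translator route; the rest of the assembly is illustrative):

```python
from urllib.parse import urlencode

url_lt = "https://gateway.watsonplatform.net/language-translator/api"
version_lt = "2018-05-01"

# The SDK appends the version as ?version=YYYY-MM-DD on each call
request_url = f"{url_lt}/v3/translate?{urlencode({'version': version_lt})}"
print(request_url)
```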
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>we create a Language Translator object <code>language_translator</code>:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"authenticator = IAMAuthenticator(apikey_lt)\n",
"language_translator = LanguageTranslatorV3(version=version_lt,authenticator=authenticator)\n",
"language_translator.set_service_url(url_lt)\n",
"language_translator"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can get a Lists the languages that the service can identify.\n",
"The method Returns the language code. For example English (en) to Spanis (es) and name of each language.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"from pandas.io.json import json_normalize\n",
"\n",
"json_normalize(language_translator.list_identifiable_languages().get_result(), \"languages\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can use the method <code>translate</code> this will translate the text. The parameter text is the text. Model_id is the type of model we would like to use use we use list the the langwich . In this case, we set it to 'en-es' or English to Spanish. We get a Detailed Response object translation_response</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"translation_response = language_translator.translate(\\\n",
" text=recognized_text, model_id='en-es')\n",
"translation_response"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>The result is a dictionary.</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": false,
"jupyter": {
"outputs_hidden": false
}
},
"outputs": [],
"source": [
"translation=translation_response.get_result()\n",
"translation"
]
},
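The dictionary contains a `translations` list alongside word and character counts. A sketch of pulling the translated string out defensively (the helper name and sample values are ours, for illustration):

```python
# Sample shaped like a Language Translator result (values are illustrative)
sample_translation = {
    "translations": [{"translation": "hola"}],
    "word_count": 1,
    "character_count": 5,
}

def first_translation(result):
    """Return the first translated string, or None if there is none."""
    translations = result.get("translations") or []
    return translations[0].get("translation") if translations else None

print(first_translation(sample_translation))
```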
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can obtain the actual translation as a string as follows:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"spanish_translation =translation['translations'][0]['translation']\n",
"spanish_translation "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can translate back to English</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"translation_new = language_translator.translate(text=spanish_translation ,model_id='es-en').get_result()"
]
},
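Round-tripping (English → Spanish → English) is a quick sanity check on translation quality: the closer the round trip is to the original, the better. A sketch of a simple word-overlap score (the metric and helper are ours, for illustration only, not a real evaluation measure):

```python
def word_overlap(original, round_trip):
    """Fraction of the original's unique words that survive the round trip."""
    original_words = set(original.lower().split())
    round_trip_words = set(round_trip.lower().split())
    if not original_words:
        return 0.0
    return len(original_words & round_trip_words) / len(original_words)

print(word_overlap("hello world", "hello there world"))  # → 1.0
```

For example, `word_overlap(recognized_text, translation_eng)` gives a rough idea of how much of the transcript survives the en → es → en round trip.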
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can obtain the actual translation as a string as follows:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"translation_eng=translation_new['translations'][0]['translation']\n",
"translation_eng"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<p>We can convert it to french as well:</p>"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"French_translation=language_translator.translate(\n",
" text=translation_eng , model_id='en-fr').get_result()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"French_translation['translations'][0]['translation']"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
" <a href=\"http://cocl.us/NotebooksPython101bottom\"><img src=\"https://ibm.box.com/shared/static/irypdxea2q4th88zu1o1tsd06dya10go.png\" width=\"750\" align=\"center\"></a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<b>References</b>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"https://cloud.ibm.com/apidocs/speech-to-text?code=python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"https://cloud.ibm.com/apidocs/language-translator?code=python"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<hr>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<h4>About the Author:</h4>\n",
"<p><a href=\"https://www.linkedin.com/in/joseph-s-50398b136/\">Joseph Santarcangelo</a> has a PhD in Electrical Engineering, his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD.</p>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Other contributor: <a href=\"https://www.linkedin.com/in/fanjiang0619/\">Fan Jiang</a>"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Copyright &copy; 2019 [cognitiveclass.ai](https:cognitiveclass.ai). This notebook and its source code are released under the terms of the [MIT License](cognitiveclass.ai)."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python",
"language": "python",
"name": "conda-env-python-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}