OpenAI / DeepLearning.AI. Building Systems with the ChatGPT API, L7: Build an End-to-End System.
{
"metadata": {
"kernelspec": {
"name": "python",
"display_name": "Python (Pyodide)",
"language": "python"
},
"language_info": {
"codemirror_mode": {
"name": "python",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8"
}
},
"nbformat_minor": 4,
"nbformat": 4,
"cells": [
{
"cell_type": "markdown",
"source": "<img src=\"https://wordpress.deeplearning.ai/wp-content/uploads/2023/05/OpenAI-DeepLearning_Short_Courses_Campaign_1080.png\"/>",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "# L7: Build an End-to-End System\n\nThis puts together the chain of prompts that you saw throughout the course.",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "## Setup\n#### Load the API key and relevant Python libaries.\nIn this course, we've provided some code that loads the OpenAI API key for you.",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```python\nimport openai # Load OpenAI API library constructor/object\nimport os # Set your openai API key secretly, at the \n # operating system, using os constructor\nimport sys # access to system-specific parameters and functions in Python\nsys.path.append('../..') # it searches modules to be imported at the new directory path '../..'\nimport utils # make common patterns shorter and easier in python, \n # via small functions and classes.\n\nimport panel as pn # 'pn' Panel constructor, to build a Chatbot User Interface\npn.extension() # Initialize the panel extension at a notebook, to get there interactive\n # data apps, displaying user message and response together, at one panel widget.\nimport tiktoken # get tokenizer constructor to get input, output and total tokens count\n\nfrom dotenv import load_dotenv, find_dotenv # Get load and Read functions\n_ = load_dotenv(find_dotenv()) # Read and then Load local file called '.env' that contains API key\n # at os -> environ dictionary {} -> ['OPENAI_API_KEY'] key -> write API key value as 'sk-...'\n\n# Get an openai API key from the OpenAI website, and set your API key as openai.api_key=\"sk-...\" \nopenai.api_key = os.environ['OPENAI_API_KEY'] # This way you set/store the API key more securely, as an\n # environmental variable, into the operating system, using\n # openai.api_key=os.getenv('OPEN_API_KEY')\n # so the API key's value is set in the jupyter environment,\n # by selecting this written API 'key' value at: \n # os -> environ dictionary {} -> ['OPENAI_API_KEY'] key \n # -> API key value is a text/string 'sk-...'\n```",
"metadata": {}
},
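{
"cell_type": "markdown",
"source": "The `tiktoken` import above is only used for token counting and is not called again in this notebook. As a minimal illustration (a sketch added for this write-up, not part of the course code, using an example prompt string), this is one way to count the tokens in a prompt for `gpt-3.5-turbo`:\n\n```python\n# Sketch: counting tokens with tiktoken (assumes the imports above have run)\nencoding = tiktoken.encoding_for_model('gpt-3.5-turbo')   # tokenizer matching the chat model\n\nprompt = 'tell me about the smartx pro phone'             # example prompt\ntoken_ids = encoding.encode(prompt)                       # list of integer token ids\nprint(len(token_ids), 'tokens')                           # prompt length in tokens\n```",
"metadata": {}
},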
{
"cell_type": "markdown",
"source": "```python\ndef get_completion_from_messages(messages, # List of multiple messages [{},{},...]\n model=\"gpt-3.5-turbo\", # model's name used\n temperature=0, # Randomness in model's response \n max_tokens=500): # max input+output characters\n response = openai.ChatCompletion.create(\n model=model, # \"gpt-3.5-turbo\"\n messages=messages, # [{},{},...]\n temperature=temperature, # this is the degree of randomness/exploration in the model's output. \n # Max exploration occurs when temperature is closer to 1\n max_tokens=max_tokens, # maximum number of tokens (common sequences of characters) \n # the model can accept as input'context' + output'completion' \n )\n return response.choices[0].message[\"content\"] # model's response message or output completion\n```",
"metadata": {}
},
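{
"cell_type": "markdown",
"source": "As a quick sanity check of the helper (a usage sketch added here, not part of the original notebook; it assumes the API key was loaded above), you can call it with a minimal message list:\n\n```python\n# Sketch: minimal call to get_completion_from_messages\nmessages = [\n    {'role': 'system', 'content': 'You are a helpful assistant.'},\n    {'role': 'user', 'content': 'Say hello in one short sentence.'}\n]\nprint(get_completion_from_messages(messages))   # prints the model's completion text\n```",
"metadata": {}
},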
{
"cell_type": "markdown",
"source": "## System of chained prompts for processing the user query",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```python\ndef process_user_message(user_input, # 'user' message/value \r\n all_messages, # empty list [] + filled with [{},{},...] multiple messages\r\n debug=True): # Boolean flag=True, to execute each step code via if True: -> if debug:\r\n \r\n delimiter = \"```\" # Create delimiter of 'user' messages as ```\r\n \r\n # Step 1: Check input to see if it flags at the 'Moderation' API output, or it's a prompt injection,\r\n # which intents the model doesn't answer the previous query.\r\n response = openai.Moderation.create(input=user_input) # 'Moderation' API evaluates if user_input message is harmful.\r\n # 'Moderation' API Output/response is a obj/dict {}.\r\n moderation_output = response[\"results\"][0] # Dict {}/response with 'categories', 'scores' of harmfulness, \r\n # and 'flagged' boolean output.\r\n # { \"categories\":{...},\r\n # \"cat_scores\":{...},\r\n # \"flagged\": boolean\r\n # }\r\n\r\n if moderation_output[\"flagged\"]: # If 'user' message was catalogued as harmful, at some \"category\",\r\n # then \"flagged\" key value is 'True',\r\n print(\"Step 1: Input flagged by Moderation API.\") # So, print that if debug=true\r\n return \"Sorry, we cannot process this request.\" # and return fail processing user's request.\r\n\r\n if debug: print(\"Step 1: Input passed moderation check.\") # If 'Moderation' Flag was 'False', then code inside that, \r\n # was not executed, so execute this statement instead,\r\n # when debug=True, printing that the input is not harmful.\r\n \r\n # Look for 'products' and 'category' related to input user_message, and get model's response to query,\r\n # as 'string' list of dicts [{},\r\n # {},\r\n # {}]\r\n category_and_product_response = utils.find_category_and_product_only(user_input, utils.get_products_and_category())\r\n #print(category_and_product_response) # Display model's response to user query, about'products' and 'category', \r\n # as 'string' list of dicts [{},{},{}]\r\n \r\n # Step 2: Extract the list of products, converted from 'string' to JSON iterable list of dicts [{},{},{}]\r\n category_and_product_list = utils.read_string_to_list(category_and_product_response)\r\n #print(category_and_product_list) # Display model's response to user query, as info of products,\r\n # by 'name' and 'category' as JSON iterable list of dicts [{},{},{}]\r\n\r\n if debug: print(\"Step 2: Extracted list of products.\") # Execute if statement, saying that this JSON iterable list\r\n # of dicts [{},{},{}] was extracted, as Step 2, when debug=True.\r\n\r\n # Step 3: If products are found, convert list from JSON to 'String', to be look them up by the model, \r\n # Return a 'string' of products/dicts {\"name\":_, \"category\":_, ..., \"price\":_},\r\n # {\"name\":_, \"category\":_, ..., \"price\":_},\r\n # ... \r\n # {\"name\":_, \"category\":_, ..., \"price\":_}\r\n # If no products are found, give an empty 'string' {}.\r\n product_information = utils.generate_output_string(category_and_product_list)\r\n if debug: print(\"Step 3: Looked up product information.\") # Display that info of products as dicts was found\r\n # when debug=True.\r\n # {\"name\":_, \"category\":_, ..., \"price\":_},\r\n # {\"name\":_, \"category\":_, ..., \"price\":_},\r\n # ... 
\r\n # {\"name\":_, \"category\":_, ..., \"price\":_}\r\n\r\n # Step 4: Answer the user question, so we give the conversation history 'system' msg, 'user' msg,\r\n # and the 'assistant' msg, which provides to the model the relevant 'string', \r\n # with the previous product_information dictionaries {},{},...,{}\r\n system_message = f\"\"\"\r\n You are a customer service assistant for a large electronic store. \\\r\n Respond in a friendly and helpful tone, with concise answers. \\\r\n Make sure to ask the user relevant follow-up questions.\r\n \"\"\"\r\n messages = [\r\n {'role': 'system', 'content': system_message}, # messages[0]\r\n {'role': 'user', 'content': f\"{delimiter}{user_input}{delimiter}\"}, # messages[1]\r\n {'role': 'assistant', 'content': f\"Relevant product information:\\n{product_information}\"}\r\n ]\r\n \r\n # First time, we add multiple elements at list above: [{'system'},{'user'},{'assistant'}] \r\n # at the end of all_messages [] empty list, [{'system'},{'user'},{'assistant'}], \r\n # but doesn't store all_messages. This way we get the model response.\r\n final_response = get_completion_from_messages(all_messages + messages)\r\n if debug:print(\"Step 4: Generated response to user question.\") # Print that model response, was generated\r\n # when debug=True.\r\n \r\n # Then, add to [] empty list, just these ones [{'user'},{'assistant'}]\r\n # at the end, because select from messages[1] to end : -> messages[1:], \r\n # so all_messages = [{'user'},{'assistant'}] \r\n all_messages = all_messages + messages[1:] # a+=b or a.extend(b) do the same, \r\n # add multiple b content at the end of a.\r\n\r\n # Step 5: Put the answer/model response through the 'Moderation' API, \r\n # and check if the model output is flagged at the output.\r\n response = openai.Moderation.create(input=final_response) # 'Moderation' API evaluates if model_output message is harmful. 
\r\n # 'Moderation' API Output/response is a obj/dict {}.\r\n \r\n moderation_output = response[\"results\"][0] # Dict {}/response with 'categories', 'scores' of harmfulness, \r\n # and 'flagged' boolean output.\r\n # { \"categories\":{...},\r\n # \"cat_scores\":{...},\r\n # \"flagged\": boolean\r\n # } \r\n \r\n if moderation_output[\"flagged\"]: # If 'assistant' message was catalogued as harmful, at some \"category\",\r\n # then \"flagged\" key value is 'True',\r\n if debug: print(\"Step 5: Response flagged by Moderation API.\") # So, print that, when debug=True,\r\n return \"Sorry, we cannot provide this information.\" # and say it can't display flagged model's message.\r\n\r\n if debug: print(\"Step 5: Response passed moderation check.\") # If 'Moderation' Flag was 'False', then code inside that, \r\n # was not executed, so execute this statement instead,\r\n # when debug=True, printing that the output is not harmful.\r\n\r\n # Step 6: Ask the model if the response answers the initial user query well\r\n user_message = f\"\"\"\r\n Customer message: {delimiter}{user_input}{delimiter}\r\n Agent response: {delimiter}{final_response}{delimiter}\r\n\r\n Does the response sufficiently answer the question?\r\n \"\"\"\r\n messages = [\r\n {'role': 'system', 'content': system_message},\r\n {'role': 'user', 'content': user_message}\r\n ]\r\n \r\n # Comparison/evaluation of model's response, respect to user query.\r\n evaluation_response = get_completion_from_messages(messages)\r\n if debug: print(\"Step 6: Model evaluated the response.\") # Comparison of model's response \r\n # respect to first user query, was done,\r\n # Display this debugging text, when debug=True.\r\n\r\n # Step 7: If evaluation says that model sufficiently answer the first user question as (yes,'Y'), use this answer. \r\n if \"Y\" in evaluation_response: # Using \"in\" instead of \"==\" strictly equal to, to be safer for model output variation \r\n # (e.g., \"Y.\" or \"Yes\")\r\n if debug: print(\"Step 7: Model approved the response.\") # Print that model sufficiently answer the first user question,\r\n # when debug=True.\r\n\r\n return final_response, all_messages # So, return that model response and the stored list \r\n # [{'user'},{'assistant'},...,{'user'},{'assistant'}] \r\n # with 'user' and 'assistant' historic of chats.\r\n\r\n else: # If not (no,'N'), say that you will connect the user to a human.\r\n if debug: print(\"Step 7: Model disapproved the response.\") # Evaluation says that model response doesn't satisfy \r\n # the 'user' query well, when debug=True.\r\n\r\n neg_str = \"I'm unable to provide the information you're looking for. I'll connect you with a human representative for further assistance.\"\r\n\r\n return neg_str, all_messages # So, return the negative 'text' and the stored list \r\n # [{'user'},{'assistant'},...,{'user'},{'assistant'}] \r\n # with 'user' and 'assistant' historic of chats.\r\n\r\n# First 'user' message query with question\r\nuser_input = \"tell me about the smartx pro phone and the fotosnap camera, the dslr one. Also what tell me about your tvs\"\r\n\r\n# Outputs model's 'response' if evaluation is 'Y' yes, negative 'text' if evaluation is 'N' no. \r\n# Also 'all_messages' list with historic of chats.\r\n# Inputs 'user_message' and 'all_messages' list, as a default empty [] list.\r\nresponse,_ = process_user_message(user_input,[]) \r\n\r\nprint(response) # Print model response if evaluation is 'Y' yes, or\r\n # negative 'text' if evaluation is 'N' no.\n```",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```\nStep 1: Input passed moderation check.\nStep 2: Extracted list of products.\nStep 3: Looked up product information.\nStep 4: Generated response to user question.\nStep 5: Response passed moderation check.\nStep 6: Model evaluated the response.\nStep 7: Model approved the response.\nSure! Here's some information about the SmartX ProPhone and the FotoSnap DSLR Camera:\n\n1. SmartX ProPhone:\n - Brand: SmartX\n - Model Number: SX-PP10\n - Features: 6.1-inch display, 128GB storage, 12MP dual camera, 5G connectivity\n - Description: A powerful smartphone with advanced camera features.\n - Price: $899.99\n - Warranty: 1 year\n\n2. FotoSnap DSLR Camera:\n - Brand: FotoSnap\n - Model Number: FS-DSLR200\n - Features: 24.2MP sensor, 1080p video, 3-inch LCD, interchangeable lenses\n - Description: Capture stunning photos and videos with this versatile DSLR camera.\n - Price: $599.99\n - Warranty: 1 year\n\nNow, could you please let me know which specific TV models you are interested in?\n```",
"metadata": {}
},
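{
"cell_type": "markdown",
"source": "The `utils` module is provided by the course environment and its source is not shown in this notebook. Purely as an illustration of what Step 2 does (a simplified, hypothetical sketch, not the actual `utils.read_string_to_list` implementation), a string-to-list helper could look roughly like this:\n\n```python\nimport json\n\n# Hypothetical sketch of a Step 2 helper: parse the model's string output,\n# which should encode a list of dicts, into a Python list.\ndef read_string_to_list_sketch(input_string):\n    if input_string is None:\n        return None\n    try:\n        # the prompt asks the model for JSON-like output; normalize quotes first\n        return json.loads(input_string.replace(\"'\", '\"'))\n    except json.JSONDecodeError:\n        print('Error: invalid JSON string')\n        return None\n\nexample = \"[{'category': 'Televisions and Home Theater Systems', 'products': ['CineView 4K TV']}]\"\nprint(read_string_to_list_sketch(example))   # list with one category/products dict\n```",
"metadata": {}
},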
{
"cell_type": "markdown",
"source": "### Function that collects user and assistant messages over time",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```python\ndef collect_messages(debug=False): # Accumulate the messages as we \r\n # interact with the assistant.\r\n\r\n user_input = inp.value_input # Get 'user' input message\r\n \r\n if debug: print(f\"User Input = {user_input}\") # When debug=True, print the 'user' message.\r\n \r\n if user_input == \"\": # If there is no 'user' message (it's empty),\r\n return # then return nothing.\r\n \r\n inp.value = '' # Init input 'user' value, as an empty string ''\r\n\r\n global context # Global variables can be used by everyone, both inside of functions and outside.\r\n\r\n #response, context = process_user_message(user_input, context, utils.get_products_and_category(),debug=True)\r\n \r\n # Outputs model's 'response' if evaluation is 'Y' yes, negative 'text' if evaluation is 'N' no. \r\n # Also 'all_messages' or 'context' list with historic of chats\r\n # [{'new_system'},{'new_user'},{'product_info_assistant'}]\r\n # Inputs 'new_user' message and 'all_messages' or 'context' list, with just 'new_system' message.\r\n response, context = process_user_message(user_input, context, debug=False)\r\n \r\n # context=[{'new_system'},{'new_user'},{'product_info_assistant'},{'final_response_assistant'}]\r\n context.append({'role':'assistant', 'content':f\"{response}\"}) # Add final 'response' message to previous 'context' \r\n # list of messages, from model, at {'assistant'} role.\r\n \r\n ### Build User Interface UI ###\r\n # Append '|User: |user_input||' Row panel, to 'panels' empty list [], with panel Row objects.\r\n panels.append(\r\n pn.Row('User:', pn.pane.Markdown(user_input, width=600))) # Create |Row panel->| and append to 'panels' list.\r\n\r\n # Append '|Assistant: |response||' Row panel, to 'panels' empty list, with panel Row objects.\r\n panels.append(\r\n pn.Row('Assistant:', pn.pane.Markdown(response, width=600, style={'background-color': '#F6F6F6'}))) # Create |Row panel->| and \r\n # append to 'panels' list.\r\n # '#F6F6F6' is light red\r\n # background color.\r\n # Add 'panels' list = [User Row panel, Assistant Row panel], into a unique Column panel.\r\n return pn.Column(*panels) # and return the Column panel, which includes both Row panels.\r\n # ---------------------\r\n # |User: user_input |\r\n # |Assistant: response|\r\n # ---------------------\n```",
"metadata": {}
},
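{
"cell_type": "markdown",
"source": "`collect_messages()` depends on the Panel widgets defined below. To exercise the same accumulating-history pattern without a UI (a sketch added for illustration, not part of the course code), a plain console loop around `process_user_message()` could look roughly like this. Note that when the Moderation API flags a message, `process_user_message()` returns a single string instead of a `(response, all_messages)` tuple, so the sketch checks for that:\n\n```python\n# Sketch: console chat loop that accumulates history across turns\ncontext = []   # chat history, shared across turns (starts empty, as in the earlier example call)\nwhile True:\n    user_input = input('You: ')\n    if user_input == '':\n        break   # an empty line ends the loop\n\n    result = process_user_message(user_input, context, debug=False)\n    if isinstance(result, tuple):\n        response, context = result   # normal case: response plus updated history\n    else:\n        response = result            # moderation-flagged case: only a refusal string is returned\n\n    context.append({'role': 'assistant', 'content': response})\n    print('Assistant:', response)\n```",
"metadata": {}
},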
{
"cell_type": "markdown",
"source": "### Chat with the chatbot!\r\nNote that the 'system' message includes detailed instructions at Step 4 of 'process_user_message()' function, about what Customer Service Assistant should do.",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```python\npanels = [] # Empty list of panel Row objects, to be passed into panel Column, for chat displaying. \r\n\r\n# context = all_messages, initial list with 'system' as first message.\r\n# The second/detailed 'system' message, is given at Step 4 of 'process_user_messages()' function.\r\ncontext = [ {'role':'system', 'content':\"You are Service Assistant\"} ] \r\n\r\n# Create the |text| input box for the 'user', as new input value.\r\n# Recall that it was initialized as an empty 'string', inp.value='' \r\n# at 'collect_messages()' function.\r\ninp = pn.widgets.TextInput( placeholder='Enter text here…')\r\n\r\n# Create the Button |Service Assistant| to trigger/execute the chatting\r\n# event, each time it is clicked <-.\r\nbutton_conversation = pn.widgets.Button(name=\"Service Assistant\")\r\n\r\n# pn.bind(function, trigger) executes 'collect_messages' function,\r\n# each time the trigger 'button_conversation' is clicked <-, so it returns\r\n# a binding panel object (to be input to a panel function in dashboard), \r\n# of 1 Column panel, with both, 'user' msg and 'assistant' \r\n# response Row panels, each time this trigger button is clicked <-.\r\ninteractive_conversation = pn.bind(collect_messages, button_conversation)\r\n\r\n# Attach one single element at a time (not as list -> *[_,_,_]), in 1 panel Column / Dashboard: \r\ndashboard = pn.Column(\r\n inp, # |'text'| input box with 'user' message.\r\n pn.Row(button_conversation), # |Trigger Button|<- to execute one chatting event per click <-.\r\n \r\n # And panel of 1 Column with 2 Rows inside, including 'user' message and 'assistant' response.\r\n # Add a loading indicator and its height, when 'interactive_conversation' panel (obj) occurs.\r\n pn.panel(interactive_conversation, loading_indicator=True, height=300),\r\n)\r\n\r\ndashboard # dasboard = |Main Panel Column| attached with 3 elements -> \r\n # |'user_input'| text box, \r\n # |'Service_Assistant'|<- trigger button,\r\n # |Column panel| with 'user' and 'assistant' messages.\r\n # (A new |Column panel| appears each time a user query is triggered by trigger button).\n```",
"metadata": {}
},
{
"cell_type": "markdown",
"source": "```\n-----------------------------------\n| Enter Text Here... |\n-----------------------------------\n---------------------\n| Service Assistant | <--\n---------------------\n\nUser: What tvs do you have?\n\nAssistant: We have a wide range of TVs available, including LED, QÑED, OLED, and Smart TVs. Could you please\n provide me with more specific details about what you are looking for? For example, what size, brand or\n features are you interested in?\n\nUser: Which is the cheapest tv?\n\nAssistant: The cheapest TV we have is the CineView 4K TV, price at $599.99. It features a 55-inch display, 4K\n resolution, HDR and Smart TV capabilities. Is there anything else I can assist you with?\n\nUser: Which is the most expensive tv?\n\nAssistant: The most expensive TV is the CineView 8K TV, priced at $2999.99. It features a 65-inch\n display, 8K resolution, HDR and Smart TV capabilities. Is there anything else I can assist you with?\n\nUser: Tell me more about CineView 8K TV...\n\nAssistant: The CineView 8K TV is a top-of-the-line television that offers an exceptional viewing experience. here\n are some key features:\n - Display: It features a large 65-inch display, providing a truly immersive viewing experience.\n - Resolution: The TV boasts an impressive 8K resolution, delivering incredibly sharp and defined\n images.\n - HDR: High Dynamic Range (HDR) technology enhaces the contrast and color accuracy, resulting\n in vibrant and lifelike visuals.\n - Smart TV: The CineView 8K TV is a Smart TV, allowing you to access a variety of streaming\n services, apps and online content.\n - Warranty: It comes with a 2-years warranty, providing peace of mind for your purchase.\n \n Please let me konw if you have any specific questions or if there's anything else I can assist you with! \n```",
"metadata": {}
},
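{
"cell_type": "markdown",
"source": "If you want to run the same chatbot outside the course's notebook environment (a sketch under the assumption that `panel`, the OpenAI key, and the cells above are all available locally), Panel can serve the dashboard as a small standalone web app:\n\n```python\n# Sketch: serve the dashboard as a standalone web app in a browser tab\npn.serve(dashboard, port=5006, show=True)   # port and show=True are illustrative defaults\n```",
"metadata": {}
},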
{
"cell_type": "code",
"source": "",
"metadata": {
"trusted": true
},
"outputs": [],
"execution_count": null
}
]
}