{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "TW7yEtaWCUQL"
},
"source": [
"# Overview of Prompt Engineering Techniques & Best Practices"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "g8TD6U9JCUQO"
},
"source": [
"## Part 1: Prompt Engineering Best Practices\n",
"\n",
"In this section, we provide an overview of the top tips and best practices for prompting LLMs."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "byL1RG2UCUQO"
},
"source": [
"---"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "nMKUTILJCUQP"
},
"source": [
"We first load the necessary libraries:"
]
},
{
"cell_type": "code",
"source": [
"! pip install openai==0.28 --quiet"
],
"metadata": {
"id": "FCsZehLGCilc"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "MlbaTUatCUQP"
},
"outputs": [],
"source": [
"import openai\n",
"import os\n",
"import IPython\n",
"\n",
"# API configuration\n",
"openai.api_key = \"OPENAI_API_KEY\""
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "F09KkfieCUQQ"
},
"outputs": [],
"source": [
"# completion function\n",
"def get_completion(messages, model=\"gpt-3.5-turbo\", temperature=0, max_tokens=300):\n",
" response = openai.ChatCompletion.create(\n",
" model=model,\n",
" messages=messages,\n",
" temperature=temperature,\n",
" max_tokens=max_tokens,\n",
" )\n",
" return response.choices[0].message[\"content\"]"
]
},
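{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check, we can call the helper with a minimal message list (this assumes a valid `OPENAI_API_KEY` is set in your environment):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Smoke test for the helper defined above\n",
"messages = [\n",
"    {\n",
"        \"role\": \"user\",\n",
"        \"content\": \"Reply with the single word: ready\"\n",
"    }\n",
"]\n",
"\n",
"print(get_completion(messages))"
]
},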
{
"cell_type": "markdown",
"metadata": {
"id": "pWBwLW6kCUQR"
},
"source": [
"### Be Specific and Clear"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "NfhCFzCOCUQR"
},
"source": [
"Write instructions as clear and specific as possible to get the desired LLM behaviors:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "vUWyqyT9CUQR"
},
"outputs": [],
"source": [
"global_trending_movies = [\"The Suicide Squad\", \"No Time to Die\", \"Dune\", \"Spider-Man: No Way Home\", \"The French Dispatch\", \"Black Widow\", \"Eternals\", \"The Matrix Resurrections\", \"West Side Story\", \"The Many Saints of Newark\"]\n",
"\n",
"system_message = \"\"\"\n",
"Your task is to recommend movies to a customer.\n",
"\n",
"You are responsible to recommend a movie from the top global trending movies from {global_trending_movies}.\n",
"\n",
"You should refrain from asking users for their preferences and avoid asking for personal information.\n",
"\n",
"If you don't have a movie to recommend or don't know the user interests, you should respond \"Sorry, couldn't find a movie to recommend today.\".\n",
"\"\"\"\n",
"\n",
"user_request = \"\"\"\n",
"Please recommend a movie based on my interests.\n",
"\"\"\"\n",
"\n",
"message = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": system_message.format(global_trending_movies=global_trending_movies)\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": user_request\n",
" }\n",
"]\n",
"\n",
"response = get_completion(message)\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "b6hb7537CUQS"
},
"source": [
"The more specific the desired the behavior you want from the model, the more specific the instructions and logic should be. Below is an example where the customer provides information about interests:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "d1sfCPwKCUQT"
},
"outputs": [],
"source": [
"global_trending_movies = [\"The Suicide Squad\", \"No Time to Die\", \"Dune\", \"Spider-Man: No Way Home\", \"The French Dispatch\", \"Black Widow\", \"Eternals\", \"The Matrix Resurrections\", \"West Side Story\", \"The Many Saints of Newark\"]\n",
"\n",
"system_message = \"\"\"\n",
"Your task is to recommends movies to a customer.\n",
"\n",
"You are responsible to recommend a movie from the top global trending movies from {global_trending_movies}.\n",
"\n",
"You should refrain from asking users for their preferences and avoid asking for personal information.\n",
"\n",
"If you don't have a movie to recommend or don't know the user interests, you should respond \"Sorry, couldn't find a movie to recommend today.\".\n",
"\"\"\"\n",
"\n",
"user_request = \"\"\"\n",
"I love super-hero movies. Please recommend a movie based on my interests.\n",
"\"\"\"\n",
"\n",
"message = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": system_message.format(global_trending_movies=global_trending_movies)\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": user_request\n",
" }\n",
"]\n",
"\n",
"response = get_completion(message)\n",
"print(response)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "6IZtFaB9CUQT"
},
"source": [
"### Add Delimiters\n",
"\n",
"Adding delimiters help to better structure instructions and the overall prompt components. This is beneficial to get more reliable responses."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "Gy8_gFBnCUQU"
},
"outputs": [],
"source": [
"prompt = \"\"\"\n",
"Convert the following code block in the #### <code> #### section to Python:\n",
"\n",
"####\n",
"strings2.push(\"one\")\n",
"strings2.push(\"two\")\n",
"strings2.push(\"THREE\")\n",
"strings2.push(\"4\")\n",
"####\n",
"\"\"\"\n",
"\n",
"message = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": prompt\n",
" }\n",
"]\n",
"\n",
"IPython.display.Markdown(\"```python\" + get_completion(message) + \"\\n```\")"
]
},
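{
"cell_type": "markdown",
"metadata": {},
"source": [
"Delimiters are also useful for separating instructions from arbitrary user-supplied text, so the model is less likely to treat that text as instructions. Below is a minimal sketch using triple backticks as the delimiter (the sample text is just an illustration):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A minimal sketch: the delimiter separates the instruction from the user text\n",
"user_text = \"Prompt engineering is the practice of designing inputs that elicit the desired behavior from a language model.\"\n",
"\n",
"prompt = f\"\"\"\n",
"Summarize the text delimited by ``` in one sentence.\n",
"\n",
"```{user_text}```\n",
"\"\"\"\n",
"\n",
"message = [\n",
"    {\n",
"        \"role\": \"user\",\n",
"        \"content\": prompt\n",
"    }\n",
"]\n",
"\n",
"print(get_completion(message))"
]
},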
{
"cell_type": "markdown",
"metadata": {
"id": "leL09Rl1CUQU"
},
"source": [
"### Specify Output Format\n",
"\n",
"If the format of prompt responses are important, then this should be explicitly stated in the prompt to get desired results. In the example, we would like to export the results as a JSON object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "9FBIuxnKCUQU"
},
"outputs": [],
"source": [
"prompt = \"\"\"\n",
"Your task is: given a product description, return the requested information in the section delimited by ### ###. Format the output as a JSON object.\n",
"\n",
"Product Description: Introducing the Nike Air Max 270 React: a comfortable and stylish sneaker that combines two of Nike's best technologies. With a sleek black design and a unique bubble sole, these shoes are perfect for everyday wear.\n",
"\n",
"###\n",
"product_name: the name of the product\n",
"product_bran: the name of the brand (if any)\n",
"###\n",
"\"\"\"\n",
"\n",
"message = [\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": prompt\n",
" }\n",
"]\n",
"\n",
"print(get_completion(message))"
]
},
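{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because we asked for a JSON object, the response can be parsed programmatically. A minimal sketch follows; note that the model is not guaranteed to return valid JSON, so we guard the parse:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import json\n",
"\n",
"raw = get_completion(message)\n",
"\n",
"# The format instruction is usually honored, but parsing can still fail,\n",
"# so we hedge with a try/except instead of assuming valid JSON\n",
"try:\n",
"    product = json.loads(raw)\n",
"    print(product.get(\"product_name\"), \"-\", product.get(\"product_brand\"))\n",
"except json.JSONDecodeError:\n",
"    print(\"Model did not return valid JSON:\\n\", raw)"
]
},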
{
"cell_type": "markdown",
"metadata": {
"id": "V8XnZQ3kCUQU"
},
"source": [
"### Think Step by Step\n",
"\n",
"To elicit reasoning in LLMs, you can prompt the model to think step-by-step. Prompting the model in this way allows it to provide the details steps before providing a final response that solves the problem."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "uKQz73H8CUQV"
},
"outputs": [],
"source": [
"prompt = \"\"\"The odd numbers in this group add up to an even number: 15, 32, 5, 13, 82, 7, 1.\n",
"\n",
"Solve by breaking the problem into steps. First, identify the odd numbers, add them, and indicate whether the result is odd or even.\"\"\"\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": prompt\n",
" }\n",
"]\n",
"\n",
"response= get_completion(messages)\n",
"\n",
"print(response)"
]
},
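{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can verify the arithmetic independently of the model: the odd numbers are 15, 5, 13, 7, and 1, which sum to 41, an odd number, so the claim in the prompt is actually false. A quick check in plain Python:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Verify the claim from the prompt without relying on the model\n",
"numbers = [15, 32, 5, 13, 82, 7, 1]\n",
"odds = [n for n in numbers if n % 2 == 1]\n",
"total = sum(odds)\n",
"print(\"Odd numbers:\", odds)\n",
"print(\"Sum:\", total, \"->\", \"even\" if total % 2 == 0 else \"odd\")"
]
},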
{
"cell_type": "markdown",
"metadata": {
"id": "YYyOpLGuCUQV"
},
"source": [
"### Role Playing\n",
"\n",
"The example below shows how to apply role playing using a chat model like GPT-3.5 Turbo. Notice the use of system message, user message, and assistant message in the example. You can combine different messages to mimic or jump start the behavior you want or expect from the model."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "FrXEZb-MCUQV"
},
"outputs": [],
"source": [
"system_message = \"\"\"\n",
"The following is a conversation with an AI research assistant. The assistant tone is technical and scientific.\n",
"\"\"\"\n",
"\n",
"user_message_1 = \"\"\"\n",
"Hello, who are you?\n",
"\"\"\"\n",
"\n",
"ai_message_1 = \"\"\"\n",
"Greeting! I am an AI research assistant. How can I help you today?\n",
"\"\"\"\n",
"\n",
"prompt = \"\"\"\n",
"Human: Can you tell me about the creation of blackholes?\n",
"AI:\n",
"\"\"\"\n",
"\n",
"messages = [\n",
" {\n",
" \"role\": \"system\",\n",
" \"content\": system_message\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": user_message_1\n",
" },\n",
" {\n",
" \"role\": \"assistant\",\n",
" \"content\": ai_message_1\n",
"\n",
" },\n",
" {\n",
" \"role\": \"user\",\n",
" \"content\": prompt\n",
" }\n",
"]\n",
"\n",
"response = get_completion(messages)\n",
"print(response)"
]
}
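,
{
"cell_type": "markdown",
"metadata": {},
"source": [
"To continue the role-played conversation, append the assistant's reply and the next user turn to the message list before calling the model again. A minimal sketch (the follow-up question is just an example):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Continue the conversation: append the assistant's reply and a follow-up turn\n",
"messages.append({\"role\": \"assistant\", \"content\": response})\n",
"messages.append({\"role\": \"user\", \"content\": \"Can you summarize that in one sentence?\"})\n",
"\n",
"follow_up = get_completion(messages)\n",
"print(follow_up)"
]
}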
],
"metadata": {
"kernelspec": {
"display_name": "comet",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.17"
},
"orig_nbformat": 4,
"colab": {
"provenance": [],
"include_colab_link": true
}
},
"nbformat": 4,
"nbformat_minor": 0
}