@jayo78
Last active April 27, 2024 16:37
import openai

openai.api_key = "YOUR API KEY HERE"

model_engine = "text-davinci-003"

chatbot_prompt = """
As an advanced chatbot, your primary goal is to assist users to the best of your ability. This may involve answering questions, providing helpful information, or completing tasks based on user input. In order to effectively assist users, it is important to be detailed and thorough in your responses. Use examples and evidence to support your points and justify your recommendations or solutions.
<conversation_history>
User: <user input>
Chatbot:"""


def get_response(conversation_history, user_input):
    # Fill the prompt template with the running history and the new message
    prompt = chatbot_prompt.replace(
        "<conversation_history>", conversation_history).replace(
        "<user input>", user_input)

    # Get the response from GPT-3
    response = openai.Completion.create(
        engine=model_engine, prompt=prompt, max_tokens=2048, n=1,
        stop=None, temperature=0.5)

    # Extract the generated text from the response object
    response_text = response["choices"][0]["text"]
    chatbot_response = response_text.strip()
    return chatbot_response


def main():
    conversation_history = ""
    while True:
        user_input = input("> ")
        if user_input == "exit":
            break
        chatbot_response = get_response(conversation_history, user_input)
        print(f"Chatbot: {chatbot_response}")
        conversation_history += f"User: {user_input}\nChatbot: {chatbot_response}\n"


if __name__ == "__main__":
    main()
@yumeminami commented Feb 12, 2023

What if the length of the conversation history exceeds the range allowed by the API?

@jayo78 (Author) commented Feb 12, 2023

> What if the length of the conversation history exceeds the range allowed by the API?

You can use semantic search to find relevant parts of the previous conversation to insert before generating a response. David Shapiro has a great video on this: https://www.youtube.com/watch?v=c3aiCrk0F0U
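To make the retrieval idea concrete, here is a toy, runnable sketch. Real semantic search would embed each past exchange (e.g. with an embedding model) and rank by cosine similarity; as a stand-in that needs no API key, this example scores past exchanges by simple word overlap with the new user input and keeps only the top matches for the prompt. None of this is the gist's code; `relevant_history` and the sample exchanges are invented for illustration.

```python
import re


def words(text):
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))


def score(query, exchange):
    """Count how many words the query shares with a past exchange."""
    return len(words(query) & words(exchange))


def relevant_history(exchanges, user_input, k=2):
    """Return the k past exchanges that best match the new user input."""
    ranked = sorted(exchanges, key=lambda e: score(user_input, e), reverse=True)
    return "\n".join(ranked[:k])


exchanges = [
    "User: What's the capital of France?\nChatbot: Paris.",
    "User: How do I boil an egg?\nChatbot: Simmer it for about 7 minutes.",
    "User: Any good museums in Paris?\nChatbot: The Louvre is a classic.",
]

# The museum exchange shares the most words ("paris", "museums") with the query.
print(relevant_history(exchanges, "Tell me more about Paris museums", k=1))
```

Swapping `score` for an embedding-based similarity keeps the rest of the structure unchanged: only the ranking function needs to know how relevance is measured.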

@Nic-DC commented Mar 8, 2023

Hi, thank you for putting code like this out in the open. I'm a newbie web developer and stumbled upon your work while going through learnprompting.org. I was inspired, refactored it into JavaScript, and built a React front-end to test it out. The server-side code works well (tested with Postman, and the AI responses are valid), but I cannot figure out why the response on the client side is not what it should be. If you're still following this thread and in the mood to share your expertise, I would very much appreciate it :)

@sjbaines commented Apr 5, 2023

I tried this, and wondered why it didn't seem to be remembering anything I'd said.
Then I noticed that in the prompt you have <conversation history> (with a space), but in the replace command you use <conversation_history> (with an underscore), so no replacement actually happens...

@jayo78 (Author) commented Apr 8, 2023

> I tried this, and wondered why it didn't seem to be remembering anything I'd said. Then I noticed that in the prompt you have <conversation history> (with a space), but in the replace command you use <conversation_history> (with an underscore), so no replacement actually happens...

Good catch - just updated. Thanks!

@natanaelfonseca

Hello, thank you very much for this example. As a Java programmer learning Python with a focus on generative AI, picking up prompt engineering along the way has been a very pleasant adventure.

@Temitayolds

love this

@KayceeACollins

Ah! Thank you - found what I was looking to learn here! Thank you @jayo78 !!
