
@sunilkumardash9
Created April 28, 2023 18:39
import openai

def generate_response(history, model):
    global messages, cost
    response = openai.ChatCompletion.create(
        model=model,
        messages=messages,
        temperature=0.2,
    )
    response_msg = response.choices[0].message.content
    # Accumulate cost at $0.002 per 1K tokens (the rate hard-coded here)
    cost += response.usage['total_tokens'] * (0.002 / 1000)
    messages = messages + [{"role": "assistant", "content": response_msg}]
    # Stream the reply into the chat history one character at a time
    for char in response_msg:
        history[-1][1] += char
        # time.sleep(0.05)  # optional: slow the stream for a typing effect
        yield history
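The character-by-character `yield` at the end of the generator is the part that produces the streaming "typing" effect in a Gradio-style chatbot. A minimal sketch of that pattern in isolation, with a hard-coded reply standing in for the OpenAI API call (the `stream_reply` helper name is hypothetical, not from the gist):

```python
# Minimal sketch of the streaming pattern used above: the last chat turn
# grows by one character per yielded state. A fixed string replaces the
# real API response for illustration.
def stream_reply(history, reply):
    """Append `reply` one character at a time to the last chat turn,
    yielding the whole history after each character."""
    for char in reply:
        history[-1][1] += char
        yield history

# Gradio-style history: a list of [user_message, assistant_message] pairs,
# where the assistant slot starts empty and fills in as tokens arrive.
chat = [["Hello", ""]]
states = list(stream_reply(chat, "Hi!"))
print(chat[-1][1])  # "Hi!"
```

Each yielded value is the same `history` list object, so a UI consuming the generator re-renders the progressively longer assistant message after every character.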