@FoobarProtocol
Created October 21, 2023 23:44
This script issues multiple batched requests to the OpenAI API, rotating across several API keys, so that we can generate prompts for the pipeline.
import openai

api_keys = ['api-key-1', 'api-key-2', 'api-key-3']  # Replace with your actual API keys
num_prompts = 1000
prompts_per_request = 100  # Adjust based on your needs
num_requests = num_prompts // prompts_per_request  # Derive instead of hardcoding

prompts = []
for i in range(num_requests):
    openai.api_key = api_keys[i % len(api_keys)]  # Rotate API keys
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=f"Generate {prompts_per_request} prompts for Solidity programming problems.",
        temperature=0.5,
        max_tokens=500  # Adjust based on your needs
    )
    # Post-process the output to split it into individual prompts
    prompts += response.choices[0].text.strip().split('\n')

# If num_prompts is not a multiple of prompts_per_request, generate the remaining prompts
remaining_prompts = num_prompts % prompts_per_request
if remaining_prompts > 0:
    openai.api_key = api_keys[num_requests % len(api_keys)]  # Rotate API keys
    response = openai.Completion.create(
        engine="text-davinci-002",
        prompt=f"Generate {remaining_prompts} prompts for Solidity programming problems.",
        temperature=0.5,
        max_tokens=500  # Adjust based on your needs
    )
    # Post-process the output to split it into individual prompts
    prompts += response.choices[0].text.strip().split('\n')
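Note that the loop above runs the requests sequentially, and mutating the global `openai.api_key` is not thread-safe if you do add threads. A minimal sketch of a genuinely concurrent version is below: it uses `concurrent.futures.ThreadPoolExecutor` to dispatch batches in parallel, passing each worker its own key instead of touching global state. The `request_fn` callable is a hypothetical stand-in for the OpenAI call above (taking a key and a batch index and returning a list of prompt strings), which also makes the dispatch logic testable without network access.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import cycle

def run_batches(request_fn, api_keys, num_requests, max_workers=4):
    """Dispatch num_requests batch calls concurrently, rotating API keys.

    request_fn(api_key, batch_index) -> list[str] is a hypothetical
    callable wrapping the OpenAI request; each worker receives its own
    key rather than mutating the global openai.api_key.
    """
    keys = cycle(api_keys)
    jobs = [(next(keys), i) for i in range(num_requests)]
    prompts = []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves job order, so results stay deterministic
        for batch in pool.map(lambda job: request_fn(*job), jobs):
            prompts.extend(batch)
    return prompts
```

Wrapping the existing `openai.Completion.create` call in a `request_fn` that sets `api_key` as a per-request argument (rather than the module-level global) would then slot straight into this dispatcher.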