Last active April 2, 2024 15:49
AI Prompt Cheatsheet

AI/Large Language Model Prompt Tuning Cheatsheet

When crafting prompts for AI or LLM APIs, you can adjust several parameters to tailor the model's responses. Here's a quick guide to what each of these settings does:

  • prompt: The question or statement you provide to the model, essentially telling it what you want to know or do.

    • Example: "What is the weather today?" vs. "Write a poem about the rain."
    • Effect: Directly sets the topic and style of the AI's response.
  • max_tokens: Limits how long the AI's response can be by capping the number of tokens (roughly word fragments, not whole words or sentences) it can generate.

    • 50 - Makes the AI provide a concise, often one-sentence reply.
    • 150 - Allows for a more detailed explanation or story.
    • 300 - Gives the AI space for in-depth analysis or creative storytelling.
  • temperature: Controls the randomness of the AI's responses; a lower value makes the AI more predictable, while a higher value encourages more varied, creative answers.

    • 0.0 - Generates near-deterministic, highly predictable responses.
    • 0.5 - Offers a balance between creativity and reliability.
    • 1.0 - Produces more varied and unexpected answers (many APIs accept values up to 2.0).
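The effect of temperature can be sketched as simple logit scaling before a softmax. This toy example (plain Python, no model involved; the logits are invented for illustration) shows how lower temperatures sharpen the distribution over next tokens:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by temperature before softmax.

    Lower temperature sharpens the distribution (more predictable);
    higher temperature flattens it (more varied). temperature=0 is
    treated as greedy: all probability mass on the best token.
    """
    if temperature == 0:
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.5))  # sharper: top token dominates
print(softmax_with_temperature(logits, 1.0))  # flatter: more spread out
```

Running it shows the top token's probability shrinking as temperature rises, which is exactly the "more unexpected answers" effect described above.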
  • top_p: Adjusts the diversity of the AI's responses via nucleus sampling, limiting the pool of tokens the AI considers to the smallest set whose combined probability reaches top_p; lower values make responses more focused, while higher values allow for more variability. (Providers commonly advise tuning temperature or top_p, but not both.)

    • 0.1 - Results in highly focused and consistent responses.
    • 0.5 - Strikes a balance between focus and creativity.
    • 0.9 - Encourages a wide variety of words and ideas in responses.
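Nucleus (top_p) filtering can likewise be illustrated with a small sketch: keep the smallest set of tokens whose cumulative probability reaches top_p, then renormalize. The probabilities here are made up for illustration:

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, zero out the rest, and renormalize.
    Lower top_p -> fewer candidate tokens survive."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    total = sum(probs[i] for i in kept)
    return [probs[i] / total if i in kept else 0.0
            for i in range(len(probs))]

probs = [0.5, 0.3, 0.15, 0.05]
print(nucleus_filter(probs, 0.5))  # only the top token survives
print(nucleus_filter(probs, 0.9))  # the top three tokens survive
```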
  • frequency_penalty: Penalizes tokens in proportion to how often they have already appeared, pushing for more varied vocabulary (the typical range is -2.0 to 2.0; negative values encourage repetition).

    • 0 - No penalty for repeating words or phrases.
    • 0.5 - Moderately discourages repetition.
    • 1 - Strongly discourages repetition, leading to more diverse language.
  • presence_penalty: Encourages the AI to introduce new concepts and ideas instead of sticking to what has already been mentioned.

    • 0 - No penalty, allowing the AI to stick to introduced concepts.
    • 0.5 - Encourages some introduction of new ideas.
    • 1 - Strongly motivates the AI to keep bringing up new concepts.
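Both penalties can be pictured as subtractions from a token's logit before sampling. The sketch below follows the adjustment described in OpenAI's documentation (frequency_penalty scales with the repeat count, presence_penalty applies once for any token already seen); the logits and counts are invented for illustration:

```python
def apply_penalties(logits, token_counts, frequency_penalty, presence_penalty):
    """Adjust per-token logits:
    - frequency_penalty is multiplied by how many times a token has appeared,
    - presence_penalty is subtracted once for any token that has appeared at all."""
    adjusted = list(logits)
    for token, count in token_counts.items():
        if count > 0:
            adjusted[token] -= frequency_penalty * count
            adjusted[token] -= presence_penalty
    return adjusted

logits = [2.0, 1.0, 0.5]
counts = {0: 3, 1: 1}  # token 0 has appeared 3 times, token 1 once
print(apply_penalties(logits, counts, 0.5, 0.0))  # [0.5, 0.5, 0.5]
print(apply_penalties(logits, counts, 0.0, 0.5))  # [1.5, 0.5, 0.5]
```

Note how the frequency penalty hits the thrice-repeated token three times as hard, while the presence penalty hits every seen token equally.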
  • stop_sequences: Tells the AI when to stop generating, via strings that signal the end of a response (often named stop in API payloads; the matched text is usually excluded from the output).

    • [".", "?", "!"] - Stops the response at the end of the first sentence.
    • ["end"] - Ends the response as soon as the model generates "end".
    • ["Thank you", "Please"] - Stops the response if either "Thank you" or "Please" is generated.
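From the client's point of view, the behavior amounts to truncating at the earliest stop string, which a few lines of Python can illustrate (a simplification: real APIs halt generation itself rather than trimming afterwards):

```python
def truncate_at_stop(text, stop_sequences):
    """Cut the text at the earliest occurrence of any stop sequence,
    excluding the stop string itself, mimicking API stop behavior."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Here you go. Thank you for asking!",
                       ["Thank you", "Please"]))
# prints "Here you go. "
```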
  • n: Determines the number of responses the AI generates for a single prompt, allowing for multiple answers or ideas.

    • 1 - The AI provides a single response.
    • 3 - The AI offers three different takes or answers.
    • 5 - Generates five varied responses to choose from.
  • logprobs: Requests the log probabilities of the tokens the model considered, giving insight into how confident it was in each choice (exact limits and parameter shape vary by provider).

    • 0 (or omitted) - Does not provide probability scores.
    • 10 - Returns the 10 most likely alternatives for each token.
    • 20 - Provides probabilities for the top 20 alternatives (the maximum in some chat-style APIs; OpenAI's legacy completions API caps this at 5).
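Conceptually, a logprobs request surfaces the log of each candidate token's probability at a given position. A toy sketch with made-up probabilities:

```python
import math

def top_logprobs(probs, k):
    """Return the k most likely token indices with their log
    probabilities, similar to what a logprobs request surfaces."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    return [(i, math.log(probs[i])) for i in order[:k]]

print(top_logprobs([0.7, 0.2, 0.1], 2))
# token 0 -> log(0.7) ~ -0.357, token 1 -> log(0.2) ~ -1.609
```

Log probabilities are always <= 0; values close to 0 mean the model was confident in that token.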
  • echo: Makes the AI include the prompt in its response, useful for context in conversation or when logging interactions (supported mainly in legacy completion-style APIs).

    • true - The prompt is included in the AI's response.
    • false - Only the AI's answer is provided, without repeating the prompt.
  • user: A unique identifier for the end user making the request. In most APIs (e.g. OpenAI's), this is used for abuse monitoring and rate limiting rather than for personalizing responses.

    • Example: A stable, opaque ID such as a hashed account name.
    • Effect: Helps the provider attribute traffic to individual end users; it does not by itself change the tone or content of responses.

Each of these variables gives you a knob to tweak, making it possible to fine-tune how the AI understands and responds to your prompts, whether you're looking for a straightforward answer, creative ideas, or in-depth explanations.
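Putting the knobs together, a request body might look like the sketch below. The field names follow OpenAI's completion-style API, but exact names and supported parameters vary by provider, so treat this as illustrative only (the model name is a placeholder):

```python
import json

# Hypothetical request body combining the parameters above.
request = {
    "model": "example-model",          # placeholder, not a real model name
    "prompt": "Write a poem about the rain.",
    "max_tokens": 150,                 # room for a detailed response
    "temperature": 0.5,                # balance creativity and reliability
    "top_p": 0.9,                      # wide but not unlimited token pool
    "frequency_penalty": 0.5,          # moderately discourage repetition
    "presence_penalty": 0.5,           # nudge toward new concepts
    "stop": ["end"],                   # halt when "end" is generated
    "n": 3,                            # three candidate completions
    "echo": False,                     # do not repeat the prompt
    "user": "user-1234",               # opaque end-user identifier
}
print(json.dumps(request, indent=2))
```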
