
@rbstp
Created October 20, 2025 11:33
Concise guide on techniques to push large language models beyond predictable patterns #ai #howto #quote

How to Make Your LLM Not Average

To make a language model less predictable and more insightful, apply the following strategies:

  • Use a negative style guide
    Clearly state what you don’t want the model to do.

  • Force divergence in choices
    Explicitly push the model off its default phrasing and reasoning paths.

  • Avoid normal patterns
    Prevent it from falling back on habits like:

    • equivocation (vague hedging)
    • clichés
    • templated responses
  • Make the model self-aware
    Instruct it to identify and critique its own templates or stylistic patterns, then alter them.

  • Encourage self-critique
    After an initial output, ask it to:

    • review and critique its own response
    • generate a second, improved version
  • Switch models for perspective
    Use multiple models to critique and refine outputs.
    Different models excel at different types of reasoning.

  • Challenge consensus
    Provide examples that go against the norm and explain why the usual consensus is flawed.
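The strategies above can be sketched as a small prompt-composition harness. This is a minimal illustration, not a real API client: `call_model` is a stand-in for whatever text-in/text-out model function you use, and the specific wording of the negative style guide is an assumption for demonstration.

```python
# Sketch of a few techniques above: negative style guide, self-awareness,
# and the draft -> critique -> revise loop. `call_model` is any callable
# that takes a prompt string and returns a response string (a stand-in,
# not a real API).

NEGATIVE_STYLE_GUIDE = """Do NOT:
- hedge vaguely ("it depends", "there are many factors")
- open with cliches ("In today's fast-paced world...")
- fall into templated structures (intro, three bullets, conclusion)"""

def build_prompt(task: str) -> str:
    """Combine the task with the negative style guide and a
    self-awareness instruction (illustrative wording)."""
    return (
        f"{NEGATIVE_STYLE_GUIDE}\n\n"
        "Before answering, name the template you would normally use "
        "for this task, then deliberately deviate from it.\n\n"
        f"Task: {task}"
    )

def draft_critique_revise(call_model, task: str) -> str:
    """Self-critique loop: generate a draft, critique it, then produce
    a second, improved version."""
    draft = call_model(build_prompt(task))
    critique = call_model(
        "Critique the response below for cliches, vague hedging, and "
        f"templated structure:\n\n{draft}"
    )
    return call_model(
        "Rewrite the response, fixing every issue raised in the "
        f"critique.\n\nResponse:\n{draft}\n\nCritique:\n{critique}"
    )
```

To apply the "switch models for perspective" strategy, pass a different model's callable for the critique and revision steps than for the initial draft.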

Source:
The AI Daily Brief: Artificial Intelligence News and Analysis — “5 Prompting Tricks to Make Your AI Less Average,” Oct 19, 2025
Apple Podcasts link
