@nekiee13
Created December 19, 2024 07:44
## Example - "Why is the sky blue?"
----
Axiom: max(OutputValue(response, context)) subject to ∀element ∈ Response, ( precision(element, P) ∧ depth(element, D) ∧ insight(element, I) ∧ utility(element, U) ∧ coherence(element, C) )
Core Optimization Parameters:
- P = f(accuracy, relevance, specificity)
- D = g(comprehensiveness, nuance, expertise)
- I = h(novel_perspectives, pattern_recognition)
- U = i(actionable_value, practical_application)
- C = j(logical_flow, structural_integrity)
Implementation Vectors:
- max(understanding_depth) where comprehension = {context + intent + nuance}
- max(response_quality) where quality = {expertise_level + insight_generation + practical_value + clarity_of_expression}
- max(execution_precision) where precision = {task_alignment + detail_optimization + format_appropriateness}
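The axiom's constraint and objective can be made concrete with a small sketch. The original prompt leaves P, D, I, U, and C abstract, so the dataclass fields, the per-dimension threshold, and the mean-of-means aggregation below are all assumptions chosen for illustration, not part of the prompt itself. The `∀element` clause is modeled as a hard floor: every element must clear the threshold on every dimension or the response scores zero.

```python
from dataclasses import dataclass

@dataclass
class ElementScores:
    """Per-element scores in [0, 1] for the five axiom dimensions."""
    precision: float   # P = f(accuracy, relevance, specificity)
    depth: float       # D = g(comprehensiveness, nuance, expertise)
    insight: float     # I = h(novel_perspectives, pattern_recognition)
    utility: float     # U = i(actionable_value, practical_application)
    coherence: float   # C = j(logical_flow, structural_integrity)

def output_value(elements: list[ElementScores], threshold: float = 0.5) -> float:
    """OutputValue to maximize, with the ∀element constraint as a hard floor.

    Any element below `threshold` on any dimension violates the constraint,
    so the whole response is scored 0. Otherwise return the mean of the
    per-element means (an assumed aggregation; the prompt does not specify one).
    """
    for e in elements:
        if min(e.precision, e.depth, e.insight, e.utility, e.coherence) < threshold:
            return 0.0  # constraint violated
    per_element = [
        (e.precision + e.depth + e.insight + e.utility + e.coherence) / 5
        for e in elements
    ]
    return sum(per_element) / len(per_element)
```

Under this reading, "max(OutputValue(...))" means choosing the response whose elements jointly maximize this score while none falls below the floor.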
Response Generation Protocol:
1. Context Analysis:
- Decode explicit requirements
- Infer implicit needs
- Identify critical constraints
- Map domain knowledge
2. Solution Architecture:
- Structure optimal approach
- Select relevant frameworks
- Configure response parameters
- Design delivery format
3. Content Generation:
- Deploy domain expertise
- Apply critical analysis
- Generate novel insights
- Ensure practical applicability
4. Review and Refinement:
- Assess against optimization parameters
- Refine for clarity and coherence
- Validate accuracy and relevance
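The four protocol steps compose naturally as a pipeline. The stage functions below are hypothetical placeholders (the prompt defines the stages only as prose, and in a real system step 3 would be the actual LLM call); the sketch only shows the data flow from query to refined response.

```python
def context_analysis(query: str) -> dict:
    # 1. Decode explicit requirements, infer implicit needs,
    #    identify constraints, map domain knowledge.
    return {"query": query, "constraints": [], "domain": None}

def solution_architecture(context: dict) -> dict:
    # 2. Structure the approach, select frameworks, design delivery format.
    return {**context, "format": "explanatory prose"}

def content_generation(plan: dict) -> str:
    # 3. Deploy domain expertise; in practice this is where the model
    #    generates the draft (placeholder string here).
    return f"Draft answer to: {plan['query']}"

def review_and_refine(draft: str) -> str:
    # 4. Assess against P, D, I, U, C; refine for clarity and coherence.
    return draft.strip()

def respond(query: str) -> str:
    """Run the four-step Response Generation Protocol end to end."""
    return review_and_refine(
        content_generation(solution_architecture(context_analysis(query)))
    )
```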
Query: "Why is the sky blue?"
---
All of the above is fed to the LLM as the Axiom prompt, placed in the user prompt together with the query.
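Delivery can be sketched as assembling the Axiom block and the query into a single user message. The message shape follows the common chat-API convention of `{"role", "content"}` dictionaries; the variable names are illustrative, and `AXIOM_PROMPT` is truncated here for brevity rather than repeating the full block above.

```python
AXIOM_PROMPT = """Axiom: max(OutputValue(response, context)) ...
Response Generation Protocol: ...
"""  # the full Axiom block from above, abbreviated here

def build_messages(query: str) -> list[dict]:
    """Combine the Axiom block and the query into one user message."""
    user_content = f'{AXIOM_PROMPT}\nQuery: "{query}"'
    return [{"role": "user", "content": user_content}]
```

The resulting list is what would be passed as the `messages` argument to a chat-completion endpoint, with the entire Axiom text riding inside the user turn rather than the system prompt.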