@disler
Last active June 29, 2024 19:44
Minimal Prompt Chainable for zero library sequential prompt chaining

Minimal Prompt Chainable

Sequential prompt chaining in one method with context and output back-referencing.

Files

  • main.py - start here - full example using MinimalChainable from chain.py to build a sequential prompt chain
  • chain.py - contains zero library minimal prompt chain class
  • chain_test.py - tests for chain.py, you can ignore this
  • requirements.txt - python requirements

Setup

  • Create .env with ANTHROPIC_API_KEY=
    • (or export ANTHROPIC_API_KEY=)
  • python -m venv venv
  • source venv/bin/activate
  • pip install -r requirements.txt
  • python main.py

Test

  • pytest chain_test.py

Watch

When to use Prompt Chains. Ditching LangChain. ALL HAIL Claude 3.5 Sonnet

Four guiding questions for When to Prompt Chain

1. Your tasks are too complex for a single prompt

  • Am I asking my LLM to accomplish two or more distantly related tasks in one prompt? If Yes → Build a prompt chain.
    • Why: Prompt chaining can help break down complex tasks into manageable chunks, improving clarity and accuracy.

2. Increase prompt performance and reduce errors

  • Do I want to increase prompt performance to the maximum and reduce errors to the minimum? If Yes → Build a prompt chain.
    • Why: Prompt chaining allows you to guide the LLM's reasoning process, reducing the likelihood of irrelevant or nonsensical responses.

3. Use output of previous prompt as input

  • Do I need the output of a previous prompt to be the input (variable) of this prompt? If Yes → Build a prompt chain (see the short sketch after the summary below).
    • Why: Prompt chaining is essential for tasks where subsequent prompts need to use information generated in previous steps.

4. Adaptive workflow based on prompt flow

  • Do I need an adaptive workflow that changes based on the flow of the prompts? If Yes → Build a prompt chain.
    • Why: Prompt chaining lets you interject between steps and adapt the workflow based on how earlier prompts resolve.

Summary

Build a prompt chain when:

  1. You find yourself solving two or more tasks in a single prompt.
  2. You need maximum error reduction and increased output quality.
  3. You have subsequent prompts that rely on the output of previous prompts.
  4. You need to take different actions based on evolving steps.
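
Concretely, points 3 and 4 come down to two placeholder forms that MinimalChainable (shown below) resolves before each call: {{variable}} pulls from the initial context, and {{output[-1]}} or {{output[-1].key}} pulls from a previous prompt's result (the .key form assumes that result parsed as JSON). A small illustrative sketch with made-up values:

prompts = [
    "Generate a blog title about: {{topic}}",    # filled from context, e.g. {"topic": "AI Agents"}
    "Write a hook for: {{output[-1].title}}",    # filled from prompt #1's JSON output
]
# If prompt #1 returns {"title": "Agents 101"}, prompt #2 is sent as:
# "Write a hook for: Agents 101"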

Problems with LLM libraries (Langchain, Autogen, others)

  • Unnecessary abstractions & premature abstractions!
  • Easy to start - hard to finish!
  • Rough Docs - rough debugging!

Solution?

STAY CLOSE TO THE METAL (the prompt)

The prompt is EVERYTHING; don't let any library hide or abstract it from you unless you KNOW what's happening under the hood.

Build minimal abstractions with raw code that do ONE thing well. Outside of data collection, it's unlikely you NEED a library for the prompts, prompt chains, and AI Agents of your tools and products.

chain.py

import json
import re
from typing import List, Dict, Callable, Any, Union, Tuple


class MinimalChainable:
    """
    Sequential prompt chaining with context and output back-references.
    """

    @staticmethod
    def run(
        context: Dict[str, Any], model: Any, callable: Callable, prompts: List[str]
    ) -> Tuple[List[Any], List[str]]:
        # Initialize empty lists to store the outputs and the context-filled prompts
        output = []
        context_filled_prompts = []

        # Iterate over each prompt with its index
        for i, prompt in enumerate(prompts):
            # Fill in context variables: replace each {{key}} with its value
            for key, value in context.items():
                if "{{" + key + "}}" in prompt:
                    prompt = prompt.replace("{{" + key + "}}", str(value))

            # Replace references to previous outputs,
            # iterating from the current index down to 1
            for j in range(i, 0, -1):
                # Get the previous output
                previous_output = output[i - j]

                # Handle JSON (dict) output references
                if isinstance(previous_output, dict):
                    # Replace a whole-output reference with the JSON string
                    if f"{{{{output[-{j}]}}}}" in prompt:
                        prompt = prompt.replace(
                            f"{{{{output[-{j}]}}}}", json.dumps(previous_output)
                        )
                    # Replace key references with their values
                    for key, value in previous_output.items():
                        if f"{{{{output[-{j}].{key}}}}}" in prompt:
                            prompt = prompt.replace(
                                f"{{{{output[-{j}].{key}}}}}", str(value)
                            )
                # If not a dict, use the original string
                else:
                    if f"{{{{output[-{j}]}}}}" in prompt:
                        prompt = prompt.replace(
                            f"{{{{output[-{j}]}}}}", str(previous_output)
                        )

            # Append the context-filled prompt to the list
            context_filled_prompts.append(prompt)

            # Call the provided callable with the model and the processed prompt
            result = callable(model, prompt)

            # Try to parse the result as JSON, handling markdown-wrapped JSON
            try:
                # First, attempt to extract JSON from a markdown code block
                json_match = re.search(r"```(?:json)?\s*([\s\S]*?)\s*```", result)
                if json_match:
                    result = json.loads(json_match.group(1))
                else:
                    # No markdown block found; try parsing the entire result
                    result = json.loads(result)
            except json.JSONDecodeError:
                # Not JSON, keep as is
                pass

            # Append the result to the output list
            output.append(result)

        # Return the outputs and the context-filled prompts
        return output, context_filled_prompts

    @staticmethod
    def to_delim_text_file(name: str, content: List[Union[str, dict]]) -> str:
        result_string = ""
        with open(f"{name}.txt", "w") as outfile:
            for i, item in enumerate(content, 1):
                # Serialize dicts and lists to JSON strings before writing
                if isinstance(item, dict):
                    item = json.dumps(item)
                if isinstance(item, list):
                    item = json.dumps(item)
                chain_text_delim = (
                    f"{'🔗' * i} -------- Prompt Chain Result #{i} -------------\n\n"
                )
                outfile.write(chain_text_delim)
                outfile.write(item)
                outfile.write("\n\n")
                result_string += chain_text_delim + item + "\n\n"
        return result_string
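
Because run() only needs a callable with the signature (model, prompt) -> str, you can exercise a chain with no API key at all, which is also how the tests below work. A minimal sketch (echo_prompt is a hypothetical stub, not part of the gist):

from chain import MinimalChainable

def echo_prompt(model, prompt):
    # Hypothetical stub standing in for a real LLM call
    return f"ECHO: {prompt}"

result, filled_prompts = MinimalChainable.run(
    context={"topic": "AI Agents"},
    model=None,  # unused by the stub
    callable=echo_prompt,
    prompts=[
        "Write one sentence about {{topic}}",
        "Summarize this: {{output[-1]}}",
    ],
)
# result[0] == "ECHO: Write one sentence about AI Agents"
# result[1] == "ECHO: Summarize this: ECHO: Write one sentence about AI Agents"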

chain_test.py

from chain import MinimalChainable


def test_chainable_solo():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return f"Solo response: {prompt}"

    # Test context and single chain
    context = {"variable": "Test"}
    chains = ["Single prompt: {{variable}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert result[0] == "Solo response: Single prompt: Test"


def test_chainable_run():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return f"Response to: {prompt}"

    # Test context and chains
    context = {"var1": "Hello", "var2": "World"}
    chains = ["First prompt: {{var1}}", "Second prompt: {{var2}} and {{var1}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert result[0] == "Response to: First prompt: Hello"
    assert result[1] == "Response to: Second prompt: World and Hello"


def test_chainable_with_output():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return f"Response to: {prompt}"

    # Test context and chains
    context = {"var1": "Hello", "var2": "World"}
    chains = ["First prompt: {{var1}}", "Second prompt: {{var2}} and {{output[-1]}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert result[0] == "Response to: First prompt: Hello"
    assert (
        result[1]
        == "Response to: Second prompt: World and Response to: First prompt: Hello"
    )


def test_chainable_json_output():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        if "Output JSON" in prompt:
            return '{"key": "value"}'
        return prompt

    # Test context and chains
    context = {"test": "JSON"}
    chains = ["Output JSON: {{test}}", "Reference JSON: {{output[-1].key}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 2
    assert isinstance(result[0], dict)
    print("result", result)
    assert result[0] == {"key": "value"}
    assert result[1] == "Reference JSON: value"


def test_chainable_reference_entire_json_output():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        if "Output JSON" in prompt:
            return '{"key": "value"}'
        return prompt

    context = {"test": "JSON"}
    chains = ["Output JSON: {{test}}", "Reference JSON: {{output[-1]}}"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    assert len(result) == 2
    assert isinstance(result[0], dict)
    assert result[0] == {"key": "value"}
    assert result[1] == 'Reference JSON: {"key": "value"}'


def test_chainable_reference_long_output_value():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return prompt

    context = {"test": "JSON"}
    chains = [
        "Output JSON: {{test}}",
        "1 Reference JSON: {{output[-1]}}",
        "2 Reference JSON: {{output[-2]}}",
        "3 Reference JSON: {{output[-1]}}",
    ]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    assert len(result) == 4
    assert result[0] == "Output JSON: JSON"
    assert result[1] == "1 Reference JSON: Output JSON: JSON"
    assert result[2] == "2 Reference JSON: Output JSON: JSON"
    assert result[3] == "3 Reference JSON: 2 Reference JSON: Output JSON: JSON"


def test_chainable_empty_context():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return prompt

    # Test with empty context
    context = {}
    chains = ["Simple prompt"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert result[0] == "Simple prompt"


def test_chainable_json_output_with_markdown():
    # Mock model and callable function
    class MockModel:
        pass

    def mock_callable_prompt(model, prompt):
        return """
        Here's a JSON response wrapped in markdown:
        ```json
        {
            "key": "value",
            "number": 42,
            "nested": {
                "inner": "content"
            }
        }
        ```
        """

    context = {}
    chains = ["Test JSON parsing"]

    # Run the Chainable
    result, _ = MinimalChainable.run(context, MockModel(), mock_callable_prompt, chains)

    # Assert the results
    assert len(result) == 1
    assert isinstance(result[0], dict)
    assert result[0] == {"key": "value", "number": 42, "nested": {"inner": "content"}}

main.py

import os
from typing import List, Dict, Union

from dotenv import load_dotenv

from chain import MinimalChainable
import llm


def build_models():
    load_dotenv()
    ANTHROPIC_API_KEY = os.getenv("ANTHROPIC_API_KEY")
    sonnet_3_5_model: llm.Model = llm.get_model("claude-3.5-sonnet")
    sonnet_3_5_model.key = ANTHROPIC_API_KEY
    return sonnet_3_5_model


def prompt(model: llm.Model, prompt: str):
    res = model.prompt(
        prompt,
        temperature=0.5,
    )
    return res.text()


def prompt_chainable_poc():
    sonnet_3_5_model = build_models()

    result, context_filled_prompts = MinimalChainable.run(
        context={"topic": "AI Agents"},
        model=sonnet_3_5_model,
        callable=prompt,
        prompts=[
            # prompt #1
            "Generate one blog post title about: {{topic}}. Respond strictly in JSON in this format: {'title': '<title>'}",
            # prompt #2
            "Generate one hook for the blog post title: {{output[-1].title}}",
            # prompt #3
            """Based on the BLOG_TITLE and BLOG_HOOK, generate the first paragraph of the blog post.
BLOG_TITLE:
{{output[-2].title}}
BLOG_HOOK:
{{output[-1]}}""",
        ],
    )

    chained_prompts = MinimalChainable.to_delim_text_file(
        "poc_context_filled_prompts", context_filled_prompts
    )
    chainable_result = MinimalChainable.to_delim_text_file("poc_prompt_results", result)

    print(f"\n\n📖 Prompts~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n{chained_prompts}")
    print(f"\n\n📊 Results~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \n\n{chainable_result}")


def main():
    prompt_chainable_poc()


if __name__ == "__main__":
    main()

requirements.txt

llm
python-dotenv
llm-claude-3
pytest
@Krashnicov

Thank you for this.

I had to add python-dotenv to my requirements.txt for this to work (running cursor.sh, using Poetry rather than a Python venv).

@ashokparmar29

Hi,

Thank you for maintaining this useful repository.

I am interested in using images with the Claude 3.5 model. Is it possible to feed images to the model using MinimalChainable? Additionally, I would appreciate any guidance on how to achieve this.

Thank you for your help.
