enthogenesis / README.md
Created December 10, 2024 18:40 — forked from disler/README.md

Prompt Chaining with QwQ, Qwen, o1-mini, Ollama, and LLM

Here we explore prompt chaining with local reasoning models in combination with base models. With shockingly powerful local models like QwQ and Qwen, we can build prompt chains that let us tap into their capabilities in an immediately useful, local, private, AND free way.

Explore the idea of building prompt chains where the first step uses a powerful reasoning model to generate a detailed response, and a second step uses a base model to extract the final answer from that response.
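
As a rough sketch of that two-step chain (not the gist's actual chain.ts): the snippet below calls Ollama's /api/generate endpoint twice, first with a reasoning model and then with a base model that pulls out the final answer. The model names (qwq, qwen2.5), the file name, and the prompts are placeholder assumptions for whatever you have pulled locally; it assumes a default Ollama server on port 11434 and runs with bun run chain-sketch.ts.

// chain-sketch.ts: a minimal, hypothetical two-step prompt chain (not the gist's chain.ts)

async function generate(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: false }),
  });
  // With stream: false, Ollama returns the whole completion in the `response` field.
  const { response } = await res.json();
  return response;
}

// Step 1: the reasoning model thinks through the problem and produces a long trace.
const trace = await generate("qwq", "Reason step by step: which is larger, 9.11 or 9.9?");

// Step 2: a base model extracts just the final answer from that trace.
const answer = await generate(
  "qwen2.5",
  `From the reasoning below, output only the final answer and nothing else:\n\n${trace}`,
);

console.log(answer);

Splitting generation and extraction this way keeps the verbose chain-of-thought out of the final output while still letting the chain benefit from it.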

Play with the prompts and models to see what works best for your use cases. Run the o1 series alongside to see how QwQ compares.

Setup

  • Bun (to run bun run chain.ts ...)
import math
import multiprocessing
import random
import sys
import time
def merge(*args):
# Support explicit left/right args, as well as a two-item
# tuple which works more cleanly with multiprocessing.