
@donbr
donbr / langraph-agents.ipynb
Created September 30, 2025 03:51
LangGraph Agents
@donbr
donbr / uv-playbook.md
Created September 28, 2025 18:35
The Authoritative Playbook for `uv` in Production Teams

This guide is the complete, canonical workflow for managing Python project dependencies using uv. It provides a clear, two-mode approach, production-ready best practices, and drop-in templates to ensure reproducibility, security, and developer efficiency across teams.

Core Concept: The Two Modes of uv

uv operates in two distinct modes. Your team should choose one and use it consistently.

  1. Project Mode (Recommended): This is the modern, preferred approach. It's managed by commands like uv add, uv lock, and uv sync, using the cross-platform uv.lock file as the single source of truth for reproducibility.
  2. Requirements Mode (Compatibility): This mode mirrors the classic pip-tools workflow and is useful when you need a requirements.txt file for legacy tools or specific deployment platforms.
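The two modes above map to a handful of CLI commands. A minimal sketch of each (the `requests` dependency is only an illustrative placeholder):

```shell
# Project Mode: uv.lock is the single source of truth.
uv add requests            # add a dependency to pyproject.toml
uv lock                    # resolve and write the cross-platform uv.lock
uv sync                    # install the environment exactly from uv.lock

# Requirements Mode: pip-tools-style compile/sync for legacy tooling.
uv pip compile pyproject.toml -o requirements.txt
uv pip sync requirements.txt
```

In Project Mode, `uv add` already updates the lockfile and environment in one step; the explicit `uv lock` / `uv sync` calls are shown to make the pipeline visible.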

uv Project Template Scaffold

This scaffold provides a production-ready starting point for any new Python project, with best practices for dependency management, CI, and collaboration baked in.

Project Structure

my-python-project/
├── .github/
│ └── workflows/
@donbr
donbr / uv-project-scaffolding.md
Created September 28, 2025 18:22
The Authoritative uv Project Scaffold & Playbook

This repository contains a production-ready Python project starter with uv dependency management, linting, formatting, testing, CI, and contribution guidelines all baked in. It represents a gold-standard foundation for building robust Python applications, designed to eliminate setup friction and enforce quality from the very first commit.

🚀 Rollout Strategy: How to Use This Template

There are three recommended ways to use this scaffold, depending on your team's needs.

  1. GitHub Template Repository (Recommended) – For org-wide standards. Create a repository from this scaffold and enable the “Template repository” setting. Team members can then click Use this template to generate new, fully compliant projects instantly.
  2. Cookiecutter (CLI-driven Templating) – For parameterized, automated scaffolding. Users can generate a new project by running
@donbr
donbr / claude-code-ci-cd-processes.md
Last active September 23, 2025 18:02
claude-code-ci-cd-processes.md

Research Prompt (paste into Claude)

Role & Goal: You are an expert DevEx engineer researching best-practice change-management and task-management workflows using Claude Code (CLI & SDK) in real engineering teams as of Sept 22, 2025. Produce actionable guidance I can adopt in a repo that already uses a YAML task plan and a Prefect 3 flow to orchestrate phases/tasks 1:1 (think .claude/tasks.yaml → flows/golden_testset_flow.py).

What to cover (prioritize authoritative sources):

  1. MCP configuration & scopes — current, documented best practice for using project-scoped .mcp.json in VCS vs user-scoped/global config; precedence with .claude/settings.json and managed policy files; environment-variable expansion and approval prompts for project MCP. Cite docs.
  2. Claude Code settings for governance — permission model (allow/ask/deny), enabling/approving .mcp.json servers, “includeCoAuthoredBy” in commits, relevant env vars (MCP_TIMEOUT, MAX_MCP_OUTPUT_TOKENS)
@donbr
donbr / ollama-gpt-oss-comparison.md
Created September 19, 2025 02:12
ollama-gpt-oss-comparison

Here’s a short list of Ollama models that track closest to gpt-oss:20b in capability/feel, plus how to prompt them so you don’t lose quality when you swap away from OpenAI-tuned prompts.

Best bets (by “feel” + instruction-following)

  1. Mistral NeMo 12B Instruct — very solid instruction following, large context (128k), good tool/JSON behavior for its size. ollama pull mistral-nemo:12b (or mistral-nemo:12b-instruct where available) ([Ollama][1])

  2. Llama 3.1 Instruct (8B or 70B) — stable, widely used baseline; 70B will beat 20B-class models on reasoning, 8B is a fast local workhorse. ollama pull llama3.1:8b-instruct (or llama3.1:70b-instruct if you have VRAM) ([Ollama][2])

@donbr
donbr / llms-txt-optimization-design.md
Last active September 18, 2025 20:42
Optimized Documentation Retrieval with MCP + Local Index

Date: September 18, 2025
Author: Don


Overview

Large llms-full.txt files contain thousands of URLs and can overwhelm LLM context windows.

@donbr
donbr / chatgpt-how-to-build-sept-2025.md
Last active September 17, 2025 20:25
How to Build ChatGPT: Complete Study Guide

AI Makerspace Series - Step-by-Step Implementation Guide


📚 Series Overview

The "How to Build ChatGPT" series from AI Makerspace provides a comprehensive roadmap for building production-ready LLM applications, following OpenAI's product evolution from simple chat interfaces to sophisticated AI systems. This study guide covers all four core parts with detailed lesson notes and presentation slides.

🎯 Learning Objectives

@donbr
donbr / ai-makerspace-cohort8-git.md
Last active September 26, 2025 20:43
AI Makerspace - Submitting Your Homework

AIE8 Student Git Guide (Rebase-Only, No Merges to main)

This guide shows how to:

  • Initialize your GitHub repo (origin)
  • Keep your main branch identical to the class repo (upstream)
  • Do all work on feature branches
  • Rebase feature branches to pick up updates (no merges)
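A hedged sketch of the rebase-only loop those bullets describe; the repo URLs and branch name are placeholders you would substitute:

```shell
# One-time setup: your fork is origin, the class repo is upstream.
git clone https://github.com/<you>/<repo>.git
cd <repo>
git remote add upstream https://github.com/<class-org>/<repo>.git

# Keep main identical to the class repo (never commit to main directly).
git fetch upstream
git checkout main
git reset --hard upstream/main
git push --force-with-lease origin main

# Do all work on a feature branch, rebasing to pick up updates.
git checkout -b feature/homework-01
# ... commit your work ...
git fetch upstream
git rebase upstream/main
git push --force-with-lease origin feature/homework-01
```

`--force-with-lease` is used instead of `--force` so a push fails if someone else has updated the remote branch since you last fetched.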

@donbr
donbr / logits-temperature-probabilities.md
Created September 12, 2025 00:07
From Logits to Probabilities

Yes — exactly. The temperature parameter in LLMs is directly tied to logits (the raw, pre-softmax output of the model).

Here’s how it works step by step:


🔹 From Logits to Probabilities

  1. The LLM produces a logit vector: one score per token in the vocabulary.
  2. To convert logits into probabilities, we apply the softmax function:
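The softmax step, with temperature applied to the logits, can be sketched numerically. This is a minimal NumPy illustration of the idea, not code from the gist itself:

```python
import numpy as np

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to probabilities, scaling by 1/T first."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
# T = 1.0 is plain softmax; T < 1 sharpens the distribution
# toward the largest logit, T > 1 flattens it toward uniform.
print(softmax_with_temperature(logits, temperature=1.0))
print(softmax_with_temperature(logits, temperature=0.5))
```

Dividing the logits by the temperature before the exponential is exactly why low temperatures make sampling more deterministic: the gap between the top logit and the rest is magnified before softmax normalizes.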