@jonhilt
Last active April 8, 2026 08:59
Reflect skill
description: Reflect on the current AI coding session to extract learnings and improve guidance. Use when a session went off the rails, you want to capture learnings, improve instructions, document preferences, or ask "what should we remember from this session."

Reflect on Session

Extract practical learnings from the current conversation and persist them so they survive beyond this session. One user checkpoint, then implement.

Phase 1: Scan Session Evidence

Review the conversation history and extract:

Corrections

Where did the user correct your output?

  • Code style, patterns, naming, conventions
  • "No, use X instead of Y" / "We don't do it that way"

Friction

Where did you struggle or need multiple attempts?

  • Missing context that required clarification
  • Patterns you didn't know about
  • Wrong assumptions about the codebase

Explicit Preferences

Where did the user state a preference?

  • "I prefer..." / "Always use..." / "Remember to..."

Validated Approaches

What non-obvious approaches did the user confirm worked well?

  • Choices accepted without pushback
  • "Yes, exactly" / "Perfect, keep doing that"

For each finding, note:

  • The specific issue or preference
  • The correct approach
  • Whether it applies globally (user preference), project-wide, or to a specific module

Phase 2: Discover Existing Targets

Scan for existing documentation and configuration so you can avoid duplicates and route correctly:

  1. Check for project-level instruction files (e.g. CLAUDE.md, AGENTS.md, .cursorrules, copilot-instructions.md, CONVENTIONS.md — whatever this project uses)
  2. Check for user-level or global instruction files (e.g. ~/.claude/CLAUDE.md, global rules files)
  3. Check for any memory or knowledge-base system the tool provides
  4. List existing skills, custom commands, or reusable prompts
  5. Check for automation config (hooks, pre/post actions)

Build a target inventory. Know what already exists before proposing changes.
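
The discovery pass above can be sketched as a small shell script. The file names are common conventions rather than a guaranteed list, and the skill/command and hooks paths are tool-specific assumptions that will vary:

```shell
#!/bin/sh
# Sketch of the Phase 2 target-inventory scan. Adapt the file names to
# whatever this project and tool actually use.

# 1-2. Project-level and user-level instruction files
for f in CLAUDE.md AGENTS.md .cursorrules .github/copilot-instructions.md CONVENTIONS.md; do
  [ -f "$f" ] && echo "project-level: $f"
done
[ -f "$HOME/.claude/CLAUDE.md" ] && echo "user-level: $HOME/.claude/CLAUDE.md"

# 4. Existing skills / custom commands (paths are assumptions)
for d in .claude/skills .claude/commands; do
  [ -d "$d" ] && ls "$d" | sed 's|^|skill/command: |'
done

# 5. Automation config (path is an assumption)
[ -f .claude/settings.json ] && echo "hooks config: .claude/settings.json"
true  # exit cleanly even when nothing is found
```

The output is the starting point for the inventory, not the inventory itself; anything it misses (tool-native memory systems, non-file configuration) still needs a manual look.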

Phase 3: Propose and Route (User Checkpoint)

Present findings grouped by persistence target, not by finding type. Each finding goes to exactly one target.

Routing Rules

| Finding type | Target | When |
| --- | --- | --- |
| Personal preference (style, workflow, how they like to work) | User-level instructions | Applies across projects |
| Correction to AI behaviour | User-level instructions or memory | Changes how the AI works with this user |
| Codebase convention (naming, patterns, architecture) | Project-level instructions | Applies to all AI sessions in this repo |
| Project context, goals, deadlines | Project-level instructions or memory | Non-obvious context not derivable from code/git |
| External resource pointers | Project-level instructions or memory | URLs, dashboards, ticket boards |
| Reusable multi-step workflow | Skill / custom command / prompt | Complex enough to warrant standalone definition |
| Automated trigger | Hooks / automation config | Should run automatically on specific events |

Presentation Format

## Session Findings

### User-Level (global preferences / behaviour corrections)
1. **[Title]** — [what happened] → [correct approach]
   Evidence: [conversation reference]

### Project-Level (codebase conventions / context)
1. **[Title]** — [convention or context to add/update]
   Target: [which file]

### Skills / Commands / Prompts
1. **[Title]** — [workflow to create/modify]

### No Action Needed
- [Findings already covered or too ephemeral to persist]

Stop here. Ask the user which findings to act on. Do not proceed until they confirm.

Phase 4: Implement Approved Changes

For each approved finding:

  1. Check for duplicates first. Read the target file and confirm the rule doesn't already exist in different words.
  2. Prefer updates over additions. Extend an existing section rather than creating a new one.
  3. Keep entries concise and actionable. A rule should be one or two sentences. Include "why" only when the reason isn't obvious.
  4. For new files: follow whatever convention the project already uses. If no instruction files exist yet, propose one at the project root and confirm the format with the user.
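
The duplicate check in step 1 can be seeded with a cheap keyword scan before the careful read. A keyword hit is only a hint, since the same rule may exist "in different words"; the file name and keyword below are hypothetical examples:

```shell
#!/bin/sh
# Sketch: pre-check a target instruction file for an existing rule before
# adding one. A miss here does NOT prove the rule is absent -- still read
# the file. Names below are hypothetical.
target="CLAUDE.md"
keyword="snake_case"   # a distinctive term from the rule being added

if [ -f "$target" ] && grep -qi "$keyword" "$target"; then
  echo "possible duplicate: '$keyword' already appears in $target"
else
  echo "no mention of '$keyword' in $target -- safe to draft an addition"
fi
```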

Phase 5: Summary

After implementation, report:

  • Changes made (with file paths)
  • Findings deferred or rejected (and why)
  • Any existing docs that were updated rather than duplicated

Keep it brief. The diffs tell the story.

Quality Rules

  • No speculative improvements. Every finding must have conversation evidence.
  • No duplicates. Always check existing docs before adding.
  • Route precisely. Don't put personal preferences in project docs. Don't put project conventions in user-level config.
  • Prefer updates over additions. Extend existing sections rather than creating new ones.
  • Ephemerality test. If a finding won't matter in a week, skip it.