| description | Reflect on the current AI coding session to extract learnings and improve guidance. Use when a session went off the rails, you want to capture learnings, improve instructions, document preferences, or ask "what should we remember from this session." |
|---|---|
Extract practical learnings from the current conversation and persist them so they survive beyond this session. One user checkpoint, then implement.
Review the conversation history and extract:
**Where did the user correct your output?**
- Code style, patterns, naming, conventions
- "No, use X instead of Y" / "We don't do it that way"
**Where did you struggle or need multiple attempts?**
- Missing context that required clarification
- Patterns you didn't know about
- Wrong assumptions about the codebase
**Where did the user state a preference?**
- "I prefer..." / "Always use..." / "Remember to..."
**What non-obvious approaches did the user confirm worked well?**
- Choices accepted without pushback
- "Yes, exactly" / "Perfect, keep doing that"
For each finding, note:
- The specific issue or preference
- The correct approach
- Whether it applies globally (user preference), project-wide, or to a specific module
Scan for existing documentation and configuration so you can avoid duplicates and route correctly:
- Check for project-level instruction files (e.g. CLAUDE.md, AGENTS.md, .cursorrules, copilot-instructions.md, CONVENTIONS.md — whatever this project uses)
- Check for user-level or global instruction files (e.g. ~/.claude/CLAUDE.md, global rules files)
- Check for any memory or knowledge-base system the tool provides
- List existing skills, custom commands, or reusable prompts
- Check for automation config (hooks, pre/post actions)
Build a target inventory. Know what already exists before proposing changes.
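The scan step above can be sketched as a short shell pass. The file names and the `~/.claude/CLAUDE.md` path are common conventions, not a definitive list; adapt them to whatever the tool and project actually use.

```shell
#!/bin/sh
# Sketch: inventory candidate instruction files before proposing changes.
# The file names below are illustrative conventions, not guarantees.
scan_instruction_files() {
  for f in CLAUDE.md AGENTS.md .cursorrules .github/copilot-instructions.md CONVENTIONS.md; do
    [ -f "$f" ] && echo "project: $f"
  done
  # User-level / global instructions (path varies per tool)
  [ -f "$HOME/.claude/CLAUDE.md" ] && echo "user: $HOME/.claude/CLAUDE.md"
  return 0
}

scan_instruction_files
```

Running this from the repo root yields the starting inventory; anything it misses (tool-provided memory, custom commands, hooks) still needs a manual check.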
Present findings grouped by persistence target, not by finding type. Each finding goes to exactly one target.
| Finding type | Target | When |
|---|---|---|
| Personal preference (style, workflow, how they like to work) | User-level instructions | Applies across projects |
| Correction to AI behaviour | User-level instructions or memory | Changes how the AI works with this user |
| Codebase convention (naming, patterns, architecture) | Project-level instructions | Applies to all AI sessions in this repo |
| Project context, goals, deadlines | Project-level instructions or memory | Non-obvious context not derivable from code/git |
| External resource pointers | Project-level instructions or memory | URLs, dashboards, ticket boards |
| Reusable multi-step workflow | Skill / custom command / prompt | Complex enough to warrant standalone definition |
| Automated trigger | Hooks / automation config | Should run automatically on specific events |
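The routing table reduces to a simple mapping from finding type to target. A sketch, assuming Claude Code-style paths (the labels and destinations are illustrative, not prescribed by any tool):

```shell
#!/bin/sh
# Sketch: map a finding type to its persistence target.
# Labels and target paths are illustrative assumptions.
route_finding() {
  case "$1" in
    personal-preference|ai-behaviour)  echo "$HOME/.claude/CLAUDE.md" ;;
    codebase-convention|project-context|external-resource)  echo "CLAUDE.md" ;;
    workflow)  echo ".claude/commands/" ;;
    trigger)   echo ".claude/hooks/" ;;
    *)         echo "unknown" ;;
  esac
}
```

For example, `route_finding codebase-convention` prints `CLAUDE.md`, keeping project conventions out of user-level config.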
## Session Findings
### User-Level (global preferences / behaviour corrections)
1. **[Title]** — [what happened] → [correct approach]
Evidence: [conversation reference]
### Project-Level (codebase conventions / context)
1. **[Title]** — [convention or context to add/update]
Target: [which file]
### Skills / Commands / Prompts
1. **[Title]** — [workflow to create/modify]
### No Action Needed
- [Findings already covered or too ephemeral to persist]
Stop here. Ask the user which findings to act on. Do not proceed until they confirm.
For each approved finding:
- Check for duplicates first. Read the target file and confirm the rule doesn't already exist in different words.
- Prefer updates over additions. Extend an existing section rather than creating a new one.
- Keep entries concise and actionable. A rule should be one or two sentences. Include "why" only when the reason isn't obvious.
- For new files: follow whatever convention the project already uses. If no instruction files exist yet, propose one at the project root and confirm the format with the user.
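The duplicate check can be as simple as grepping the target file for a keyword from the proposed rule before appending anything. A minimal sketch (the keyword `snake_case` and the target file name are hypothetical examples):

```shell
#!/bin/sh
# Sketch: check whether a rule's topic is already mentioned in the target
# file before adding a new entry. Keyword and target are illustrative.
rule_already_mentioned() {
  target=$1    # e.g. CLAUDE.md
  keyword=$2   # e.g. "snake_case"
  grep -qi "$keyword" "$target" 2>/dev/null
}

if rule_already_mentioned CLAUDE.md "snake_case"; then
  echo "possible overlap: read the matching section before adding"
else
  echo "no existing mention found"
fi
```

A keyword hit is only a signal, not proof of duplication; read the matching section to decide between extending it and adding a new entry.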
After implementation, report:
- Changes made (with file paths)
- Findings deferred or rejected (and why)
- Any existing docs that were updated rather than duplicated
Keep it brief. The diffs tell the story.
- **No speculative improvements.** Every finding must have conversation evidence.
- **No duplicates.** Always check existing docs before adding.
- **Route precisely.** Don't put personal preferences in project docs. Don't put project conventions in user-level config.
- **Prefer updates over additions.** Extend existing sections rather than creating new ones.
- **Ephemerality test.** If a finding won't matter in a week, skip it.