This adaptive workflow systematically diagnoses and fixes bugs in unknown codebases through strategic sub-agent delegation and evidence-based investigation.
I need to fix [BUG DESCRIPTION] in this unfamiliar codebase. Follow this adaptive bug fixing workflow with strategic sub-agent delegation:
**PHASE 0 - BUG CLASSIFICATION & CONTEXT (Main Agent):**
1. Check existing knowledge graph for this codebase and related bug patterns
2. Analyze the bug report to determine:
- **Bug Type**: Crash, logic error, performance, UI issue, integration failure, data corruption, etc.
- **Available Evidence**: Error messages, stack traces, reproduction steps, failing tests, user reports
- **Scope Indicators**: Single component, multi-component, system-wide impact
- **Urgency Level**: Critical system failure, feature malfunction, edge case issue
3. Create initial todo breakdown using todo_write
4. Select appropriate investigation strategy based on available evidence
**PHASE 1A - ERROR-DRIVEN INVESTIGATION (3 Parallel Sub-Agents):**
5. Deploy THREE parallel sub-agents simultaneously:
- Sub-Agent A: Start from exact error point and trace backwards (bottom-up analysis)
- Sub-Agent B: Analyze call stack and execution flow leading to the error
- Sub-Agent C: Investigate data/state conditions that triggered the problematic code path
6. Main Agent: Synthesize findings to identify root cause vs symptoms, create Context DNA for Phase 2
7. **CHECKPOINT 1**: If checkpoints enabled, present investigation findings and root cause hypothesis
**PHASE 1B - SYMPTOM-DRIVEN INVESTIGATION (3 Parallel Sub-Agents):**
8. Deploy THREE parallel sub-agents simultaneously:
- Sub-Agent A: Reproduce the issue systematically and identify exact failure point
- Sub-Agent B: Compare expected vs actual behavior in the failing component areas
- Sub-Agent C: Trace data flow and logic that should produce correct behavior
9. Main Agent: Synthesize findings to identify where logic diverges from expectations, create Context DNA for Phase 2
10. **CHECKPOINT 1**: If checkpoints enabled, present reproduction results and failure analysis
**PHASE 1C - TEST-FAILURE-DRIVEN INVESTIGATION (3 Parallel Sub-Agents):**
11. Deploy THREE parallel sub-agents simultaneously:
- Sub-Agent A: Analyze failing test cases and their expectations in detail
- Sub-Agent B: Debug the exact code path that failing tests exercise
- Sub-Agent C: Identify recent changes or conditions that broke the tested functionality
12. Main Agent: Synthesize findings to pinpoint regression cause, create Context DNA for Phase 2
13. **CHECKPOINT 1**: If checkpoints enabled, present test analysis and regression findings
**PHASE 2 - ROOT CAUSE IDENTIFICATION & SOLUTION PLANNING (Main Agent):**
14. Apply bottom-up principle: Confirm lowest-level failure and trace impact upward
15. For multi-component issues: Map component boundaries, interfaces, and data flow
16. Plan minimal, targeted fix approach that addresses root cause (not just symptoms)
17. Identify all potentially affected areas and side effects
18. Create comprehensive test strategy to verify fix and prevent regression
19. **CHECKPOINT 2**: If checkpoints enabled, present root cause analysis and fix strategy
**PHASE 3 - TARGETED IMPLEMENTATION (Single Sub-Agent):**
20. Deploy ONE focused sub-agent for implementation with Context DNA from Phase 2:
- Receive Context DNA with root cause analysis and fix guidance
- Fix the identified root cause with minimal code changes
- Follow existing code patterns and error handling approaches
- Implement/update tests specifically for the bug scenario
- Generate Context DNA for verification phase
21. Main Agent: Track progress, update todos, store bug pattern knowledge, create Context DNA for Phase 4
22. **CHECKPOINT 3**: If checkpoints enabled, review implemented fix before verification
**PHASE 4 - COMPREHENSIVE VERIFICATION (Sequential Sub-Agents):**
23. Deploy THREE sub-agents sequentially:
- Sub-Agent A: Receive Context DNA from Phase 3, verify original bug is completely resolved using reproduction steps
- Sub-Agent B: Receive Context DNA from Phase 3, run comprehensive regression testing to ensure no new issues are introduced
- Sub-Agent C: Receive Context DNA from Phase 3, test edge cases and boundary conditions around the fix area
24. Main Agent: Handle any verification failures, update knowledge graph with bug patterns and solutions
**WORKFLOW-SPECIFIC DELEGATION:**
- **Main Agent**: Bug analysis, strategy selection, root cause synthesis, coordination, Context DNA management
- **Investigation Sub-Agents**: Specialized analysis (error tracing, reproduction, data conditions)
- **Implementation Sub-Agent**: Focused fix implementation with minimal changes
- **Verification Sub-Agents**: Testing, validation, regression checking
**SUB-AGENT INVESTIGATION SCOPE:**
- **CRITICAL**: Investigation sub-agents must follow bottom-up approach - start from failure point and work backwards
- Each investigation sub-agent focuses on their assigned aspect (error tracing, reproduction, data analysis)
- Sub-agents must NOT attempt fixes during investigation - only gather and analyze evidence
- If sub-agent identifies potential cause, report findings without implementing solutions
- Sub-agents should IGNORE unrelated issues found during investigation unless directly connected to the bug
**IMPLEMENTATION PHASE ISOLATION:**
- Implementation sub-agent must ONLY modify code directly related to the identified root cause
- Must NOT run global checks during implementation (same rules as feature development)
- Focus on minimal, surgical changes rather than broad refactoring
- Validate changes only against the specific bug scenario, not entire codebase
**BUG FIXING SCOPE ISOLATION:**
- **Investigation sub-agents focus only on their assigned analysis** (error tracing, reproduction, data conditions)
- **Implementation sub-agent makes minimal surgical changes** to address root cause only
- **No global checks during implementation** - focus on specific bug scenario validation
- **Verification sub-agents test systematically** (bug resolution, regression, edge cases)
**KNOWLEDGE ENHANCEMENT:**
- Store bug patterns, root causes, and successful debugging techniques for future reference
- Build knowledge of code areas prone to specific bug types
**HUMAN FEEDBACK OPTIONS:**
- **Default**: Fully autonomous execution with final summary
- **Add "WITH CHECKPOINTS"**: Pause after each major phase for review and approval
- **Add "REVIEW INVESTIGATION"**: Pause only after Phase 1 to confirm root cause before fixing
- **Add "REVIEW FIX"**: Pause only after Phase 3 to review implemented solution before verification
**CHECKPOINT BEHAVIOR:**
When pausing, provide:
1. Clear summary of investigation findings or implemented changes
2. Root cause analysis with supporting evidence
3. Proposed fix approach with rationale
4. Potential risks or side effects identified
5. Option to continue autonomously or modify approach
**BUG TO FIX:**
[BUG DESCRIPTION] [OPTIONAL: WITH CHECKPOINTS | REVIEW INVESTIGATION | REVIEW FIX]
Begin with Phase 0 bug classification and proceed with the appropriate investigation strategy based on available evidence.
---
Replace [BUG DESCRIPTION] with a detailed description of the bug, including:
- Symptoms observed
- Error messages (if any)
- Reproduction steps
- Expected vs actual behavior
- Environment details
---
Choose the feedback level by adding one of the optional modifiers:
- No modifier = fully autonomous
- WITH CHECKPOINTS = pause after each major phase
- REVIEW INVESTIGATION = pause only after investigation to confirm root cause
- REVIEW FIX = pause only after implementation to review solution
---
The workflow automatically adapts based on available evidence:
- Error messages/stack traces → Error-driven investigation
- Behavioral issues → Symptom-driven investigation
- Test failures → Test-failure-driven investigation
---
For subsequent bugs in the same codebase, the workflow leverages stored bug patterns and debugging knowledge.
- Adaptive Investigation: Automatically selects optimal strategy based on available evidence
- Bottom-Up Analysis: Starts from failure point and traces backwards to find root cause
- Surgical Fixes: Minimal changes address root cause, not symptoms
- Specialized Analysis: Parallel investigation sub-agents provide focused expertise
- Evidence-Based: Systematic approach prevents guesswork and symptom fixes
- Comprehensive Verification: Sequential testing ensures fixes work without regressions
**Error-Driven (Phase 1A)** - Use when you have:
- Stack traces
- Exception messages
- Crash logs
- Runtime errors
**Symptom-Driven (Phase 1B)** - Use when you have:
- User behavior reports
- Incorrect outputs
- UI/UX issues
- Performance problems
**Test-Failure-Driven (Phase 1C)** - Use when you have:
- Failing unit/integration tests
- CI/CD pipeline failures
- Regression after changes
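The evidence-to-strategy mapping above can be sketched as a small selection helper. This is purely illustrative: the evidence flag names and the priority ordering (error artifacts first, then failing tests, then symptoms as the fallback) are assumptions, not part of the workflow specification.

```javascript
// Hypothetical sketch: map available bug evidence to one of the three
// investigation phases listed above. Flag names are illustrative.
function selectInvestigationStrategy(evidence) {
  // Error artifacts (stack traces, crash logs) are treated as the
  // strongest signal here, so they are checked first — an assumed priority.
  if (evidence.stackTraces || evidence.exceptionMessages ||
      evidence.crashLogs || evidence.runtimeErrors) {
    return "Phase 1A - Error-Driven";
  }
  // Failing tests and CI failures point at a specific regression.
  if (evidence.failingTests || evidence.ciFailures ||
      evidence.regressionAfterChanges) {
    return "Phase 1C - Test-Failure-Driven";
  }
  // Default: behavioral reports, wrong outputs, performance problems.
  return "Phase 1B - Symptom-Driven";
}
```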
**Investigation → Implementation Context DNA (example):**
**CONTEXT DNA FROM INVESTIGATION:**
**Bug Root Cause Identified:**
The issue occurs in /src/utils/dateParser.js line 23 where timezone conversion fails for edge case dates. The function assumes UTC but receives local timezone data from the frontend DatePicker component.
**Code Flow Traced:**
User form submission → /pages/api/events POST → /lib/eventHelpers.js line 45 calls dateParser → /src/utils/dateParser.js line 23 (failure point). Error propagates as 500 status with "Invalid date format" message.
**Failure Conditions:**
Occurs when user selects dates during daylight saving transitions or enters dates in non-UTC timezone. Function fails on Date.parse() call with timezone-aware ISO strings.
**Fix Approach Needed:**
Minimal change required in dateParser.js to detect and handle timezone information. Add timezone normalization before Date.parse(). No database schema changes needed.
**Testing Requirements:**
Focus on edge cases: DST transitions (March/November), different timezone inputs, null dates, and malformed date strings. Existing test file at /tests/utils/dateParser.test.js can be extended.
**Integration Points:**
Frontend DatePicker at /components/forms/DateInput.jsx sends local timezone dates. API expects UTC. Error handling in eventHelpers.js line 52 is sufficient for graceful degradation.
**COMMON PITFALLS:**
- Investigation agents attempting fixes before understanding root cause
- Fixing symptoms instead of underlying problems
- Introducing regressions through overly broad changes
- Running global checks during focused implementation
- Ignoring bottom-up analysis in favor of top-down assumptions
- Parallel agents interfering with each other's investigation work
- Missing edge cases and boundary conditions during verification
- Not capturing debugging knowledge for future similar bugs
- Context loss between investigation and implementation phases
- Verification agents lacking understanding of the specific fix approach
**BEST PRACTICES:**
- **Always reproduce first** - If you can't reproduce, you can't verify the fix
- **Follow the evidence** - Let stack traces and logs guide investigation
- **Start small** - Make minimal changes that address the root cause
- **Test thoroughly** - Verify both the fix and absence of regressions
- **Document patterns** - Capture debugging insights for future reference