This workflow provides comprehensive code review coverage by deploying specialized sub-agents across multiple review domains. The approach ensures systematic evaluation of security, performance, architecture, testing, and functionality through parallel domain analysis and structured integration.
I need to perform a thorough code review for [CODE CHANGE DESCRIPTION] in [REPOSITORY/BRANCH]. Follow this comprehensive code review workflow with specialized sub-agent delegation:
**PHASE 0 - REVIEW SCOPE ASSESSMENT (Main Agent):**
1. Check existing knowledge graph for this codebase and previous review patterns
2. Analyze the scope, complexity, and impact areas of the code changes
3. Identify files changed, lines modified, and affected system components
4. Determine review depth needed and critical focus areas based on change type
5. Create initial todo breakdown using todo_write
**PHASE 1 - MULTI-DOMAIN ANALYSIS (6 Parallel Sub-Agents):**
6. Deploy SIX sub-agents in parallel for comprehensive domain coverage:
- Security Review Sub-Agent: Analyze security implications, vulnerabilities, attack vectors, authentication, authorization, and data validation
- Performance Review Sub-Agent: Evaluate performance impact, algorithm efficiency, database optimization, resource usage, and frontend performance
- Architecture Review Sub-Agent: Assess design patterns, maintainability, code organization, dependency management, and scalability
- Testing Review Sub-Agent: Review test coverage, test quality, edge case handling, and integration completeness
- Functionality Review Sub-Agent: Verify business logic correctness, requirement fulfillment, and user experience impact
- Documentation Review Sub-Agent: Check code comments, documentation completeness, API documentation, and clarity
7. Main Agent: Process domain findings and create Agent Briefing for Phase 2
8. **CHECKPOINT 1**: If checkpoints enabled, present domain analysis summary and critical findings
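The six-way fan-out in Phase 1 can be sketched as a simple parallel dispatch. The `run_domain_review` function is a hypothetical stand-in for whatever delegation call the agent framework provides; only the six domain names come from the workflow above.

```python
from concurrent.futures import ThreadPoolExecutor

DOMAINS = ["security", "performance", "architecture",
           "testing", "functionality", "documentation"]

# Hypothetical stand-in for invoking one specialized review sub-agent.
# A real implementation would call the agent framework's delegation API.
def run_domain_review(domain: str, change_summary: str) -> dict:
    return {"domain": domain,
            "findings": f"{domain} analysis of: {change_summary}"}

def phase1_domain_analysis(change_summary: str) -> dict:
    """Dispatch all six domain reviews in parallel and collect findings."""
    with ThreadPoolExecutor(max_workers=len(DOMAINS)) as pool:
        futures = {d: pool.submit(run_domain_review, d, change_summary)
                   for d in DOMAINS}
        return {d: f.result() for d, f in futures.items()}

findings = phase1_domain_analysis("JWT auth refactor")
```

The main agent then merges `findings` into the Agent Briefing handed to Phase 2.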
**PHASE 2 - CROSS-DOMAIN IMPACT ANALYSIS (Main Agent + 1 Sub-Agent):**
9. Main Agent: Synthesize findings from all domain reviews, identify cross-cutting concerns and interaction effects
10. Deploy ONE sub-agent to receive the comprehensive Agent Briefing, analyze interactions between the review domains, and assess the overall code change impact and system-wide implications
11. Main Agent: Create Agent Briefing for Phase 3 with risk assessment and recommendation priorities
12. **CHECKPOINT 2**: If checkpoints enabled, present integrated analysis and impact assessment
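The cross-domain synthesis in Phase 2 can be illustrated with a toy pass that flags pairs of domains whose findings mention the same component. Keyword overlap is only a stand-in for the integration sub-agent's actual reasoning:

```python
from itertools import combinations

def cross_cutting_concerns(domain_findings: dict) -> list:
    """Flag domain pairs whose findings share significant keywords."""
    stopwords = {"the", "and", "of", "new"}
    concerns = []
    for (d1, f1), (d2, f2) in combinations(domain_findings.items(), 2):
        shared = set(f1.lower().split()) & set(f2.lower().split()) - stopwords
        if shared:
            concerns.append((d1, d2, sorted(shared)))
    return concerns

concerns = cross_cutting_concerns({
    "security": "new JWT validation logic",
    "performance": "JWT validation latency impact",
    "testing": "integration test gaps",
})
```

Here the security/performance overlap on JWT validation would surface as a cross-cutting concern for the integration sub-agent to analyze in depth.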
**PHASE 3 - RISK ASSESSMENT & RECOMMENDATIONS (3 Parallel Sub-Agents):**
13. Deploy THREE parallel sub-agents for recommendation synthesis:
- Critical Issues Sub-Agent: Receive Agent Briefing, prioritize blocking issues that must be fixed before merge
- Improvement Sub-Agent: Receive Agent Briefing, suggest enhancements, best practices, and optimization opportunities
- Future Impact Sub-Agent: Receive Agent Briefing, assess long-term maintainability, technical debt, and architectural implications
14. Main Agent: Validate recommendations, resolve conflicts, create Agent Briefing for Phase 4
15. **CHECKPOINT 3**: If checkpoints enabled, review recommendations and priorities before report generation
**PHASE 4 - REVIEW REPORT GENERATION (3 Sequential Sub-Agents):**
16. Deploy THREE sub-agents sequentially for comprehensive reporting:
- Summary Sub-Agent: Receive Agent Briefing, create executive summary with key findings and overall assessment
- Action Items Sub-Agent: Receive Agent Briefing, generate prioritized action list with specific guidance for developers
- Knowledge Sub-Agent: Receive Agent Briefing, extract code review patterns and lessons for future reviews
17. Main Agent: Compile final review report, update knowledge graph with review patterns
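Phase 4's sequential reporting can be sketched as a small pipeline in which each reporting step consumes the same briefing; all function and field names here are illustrative assumptions, not a real reporting API:

```python
# Hypothetical sequential pipeline for Phase 4 reporting.
def summary_agent(briefing: dict) -> str:
    return f"Executive summary covering {len(briefing['findings'])} domains"

def action_items_agent(briefing: dict) -> list:
    # Prioritized action list: lowest priority number = most urgent.
    return sorted(briefing["recommendations"], key=lambda r: r["priority"])

def knowledge_agent(briefing: dict) -> list:
    # Extract reusable review patterns for the knowledge graph.
    return [f"pattern: {r['issue']}" for r in briefing["recommendations"]]

def phase4_report(briefing: dict) -> dict:
    report = {"summary": summary_agent(briefing)}
    report["action_items"] = action_items_agent(briefing)
    report["patterns"] = knowledge_agent(briefing)
    return report

briefing = {
    "findings": {"security": "...", "performance": "..."},
    "recommendations": [
        {"issue": "JWT validation latency", "priority": 1},
        {"issue": "missing API docs", "priority": 3},
    ],
}
report = phase4_report(briefing)
```

The main agent would compile `report` into the final deliverable and push `report["patterns"]` into the knowledge graph.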
**WORKFLOW-SPECIFIC DELEGATION:**
- **Main Agent**: Scope assessment, Agent Briefing management, cross-domain synthesis, final report compilation
- **Domain Review Sub-Agents**: Comprehensive analysis within their expertise area including all sub-domains (security covers authentication/authorization/validation, performance covers algorithms/database/frontend, etc.)
- **Integration Sub-Agent**: Cross-domain impact analysis and system-wide assessment
- **Recommendation Sub-Agents**: Risk assessment and actionable guidance generation
- **Report Sub-Agents**: Structured documentation and knowledge extraction
**CODE REVIEW SCOPE ISOLATION:**
- **Each domain sub-agent focuses comprehensively on their area of expertise** and ignores issues outside their domain
- **Domain sub-agents perform deep technical analysis** within their entire domain without worrying about other areas
- **Cross-cutting concerns are handled by integration sub-agent** after domain analysis is complete
- **No sub-agent attempts to provide overall assessment** - that's the main agent's coordination role
**KNOWLEDGE ENHANCEMENT:**
- Store code review patterns, common issues, and effective recommendations for future reviews
- Build expertise in codebase-specific review criteria and quality standards
- Learn from reviewer feedback to improve future review accuracy
**HUMAN FEEDBACK OPTIONS:**
- **Default**: Fully autonomous execution with final review report
- **Add "WITH CHECKPOINTS"**: Pause after each major phase for review and guidance
- **Add "REVIEW ANALYSIS"**: Pause only after Phase 1 to confirm domain analysis findings
- **Add "REVIEW RECOMMENDATIONS"**: Pause only after Phase 3 to review recommendations before report generation
**CHECKPOINT BEHAVIOR:**
When pausing, provide:
1. Clear summary of review findings in completed phases
2. Domain-specific analysis results with severity assessments
3. Identified critical issues and improvement opportunities
4. Proposed next steps and focus areas for remaining phases
5. Option to continue autonomously or adjust review focus
**CODE CHANGES TO REVIEW:**
[CODE CHANGE DESCRIPTION] in [REPOSITORY/BRANCH] [OPTIONAL: WITH CHECKPOINTS | REVIEW ANALYSIS | REVIEW RECOMMENDATIONS]
Begin with Phase 0 scope assessment and proceed systematically through each phase with comprehensive domain coverage.
Replace placeholders with specific information:
- [CODE CHANGE DESCRIPTION] - Brief description of what was changed (e.g., "User authentication system refactor")
- [REPOSITORY/BRANCH] - Specific repository and branch/PR being reviewed (e.g., "myapp/feature-auth-refactor")

Choose feedback level by adding one of the optional modifiers:
- No modifier = fully autonomous review
- WITH CHECKPOINTS = pause after each major phase for human input
- REVIEW ANALYSIS = pause only after domain analysis to confirm findings
- REVIEW RECOMMENDATIONS = pause only after recommendations to review before report

Ensure code changes are accessible with full context of the codebase for comprehensive analysis.

Prepare for comprehensive output - this workflow generates detailed review reports with multiple perspectives.
Key Benefits:
- Comprehensive Coverage: Six specialized domain reviews ensure no aspect is overlooked
- Deep Technical Analysis: Domain sub-agents provide expert-level scrutiny across their complete area of expertise
- Parallel Efficiency: Multiple review dimensions analyzed simultaneously for speed
- Systematic Consistency: Structured approach prevents reviewer bias and missed issues
- Cross-Domain Insights: Integration analysis catches issues that span multiple domains
- Actionable Output: Prioritized recommendations with clear guidance for developers
- Knowledge Building: Learns review patterns and improves future review quality
Security Review Focus:
- Authentication and authorization mechanisms
- Input validation and sanitization
- SQL injection, XSS, and other vulnerability vectors
- Secrets management and data protection
- Access control and permission models
Performance Review Focus:
- Algorithm efficiency and time complexity
- Database query optimization and indexing
- Memory usage and resource management
- Frontend performance (loading, rendering, caching)
- API response times and throughput
Architecture Review Focus:
- Design pattern usage and consistency
- Code organization and modularity
- Dependency management and coupling
- Scalability and extensibility considerations
- SOLID principles and clean code practices
Testing Review Focus:
- Test coverage breadth and depth
- Test quality and reliability
- Edge case handling and error scenarios
- Integration test completeness
- Test maintainability and clarity
Phase 1 → Phase 2 Agent Briefing:
**AGENT BRIEFING FROM DOMAIN ANALYSIS:**
**Domain Findings Summary:**
Security: New JWT token validation logic, session management changes, input validation improvements, authentication and authorization flow updates
Performance: Modified user queries, added caching layer, async operation changes, algorithm optimizations, database indexing improvements
Architecture: Refactored service layer, new dependency injection pattern, improved modularity, scalability enhancements
Testing: Added unit tests but integration test gaps identified, edge case coverage incomplete
Functionality: User login flow updated, password reset logic modified, business logic correctness verified
Documentation: Code comments updated but API documentation needs refresh, inline documentation improved
**Critical Cross-Domain Areas:**
1. Security-Performance intersection: JWT validation performance impact
2. Architecture-Testing gap: Service layer changes need integration test coverage
3. Functionality-Documentation mismatch: New features lack API documentation
**Integration Analysis Focus:**
Analyze how authentication changes affect system performance, how architectural changes impact testing strategy, and overall system coherence.
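A briefing like the one above can be carried between phases as a small structured record rather than free text. The field names here are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class AgentBriefing:
    """Illustrative container for findings handed between phases."""
    domain_findings: dict      # domain name -> summary of findings
    cross_domain_areas: list   # intersections needing integration analysis
    focus: str = ""            # instructions for the receiving sub-agent

briefing = AgentBriefing(
    domain_findings={"security": "new JWT validation logic",
                     "performance": "added caching layer"},
    cross_domain_areas=["JWT validation performance impact"],
    focus="Analyze how authentication changes affect system performance",
)
```

Keeping the briefing structured makes it easy for each receiving sub-agent to pick out only the fields relevant to its task.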
Phase 2 → Phase 3 Agent Briefing:
**AGENT BRIEFING FROM INTEGRATED ANALYSIS:**
**Cross-Domain Issues Identified:**
- Authentication performance impact: New JWT validation adds 50ms per request
- Security-Testing gap: Authentication edge cases not covered in test suite
- Architecture-Performance trade-off: Service layer abstraction impacts query efficiency
**System-Wide Impact Assessment:**
- Low risk: Changes are well-contained within authentication module
- Medium impact: Performance degradation affects all authenticated endpoints
- High importance: Security improvements significantly reduce attack surface
**Critical Recommendations Needed:**
1. Performance optimization for JWT validation (blocking issue)
2. Test coverage for authentication edge cases (important)
3. Documentation update for new authentication flow (nice-to-have)
**Risk Assessment:**
Authentication changes are security-positive but performance-negative. Need optimization before merge.
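The Phase 3 triage implied by the briefing above can be sketched as sorting recommendations into blocking, important, and nice-to-have buckets. The severity labels are assumptions mirroring the example:

```python
def triage(recommendations: list) -> dict:
    """Group recommendations by severity for the final action list."""
    buckets = {"blocking": [], "important": [], "nice-to-have": []}
    for rec in recommendations:
        buckets[rec["severity"]].append(rec["issue"])
    return buckets

result = triage([
    {"issue": "JWT validation adds 50ms per request", "severity": "blocking"},
    {"issue": "auth edge cases untested", "severity": "important"},
    {"issue": "auth flow docs outdated", "severity": "nice-to-have"},
])
```

Blocking items gate the merge; the other buckets feed the prioritized action list in the final report.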
Common Review Pitfalls Addressed:
- Single reviewer limitations: Multiple specialized perspectives prevent expertise gaps
- Surface-level analysis: Domain sub-agents provide comprehensive technical scrutiny across their full expertise area
- Missing cross-cutting concerns: Integration phase catches issues spanning multiple domains
- Inconsistent standards: Systematic approach ensures uniform review quality
- Overwhelming feedback: Prioritized recommendations help developers focus on critical issues
- Review fatigue: Parallel execution and automation reduce human reviewer burden
- Knowledge loss: Systematic pattern capture improves future review efficiency
- Subjective bias: Structured domain analysis reduces reviewer personal bias
Review Best Practices:
- Focus on what matters: Each domain sub-agent provides comprehensive analysis within their expertise area
- Prioritize systematically: Critical security issues take precedence over style preferences
- Provide actionable feedback: Specific guidance rather than general criticism
- Consider context: Understand the purpose and constraints of the changes
- Balance thoroughness with efficiency: Comprehensive domain analysis where it adds value, focused review elsewhere
- Learn and improve: Capture patterns to make future reviews faster and more accurate
- Collaborate constructively: Frame feedback as improvement opportunities, not personal criticism