This document provides specific instructions for your operation within this codebase. Please adhere to these guidelines strictly.
Before writing any code, you must create an implementation plan and get it approved.
- Create `PLAN.md`: Write your step-by-step plan to a `PLAN.md` file in the root directory. If a `PLAN.md` already exists, overwrite it.
- Wait for Approval: Do not start implementing until the plan has been reviewed and approved.
- Outline Different Approaches: If there are multiple, fundamentally different implementation strategies, present them in the plan. For each, list the pros and cons. Do not include minor variations.
## Context
<what we’re changing and why>
## Approaches
<mention the substantially different design/refactor/implementation approaches, each with its pros and cons. If there is only one viable approach, mention that alone; do not invent alternatives just to fill this section>
## Recommendation
<chosen approach, if there are multiple, plus 2–4 lines of reasoning>
<if there is only one approach, skip this section>
## Proposed Code Changes
<list every file you plan to change (grouped if necessary), with a description of the changes to be made to each file>
- Follow Existing Style: Strictly adhere to the coding style, conventions, and patterns already present in the codebase. Favor consistency with nearby code.
- Pay close attention to variable naming, function signatures, file structure, and commenting style.
- Pragmatic Choices: Think critically to identify the most pragmatic solution that aligns with our existing coding standards.
- If a similar problem has already been solved in the codebase, adapt the same logic when it makes sense.
- Ask for Clarification: If any part of a request is unclear or ambiguous, you must ask for clarification. Do not make assumptions.
- Point Out Conflicts: If your proposed change conflicts with established patterns in this repo, point it out with code references and propose options before implementing. Bias toward consistency unless there is a clear net win.
- Explain Root Cause Analysis: When debugging or fixing an issue, provide a clear explanation of your root cause analysis. Describe how you identified the problem and why your proposed fix is the correct solution.
When asked to explain a concept, component, data structure, or workflow within the codebase, you must create a comprehensive TOPIC.md file that serves as an in-depth technical analysis.
- Create `TOPIC.md`: Write your detailed explanation in a `TOPIC.md` file in the root directory. If a `TOPIC.md` file already exists, overwrite it.
- Go Beyond Surface Level: The goal is to provide a clear, in-depth understanding. Don't just describe what the code does - explain why it's designed that way, what problems it solves, its architecture, and how the relevant parts interact.
- Educational Depth: Write like you're teaching a fellow engineer. Include context, reasoning, and implementation details.
- Connect Theory to Practice: When relevant, explain the underlying computer science concepts or patterns being used with code examples, step-by-step breakdowns, and concrete examples to illustrate concepts.
- Writing Style: Use simple, clear English without complicated phrasing, metaphors, or obscure references. Do not compromise on technical precision.
- Link to Code: Reference key files and directories to connect the explanation to the actual codebase.
- Don't use LaTeX syntax; use Unicode math symbols instead. For instance, use Σ instead of \sum, λ instead of \lambda, and so on. This reads better in plain text.
## Overview
<Provide a comprehensive summary that explains:>
- <What this component/concept is and its primary purpose>
- <Why it exists - what problem does it solve?>
- <How it fits into the larger system architecture>
- <What makes it technically interesting or challenging>
## Underlying Theory
<When the implementation is based on specific CS (or math) concepts, algorithms, or design patterns, explain them:>
- <What is the theoretical foundation?>
- <Why is this approach used instead of alternatives?>
- <Provide a simple example (code or math) or analogy to make it accessible>
- <If helpful, show a simplified implementation to illustrate the concept>
## Key Components & Files
<list the most important files, directories, classes, or functions related to this topic. Briefly describe the role of each.>
- `path/to/file1.c`: <description>
- `path/to/directory/`: <description>
## How it Works
<Provide a comprehensive explanation of the implementation in this codebase, covering:>
### Core Implementation Flow
<Explain the workflow, data flow, or interaction between the key components. Use a step-by-step or narrative format to describe the process>
### Key Technical Aspects
<Cover important implementation details: Explain key data structures, their design rationale, critical algorithms used:>
- <Error handling & edge cases>
- <Performance considerations and optimizations>
- <Memory management patterns>
- <Interaction with other system components>
## Question Highlights
<Fully answer user questions or clarify misunderstood parts. Don’t just summarize—give a detailed, direct explanation that resolves the confusion, with concrete examples or references if appropriate>
## Key Code Snippets
<if a small, specific code block is crucial for understanding, include it here with a brief explanation. Do not paste entire files.>
When a request involves understanding, analyzing, or summarizing any aspect of the project that resides on GitHub, you must leverage the github MCP tool. This is the exclusive way to access up-to-date, rich context about the remote repository.
Do not write ad-hoc Python/JS scripts or call the GitHub REST/GraphQL APIs directly unless the user explicitly asks for it. If the github MCP tool is unavailable, stop and report the exact error (don't fall back to Python).
You must use the github tool in the following situations:
- Codebase Deep Dive:
- Goal: Gaining context on files, directories, or overall architecture that isn't fully reflected locally (e.g., changes recently merged on the main branch).
- Action: Use the tool to fetch the remote file contents, commit history, and the state of different branches.
- Analyzing Pull Requests (PRs):
- Goal: Summarizing, reviewing, or explaining changes currently under discussion.
- Action: Pull details like the PR description, list of changed files, associated labels, reviewers, current status checks, and any linked issues or discussions. This provides the necessary human-centric context (the why behind the code).
- Investigating Issues and Discussions:
- Goal: Understanding reported bugs, proposed features, or design decisions.
- Action: Retrieve the full thread of an issue or discussion, including the original post, all comments, and any associated milestones or links to code. This is crucial for tracing the origin and resolution of a feature or problem.
- Assessing Project Health/Activity:
- Goal: Determining recent project activity, identifying high-priority items, or listing current development focus.
- Action: Query for open PRs, recent merges, or stale issues based on relevant labels (e.g., `bug`, `feature`, `good first issue`).
Before formulating an answer, always follow these steps to ensure a safe and accurate response:
- Identify Target Repository:
- Derive the OWNER/REPO from the local remotes (`upstream` preferred, then `origin`). This is the default target for all remote queries.
- Prioritize Context Retrieval:
- Start with high-leverage, read-only queries to gather the surrounding information (PRs, Issues, Discussions) before looking at the raw code.
- Generate Structured Report:
- Create a detailed analysis report file based on the user's request:
- For a Pull Request (PR) analysis: create `PR<number>_REPORT.md` (e.g., `PR123_REPORT.md`).
- For an Issue analysis: create `ISSUE<number>_REPORT.md` (e.g., `ISSUE456_REPORT.md`).
- For a general codebase analysis: create `<remote-name>_REPORT.md` (e.g., `origin_REPORT.md`).
- Overwrite the file if it already exists.
- Safety First:
- Only use read-only operations unless the user has explicitly requested a modification (like creating a comment or issue) and your current authorization scope supports it. Do not modify remote resources without a direct instruction.
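The repository-identification step above can be sketched as a short shell snippet. This is an illustration, not part of the required workflow: the exact sed pattern is an assumption that the remote uses a standard SSH (`git@host:owner/repo.git`) or HTTPS (`https://host/owner/repo.git`) URL form.

```shell
# Sketch: derive OWNER/REPO from the local git remotes, preferring upstream.
# Falls back to origin when no upstream remote is configured.
url=$(git remote get-url upstream 2>/dev/null || git remote get-url origin)
# Strip the host prefix (SSH or HTTPS form) and any trailing .git suffix.
echo "$url" | sed -E 's#(git@[^:]+:|https?://[^/]+/)##; s#\.git$##'
```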
These MCP tools are configured in the ~/.codex/config.toml file. Check this file for the invocation command of each tool.
You have access to the github mcp server. It lets you pull repository context directly from GitHub—issues, pull requests, discussions, branches, files—so your answers reflect what’s actually happening in the project, not just the code on disk.
Gemini CLI is available on the system as the gemini command. Use it only when the task needs a whole-project view or requires reading many files that would exceed the normal context window:
- Scan a repo or many directories at once
- Understand project-wide architecture, dependencies, or recurring patterns
- Cross-reference many large files
- Verify feature or security controls across the codebase
- Produce structured, machine-readable summaries for follow-up steps (JSON)
These are exactly the CLI's sweet spots, supported by its non-interactive `--prompt`/`-p` mode and large context window.
```shell
# Basic text output
gemini -p "Explain the architecture of this codebase"

# JSON output (recommended for parsing)
gemini -p "Summarize project layout and key modules as JSON" --output-format json

# Pick a model explicitly (deeper insights)
gemini -m gemini-2.5-pro -p "High-level repo overview" --output-format json
```
`-p`/`--prompt` is the official flag for one-shot prompts; `--output-format json` returns structured results.
- Prefer `gemini -m gemini-2.5-pro -p "<prompt>" --output-format json` and parse the JSON.
- When including code, use `@` paths inside the prompt or set `--include-directories`.
- Do not pass `--yolo`.
- For repo-wide checks, send small, crisp asks and request structured output (keys: `files`, `findings`, `summary`).
- If the task is local to the current file or a tiny set of files, stay in-editor and avoid spawning the CLI.
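Extracting the answer from the JSON output is usually a one-line `jq` call. The top-level `response` key used here is an assumption about the CLI's JSON envelope; inspect the raw output first if your version structures it differently.

```shell
# Sketch: run a one-shot prompt and pull out the answer field.
# "response" is assumed to be the key holding the model's text.
gemini -p "Summarize project layout as JSON" --output-format json > out.json
jq -r '.response' out.json
```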
You can inject files or directories directly in the prompt using @-commands (git-aware, respects ignores), or include workspace roots via flags.
```shell
# Inject a single file or a directory into the prompt
gemini -p "@README.md Summarize this file"
gemini -p "@src/ Summarize architecture and entry points"

# Combine multiple targets
gemini -p "@src/ @include/ Identify all logging call sites"

# Or add directories via flags, then ask your question
gemini --include-directories src,include \
  -p "List subsystems and their dependency edges, output JSON" \
  --output-format json
```
`@<path>` and directory expansion are built into Gemini CLI's prompt parser; `--include-directories` loads multiple roots.
Use short, specific checks that the CLI can answer from many files at once:
```shell
# Feature verification
gemini -p "@src/ @include/ Is logging implemented consistently? Show files and functions."

# Security controls
gemini -p "@src/ @lib/ Are there bounds checks for all parsing paths? List findings."

# Pattern detection
gemini -p "@src/ List all allocation functions and their free/dispose counterparts."

# Test coverage bridge
gemini -p "@src/parser/ @tests/ Is the parser fully covered? Point out gaps."

# Python typing sweep
gemini -p "@src/ @lib/ Where are type hints missing? Return a file->issues JSON map."

# C syscall hygiene
gemini -p "@src/ @headers/ Check error handling for all system calls. Show violations."
```
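A file->issues map like the one the typing sweep requests can be post-processed with `jq` in a follow-up step. The map contents below are made up for illustration; only the filtering pattern matters.

```shell
# Sketch: consume a hypothetical file -> issues JSON map from the typing sweep.
cat > issues.json <<'EOF'
{"src/app.py": ["missing return type on main"], "lib/util.py": []}
EOF

# Print only files that have at least one finding, with a per-file count.
jq -r 'to_entries | map(select(.value | length > 0))
       | .[] | "\(.key): \(.value | length) issue(s)"' issues.json
```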