[Gemini CLI ASCII-art banner: "GEMINI"]
Tips for getting started:
1. Ask questions, edit files, or run commands.
2. Be specific for the best results.
3. /help for more information.
> find the prompts in this source code
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ SearchText 'prompt' in **/*.{ts,tsx,js} │
│ │
│ No matches found │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ I couldn't find any occurrences of the word "prompt" in the source code. It's possible that the
concept of a "prompt" is implemented using a different term in this codebase.
To help me narrow down the search, could you provide any more context about what kind of prompts
you're looking for? For example, are they related to user input, system messages, or something
else?
> try harder
✦ Of course. I'll broaden my search to look for related concepts and keywords that might lead me to
where prompts are being constructed or used. I'll start by looking for code related to the
underlying generative AI SDK, as that's a likely place to find prompt-related logic.
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ SearchText 'genai' in **/*.{ts,tsx} │
│ │
│ No matches found │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ I still can't find any files with "genai" in them. Let me try another angle. I'll search for
"model" and "generative", as those are also common terms in generative AI applications.
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ SearchText 'model' in **/*.{ts,tsx} │
│ │
│ No matches found │
│ ✔ SearchText 'generative' in **/*.{ts,tsx} │
│ │
│ No matches found │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ I've tried several keywords and haven't found anything yet. It seems the term "prompt" and related
keywords like "genai" or "model" are not explicitly used in the way I was expecting.
I'll change my approach. I'll inspect the project's dependencies to see what libraries are being
used. This should give us a clue about how the application interacts with any underlying AI
services. I'll start by reading the main package.json file.
╭───────────────────────────╮
│ ✔ ReadFile package.json │
╰───────────────────────────╯
✦ Okay, the package.json in the root directory shows that this is a monorepo with workspaces defined
in the packages/* directory. This means the interesting dependencies will be in the package.json
files of the individual packages.
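(For reference, a root package.json that declares npm workspaces looks roughly like the sketch below. Only the workspaces entry is confirmed by the transcript; the name and other fields are illustrative placeholders.)

```json
{
  "name": "gemini-cli-monorepo",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```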
I'll start by listing the contents of the packages directory to see what packages we're dealing
with.
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ ReadFolder packages │
│ │
│ Listed 2 item(s). │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ Okay, we have two packages: cli and core. It's likely that the core package contains the main logic
for interacting with the AI model, while the cli package handles the user interface.
I'll read the package.json file for the core package to see its dependencies. This should give us
a better idea of how the application communicates with the AI.
╭─────────────────────────────────────────╮
│ ✔ ReadFile packages/core/package.json │
╰─────────────────────────────────────────╯
✦ Aha! The package.json for the core package lists @google/genai as a dependency. This is the official
Google Generative AI SDK. This is a strong indicator that the prompt-related logic will be found in
the packages/core directory.
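(For orientation, the @google/genai SDK is normally driven through a GoogleGenAI client, roughly as in the sketch below. This illustrates the SDK's documented surface rather than code from this repository; the model name and prompt string are placeholders.)

```typescript
import { GoogleGenAI } from '@google/genai';

// Construct a client for the Gemini API (API key taken from the environment here).
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// A single-shot generation call: `contents` carries the prompt text.
const response = await ai.models.generateContent({
  model: 'gemini-2.5-pro', // placeholder model name, not necessarily what the CLI uses
  contents: 'Summarize this repository in one sentence.',
});

console.log(response.text);
```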
Now that I know what to look for, I'll search for usages of the @google/genai package within the
packages/core directory. This should lead me directly to the code that constructs and sends
prompts.
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ SearchText '@google/genai' within packages/core │
│ │
 │    Found 56 match(es)                                                                      │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ Excellent! That's a lot of matches. It seems like the @google/genai package is used extensively
throughout the packages/core directory.
Based on the file names, client.ts, contentGenerator.ts, and geminiChat.ts seem like the most
promising places to find the core prompt logic. I'll start by examining client.ts to see how the
Gemini client is being used.
╭───────────────────────────────────────────────╮
│ ✔ ReadFile packages/core/src/core/client.ts │
╰───────────────────────────────────────────────╯
✦ I've found it! The file packages/core/src/core/client.ts contains a method called getEnvironment
that constructs the initial context for the chat. It gathers information like the current date,
operating system, working directory, and file structure, and then combines it into a single string.
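(A minimal sketch of what a context builder along the lines of getEnvironment might look like, assuming only what the transcript describes: date, OS, and working directory folded into one string. The function name and body here are illustrative, not the actual client.ts code.)

```typescript
import os from 'node:os';

// Hypothetical stand-in for the getEnvironment logic described above:
// collect a few facts about the session and join them into one context string.
function buildEnvironmentContext(): string {
  return [
    `Today's date is ${new Date().toDateString()}.`,
    `My operating system is: ${os.platform()}`,
    `I'm currently working in the directory: ${process.cwd()}`,
    // The real method also appends a listing of the project's file structure.
  ].join('\n');
}
```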
  More importantly, I see a call to getCoreSystemPrompt within the startChat method. That function is
  imported from prompts.ts, which strongly suggests that is where the main system prompt is defined.
  I'll now read the contents of packages/core/src/core/prompts.ts to see the exact content of the
  system prompt.
╭────────────────────────────────────────────────╮
│ ✔ ReadFile packages/core/src/core/prompts.ts │
╰────────────────────────────────────────────────╯
✦ I've found the core system prompt for the Gemini CLI. It's located in the file
/private/tmp/gemini-cli/packages/core/src/core/prompts.ts, inside the getCoreSystemPrompt function.
This prompt defines the persona of the CLI agent, its core mandates, workflows, and operational
guidelines. It's a comprehensive set of instructions that governs how the agent behaves.
I also found a few other places where smaller, more dynamic prompts are constructed:
* `packages/core/src/core/client.ts`: The getEnvironment method assembles a prompt with context
about the user's environment, including the current date, OS, and working directory.
* `packages/core/src/core/client.ts`: The tryCompressChat method contains a prompt used to
summarize the conversation history.
Would you like to examine any of these other prompts in more detail?
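(To make the wiring concrete: with the @google/genai SDK, a system prompt is usually attached when a chat session is created, roughly as below. This is an illustrative use of the SDK's chat API, not the repository's actual startChat implementation; the model name and strings are placeholders.)

```typescript
import { GoogleGenAI } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Illustrative only: the string returned by something like getCoreSystemPrompt
// would be passed as the systemInstruction when the chat is created.
const chat = ai.chats.create({
  model: 'gemini-2.5-pro', // placeholder model name
  config: {
    systemInstruction: 'You are an interactive CLI agent...', // stand-in for the real system prompt
  },
});

const reply = await chat.sendMessage({ message: 'find the prompts in this source code' });
console.log(reply.text);
```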
> How about tool definitions?
✦ Of course. The system prompt I found earlier referenced several tools. Based on the file structure,
it's highly likely that the tool definitions are located in the packages/core/src/tools/ directory.
I'll start by listing the files in that directory to get an overview of the available tools.
╭───────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ ReadFolder packages/core/src/tools │
│ │
│ Listed 31 item(s). │
╰───────────────────────────────────────────────────────────────────────────────────────────╯
✦ That's a lot of files! It looks like each tool has its own file, along with a corresponding test
file. This is a good sign of a well-structured codebase.
To give you a better sense of how these tools are defined, I'll read the contents of a few of
them. I'll start with shell.ts, read-file.ts, and write-file.ts, as these seem like fundamental
tools.
╭─────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ ✔ ReadManyFiles Will attempt to read and concatenate files using patterns: `/private/tmp/gemini-… │
│ │
│ ### ReadManyFiles Result (Target Dir: `/private/tmp/gemini-cli`) │
│ │
│ Successfully read and concatenated content from **3 file(s)**. │
│ │
│ **Processed Files:** │
│ - `packages/core/src/tools/read-file.ts` │
│ - `packages/core/src/tools/shell.ts` │
│ - `packages/core/src/tools/write-file.ts` │
╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
✦ I've retrieved the source code for the shell, read-file, and write-file tools. Here's a summary of
what I've found:
* `shell.ts`: This file defines the ShellTool, which is responsible for executing shell commands.
It includes logic for validating parameters, handling command execution, and capturing output.
It also has a security feature that requires user confirmation before executing a command for
the first time.
* `read-file.ts`: This file defines the ReadFileTool, which reads the contents of a file. It
includes logic for handling different file types, reading specific line ranges, and validating
file paths.
* `write-file.ts`: This file defines the WriteFileTool, which writes content to a file. It
includes logic for creating new files, overwriting existing files, and handling file
permissions. It also has a feature that shows a diff of the changes before writing to the file,
and asks for user confirmation.
These files provide a clear picture of how the Gemini CLI interacts with the user's file system and
executes commands. The tool definitions are well-structured and include robust error handling and
security features.
Would you like me to examine any of the other tool files?
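(A hedged sketch of the general shape such a tool definition takes: a name and description surfaced to the model, a JSON-schema-style parameter declaration, an optional confirmation step, and an execute method. All names below are invented for illustration; the repository's real tools extend its own base classes.)

```typescript
// Invented for illustration; not the repository's actual interfaces.
interface ToolResult {
  llmContent: string;     // what is returned to the model
  displayOutput: string;  // what is shown to the user in the terminal
}

class EchoTool {
  name = 'echo';
  description = 'Echoes a message back to the user.';

  // JSON-schema-style declaration of the parameters the model may supply.
  parameterSchema = {
    type: 'object',
    properties: {
      message: { type: 'string', description: 'Text to echo back.' },
    },
    required: ['message'],
  };

  // Destructive tools (shell, write-file) would return true here so the CLI
  // can ask the user to confirm, or show a diff, before running.
  shouldConfirmExecute(_params: { message: string }): boolean {
    return false;
  }

  async execute(params: { message: string }): Promise<ToolResult> {
    return { llmContent: params.message, displayOutput: params.message };
  }
}
```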
> So it only has three tools that it uses? Read, write and shell?
✦ No, it has many more than just those three. I only showed you the source for read_file, write_file,
and run_shell_command as representative examples.
Based on the file listing of the packages/core/src/tools/ directory, the CLI also includes at least
the following tools:
* edit: To modify files programmatically.
* glob: To find files by pattern.
* grep: To search for content within files.
* ls: To list directory contents.
* memoryTool: To remember user-specific facts.
* read-many-files: To read multiple files at once.
* web-fetch: To get content from URLs.
* web-search: To perform a web search.
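(These tools ultimately have to be advertised to the model. With the @google/genai SDK that is done with function declarations, roughly as sketched below; the declaration shown is a simplified stand-in, not the CLI's real registration code.)

```typescript
import { GoogleGenAI, Type } from '@google/genai';

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Simplified stand-in for how a tool like read_file could be declared to the model.
const response = await ai.models.generateContent({
  model: 'gemini-2.5-pro', // placeholder model name
  contents: 'What is in README.md?',
  config: {
    tools: [{
      functionDeclarations: [{
        name: 'read_file',
        description: 'Reads the contents of a file at the given path.',
        parameters: {
          type: Type.OBJECT,
          properties: {
            path: { type: Type.STRING, description: 'Absolute path of the file to read.' },
          },
          required: ['path'],
        },
      }],
    }],
  },
});

// The model may answer with a function call rather than plain text.
console.log(response.functionCalls ?? response.text);
```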
> /quit
╭──────────────────────────────────╮
│ │
│ Agent powering down. Goodbye! │
│ │
│ │
│ Cumulative Stats (4 Turns) │
│ │
│ Input Tokens 301,467 │
│ Output Tokens 1,897 │
│ Thoughts Tokens 2,343 │
│ ────────────────────────────── │
│ Total Tokens 305,707 │
│ │
│ Total duration (API) 1m 15s │
│ Total duration (wall) 17m 55s │
│ │
╰──────────────────────────────────╯