@CGamesPlay
Created April 4, 2024 10:31
Plandex LLM Usage

Based on analyzing the code, here is a high-level overview of the main prompts used in this project and how they fit into the overall flow:

The core prompts are defined in the model/prompts package:

  1. SysCreate (create.go) - This is the main system prompt that defines the AI's identity as Plandex, an AI programming assistant. It provides detailed instructions on how to collaboratively create a 'plan' with the user to complete a programming task. The prompt covers things like assessing if there is a clear task, breaking down large tasks into subtasks, generating code blocks, using open source libraries, and ending responses.

  2. ListReplacementsFn (build.go) - Used when building code files from a plan. It analyzes proposed code updates and produces a structured list of changes to make.

  3. SysDescribe (describe.go) - Used to generate a commit message summarizing the changes made in a plan.

  4. SysExecStatusShouldContinue (exec_status.go) - Used to determine if a plan should automatically continue executing or is considered complete based on analyzing the latest AI response.

  5. SysPlanName (name.go) - Used to generate a brief file name for a plan based on its content.

  6. SysShortSummary and PlanSummary (summary.go) - Used to generate concise summaries of text related to a plan, including summarizing the conversation so far to reduce token usage.

These prompts are used in the core model/plan package that handles the main planning logic:

The main entry point is the Tell function (tell_exec.go). This kicks off the collaborative planning process between the user and AI. It loads the current plan state, breaks down the task, and iteratively generates replies via the OpenAI chat completion API.

As the AI generates replies (tell_stream.go), it parses out any generated code blocks and queues them up as 'builds' to update the corresponding code files (build_exec.go, build_stream.go).
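Pulling fenced code blocks out of a model reply is a standard technique that can be sketched with a regular expression. This is an illustration only, not Plandex's actual parser (which handles streaming incrementally); the function name is hypothetical:

```go
package main

import (
	"regexp"
)

// codeBlockRe matches a fenced code block: an opening fence with an
// optional language tag, a lazily-captured body, and a closing fence.
var codeBlockRe = regexp.MustCompile("(?s)```[^\\n]*\\n(.*?)```")

// extractCodeBlocks returns the bodies of all fenced code blocks in a reply.
func extractCodeBlocks(reply string) []string {
	var blocks []string
	for _, m := range codeBlockRe.FindAllStringSubmatch(reply, -1) {
		blocks = append(blocks, m[1])
	}
	return blocks
}
```

A streaming implementation would instead track whether the cursor is currently inside a fence as chunks arrive, but the end result is the same set of block bodies to queue as builds.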

After each AI reply is generated, the system summarizes the conversation (tell_summary.go) to keep the context within the token limit for subsequent requests. It also assesses whether the plan is complete or should continue (exec_status.go).
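The budget-trimming side of summarization can be sketched as follows. The `Message` shape, the 4-characters-per-token heuristic, and both function names are assumptions for illustration; in Plandex the replacement summary would come from the model via the SysShortSummary/PlanSummary prompts rather than being passed in:

```go
package main

// Message is a single conversation turn (hypothetical shape).
type Message struct {
	Role    string
	Content string
}

// estimateTokens uses a rough heuristic of ~4 characters per token,
// plus a small per-message overhead.
func estimateTokens(msgs []Message) int {
	n := 0
	for _, m := range msgs {
		n += len(m.Content)/4 + 4
	}
	return n
}

// trimToBudget drops the oldest messages until the conversation fits the
// token budget, prepending a summary message to stand in for what was cut.
func trimToBudget(msgs []Message, summary string, budget int) []Message {
	trimmed := msgs
	for len(trimmed) > 1 && estimateTokens(trimmed) > budget {
		trimmed = trimmed[1:]
	}
	if len(trimmed) < len(msgs) {
		return append([]Message{{Role: "system", Content: summary}}, trimmed...)
	}
	return trimmed
}
```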

When code changes are ready to be applied, the build process (build_exec.go, build_result.go) uses the ListReplacementsFn prompt to generate a structured changeset that can be applied to each code file.
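The idea of a structured changeset is that the model returns a list of exact old/new pairs rather than a whole rewritten file. A minimal sketch, with illustrative names that do not reflect Plandex's actual schema:

```go
package main

import "strings"

// Replacement is one entry in a structured changeset: an exact substring
// to find in the original file and the text to substitute for it.
type Replacement struct {
	Old string
	New string
}

// applyReplacements applies each change in order, replacing only the first
// occurrence of Old. It reports false if any Old string was not found, so
// the caller can flag a failed build instead of silently dropping a change.
func applyReplacements(original string, changes []Replacement) (string, bool) {
	updated := original
	ok := true
	for _, c := range changes {
		if !strings.Contains(updated, c.Old) {
			ok = false
			continue
		}
		updated = strings.Replace(updated, c.Old, c.New, 1)
	}
	return updated, ok
}
```

Keying on exact substrings keeps verification cheap: a replacement either matches the current file content or it doesn't, which is why the prompt asks the model for a structured list rather than free-form edits.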

Finally, a commit message is generated (commit_msg.go) to summarize all the changes, and the updated plan is committed to the repo.

So in summary, the main loop is:

  1. The user provides a task
  2. The system iteratively collaborates with the user to break down the task and generate code updates
  3. Proposed code changes are queued up as builds
  4. The conversation is summarized to manage token count
  5. The plan is assessed for completion or continuation
  6. Code changes are applied
  7. A commit message is generated and the updated plan is committed

The various prompts are used throughout this process to guide the AI in understanding the task, generating appropriate code changes, keeping context manageable, and producing useful commit messages and file names.

@CGamesPlay (Author)

This was written using:

```shell
git ls-files app/server | xargs head -n 500 | llm -m claude-3-opus "Describe all of the prompts used in this project and how they are used. Work backwards from the prompts and provide a high-level overview of the 'main loop'."
```

With thanks to Simon Willison for the technique.
