
@chhoumann
Last active July 1, 2024 13:59

Multi-Instruct for Obsidian

This script is a rudimentary clone of big-agi's beam mode, implemented for the QuickAdd plugin for Obsidian. It allows users to generate multiple AI responses and combine them in various ways.

Features

  • Generate multiple AI responses to a single prompt
  • Choose from different combination methods:
    • Fuse: Merge responses into a single, coherent answer
    • Guided: Select key components from responses to create a combined answer
    • Compare: Analyze responses based on criteria and synthesize a final answer
    • Custom: Provide custom instructions for combining responses
  • Configurable settings for model, temperature, and system prompt
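The overall flow is a fan-out/fan-in pattern: generate several independent responses to the same prompt, then ask a merge model to combine them. A minimal sketch of that flow, using a stubbed AI call in place of QuickAdd's `quickAddApi.ai.prompt` (the stub and its return shape are illustrative, not the real API):

```javascript
// Stub standing in for quickAddApi.ai.prompt; returns { output } like the real call.
const stubAi = async (prompt, model) => ({
  output: `[${model}] ${prompt.slice(0, 24)}...`,
});

async function beam(userPrompt, numGenerations, mergeModel) {
  // Fan out: several independent responses to the same prompt.
  const responses = await Promise.all(
    Array.from({ length: numGenerations }, () =>
      stubAi(userPrompt, "gpt-3.5-turbo"),
    ),
  );
  // Fan in: a (possibly different) merge model fuses them into one answer.
  const fusionPrompt = responses
    .map((r, i) => `Response ${i + 1}:\n${r.output}`)
    .join("\n\n");
  const fused = await stubAi(`Fuse these responses:\n${fusionPrompt}`, mergeModel);
  return fused.output;
}
```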

Installation

  1. Install the QuickAdd plugin for Obsidian.
  2. Create a macro in QuickAdd.
  3. Add this script to your macro: save the big-agi.js file somewhere in your vault, then select it as a user script step in the macro.
  4. Use the cog-wheel button to configure the script settings to your liking.

For general instructions on adding user scripts to a QuickAdd macro, see the QuickAdd documentation.

Usage

  1. Run the QuickAdd macro containing this script.
  2. Enter your prompt when prompted.
  3. The script will generate one file per response. Before proceeding, you can adjust the model, temperature, and system prompt in each file's frontmatter; edits to the prompt text itself are ignored, since only the frontmatter is re-read.
  4. Click the "Run" button in the notice to proceed with combining the responses.
  5. Choose a combination method when prompted.
  6. Follow any additional prompts based on the chosen combination method.
  7. The final result will be saved in a new file named "Multi-Instruct-Result.md".
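Each generated `Multi-Instruct-N.md` file starts with frontmatter produced from your script settings, which you can edit per file before pressing Run. With the default settings it looks like this:

```yaml
---
model: gpt-3.5-turbo
temperature: 0.7
system_prompt: You are a helpful assistant.
---
```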

Configuration

The script allows you to configure the following settings:

  • Number of Generations: How many separate responses to generate
  • Model: The AI model to use for generating responses
  • Temperature: Controls the randomness of the AI's output
  • System Prompt: The initial prompt given to the AI
  • Merge Model: The AI model to use for merging responses

Contributing

Contributions or suggestions are welcome! Just leave a comment.

big-agi.js

```js
const TEXT_FIELD = "System Prompt";
const NUM_GENERATIONS = "Number of Generations";
const MODEL_FIELD = "Model";
const TEMPERATURE_FIELD = "Temperature";
const MERGE_MODEL_FIELD = "Merge Model";

module.exports = {
	entry: async (params, settings) => {
		const { quickAddApi, app } = params;
		const numGenerations = Number.parseInt(settings[NUM_GENERATIONS], 10);
		const mergeModel = settings[MERGE_MODEL_FIELD];

		// Create a persistent notice (duration 0 keeps it on screen).
		const notice = new Notice("Initializing Multi-Instruct...", 0);

		// Get the user prompt.
		const userPrompt = await quickAddApi.inputPrompt(
			"Enter your prompt for the LLMs:",
		);

		// Create one file per generation, seeded with frontmatter and the prompt.
		const files = [];
		for (let i = 0; i < numGenerations; i++) {
			const file = await app.vault.create(
				`Multi-Instruct-${i + 1}.md`,
				`${generateFrontMatter(settings)}\n\n## Prompt\n${userPrompt}\n`,
			);
			files.push(file);
			// Open each file in a new split so its frontmatter can be tweaked.
			await app.workspace.splitActiveLeaf().openFile(file);
		}

		const finishedPromise = new Promise((resolve, reject) => {
			notice.setMessage(
				"Use the Run button to generate responses and combine them.\n",
			);
			// Add a "Run" button to the notice.
			const runButton = notice.noticeEl.createEl("button", { text: "Run" });
			runButton.addEventListener("click", () =>
				runMultiInstruct(
					files,
					notice,
					quickAddApi,
					app,
					userPrompt,
					mergeModel,
				)
					.then(resolve)
					.catch(reject)
					.finally(() => notice.hide()),
			);
		});
		await finishedPromise;
	},
	settings: {
		name: "Multi-Instruct",
		author: "Assistant",
		options: {
			[NUM_GENERATIONS]: {
				type: "text",
				defaultValue: "3",
				placeholder: "Number of generations",
			},
			[MODEL_FIELD]: {
				type: "dropdown",
				defaultValue: "gpt-3.5-turbo",
				options: ["gpt-3.5-turbo", "gpt-4", "gpt-4o"],
			},
			[TEMPERATURE_FIELD]: {
				type: "text",
				defaultValue: "0.7",
				placeholder: "Temperature (0.0 - 1.0)",
			},
			[TEXT_FIELD]: {
				type: "text",
				defaultValue: "You are a helpful assistant.",
				placeholder: "Enter system prompt",
			},
			[MERGE_MODEL_FIELD]: {
				type: "text",
				defaultValue: "gpt-3.5-turbo",
				placeholder: "Model to use for merging",
			},
		},
	},
};

function generateFrontMatter(settings) {
	return `---
model: ${settings[MODEL_FIELD]}
temperature: ${Number.parseFloat(settings[TEMPERATURE_FIELD])}
system_prompt: ${settings[TEXT_FIELD]}
---
`;
}

async function runMultiInstruct(
	files,
	notice,
	quickAddApi,
	app,
	userPrompt,
	mergeModel,
) {
	notice.setMessage("Generating responses...");
	const responses = await Promise.all(
		files.map((file) => generateResponse(file, app, quickAddApi, userPrompt)),
	);

	notice.setMessage("Responses generated. Choosing combination method...");
	const combinationMethod = await quickAddApi.suggester(
		["Fuse", "Guided", "Compare", "Custom"],
		["fuse", "guided", "compare", "custom"],
	);
	// Bail out if the suggester was dismissed without a choice.
	if (!combinationMethod) return;

	let result;
	switch (combinationMethod) {
		case "fuse":
			result = await fuseCombination(responses, quickAddApi, mergeModel);
			break;
		case "guided":
			result = await guidedCombination(responses, quickAddApi, mergeModel);
			break;
		case "compare":
			result = await compareCombination(responses, quickAddApi, mergeModel);
			break;
		case "custom":
			result = await customCombination(responses, quickAddApi, mergeModel);
			break;
	}

	// Create a new file with the combined result.
	const resultFile = await app.vault.create(
		"Multi-Instruct-Result.md",
		`# Result\n\n${result}`,
	);

	// Open the result file in a horizontal split.
	const leaf = app.workspace.splitActiveLeaf("horizontal");
	await leaf.openFile(resultFile);
	notice.hide();
}

async function generateResponse(file, app, quickAddApi, userPrompt) {
	const frontmatter = app.metadataCache.getFileCache(file).frontmatter;
	// Parameterize the AI call with the (possibly user-edited) frontmatter.
	const response = await quickAddApi.ai.prompt(userPrompt, frontmatter.model, {
		systemPrompt: frontmatter.system_prompt,
		temperature: frontmatter.temperature,
	});
	// Append the response to the file.
	await app.vault.append(file, `\n## Result\n${response.output}`);
	return response.output;
}

async function fuseCombination(responses, quickAddApi, mergeModel) {
	const fusionPrompt = `You are an AI assistant tasked with fusing multiple responses into a single, coherent answer. Here are the responses:\n\n${responses.map((r, i) => `Response ${i + 1}:\n${r}\n`).join("\n")}\n\nPlease analyze these responses and create a single, optimized answer that combines the best elements from each. Ensure the fused response is coherent, comprehensive, and addresses the original query effectively.`;
	const fusedResponse = await quickAddApi.ai.prompt(fusionPrompt, mergeModel);
	return fusedResponse.output;
}

async function guidedCombination(responses, quickAddApi, mergeModel) {
	const analysisPrompt = `Analyze the following responses and identify the key components or ideas present in each:\n\n${responses.map((r, i) => `Response ${i + 1}:\n${r}\n`).join("\n")}\n\nList the main components or ideas, separating each with a newline.`;
	const analysis = await quickAddApi.ai.prompt(analysisPrompt, mergeModel);
	const components = analysis.output.split("\n").filter((c) => c.trim() !== "");
	// Let the user pick which components the fused answer should keep.
	const selectedComponents = await quickAddApi.checkboxPrompt(components, []);
	const fusionPrompt = `Create a coherent response using the following selected components:\n\n${selectedComponents.join("\n")}\n\nEnsure the response is well-structured and addresses the original query effectively.`;
	const fusedResponse = await quickAddApi.ai.prompt(fusionPrompt, mergeModel);
	return fusedResponse.output;
}

async function compareCombination(responses, quickAddApi, mergeModel) {
	const criteriaPrompt = `Suggest 3-5 criteria for comparing these responses, separating each with a comma:\n\n${responses.map((r, i) => `Response ${i + 1}:\n${r}\n`).join("\n")}`;
	const criteriaResponse = await quickAddApi.ai.prompt(
		criteriaPrompt,
		mergeModel,
	);
	const criteria = criteriaResponse.output.split(",").map((c) => c.trim());

	// Build an empty Markdown table: one column per response, one row per criterion.
	let comparisonTable = `| Criteria | ${responses.map((_, i) => `Response ${i + 1}`).join(" | ")} |\n`;
	comparisonTable += `|${" --- |".repeat(responses.length + 1)}\n`;
	for (const criterion of criteria) {
		comparisonTable += `| ${criterion} | ${responses.map(() => " ").join(" | ")} |\n`;
	}

	const analysisPrompt = `Analyze the following responses based on the given criteria:\n\n${comparisonTable}\n\nResponses:\n${responses.map((r, i) => `Response ${i + 1}:\n${r}\n`).join("\n")}\n\nProvide a detailed analysis of each response based on the criteria. Fill in the comparison table with brief evaluations.`;
	const analysis = await quickAddApi.ai.prompt(analysisPrompt, mergeModel);

	// Extract the filled-in table from the analysis; fall back to the empty table.
	const tableMatch = analysis.output.match(/\|[\s\S]*\|/);
	const filledComparisonTable = tableMatch ? tableMatch[0] : comparisonTable;

	const conclusionPrompt = `Based on the following analysis and comparison table, provide a conclusion and synthesize the best elements from each response into a final, optimized answer:\n\n${analysis.output}\n\n${filledComparisonTable}`;
	const conclusion = await quickAddApi.ai.prompt(conclusionPrompt, mergeModel);
	return `## Comparison Table\n${filledComparisonTable}\n\n## Analysis\n${analysis.output}\n\n## Conclusion\n${conclusion.output}`;
}

async function customCombination(responses, quickAddApi, mergeModel) {
	const customInstructions = await quickAddApi.inputPrompt(
		"Provide custom instructions",
	);
	const combinationPrompt = `Following the user's instructions, combine the given responses:\n\n${responses.map((r, i) => `Response ${i + 1}:\n${r}\n`).join("\n")}\n\nInstructions:\n${customInstructions}\n\nCreate a final, optimized response based on these instructions.`;
	const combinedResponse = await quickAddApi.ai.prompt(
		combinationPrompt,
		mergeModel,
	);
	return combinedResponse.output;
}
```