@simonw
Created August 27, 2024 23:24
Fixing Duplicate Prompts in Chat History

human (Aug 27, 2024, 04:22 PM)

Pasted in copy of https://github.com/simonw/tools/blob/2f2bfd10d2ef829273d43a95e8a86b1ae0140668/gemini-chat.html

Fix the bug where the history starts like this: [{"role":"user","parts":[{"text":"say hello in all of the languages"}]},{"role":"user","parts":[{"text":"say hello in all of the languages"}]} — which is wrong because the prompt is sent twice

assistant (Aug 27, 2024, 04:22 PM)

To fix the bug where the prompt is sent twice, we need to modify the sendMessage function. The issue appears to be that the user's message is added to the chatHistory array before it is sent to the model; because the chat session was initialized with that same array, the message ends up in the request both as part of the history and as the new message, producing the duplicate user entries.
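A minimal sketch of why this happens. MockChatSession below is a hypothetical stand-in, not the real @google/generative-ai API; it assumes (as a simplification) that the session keeps a reference to the history array it was given and builds each request from that history plus the new message:

```javascript
// Hypothetical stand-in for the SDK's chat session (assumption, not the real API).
class MockChatSession {
  constructor(history) {
    this.history = history; // held by reference, like startChat({ history })
  }
  buildRequest(text) {
    // Each request is the accumulated history plus the new user message.
    return [...this.history, { role: "user", parts: [{ text }] }];
  }
}

const chatHistory = [];
const chat = new MockChatSession(chatHistory);

const message = "say hello in all of the languages";
// Buggy order: record the prompt in chatHistory BEFORE sending it.
chatHistory.push({ role: "user", parts: [{ text: message }] });
const request = chat.buildRequest(message);

// The prompt appears twice: once from the shared history, once as the new message.
const userTurns = request.filter((m) => m.role === "user").length; // 2
```

Under this simplified model, the premature push is what puts the same prompt into the request twice.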

Here's how we can fix this:

Updated Gemini Chat App with bug fix

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Gemini Chat App</title>
    <script type="module">
        import { GoogleGenerativeAI } from "https://esm.run/@google/generative-ai";
        import { marked } from "https://esm.run/marked";

        let chat;
        let currentModel = "gemini-1.5-pro";
        let chatHistory = [];

        function getApiKey() {
            let apiKey = localStorage.getItem("GEMINI_API_KEY");
            if (!apiKey) {
                apiKey = prompt("Please enter your Gemini API key:");
                if (apiKey) {
                    localStorage.setItem("GEMINI_API_KEY", apiKey);
                }
            }
            return apiKey;
        }

        async function getGenerativeModel(params) {
            const API_KEY = getApiKey();
            const genAI = new GoogleGenerativeAI(API_KEY);
            return genAI.getGenerativeModel(params);
        }

        async function initChat() {
            try {
                const model = await getGenerativeModel({ model: currentModel });
                chat = model.startChat({
                    history: chatHistory,
                });
                displayMessage("System", `Chatting with ${currentModel}`);
            } catch (error) {
                displayError("Failed to initialize chat: " + error.message);
            }
        }

        async function sendMessage() {
            const userInput = document.getElementById("user-input");
            const message = userInput.value.trim();
            if (message) {
                displayMessage("You", message);
                userInput.value = "";

                try {
                    const startTime = performance.now();
                    const result = await chat.sendMessageStream(message);
                    let fullResponse = "";
                    for await (const chunk of result.stream) {
                        const chunkText = chunk.text();
                        fullResponse += chunkText;
                        updateModelResponse(fullResponse);
                    }
                    const endTime = performance.now();
                    const duration = ((endTime - startTime) / 1000).toFixed(2);
                    
                    // Add the user's message and the model's response to chatHistory after the interaction
                    chatHistory.push({ role: "user", parts: [{ text: message }] });
                    chatHistory.push({ role: "model", parts: [{ text: fullResponse }] });
                    
                    const usage = (await result.response).usageMetadata;
                    console.log("Full result:", result);
                    updateUsageMetadata(usage);
                    updateDuration(duration);
                } catch (error) {
                    displayError("Error: " + error.message);
                }
            }
        }

        // ... (rest of the code remains the same)

    </script>
    <!-- ... (styles remain the same) -->
</head>
<body>
    <!-- ... (HTML structure remains the same) -->
</body>
</html>

I've made the following change to fix the bug:

  1. In the sendMessage function, I've moved the addition of messages to the chatHistory array to after the interaction with the model is complete.

Here's the key part that was changed:

async function sendMessage() {
    // ... (previous code remains the same)

    try {
        const startTime = performance.now();
        const result = await chat.sendMessageStream(message);
        let fullResponse = "";
        for await (const chunk of result.stream) {
            const chunkText = chunk.text();
            fullResponse += chunkText;
            updateModelResponse(fullResponse);
        }
        const endTime = performance.now();
        const duration = ((endTime - startTime) / 1000).toFixed(2);
        
        // Add the user's message and the model's response to chatHistory after the interaction
        chatHistory.push({ role: "user", parts: [{ text: message }] });
        chatHistory.push({ role: "model", parts: [{ text: fullResponse }] });
        
        // ... (rest of the function remains the same)
    } catch (error) {
        displayError("Error: " + error.message);
    }
}

This change ensures that the user's message and the model's response are added to chatHistory only once, after the interaction completes, which resolves the duplicate entries in the chat history. The rest of the code remains the same.
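As a quick sanity check of the corrected ordering, here is a hypothetical stand-in for the chat session (an assumption for illustration, not the real @google/generative-ai API) showing that pushing to chatHistory only after sending leaves exactly one copy of each message:

```javascript
// Hypothetical stand-in for the SDK's chat session (assumption, not the real API).
class MockChatSession {
  constructor(history) {
    this.history = history; // held by reference, like startChat({ history })
  }
  buildRequest(text) {
    // Each request is the accumulated history plus the new user message.
    return [...this.history, { role: "user", parts: [{ text }] }];
  }
}

const chatHistory = [];
const chat = new MockChatSession(chatHistory);

const message = "say hello in all of the languages";
const request = chat.buildRequest(message); // send first...
chatHistory.push({ role: "user", parts: [{ text: message }] });        // ...then record
chatHistory.push({ role: "model", parts: [{ text: "Hello! Hola!" }] });

// The first request contains the prompt exactly once.
const firstUserTurns = request.filter((m) => m.role === "user").length; // 1

// A second turn sees the full prior exchange exactly once: user, model, new user.
const request2 = chat.buildRequest("now in French only");
```

With this ordering, each user prompt appears once per request and the history still accumulates correctly across turns.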
