@simonw
Last active July 9, 2024 18:54

Prompt Playground built with Claude 3.5 Sonnet

Prompt

<textarea style="width: 80%; height: 80vh;"></textarea>
<script type="module">
let ta = document.querySelector('textarea') || document.createElement('textarea');
document.body.appendChild(ta);
ta.style.width = '80%';
ta.style.height = '80vh';
let model = await window.ai.createTextSession();
let stream = await model.promptStreaming("Write me an extra-long poem.");
for await (const chunk of stream) {
  ta.value = chunk;
}
</script>

Based on this example code write me a SPA using just Vanilla JavaScript that does the following:

  • Checks for window.ai and shows an error message saying to use Chrome Canary if window.ai does not exist
  • If it does exist, shows a textarea labelled "prompt" and a submit button
  • When that button is clicked, it starts the prompt running and reveals a div below which has innerText set to each chunk as they stream in - use CSS so newlines are displayed correctly
  • When a stream finishes it saves the original prompt and the final response to localStorage in an array (and the current time)
  • It shows the previous prompts and responses in a neat way at the bottom of the page, most recent at the top - with timestamp
  • There is a delete button for deleting a history prompt

Response

Certainly! I'll create a Single Page Application (SPA) using Vanilla JavaScript that meets your requirements. Here's the code for the application:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>AI Prompt SPA</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            max-width: 800px;
            margin: 0 auto;
            padding: 20px;
        }
        #error-message {
            color: red;
            font-weight: bold;
        }
        #prompt-area {
            display: none;
        }
        textarea {
            width: 100%;
            height: 100px;
        }
        #response-area {
            white-space: pre-wrap;
            background-color: #f0f0f0;
            padding: 10px;
            margin-top: 10px;
        }
        .history-item {
            border: 1px solid #ccc;
            padding: 10px;
            margin-bottom: 10px;
        }
        .timestamp {
            font-size: 0.8em;
            color: #666;
        }
    </style>
</head>
<body>
    <div id="error-message"></div>
    <div id="prompt-area">
        <label for="prompt-input">Prompt:</label>
        <textarea id="prompt-input"></textarea>
        <button id="submit-button">Submit</button>
        <div id="response-area"></div>
    </div>
    <h2>History</h2>
    <div id="history-area"></div>

    <script>
        document.addEventListener('DOMContentLoaded', () => {
            const errorMessage = document.getElementById('error-message');
            const promptArea = document.getElementById('prompt-area');
            const promptInput = document.getElementById('prompt-input');
            const submitButton = document.getElementById('submit-button');
            const responseArea = document.getElementById('response-area');
            const historyArea = document.getElementById('history-area');

            if (!window.ai) {
                errorMessage.textContent = 'Error: window.ai not found. Please use Chrome Canary.';
                return;
            }

            promptArea.style.display = 'block';

            submitButton.addEventListener('click', async () => {
                const prompt = promptInput.value.trim();
                if (!prompt) return;

                responseArea.textContent = 'Generating response...';
                let fullResponse = '';

                try {
                    const model = await window.ai.createTextSession();
                    const stream = await model.promptStreaming(prompt);

                    for await (const chunk of stream) {
                        fullResponse += chunk;
                        responseArea.textContent = fullResponse;
                    }

                    saveToHistory(prompt, fullResponse);
                    displayHistory();
                } catch (error) {
                    responseArea.textContent = `Error: ${error.message}`;
                }
            });

            function saveToHistory(prompt, response) {
                const history = JSON.parse(localStorage.getItem('aiPromptHistory') || '[]');
                history.unshift({
                    prompt,
                    response,
                    timestamp: new Date().toISOString()
                });
                localStorage.setItem('aiPromptHistory', JSON.stringify(history));
            }

            function displayHistory() {
                const history = JSON.parse(localStorage.getItem('aiPromptHistory') || '[]');
                historyArea.innerHTML = '';

                history.forEach((item, index) => {
                    const historyItem = document.createElement('div');
                    historyItem.className = 'history-item';
                    historyItem.innerHTML = `
                        <p><strong>Prompt:</strong> ${item.prompt}</p>
                        <p><strong>Response:</strong> ${item.response}</p>
                        <p class="timestamp">Timestamp: ${new Date(item.timestamp).toLocaleString()}</p>
                        <button class="delete-button" data-index="${index}">Delete</button>
                    `;
                    historyArea.appendChild(historyItem);
                });

                document.querySelectorAll('.delete-button').forEach(button => {
                    button.addEventListener('click', (e) => {
                        const index = e.target.getAttribute('data-index');
                        deleteHistoryItem(index);
                    });
                });
            }

            function deleteHistoryItem(index) {
                const history = JSON.parse(localStorage.getItem('aiPromptHistory') || '[]');
                history.splice(index, 1);
                localStorage.setItem('aiPromptHistory', JSON.stringify(history));
                displayHistory();
            }

            displayHistory();
        });
    </script>
</body>
</html>

This SPA (Single Page Application) meets all the requirements you specified:

  1. It checks for window.ai and shows an error message if it doesn't exist.
  2. If window.ai exists, it displays a textarea labeled "Prompt" and a submit button.
  3. When the submit button is clicked, it starts the prompt running and displays the streamed response in a div below, with newlines preserved.
  4. After a stream finishes, it saves the original prompt, the final response, and the current time to localStorage in an array.
  5. It displays previous prompts and responses at the bottom of the page, with the most recent at the top, including timestamps.
  6. There's a delete button for each history item to remove it from the localStorage and the display.

To use this SPA:

  1. Save the code as an HTML file (e.g., ai-prompt-spa.html).
  2. Open the file in Chrome Canary with the appropriate flags enabled for window.ai.
  3. Enter a prompt in the textarea and click "Submit" to generate a response.
  4. View the history of prompts and responses at the bottom of the page.

Note that this implementation assumes that Chrome Canary with the appropriate flags is being used, as it relies on the window.ai API. If you need any modifications or have any questions about the implementation, please let me know!

@naltmann commented Jul 3, 2024

Chrome Canary 128 on Windows with "Prompt API for Gemini Nano" enabled: "Error: The session cannot be created." after entering text

@andreban commented Jul 3, 2024

> Chrome Canary 128 on Windows with "Prompt API for Gemini Nano" enabled: "Error: The session cannot be created." after entering text

This is likely because the model hasn't finished downloading yet. Check for "Optimization Guide On Device Model" under chrome://components for the model download status. You should also be able to get the model status from the web app by calling const status = await window.ai.canCreateTextSession(). It'll return yes when the model is ready.
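
For reference, here is a minimal sketch of that readiness check, based only on the calls mentioned in this thread; the exact status strings returned by canCreateTextSession() are an assumption and may differ between Canary builds.

<script type="module">
  // Ask Chrome whether an on-device text session can be created yet.
  // Per the comment above, this should report a "ready"-style value once the
  // "Optimization Guide On Device Model" component has finished downloading.
  const status = await window.ai.canCreateTextSession();
  console.log('Model status:', status);

  if (status === 'no') {
    // Assumed "not available" value: the model is still downloading or the
    // flags are not fully enabled - check chrome://components.
    console.warn('Gemini Nano is not available yet.');
  } else {
    // Safe to try creating a session and streaming a quick test prompt.
    const session = await window.ai.createTextSession();
    const stream = await session.promptStreaming('Say hello.');
    for await (const chunk of stream) {
      console.log(chunk);
    }
  }
</script>

Running a check like this before the SPA calls createTextSession() would let it show a "model still downloading" message instead of the opaque session-creation error.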

@naltmann commented Jul 3, 2024

> Chrome Canary 128 on Windows with "Prompt API for Gemini Nano" enabled: "Error: The session cannot be created." after entering text
>
> This is likely because the model hasn't finished downloading yet. Check for "Optimization Guide On Device Model" under chrome://components for the model download status. You should also be able to get the model status from the web app by calling const status = await window.ai.canCreateTextSession(). It'll return yes when the model is ready.

Great tips. For me, chrome://components/ contains no row for "Optimization Guide On Device Model", despite chrome://flags/ definitely having "Prompt API for Gemini Nano" set to "Enabled" (and Chrome Canary restarted). const status = await window.ai.canCreateTextSession() returns "undefined" with the message "The feature flag gating model execution was disabled." This is confusing, since it makes it seem like the manually set "Enabled" flag is not being read.

@KevinGimbel

@naltmann I had the same issue on Mac.

I needed to enable the features ("Enables optimization guide on device" and "Prompt API for Gemini Nano") in chrome://flags, then quit Chrome Canary (right click -> Quit), then re-open it, and the "Optimization Guide On Device Model" component was available under chrome://components.

Closing with the X button wasn't enough. Now the demo works for me.

@naltmann commented Jul 4, 2024

@KevinGimbel Thanks! "Enables optimization guide on device" was the flag I was missing.
