@chwan1
Created May 8, 2023 18:04
/*
# Chat with ChatGPT
## <span class="text-primary">👉 Note: LangChain is still in development. This script will keep updating to use the latest APIs</span>
Use `Kit` -> `Manage npm Packages` -> `Update a Package` -> `langchain` to install the latest version.
- Opens the `chat` component
- Type a message and press `enter` to send
- The message is sent to the OpenAI API
- The response from OpenAI is displayed in the chat
- Repeat!
*/
// Name: ChatGPT
// Description: Have a Conversation with an AI
import "@johnlindquist/kit";
const { CallbackManager } = await import("langchain/callbacks");
const { ChatOpenAI } = await import("langchain/chat_models");
const { ConversationChain } = await import("langchain/chains");
const { BufferWindowMemory } = await import("langchain/memory");
const {
  ChatPromptTemplate,
  HumanMessagePromptTemplate,
  SystemMessagePromptTemplate,
  MessagesPlaceholder,
} = await import("langchain/prompts");
const prompt = ChatPromptTemplate.fromPromptMessages([
  SystemMessagePromptTemplate.fromTemplate(
    "The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.",
  ),
  new MessagesPlaceholder("history"),
  HumanMessagePromptTemplate.fromTemplate("{input}"),
]);
const llm = new ChatOpenAI({
  modelName: "gpt-3.5-turbo",
  // modelName: "gpt-4",
  openAIApiKey: await env("OPENAI_API_KEY", {
    hint: `Grab a key from <a href="https://platform.openai.com/account/api-keys">here</a>`,
  }),
  streaming: true,
  callbackManager: CallbackManager.fromHandlers({
    // Open a new, empty chat message to stream tokens into
    handleLLMStart: async () => {
      chat.addMessage("");
    },
    // Append each streamed token to the current message
    handleLLMNewToken: async (token) => {
      chat.pushToken(token);
    },
    handleLLMEnd: async () => {
      log("Done!");
    },
    handleLLMError: async (err) => {
      warn(err);
    },
  }),
});
const memory = new BufferWindowMemory({
  // Return chat message objects so they can fill the "history" placeholder
  returnMessages: true,
});
const chain = new ConversationChain({
  llm,
  prompt,
  memory,
});
const messages = await chat({
  ignoreBlur: true,
  alwaysOnTop: true,
  onSubmit: async (input) => {
    await chain.call({ input });
  },
});
// inspect(messages);
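The `BufferWindowMemory` used above keeps only the most recent exchanges in the prompt so the conversation context stays bounded. A minimal plain-JavaScript sketch of that windowing behavior (this is not the langchain API; the class and method names here are hypothetical):

```javascript
// Hypothetical illustration of window memory: retain only the last k
// human/AI exchanges, dropping the oldest once the window is full.
class WindowMemory {
  constructor(k) {
    this.k = k;
    this.messages = [];
  }
  // Record one exchange; evict the oldest if over the window size.
  add(human, ai) {
    this.messages.push({ human, ai });
    if (this.messages.length > this.k) this.messages.shift();
  }
  // Roughly what would be substituted into the "history" placeholder.
  history() {
    return this.messages;
  }
}

const windowed = new WindowMemory(2);
windowed.add("Hi", "Hello!");
windowed.add("What's 2+2?", "4");
windowed.add("And 3+3?", "6");
// Only the two most recent exchanges remain in the window.
```

The trade-off is the usual one: a larger window gives the model more conversational context but costs more tokens per request.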