@spencerkittleson · Last active June 26, 2023 23:21
Quick example: running a local llama.cpp model through LangChain with streaming output.
# https://python.langchain.com/docs/modules/model_io/models/llms/integrations/llamacpp
from langchain import PromptTemplate, LLMChain
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# Prompt that asks the model to reason step by step before answering.
template = """Question: {question}
Answer: Let's work this out in a step by step way to be sure we have the right answer."""
prompt = PromptTemplate(template=template, input_variables=["question"])

# Stream generated tokens to stdout as they are produced.
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

# Load the local GGML model via llama.cpp.
llm = LlamaCpp(
    model_path=r'C:\Users\spenc\.models\nous-hermes-13b.ggmlv3.q4_0.bin',
    callback_manager=callback_manager,
    verbose=True,
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
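If the model runs slowly or truncates longer prompts, the LlamaCpp wrapper also accepts llama.cpp tuning parameters such as n_ctx, n_batch, temperature, and n_gpu_layers (the last only has an effect with a GPU-enabled llama-cpp-python build). A minimal sketch, reusing the same model file and callback_manager as above; the specific values are illustrative, not tuned:

# Same LLM as above, but with explicit llama.cpp tuning parameters.
llm = LlamaCpp(
    model_path=r'C:\Users\spenc\.models\nous-hermes-13b.ggmlv3.q4_0.bin',
    n_ctx=2048,        # context window in tokens
    n_batch=512,       # tokens processed per batch
    temperature=0.7,   # sampling temperature
    n_gpu_layers=32,   # layers to offload to GPU; needs a CUDA/Metal build of llama-cpp-python
    callback_manager=callback_manager,
    verbose=True,
)

Streaming still goes through the same callback_manager, so answers print token by token as before.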