
@heathermiller
Created January 11, 2024 18:01
LangChain
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser, RetryWithErrorOutputParser
from langchain.prompts import PromptTemplate
from pydantic import BaseModel, Field

llm = OpenAI(temperature=0)

class Action(BaseModel):
    action: str = Field(description="action to take")
    action_input: str = Field(description="input to the action")

parser = PydanticOutputParser(pydantic_object=Action)
# The parser supplies natural-language formatting instructions to embed in the prompt.
# The process is centered around prompts.
prompt = PromptTemplate(
    template="Answer the user query.\n{format_instructions}\n{query}\n",
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()},
)
prompt_value = prompt.format_prompt(query="who is Leo DiCaprio's gf?")
model_response = llm(prompt_value.to_string())
# Parsing and querying are decoupled in LangChain.
parsed_action = parser.parse(model_response)
# If parsing raises an exception, the retry parser internally appends the
# parsing error to the prompt and asks the LLM to try again.
retry_parser = RetryWithErrorOutputParser.from_llm(parser=parser, llm=llm)
retry_parser.parse_with_prompt(model_response, prompt_value)
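The retry-on-parse-error idea can be sketched without LangChain. Below is a minimal stdlib-only sketch: `parse_action`, `retry_parse`, and the stubbed `fake_llm` are all hypothetical names, and the stub simply returns valid JSON once the error is fed back, standing in for a real model call.

```python
import json
from dataclasses import dataclass

@dataclass
class Action:
    action: str
    action_input: str

def parse_action(text: str) -> Action:
    # Mirrors parser.parse(): validate the model output against the schema.
    data = json.loads(text)
    return Action(action=data["action"], action_input=data["action_input"])

def retry_parse(text: str, prompt: str, llm, max_retries: int = 2) -> Action:
    # Mirrors the retry parser: on failure, feed the parsing error back to
    # the LLM together with the original prompt and try again.
    for _ in range(max_retries):
        try:
            return parse_action(text)
        except (json.JSONDecodeError, KeyError) as err:
            text = llm(f"{prompt}\nYour last reply failed to parse: {err}\nTry again.")
    return parse_action(text)  # final attempt; raises if still malformed

# Stubbed LLM that returns well-formed output once the error is pointed out.
def fake_llm(prompt: str) -> str:
    return '{"action": "search", "action_input": "Leo DiCaprio girlfriend"}'

result = retry_parse("not json", "Answer the user query.", fake_llm)
print(result.action)  # → search
```

The first parse attempt fails on the malformed input, so the error message is appended to the prompt and the (stubbed) model is queried again, which is the same feedback loop `RetryWithErrorOutputParser` performs internally.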