Created February 24, 2023 12:51
A Langchain Action driven agent demo of a weather conversational chatbot
from langchain.agents import Tool
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain import OpenAI
from langchain.utilities import SerpAPIWrapper
from langchain.agents import initialize_agent
import json
import os
import config

# Step 1: Set environment API keys for SerpAPI and OpenAI
os.environ["OPENAI_API_KEY"] = config.openai_key
os.environ["SERPAPI_API_KEY"] = config.serpapi_key

# Step 2: Define the agent metadata and save it as 'weather_agent.json'
weather_agent = {
    "load_from_llm_and_tools": True,
    "_type": "conversational-react-description",
    "prefix": "Assistant is a large language model trained for forecasting weather.\n\nAssistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives.\nAssistant should only obtain knowledge and take actions from using tools and never on your own.\nTOOLS:\n------\n\nAssistant has access to the following tools: ",
    "suffix": "Please make decisions based on the following policy: \n- If the user is asking for a weather forecast use the Weather Forecast tool\n- If the user does not provide a location, ask before checking for weather\n- Apologize if the user is angry or showing frustration\n- Answer with a friendly and professional tone.\n- Always end a response with a follow up question like 'what else can i help you with?', unless the user shows gratitude.\nBegin!\n\nPrevious conversation history:\n{chat_history}\n\nNew input: {input}\n{agent_scratchpad}",
    "ai_prefix": "AI Agent",
    "human_prefix": "Human"
}
# The agent is loaded from this file below, so write the config to disk first
with open("./weather_agent.json", "w") as f:
    json.dump(weather_agent, f)

search = SerpAPIWrapper()
tools = [
    Tool(
        name="Weather Forecaster",
        func=search.run,
        description="useful for when you need to answer questions about current and future weather forecasts"
    ),
]
memory = ConversationBufferMemory(memory_key="chat_history")
llm = OpenAI(temperature=0)
agent_chain = initialize_agent(tools, llm=llm, verbose=True, agent_path="./weather_agent.json", memory=memory)

while True:
    query = input(">> ")
    agent_chain.run(input=query)
You can append a line to the suffix of the prompt that says something along the lines of:
As a customer service bot, you must avoid imaginative responses. When you are unsure, please say you don't understand.
This significantly lowers the chances of it happening. Another approach is to adjust top_p or temperature, but not both.
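The suggestion above can be sketched as a small string edit before the config is written to disk. This is a minimal illustration using the gist's suffix template; the `guardrail` wording is the example from the comment, not part of the original config.

```python
# The gist's suffix template (abbreviated to its closing portion here)
suffix = ("Begin!\n\nPrevious conversation history:\n{chat_history}\n\n"
          "New input: {input}\n{agent_scratchpad}")

# Grounding instruction suggested in the comment above (hypothetical wording)
guardrail = ("As a customer service bot, you must avoid imaginative responses. "
             "When you are unsure, please say you don't understand.\n")

# Prepend the guardrail so it is read before the agent starts reasoning
suffix = guardrail + suffix
```

For sampling, the gist already uses `OpenAI(temperature=0)`, which is the deterministic end of the spectrum; if you tune `top_p` instead, leave `temperature` at its default.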
Tried it. But along with my exact answer, it shows a lot of other unwanted data like QUESTION, Content, etc.
How can you disable those and display only the required answer?
I tried verbose=False, but I still get some unwanted details along with the exact answer, which makes the bot noisy.
How can I prevent this?
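One way to keep the console clean, assuming a chain like the gist's `agent_chain`: `run()` returns only the final answer string, so print that return value yourself instead of relying on the chain's own logging (with `verbose=False` set at `initialize_agent`). The `StubChain` below is a hypothetical stand-in for the real agent, just to show the pattern:

```python
# Hypothetical stand-in for the gist's agent_chain (no API calls here)
class StubChain:
    def run(self, input):
        # The real chain's run() also returns just the final response text
        return "Sunny with light winds"

def answer(chain, query):
    # Capture and return only the final answer; do not echo intermediate steps
    return chain.run(input=query)

print(answer(StubChain(), "What's the weather today?"))
```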
If I use our own knowledge base index instead of SerpAPIWrapper, it misbehaves: whenever I ask questions related to my context, it returns answers from the outside world. How can I prevent this?
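One approach that may help, sketched below under the assumption that you register your index as a `Tool` like the gist does with SerpAPI: tighten both the tool description and the prefix so the agent is explicitly told to answer only from retrieved context. The exact wording here is illustrative, not tested guidance.

```python
# Hypothetical tool description for an internal knowledge-base retriever
tool_description = (
    "Answers questions ONLY from the internal knowledge base. "
    "If the knowledge base has no relevant passage, reply 'I don't know.'"
)

# Tighten the gist's prefix with an explicit grounding rule
prefix_rule = (
    "Assistant should only obtain knowledge from the provided tools "
    "and must say 'I don't know' when the tools return nothing relevant.\n"
)
prefix = prefix_rule + "Assistant has access to the following tools: "
```

Combined with `temperature=0`, this reduces (but does not eliminate) the chance of the model falling back on its pretraining knowledge.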