A practical LangGraph cheat sheet: defining state, adding nodes and edges, compiling, and running graphs.
```python
from typing import TypedDict, Dict, Any
from langchain_core.runnables import Runnable
from langgraph.graph import StateGraph, END
```
- Use a `TypedDict` to define the structure of your state.
- This is the shared data passed between nodes.
```python
class MyGraphState(TypedDict):
    input_data: str
    intermediate_result: str
    final_output: str
```
- Nodes are functions or Runnables that process the state.
- They take the current state as input and return updates to the state.
```python
def node_a(state: MyGraphState) -> Dict[str, Any]:
    new_result = f"Processed: {state['input_data']}"
    return {"intermediate_result": new_result}
```
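Nodes return only the keys they update; LangGraph merges each partial update into the running state. A minimal plain-Python sketch of that merge behavior (no langgraph dependency; `node_a` is repeated here so the snippet is self-contained):

```python
from typing import TypedDict, Dict, Any

class MyGraphState(TypedDict, total=False):
    input_data: str
    intermediate_result: str
    final_output: str

def node_a(state: MyGraphState) -> Dict[str, Any]:
    # Return only the keys this node updates, not the whole state.
    return {"intermediate_result": f"Processed: {state['input_data']}"}

# LangGraph-style merge: the partial update overlays the previous state.
state: MyGraphState = {"input_data": "hello"}
state = {**state, **node_a(state)}
print(state["intermediate_result"])  # Processed: hello
```

Returning partial updates keeps nodes decoupled: each node only needs to know about the keys it reads and writes.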
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_template("Tell me a joke about {topic}")
model = ChatOpenAI()
runnable_node = prompt | model  # Combine prompt and model into a Runnable chain
```
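The `|` operator chains Runnables so each stage's output feeds the next. A toy illustration of that composition pattern in plain Python (the `Pipe` class is a hypothetical stand-in, not the LangChain implementation):

```python
class Pipe:
    """Toy stand-in for a Runnable: wraps a function and supports `|` chaining."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other: "Pipe") -> "Pipe":
        # (a | b).invoke(x) is equivalent to b.invoke(a.invoke(x))
        return Pipe(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

prompt = Pipe(lambda topic: f"Tell me a joke about {topic}")
model = Pipe(lambda text: f"LLM response to: {text}")
chain = prompt | model
print(chain.invoke("cats"))  # LLM response to: Tell me a joke about cats
```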
- Instantiate a `StateGraph` with your state definition.
```python
builder = StateGraph(MyGraphState)
builder.add_node("node_a", node_a)
builder.add_node("runnable_node", runnable_node)
```
- Direct edges connect nodes in sequence.
```python
builder.add_edge("node_a", "runnable_node")
```
- Use conditional edges for branching logic.
```python
def should_continue(state: MyGraphState):
    if len(state["intermediate_result"]) > 10:
        return "runnable_node"  # Route to runnable_node
    else:
        return END  # End the graph

builder.add_conditional_edges(
    "node_a",
    should_continue,
    {"runnable_node": "runnable_node", END: END},
)
```
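A conditional edge simply calls the routing function on the current state and follows the mapping entry it returns. A plain-Python sketch of that dispatch (using the string `"__end__"` as a stand-in for LangGraph's `END` sentinel):

```python
END = "__end__"  # stand-in for langgraph.graph.END

def should_continue(state: dict) -> str:
    """Routing function: pick the next destination from the current state."""
    return "runnable_node" if len(state["intermediate_result"]) > 10 else END

# The mapping passed to add_conditional_edges translates the returned
# key into the actual destination node:
path_map = {"runnable_node": "runnable_node", END: END}

next_node = path_map[should_continue({"intermediate_result": "Processed: Short input"})]
print(next_node)  # runnable_node
```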
- Define where execution starts in the graph.
```python
builder.set_entry_point("node_a")
```
- Compile the graph into an executable object. `compile()` returns a compiled graph that implements the Runnable interface (supporting `invoke`, `stream`, etc.).
```python
graph = builder.compile()
```
- Pass an initial state to execute the graph.
```python
inputs = {"input_data": "Short input"}
result = graph.invoke(inputs)
print(result)
```
- Use `Send()` to fan work out to multiple node invocations in parallel (map-reduce style). There is no `Gather()` API; results are collected back into the shared state via reducers on the state keys.
Example:
```python
from langgraph.types import Send  # langgraph.constants in older releases

# A conditional edge function can return a list of Send objects,
# one parallel invocation of "worker" per item (sketch, not fully shown):
# def fan_out(state):
#     return [Send("worker", {"item": item}) for item in state["items"]]
# builder.add_conditional_edges("dispatcher", fan_out)
```
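Conceptually, each `Send` pairs a target node with its own payload; LangGraph runs one node instance per payload and reduces the results back into state. A plain-Python sketch of that map-reduce pattern (the `Send` class here is a hypothetical stand-in, not the langgraph one):

```python
from dataclasses import dataclass

@dataclass
class Send:
    """Stand-in for langgraph's Send: a target node plus its payload."""
    node: str
    arg: dict

def generate_joke(payload: dict) -> str:
    return f"Joke about {payload['topic']}"

# Fan-out: one Send per topic, as a conditional edge function would return them.
sends = [Send("generate_joke", {"topic": t}) for t in ["cats", "dogs"]]

# Fan-in: run each dispatched task and reduce the results into a list.
jokes = [generate_joke(s.arg) for s in sends]
print(jokes)  # ['Joke about cats', 'Joke about dogs']
```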
- Checkpointing saves and restores graph state at each step (useful for long-running, resumable workflows). LangGraph does this through a checkpointer passed to `compile()`, not a `CheckpointAt` class.
Example:
```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()  # in-memory; use a database-backed saver in production
graph = builder.compile(checkpointer=checkpointer)

# Each invocation under a thread_id is checkpointed and resumable:
# graph.invoke(inputs, config={"configurable": {"thread_id": "1"}})
```
(Refer to the LangGraph persistence docs for exact usage.)
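The idea behind checkpointing is simple: snapshot the state after each step so a long run can be inspected or resumed. A minimal plain-Python sketch (an in-memory dict keyed by step index, not the langgraph checkpointer API):

```python
import copy

checkpoints: dict = {}

def run_with_checkpoints(state: dict, steps) -> dict:
    """Apply each step function, snapshotting state after it completes."""
    for i, step in enumerate(steps):
        state = {**state, **step(state)}          # merge the partial update
        checkpoints[i] = copy.deepcopy(state)     # restore point for step i
    return state

final = run_with_checkpoints(
    {"input_data": "hi"},
    [lambda s: {"intermediate_result": s["input_data"].upper()},
     lambda s: {"final_output": s["intermediate_result"] + "!"}],
)
print(final["final_output"])  # HI!
```

Resuming then amounts to loading `checkpoints[i]` and replaying only the remaining steps.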
| Class/Function | Description |
|---|---|
| `StateGraph` | Main class for creating a graph |
| `add_node()` | Adds a node to the graph |
| `add_edge()` | Adds a direct edge between two nodes |
| `add_conditional_edges()` | Adds conditional edges based on a routing function |
| `set_entry_point()` | Sets the starting node of the graph |
| `compile()` | Compiles the graph into a runnable compiled graph |
| `invoke()` | Executes the graph with an initial state |
| `Send()` | Dispatches tasks to multiple nodes in parallel |
| `END` | Special sentinel that marks the end of a branch |
- Define State Clearly: A well-defined state structure simplifies reasoning about data flow.
- Descriptive Node Names: Use meaningful names for better readability.
- Modular Nodes: Each node should perform one specific task.
- Debugging Tools: Use LangSmith for debugging and monitoring your graph execution.
- Leverage Runnables: Combine LangChain Runnables for powerful, reusable components.