LangGraph is an exciting new framework that extends the capabilities of LangChain, allowing developers to create stateful, orchestrated AI applications. It's particularly useful for building complex conversational AI systems, multi-agent simulations, and other applications that require maintaining state and coordinating multiple AI components.
Let's explore the core concepts that make LangGraph so powerful.
At the heart of LangGraph are agents. These are AI entities that can perceive their environment, make decisions, and take actions. In LangGraph, agents are typically built using large language models (LLMs) and can be customized for specific tasks.
Here's a simple example of defining an agent using LangGraph's prebuilt create_react_agent helper:
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

llm = ChatOpenAI()
agent = create_react_agent(llm, tools=[...])
In this example, we create a ReAct-style agent backed by an OpenAI chat model and hand it a set of tools (which we'll discuss next).
Tools in LangGraph are functions that agents can use to interact with their environment or perform specific tasks. These can range from simple calculators to complex API calls or database queries.
Let's add a simple calculator tool to our agent:
from langchain.tools import Tool
from langchain_community.utilities import SerpAPIWrapper

# Web search tool (requires a SerpAPI key in the SERPAPI_API_KEY environment variable)
search = Tool(
    name="Search",
    func=SerpAPIWrapper().run,
    description="Useful for answering questions about current events",
)
# Toy calculator tool -- eval() is fine for a demo but unsafe on untrusted input
calculator = Tool.from_function(
    func=lambda expression: str(eval(expression)),
    name="Calculator",
    description="Useful for when you need to answer questions about math",
)
agent = create_react_agent(llm, tools=[calculator, search])
Now our agent can perform calculations and web searches as part of its decision-making process.
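Assuming your OPENAI_API_KEY and SERPAPI_API_KEY environment variables are set, a quick sanity check might look like the sketch below (the question is just an illustrative prompt); the prebuilt agent accepts a list of chat messages and returns the full message history:

# Ask a math question; the agent should decide on its own to call the Calculator tool
result = agent.invoke({"messages": [("user", "What is 37 * 14?")]})
print(result["messages"][-1].content)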
One of the key features of LangGraph is its ability to maintain state across interactions. This is achieved through checkpointers, which persist the graph's state between runs. Let's attach a simple in-memory checkpointer to our agent:
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
agent = create_react_agent(llm, tools=[calculator, search], checkpointer=memory)
With a checkpointer attached, our agent can remember previous interactions, grouped by a thread ID, and use that context in later turns.
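Here's a minimal sketch of what that looks like in practice; "conversation-1" is an arbitrary thread ID that groups turns into a single conversation:

# All invocations that share a thread_id share the same checkpointed history
config = {"configurable": {"thread_id": "conversation-1"}}

agent.invoke({"messages": [("user", "Hi, my name is Ada.")]}, config)
followup = agent.invoke({"messages": [("user", "What's my name?")]}, config)
print(followup["messages"][-1].content)  # the agent can recall "Ada" from the stored history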
LangGraph shines when it comes to orchestrating complex workflows involving multiple agents and tools. The framework provides a graph-based approach to defining these workflows: you declare nodes (functions or agents that read and update a shared state) and edges (including conditional edges that decide which node runs next).
Here's a simple example of a two-agent workflow:
from typing import TypedDict
from langgraph.graph import StateGraph, END

# Shared state that flows through the graph
class State(TypedDict):
    input: str
    output: str

# Routing happens in a conditional edge: return the name of the node to run next
def route(state: State) -> str:
    text = state["input"].lower()
    if "math" in text or any(ch.isdigit() for ch in text):
        return "math_agent"
    return "general_agent"

graph = StateGraph(State)
graph.add_node("router", lambda state: state)  # passthrough; the conditional edge does the routing
graph.add_node("math_agent", call_math_agent)  # node functions wrapping the agents
graph.add_node("general_agent", call_general_agent)  # (defined in the complete example below)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route)
graph.add_edge("math_agent", END)
graph.add_edge("general_agent", END)
workflow = graph.compile()
In this example, the router node simply passes the state through; the conditional edge attached to it directs math-looking queries (a "math" keyword or digits in the input) to a specialized math agent and all other queries to a general-purpose agent.
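As a quick sanity check on the wiring, recent versions of LangGraph let you render the compiled graph; here's a small sketch whose Mermaid output you can paste into any Mermaid viewer:

# Print a Mermaid diagram of the compiled workflow (router, agents, and edges)
print(workflow.get_graph().draw_mermaid())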
Let's see how these concepts come together in a more complete example:
from typing import TypedDict

from langchain.tools import Tool
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, END
from langgraph.prebuilt import create_react_agent

# Set up the LLM and tools
llm = ChatOpenAI()
calculator = Tool.from_function(
    func=lambda expression: str(eval(expression)),  # toy calculator; unsafe on untrusted input
    name="Calculator",
    description="Useful for when you need to answer questions about math",
)

# Create agents, each with its own in-memory checkpointer
math_agent = create_react_agent(llm, [calculator], checkpointer=MemorySaver())
general_agent = create_react_agent(llm, [], checkpointer=MemorySaver())

# Shared state for the outer workflow
class State(TypedDict):
    input: str
    output: str

# A fixed thread ID so each agent keeps one running conversation
config = {"configurable": {"thread_id": "demo"}}

# Node functions that hand the user's input to an agent and collect its answer
def call_math_agent(state: State) -> dict:
    result = math_agent.invoke({"messages": [("user", state["input"])]}, config)
    return {"output": result["messages"][-1].content}

def call_general_agent(state: State) -> dict:
    result = general_agent.invoke({"messages": [("user", state["input"])]}, config)
    return {"output": result["messages"][-1].content}

# Routing logic: send math-looking questions to the math agent, everything else to the general agent
def route(state: State) -> str:
    text = state["input"].lower()
    if "math" in text or any(ch.isdigit() for ch in text):
        return "math_agent"
    return "general_agent"

# Create and compile the graph
graph = StateGraph(State)
graph.add_node("router", lambda state: state)  # passthrough; the conditional edge does the routing
graph.add_node("math_agent", call_math_agent)
graph.add_node("general_agent", call_general_agent)
graph.set_entry_point("router")
graph.add_conditional_edges("router", route)
graph.add_edge("math_agent", END)
graph.add_edge("general_agent", END)
workflow = graph.compile()

# Use the workflow
result = workflow.invoke({"input": "What's 2 + 2?"})
print(result["output"])

result = workflow.invoke({"input": "Who was the first president of the United States?"})
print(result["output"])
This example demonstrates how LangGraph allows us to create a simple but powerful AI system that can handle different types of queries, maintain conversation context, and use specialized tools when needed.
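If you want to watch the routing happen step by step, compiled graphs can also stream node-by-node updates; here's a brief sketch (the query is just an illustrative prompt):

# Stream an update as each node finishes, showing which agent handled the query
for update in workflow.stream({"input": "Quick math check: what's 12 * 12?"}):
    print(update)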
LangGraph offers a flexible and powerful framework for building stateful AI applications in Python. By understanding and leveraging its core concepts of agents, tools, memory, and orchestration, you can create sophisticated AI systems that can handle complex, multi-step tasks while maintaining context and state.