Chains are one of the core building blocks in LangChain, allowing developers to create complex workflows by combining multiple components in a sequential manner. They provide a structured way to process inputs, interact with language models, and generate outputs.
At its core, a Chain is a series of steps that are executed in order. Each step can be a simple operation, a call to a language model, or even another Chain. This flexibility allows for the creation of sophisticated pipelines tailored to specific tasks.
Let's start with a basic example:
from langchain import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

llm = OpenAI(temperature=0.7)

prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short poem about {topic}."
)

chain = LLMChain(llm=llm, prompt=prompt)

result = chain.run("artificial intelligence")
print(result)
In this example, we create a simple Chain that takes a topic as input, formats it into a prompt, and then sends it to an OpenAI language model to generate a poem.
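As a side note, run() is a convenience for single-input, single-output chains; you can also call the chain with a dict of inputs, in which case it returns a dict keyed by the chain's output. A small sketch using the same chain object as above:

# Equivalent call with a dict of inputs; the generated poem appears under the "text" key
result = chain({"topic": "artificial intelligence"})
print(result["text"])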
LangChain offers several types of Chains to suit different needs, from the basic LLMChain to sequential chains such as SimpleSequentialChain and SequentialChain, conversational chains with memory, and fully custom Chains.
Let's explore a SimpleSequentialChain:
from langchain import SimpleSequentialChain

# First chain: Generate a movie title
title_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["genre"],
    template="Create a movie title for a {genre} film."
))

# Second chain: Write a synopsis
synopsis_chain = LLMChain(llm=llm, prompt=PromptTemplate(
    input_variables=["title"],
    template="Write a brief synopsis for a movie titled '{title}'."
))

# Combine the chains
movie_chain = SimpleSequentialChain(chains=[title_chain, synopsis_chain])

# Run the chain
result = movie_chain.run("science fiction")
print(result)
This example demonstrates how we can chain two LLMChains together to first generate a movie title and then create a synopsis based on that title.
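SimpleSequentialChain assumes each step takes exactly one input and produces exactly one output. When steps need multiple named inputs or outputs, SequentialChain is the more general option. The following is a minimal sketch reusing the llm defined earlier; the era variable and the output_key names are illustrative choices, not part of the example above.

from langchain.chains import SequentialChain

# Each sub-chain declares an output_key so SequentialChain can route values by name
title_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["genre", "era"],
        template="Create a movie title for a {genre} film set in the {era}.",
    ),
    output_key="title",
)

synopsis_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate(
        input_variables=["title"],
        template="Write a brief synopsis for a movie titled '{title}'.",
    ),
    output_key="synopsis",
)

movie_chain = SequentialChain(
    chains=[title_chain, synopsis_chain],
    input_variables=["genre", "era"],
    output_variables=["title", "synopsis"],
)

result = movie_chain({"genre": "science fiction", "era": "1980s"})
print(result["title"])
print(result["synopsis"])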
As you become more comfortable with Chains, you can start exploring more advanced techniques, such as adding memory and building your own custom Chains.
Chains can be equipped with memory to maintain context across multiple interactions:
from langchain import ConversationChain
from langchain.memory import ConversationBufferMemory

conversation = ConversationChain(
    llm=llm,
    memory=ConversationBufferMemory()
)

response1 = conversation.predict(input="Hi, I'm Alice.")
response2 = conversation.predict(input="What's my name?")
print(response2)  # The model should remember that your name is Alice
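To see what the chain carries between turns, ConversationBufferMemory exposes the running transcript through its buffer attribute. A small sketch continuing the conversation object above:

# Print the stored conversation history that gets injected into each prompt
print(conversation.memory.buffer)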
You can create custom Chains by subclassing the Chain class:
from typing import Dict, List

from langchain.chains.base import Chain
from langchain.llms.base import BaseLLM
from langchain.prompts import PromptTemplate


class MyCustomChain(Chain):
    prompt: PromptTemplate
    llm: BaseLLM

    @property
    def input_keys(self) -> List[str]:
        # The chain's inputs are whatever variables the prompt expects
        return self.prompt.input_variables

    @property
    def output_keys(self) -> List[str]:
        # This chain produces a single output under the "text" key
        return ["text"]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        # Format the prompt, call the LLM, and return the result
        prompt = self.prompt.format(**inputs)
        response = self.llm(prompt)
        return {"text": response}
This allows you to define custom behavior and integrate it seamlessly with other LangChain components.
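As a rough usage sketch (reusing the llm defined earlier; the prompt text here is only an illustration), the custom chain can then be called like any built-in chain:

# A sample prompt for the custom chain; any single-variable prompt works
question_prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question in one sentence: {question}",
)

my_chain = MyCustomChain(prompt=question_prompt, llm=llm)

# Calling the chain returns a dict containing the "text" output key
result = my_chain({"question": "What is a Chain in LangChain?"})
print(result["text"])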
When working with Chains, keep performance in mind: each step usually adds another model call, so prompt length, the number of steps, and caching of repeated calls all affect latency and cost.
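For example, LangChain provides an in-memory LLM cache that avoids re-sending identical prompts; a minimal sketch (the cache is global and applies to every chain in the process):

import langchain
from langchain.cache import InMemoryCache

# Cache identical LLM calls in memory so repeated chain runs don't hit the API again
langchain.llm_cache = InMemoryCache()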
Chains are versatile and can be applied to a wide range of tasks, from content generation (like the poem and movie synopsis above) to multi-turn conversation and custom processing pipelines.
By mastering Chains in LangChain, you'll be able to create powerful, flexible, and efficient natural language processing pipelines that can tackle a wide range of complex tasks.