RunnableSequence in LangChain is a sequence of Runnable objects, where the output of one Runnable becomes the input of the next. You can build one with the pipe (|) operator, where at least one of the two operands must be a Runnable, or you can instantiate RunnableSequence directly.
LangChain RunnableSequence Example
Let's say you have been given the task of creating an automated workflow that analyzes a contract snippet, identifies risks, and summarizes them for the legal or finance department.
You want to create a workflow where:
- First you send the contract to the model to analyze it for any risks.
- Then send that risk analysis to the model to create a summarized memo.
Required Packages
- python-dotenv
- langchain
- langchain-google-genai
- langchain-ollama
For the Gemini model you need to set the GEMINI_API_KEY in the .env file, which can then be loaded as an environment variable using the load_dotenv() function.
First, let's see the example where the pipe operator (|) is used to create a sequence of actions.
Example 1: Using the pipe operator
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv

load_dotenv()

# 1. Define the LLM
#llm = ChatOllama(model="llama3.1", temperature=0)
llm = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0)

# 2. Define the prompts
prompt_1 = PromptTemplate.from_template(
    "Analyze the following contract clause for any hidden financial risks: {clause}"
)
prompt_2 = PromptTemplate.from_template(
    "Summarize the following risk analysis in a professional, concise, bulleted memo for the financial head: {analysis}"
)

# 3. Parsing to string ensures clean text output
parser = StrOutputParser()

# 4. Create the sequence: Clause -> Risk Analysis -> Financial Memo
# Sequence is: Prompt1 -> LLM -> Parser -> Prompt2 -> LLM -> Parser
chain = (
    {"analysis": prompt_1 | llm | parser}
    | prompt_2
    | llm
    | parser
)
# Execute with contract
contract_clauses = """
Scope of Work (SOW): Vendor shall deliver required work within one month of initiation and provide support services as detailed in Exhibit A.
Payment Terms: Client shall pay each invoice within 30 days of receipt. Late payments will incur a 1.5% monthly interest fee.
Termination Clause: Either party may terminate this agreement with 10 calendar days' written notice if the other party breaches material terms.
Limitation of Liability: The vendor will not be liable for any damages unless they exceed 300% of the total contract value, and notice is provided within 1 day.
"""
result = chain.invoke({"clause": contract_clauses})
print(result)
This gives a memorandum as output; here are a few lines of it.
Output
**MEMORANDUM**

**TO:** Head of Finance
**FROM:** [Your Name/Department]
**DATE:** October 26, 2023
**SUBJECT:** Risk Assessment: Proposed Vendor Contract

I have completed a risk analysis of the proposed vendor contract. The current draft is heavily skewed in favor of the vendor and contains several provisions that expose the company to significant, avoidable financial liability.

### **Key Risk Areas**

* **Scope of Work (SOW):** The lack of defined support parameters in "Exhibit A" creates an open-ended financial liability, leaving us vulnerable to aggressive "out-of-scope" billing.
....
....
Example 2: Using RunnableSequence Explicitly
You can also use RunnableSequence explicitly, which makes the workflow more transparent and modular; this is useful when debugging or extending complex pipelines. The above example can be modified to use RunnableSequence instead of the pipe operator.
from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv

load_dotenv()

# 1. Define the LLM
llm = ChatOllama(model="llama3.1", temperature=0)
#llm = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0)

# 2. Define the prompts
prompt_1 = PromptTemplate.from_template(
    "Analyze the following contract clause for any hidden financial risks: {clause}"
)
prompt_2 = PromptTemplate.from_template(
    "Summarize the following risk analysis in a professional, concise, bulleted memo for the financial head: {analysis}"
)

# 3. Parsing to string ensures clean text output
parser = StrOutputParser()

# Step A: Clause -> Risk Analysis
risk_analysis_chain = RunnableSequence(prompt_1, llm, parser)

# Step B: Risk Analysis -> Financial Memo
financial_memo_chain = RunnableSequence(prompt_2, llm, parser)

# Step C: Full pipeline
chain = RunnableSequence(
    {"analysis": risk_analysis_chain},  # feed the clause into risk analysis
    financial_memo_chain                # feed the analysis into the financial memo
)
# Execute with Corporate Input
contract_clauses = """
Scope of Work (SOW): Vendor shall deliver required work within one month of initiation and provide support services as detailed in Exhibit A.
Payment Terms: Client shall pay each invoice within 30 days of receipt. Late payments will incur a 1.5% monthly interest fee.
Termination Clause: Either party may terminate this agreement with 10 calendar days' written notice if the other party breaches material terms.
Limitation of Liability: The vendor will not be liable for any damages unless they exceed 300% of the total contract value, and notice is provided within 1 day.
"""
result = chain.invoke({"clause": contract_clauses})
print(result)
That's all for this topic RunnableSequence in LangChain With Examples. If you have any doubts or any suggestions, please drop a comment. Thanks!