Thursday, April 2, 2026

RunnableLambda in LangChain With Examples

RunnableLambda is one of the core components of the LangChain Expression Language (LCEL). It converts a Python (or JavaScript) function or a lambda expression into a Runnable object, which allows custom functions to be seamlessly integrated into LangChain chains alongside other components such as models, prompt templates, and retrievers.

LangChain RunnableLambda example

One of the prominent use cases for RunnableLambda is implementing routing in LangChain.

LangChain has a dedicated class called LLMRouterChain, which lets you use an LLM-powered chain to decide how inputs should be routed to different destinations, but it is deprecated in recent versions. RunnableLambda is now one of the ways to implement routing, and it also brings benefits such as streaming and batch support.

1. In the first example, we’ll have two chains: one that handles geography-related queries and another that handles history-related queries. Using RunnableLambda we can route to one of these chains based on whether the question is history based or geography based. That classification of the query is also done by the LLM. Note that the Groq inference provider is used here, which requires installing the langchain-groq package and setting the “GROQ_API_KEY” environment variable.

from langchain_core.runnables import RunnableLambda
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_groq import ChatGroq
from typing_extensions import TypedDict
from operator import itemgetter
from typing import Literal

history_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a history expert."),
        ("human", "{query}"),
    ]
)
geography_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a geography expert."),
        ("human", "{query}"),
    ]
)

model = ChatGroq(model="qwen/qwen3-32b", temperature=0.5)

chain_history = history_prompt | model | StrOutputParser()
chain_geography = geography_prompt | model | StrOutputParser()

route_system = (
    "Classify the user's query as either history or geography related. "
    "Answer with one word only: 'history' or 'geography'."
)
route_prompt = ChatPromptTemplate.from_messages(
    [
        ("system", route_system),
        ("human", "{query}"),
    ]
)

# Class to enforce structured output from the LLM
class RouteQuery(TypedDict):
    """Schema for LLM output for routing queries."""
    destination: Literal["history", "geography"]

route_chain = (
    route_prompt
    | model.with_structured_output(RouteQuery)
    | itemgetter("destination")
)

final_chain = {
    "destination": route_chain,  
    "query": lambda x: x["query"],  # pass through input query
} | RunnableLambda(
    # if history, chain_history. otherwise, chain_geography.
    lambda x: (
        # display the routing decision for clarity
        print(f"Routing to destination: {x['destination']}") or
        (chain_history if x["destination"] == "history" else chain_geography)
    )
)

result = final_chain.invoke({"query": "List 5 temples built by the Cholas. Only list the temples, no other information. "})

print(result)

Output

Routing to destination: history
1. Brihadeeswara Temple, Thanjavur
2. Brihadeeswara Temple, Gangaikonda Cholapuram
3. Airavatesvara Temple, Darasuram
4. Nellaiappar Temple, Tirunelveli
5. Siva Temple, Kudavai

Points to note here:

  • The Literal type from Python's typing module is a type hint indicating that a variable or function parameter must take one of a specific set of fixed, concrete values. Since the output of the LLM should be either 'history' or 'geography', that constraint is enforced by passing these fixed values.
  • TypedDict is also used to enforce a structured output format.
  • The first part of the final chain
    {
        "destination": route_chain,  
        "query": lambda x: x["query"],  # pass through input query
    }
    
    is a dictionary that LCEL automatically coerces into a RunnableParallel. The key "destination" is populated by running route_chain, while "query" passes the original query through unchanged.
  • In the second part of the chain, RunnableLambda wraps a lambda expression that selects the appropriate chain (chain_history or chain_geography) based on the "destination" value.
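The Literal constraint in RouteQuery can also be inspected at runtime with the standard typing helpers, which is a quick way to sanity-check the schema without involving an LLM (TypedDict is imported from typing here, available since Python 3.8, instead of typing_extensions as in the example above):

```python
from typing import Literal, TypedDict, get_args, get_type_hints

class RouteQuery(TypedDict):
    """Schema for LLM output for routing queries."""
    destination: Literal["history", "geography"]

hints = get_type_hints(RouteQuery)
# The allowed destination values are recoverable at runtime:
print(get_args(hints["destination"]))  # ('history', 'geography')
```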

2. The second example also shows how to use RunnableLambda for routing. In the first example the classification of the query was done by the LLM, but here it is done by a plain Python function, which is wrapped into a RunnableLambda. The query is routed to one of three chains based on the output of that function.

from langchain_core.runnables import RunnableLambda
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama

support_template = "You are a support agent. User input: {query}. Provide 3 reasons for the issue."
sales_template = "You are a sales specialist. User input: {query}. Provide 3 reasons to buy the product and 3 for not buying."
general_template = "You are a general assistant. User input: {query}. Provide a helpful response."
# 1. Define specialized chains
support_chain = ChatPromptTemplate.from_template(support_template) | ChatOllama(model="llama3.1") | StrOutputParser()
sales_chain = ChatPromptTemplate.from_template(sales_template) | ChatOllama(model="llama3.1") | StrOutputParser()
general_chain = ChatPromptTemplate.from_template(general_template) | ChatOllama(model="llama3.1") | StrOutputParser()

# 2. Define routing logic using RunnableLambda
def route_query(input: dict):
    print(f"Routing input: {input}")
    query = input["query"].lower()
    print(f"Routing query: {query}")
    if "support" in query or "issue" in query:
        return support_chain
    elif "price" in query or "buy" in query:
        return sales_chain
    else:
        return general_chain

router_node = RunnableLambda(route_query)

print("Router node created.", type(router_node))

# 3. Create the final routing chain
# The router_node returns a chain, which is then invoked with the input
final_chain = router_node 

# Example usage
result = final_chain.invoke({"query": "Request to buy a 3D printer."})

print(result)

Here’s what happens internally:

  • router_node runs route_query(input).
  • That function returns the right chain (support_chain, sales_chain, or general_chain).
  • Because router_node is a runnable, it automatically invokes the returned chain with the same input.

That's all for this topic RunnableLambda in LangChain With Examples. If you have any doubts or suggestions, please drop a comment. Thanks!


Related Topics

  1. RunnableParallel in LangChain With Examples
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. Messages in LangChain
  4. Chain Using LangChain Expression Language With Examples

