Wednesday, April 1, 2026

Chain Using LangChain Expression Language With Examples

In this tutorial we’ll see how you can create chains in LangChain using the LangChain Expression Language (LCEL) and what the benefits of creating such chains are.

Pipelines using LCEL

When we create a generative app using LangChain, it is essentially a sequence of steps that can be executed as a pipeline (steps separated by the | symbol). Each step is represented as a Runnable, which automatically supports synchronous, asynchronous, streaming, and even parallel execution when steps are independent.

For example, a pipeline can be as simple as-

  1. Prompt
  2. Model
  3. Parsed response

Which in terms of LCEL can be expressed as a pipeline like this-

chain = prompt | model | output_parser

Composing a chain like this using the pipe (|) symbol signifies the flow of information from left to right. In the chain we have composed above, the output of prompt flows to model, and the output of that in turn goes to output_parser.
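To make the left-to-right flow concrete, here is a minimal plain-Python sketch of how a pipe operator can compose steps. ToyRunnable and the three lambdas are illustrative stand-ins, not the real LangChain classes; in actual LCEL the same mechanics are provided by the Runnable base class.

```python
class ToyRunnable:
    """Minimal stand-in for an LCEL Runnable, just to show how | composes steps."""
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        return self.func(value)

    def __or__(self, other):
        # a | b -> a new runnable that feeds a's output into b
        return ToyRunnable(lambda value: other.invoke(self.invoke(value)))

prompt = ToyRunnable(lambda topic: f"Tell me a joke about {topic}")
model = ToyRunnable(lambda p: f"MODEL RESPONSE to: {p}")
output_parser = ToyRunnable(lambda r: r.lower())

chain = prompt | model | output_parser
print(chain.invoke("cats"))  # -> model response to: tell me a joke about cats
```

Each `|` just wires one step's output into the next step's input, which is exactly the mental model to keep for real LCEL chains.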

You can also compose complex pipelines like-

  • Reformulate the user’s query by passing it to the LLM.
  • Retrieve relevant data from the vector DB using the reformulated search query.
  • Pass the retrieved data to the LLM to summarize it.
  • Pass the summary and the reformulated query to the LLM to get the final answer.

A composed LCEL chain for this may look like the one given below-

# LCEL Chain Composition
# RunnablePassthrough.assign adds each step's output as a new key while
# keeping the existing keys, so later prompts can still read earlier results.
chain = (
    {"question": RunnablePassthrough()}  # Wrap the raw input as {"question": ...}
    | RunnablePassthrough.assign(
        search_query=query_prompt | llm | StrOutputParser())  # Step 1: reformulate
    | RunnablePassthrough.assign(
        docs=lambda x: retrieve_docs(x["search_query"]))      # Step 2: retrieve
    | RunnablePassthrough.assign(
        summary=summary_prompt | llm | StrOutputParser())     # Step 3: summarize
    | answer_prompt | llm | StrOutputParser()                 # Step 4: final answer
)
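To see why each step must carry earlier keys forward, here is a plain-Python sketch (no LangChain) of the same accumulate-then-consume pattern: each step adds a key to a dict that flows down the pipeline, and the final step reads whichever keys it needs. The fake_* functions are hypothetical stand-ins for the LLM calls and the retriever.

```python
# Each fake step returns the input dict plus one new key, mimicking how
# RunnablePassthrough.assign threads state through an LCEL chain.
def fake_reformulate(d):
    return {**d, "search_query": d["question"].strip("?")}

def fake_retrieve(d):
    return {**d, "docs": ["doc about " + d["search_query"]]}

def fake_summarize(d):
    return {**d, "summary": "; ".join(d["docs"])}

def fake_answer(d):
    # Final step can still see both the summary and the original question
    return d["summary"] + " -> answer to: " + d["question"]

state = {"question": "What is LCEL?"}
for step in (fake_reformulate, fake_retrieve, fake_summarize):
    state = step(state)
result = fake_answer(state)
print(result)  # -> doc about What is LCEL -> answer to: What is LCEL?
```

If each step instead replaced the whole dict with only its own output, Step 4 would no longer have access to the original question, which is the bug the assign pattern avoids.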

Features of LCEL

Some of the key features of the LangChain Expression Language are given below.

  • In LCEL, a Runnable is the fundamental building block of a pipeline. A Runnable acts as a unified interface providing support for multiple execution modes (sync, async, streaming, batch).

    Key Methods-

    • invoke/ainvoke: Transforms a single input into an output.
    • batch/abatch: Efficiently transforms multiple inputs into outputs.
    • stream/astream: Streams output from a single input as it's produced.
    • astream_log: Streams output and selected intermediate results from an input.

    Methods with the 'a' prefix are asynchronous.
  • Uses pipe symbol (|)- Chains are created by connecting components (e.g., prompts, models, output parsers) linearly using pipe | operator, creating an explicit left-to-right data flow.
  • Asynchronous support- Normally, when you run code, you wait until the whole task finishes before you get the result. With async, tasks don’t block each other. Every LCEL chain automatically supports asynchronous execution (through ainvoke, abatch, and astream), meaning LCEL chains can run in the background, letting other work continue while waiting for the model’s response.
  • Streaming- Instead of waiting for the entire answer, LCEL chains can send pieces of the response as they’re generated. This makes apps feel faster and more interactive, especially when dealing with LLMs that take time to generate long outputs.
  • Parallelization- Independent steps (for example, the branches of a dict/RunnableParallel) are automatically run in parallel, significantly reducing latency.
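The invoke/batch/stream trio above can be sketched in plain Python. This is not the real Runnable interface, just a toy illustrating what each execution mode does with the same underlying function:

```python
# Toy sketch (not the real LangChain API) of the three sync execution modes
# a Runnable exposes: invoke for one input, batch for many, stream for chunks.
class ToyRunnable:
    def __init__(self, func):
        self.func = func

    def invoke(self, value):
        # One input in, one output out
        return self.func(value)

    def batch(self, values):
        # Many inputs in, a list of outputs out
        return [self.invoke(v) for v in values]

    def stream(self, value):
        # Yield the output piece by piece instead of all at once
        for token in self.invoke(value).split():
            yield token

shout = ToyRunnable(lambda s: s.upper())
print(shout.invoke("hello world"))        # HELLO WORLD
print(shout.batch(["a", "b"]))            # ['A', 'B']
print(list(shout.stream("hello world")))  # ['HELLO', 'WORLD']
```

In real LCEL you get all of these (plus the async `a`-prefixed variants) for free on any composed chain, without writing the plumbing yourself.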

Simple Chatbot using LCEL and Streamlit

Let’s see an LCEL chain in action by creating a simple chatbot, with Streamlit used to build the UI.

Required packages

python-dotenv
langchain
# package as per the LLM used
langchain-ollama
# for the UI
streamlit

from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_ollama import ChatOllama

import streamlit as st
from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser

# Define system and human message templates
system_message = SystemMessage(content="You are a helpful assistant that responds to user queries.")    
# Define the output parser
parser = StrOutputParser()

def generate_response(user_input: str) -> str:

    # Create a ChatPromptTemplate object
    prompt = ChatPromptTemplate.from_messages([system_message, 
                               HumanMessagePromptTemplate.from_template("{user_input}")]) 
    # Initialize the model
    model = ChatOllama(model="llama3.1", temperature=0.5)            

    # Chain the prompt, model, and parser together
    chatbot_chain = prompt | model | parser

    # Generate the response by invoking the chain with the placeholder value
    response = chatbot_chain.invoke({"user_input": user_input})
    return response

# Streamlit app to demonstrate the simple chain

st.set_page_config(page_title="Simple Chatbot", layout="centered")
st.title("🤖 Simple Chatbot")
# Initialize session state
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []
# Show previous messages
for message in st.session_state.chat_history:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])
        
user_input = st.chat_input("Enter your query:")  
if user_input:
    st.session_state.chat_history.append( {"role": "user", "content": user_input})
    with st.chat_message("user"):
        st.markdown(user_input)
    response = generate_response(user_input)
    st.session_state.chat_history.append({"role": "assistant", "content": response})
    with st.chat_message("assistant"):
        st.markdown(f"**Chatbot Response:** {response}")   
else:
    # On the initial run, before the user has typed anything, user_input is None
    st.warning("Please enter a query to get a response.")

Points to note

Here are some key points to note about the code:

  • ChatPromptTemplate is used to create a template with a system message to set the context and a human message.
  • StrOutputParser() extracts text content from model outputs as a string.
  • Prompt, model, and parser are piped together to create a chain, which is then invoked by passing the value for the placeholder in the prompt template.
  • Streamlit provides chat elements to help you build conversational apps. Some of the chat elements used in the example are st.chat_input and st.chat_message.
  • Streamlit apps rerun from top to bottom whenever the user interacts (e.g., sends a new message, presses a button). If you didn’t store and replay the conversation, only the latest message would show up.
  • Note that the chat history here is limited to the Streamlit session; the LLM itself is still stateless (not aware of the chat history). That is evident from the chatbot conversation shown in the image.
  [Image: chatbot conversation in LangChain showing the LLM is unaware of earlier messages]
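The last point suggests an easy extension: pass the stored history to the model along with the new query so the LLM actually sees earlier turns. Here is a plain-Python sketch of folding the session history into the message list; build_messages is a hypothetical helper, not a LangChain API, and history mirrors st.session_state.chat_history from the example above.

```python
# Sketch (no LLM call): fold prior turns into the message list so the model
# would see the whole conversation, not just the latest query.
def build_messages(system, history, user_input):
    messages = [("system", system)]
    for turn in history:
        # Each stored turn is a {"role": ..., "content": ...} dict
        messages.append((turn["role"], turn["content"]))
    messages.append(("user", user_input))
    return messages

history = [{"role": "user", "content": "Hi"},
           {"role": "assistant", "content": "Hello!"}]
msgs = build_messages("You are a helpful assistant.", history, "What did I say?")
# msgs now holds the system message, both prior turns, and the new query
```

With LangChain, role/content tuples like these could be fed to ChatPromptTemplate.from_messages so the chain itself becomes history-aware.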

That's all for this topic Chain Using LangChain Expression Language With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. First LangChain Program: Ask Me Anything
  3. Prompt Templates in LangChain With Examples
  4. LangChain PromptTemplate + Streamlit - Code Generator Example
  5. Messages in LangChain

