Tuesday, April 7, 2026

return Statement in Java With Examples

In previous tutorials we have already seen the continue statement, which continues with the next iteration of a loop, and the break statement, which breaks out of a loop. In this post, we’ll explore the return statement in Java, a fundamental control flow statement which is used to explicitly return from a method.

What is the Return Statement in Java

When a return statement is encountered in a method, that method’s execution immediately terminates and control transfers back to the calling method. Depending on the method’s signature, a return statement may return a value or simply exit without returning anything.

Rules for using Java return statement

  1. Void methods and return
    • If a method does not return a value, its signature must declare void.
    • Example: void methodA() { ... }
    • In such methods, a return; statement is optional and can be used to exit early under certain conditions.
  2. Methods returning a value
    • If a method returns a value, its signature must specify the return type.
    • Example: int methodB() { return 5; }
    • The type of the returned value must be compatible with the declared return type.
  3. Type compatibility
    • The return type of the method and actual value returned should be compatible.
    • The compiler enforces strict type checking. For instance, a method declared to return double cannot return a String.
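For example, an int value can be returned from a method declared to return long because of implicit widening, while an incompatible type fails to compile. A minimal sketch (the class name here is illustrative):

```java
public class ReturnTypeDemo {

    // int literal widens to the declared long return type
    static long count() {
        return 42; // OK: int is compatible with long
    }

    // static double price() {
    //     return "ten"; // Compile-time error: String is not compatible with double
    // }

    public static void main(String[] args) {
        System.out.println(count()); // prints 42
    }
}
```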

Java return keyword example

1- A method returning an int value.

public class ReturnExample {

 public static void main(String[] args) {
  ReturnExample obj = new ReturnExample();
  int sum = obj.add(6,  7);
  System.out.println("Sum is- " + sum);
 }
 
 int add(int a, int b) {
  int sum = a + b;
  return sum;
 }
}

Output

Sum is- 13

2- A void method using a return statement to terminate method execution when a condition is satisfied.

public class ReturnExample {
  public static void main(String[] args) {
    ReturnExample obj = new ReturnExample();
    obj.display();
    System.out.println("After method call...");
  }
    
  void display() {
    for(int i = 1; i <= 10; i++) {
      // method execution terminates when this 
      //condition is true
      if (i > 5)
        return;
      System.out.println(i);
    }
  }
}

Output

1
2
3
4
5
After method call...

In the example, there is a for loop in the display() method with a condition to return from the method. When the condition is true, method execution terminates and control returns to the calling method.

One point to note here is that in a method the return statement should either be the last statement or be placed within a condition. Since method execution terminates as soon as a return statement is encountered, having any statement after an unconditional return results in an “Unreachable code” compile-time error.

public class ReturnExample {

 public static void main(String[] args) {
  ReturnExample obj = new ReturnExample();
  obj.display();
  System.out.println("After method call...");
 }
 
 void display() {
  int i;
  return;
  i++; // error
 }
}

In the example there is code after the return statement which can never be executed, hence the “Unreachable code” compile-time error.

That's all for this topic return Statement in Java With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Java Basics Tutorial Page


Related Topics

  1. Java do-while Loop With Examples
  2. Conditional Operators in Java
  3. Why Class Name And File Name Should be Same in Java
  4. Object Creation Using new Operator in Java
  5. Literals in Java

You may also like-

  1. Method Overriding in Java
  2. Interface in Java With Examples
  3. Java String Search Using indexOf(), lastIndexOf() And contains() Methods
  4. try-catch Block in Java Exception Handling
  5. Split a String in Java Example Programs
  6. Java Program to Reverse a Number
  7. Serialization and Deserialization in Java
  8. Nested Class And Inner Class in Java

Structured Output In LangChain

When interacting with LLMs, the response you get is free-form text. In many scenarios you may need a structured response which follows a pre-defined format. Some of those scenarios are listed below-

  • API calling- You may need to pass the model’s response to API calls. REST APIs are built to consume structured data such as JSON or XML. If the LLM returns structured output you can send it directly to API methods.
  • Storing to DB- If the LLM response is compatible with the DB table structure where that data has to be stored, it is easy to save the LLM response directly into the DB table.
  • Agents- In a multi-agent workflow, communication between agents becomes more reliable and efficient by enforcing a structured output. With free-form text there is a greater chance of the next agent in the workflow misinterpreting the response.

Structured output in LangChain

In LangChain, structured output is a process used to force LLMs to return their responses in a specific, pre-defined format (such as JSON or a Pydantic object) instead of free-form text. This ensures predictability and allows for direct integration with downstream applications and APIs.

Structured output in LangChain frequently reduces token usage too, by eliminating unnecessary conversational text, whitespace, and formatting tokens.

How to achieve formatted output

  • with_structured_output()- This is a helper method that lets you wrap a model call so its response is automatically parsed into a structured schema.
  • Output parsers- Using output parsers is another way to format the output, typically with models that do not natively support structured output. With structured output, formatting is done by the model itself and you get the formatted response, whereas with output parsers the LLM returns raw text and the parser performs the actual formatting by transforming that text into a structured programmatic object.

In this article we’ll stick to structured output in LangChain for formatting output.

Supported Schema Formats for Structured Output

1. TypedDict

A Pythonic way to define dictionary structures where we specify which key and value pairs should form the output. One drawback of using TypedDict is that it does not validate the data at runtime.
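As a quick sketch of that drawback: at runtime a TypedDict is just a plain dict, so a wrong value type slips through silently (a static type checker such as mypy would flag it, but Python itself will not):

```python
from typing import TypedDict

class Book(TypedDict):
    title: str
    year_first_published: int

# No runtime check: a TypedDict is an ordinary dict at runtime,
# so the wrong type below is accepted silently.
book: Book = {"title": "Emma", "year_first_published": "not a number"}  # type: ignore
print(type(book).__name__)            # dict
print(book["year_first_published"])   # not a number
```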

Structured Output with TypedDict example

Suppose you want to send a prompt to the model to list 3 books by a given author. You want the response in a specific format with only 3 fields: title, year first published and main characters in the book. That schema structure can be defined using TypedDict. Note that a TypedDict in Python is always defined by creating a class that inherits from TypedDict.

The inference provider used in the example is Groq, so you need to define GROQ_API_KEY. Also ensure the langchain-groq package is installed.

from typing import List, Optional, TypedDict
from langchain_groq import ChatGroq
from langchain_core.prompts import HumanMessagePromptTemplate, ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()

# Define a TypedDict for a single book
class BookModel(TypedDict):
    title: str
    year_first_published: int
    main_characters: Optional[list[str]]

# Define a TypedDict for multiple books (container)
class BooksResponse(TypedDict):
    books: List[BookModel]

prompt = ChatPromptTemplate.from_messages([
    {"role": "system", "content": "You are a helpful assistant that provides information about books."},
    HumanMessagePromptTemplate.from_template("List 3 books of author {author}.")
])

model = ChatGroq(model="qwen/qwen3-32b", temperature=0.2)

# Wrap with structured output using TypedDict
structured_model = model.with_structured_output(BooksResponse)

chain = prompt | structured_model

response = chain.invoke({"author": "Agatha Christie"})

print(response)

Output

{'books': [{'main_characters': ['Hercule Poirot', 'Detective Inspector Japp', 'Countess Andrenyi'], 
'title': 'Murder on the Orient Express', 'year_first_published': 1934}, 
{'main_characters': ['Hercule Poirot', 'Dr. Sheppard', 'Roger Ackroyd'], 
'title': 'The Murder of Roger Ackroyd', 'year_first_published': 1926}, 
{'main_characters': ['Justice Wargrave', ' Vera Claythorne', 'Philip Lombard'], 
'title': 'And Then There Were None', 'year_first_published': 1939}]}

Points to note here-

  • Since a list of books is needed as the response, two separate TypedDict classes are defined; one of them acts as a container of books.
  • The main_characters field is kept Optional. That makes it a non-mandatory field in the response.
  • You have to wrap the model with the with_structured_output() method, passing the output schema, a TypedDict class in this case.
    structured_model = model.with_structured_output(BooksResponse)
    

2. Pydantic Models

A Pydantic model provides robust data validation, which is missing in TypedDict. With Pydantic you can define a schema structure which guarantees type safety. When you instantiate a Pydantic model, it checks that the provided values match the declared types. If they don’t, it raises a ValidationError.
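A quick sketch of that runtime check (assuming Pydantic v2 is installed): a wrong value type that a TypedDict would accept silently now raises a ValidationError at instantiation:

```python
from pydantic import BaseModel, ValidationError

class Book(BaseModel):
    title: str
    year_first_published: int

# Valid data passes the runtime check
ok = Book(title="Emma", year_first_published=1815)
print(ok.year_first_published)  # 1815

# Invalid data raises ValidationError at instantiation time
try:
    Book(title="Emma", year_first_published="not a number")
except ValidationError:
    print("validation failed")
```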

Structured output with Pydantic example

Take the same example of listing 3 books by a given author, with the response in a specific format containing only 3 fields: title, year first published and main characters in the book. That schema structure can be defined using Pydantic.

A Gemini model is used in the example, so you need to define GEMINI_API_KEY. Also ensure the langchain_google_genai package is installed.

from pydantic import BaseModel
from typing import Optional, List
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import HumanMessagePromptTemplate, ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()

# Define a Pydantic model for a single book
class BookModel(BaseModel):
    title: str
    year_first_published: int
    main_characters: Optional[list[str]]

# Define a Pydantic model for multiple books (container)
class BooksResponse(BaseModel):
    books: List[BookModel]

prompt = ChatPromptTemplate.from_messages([
    {"role": "system", "content": "You are a helpful assistant that provides information about books."},
    HumanMessagePromptTemplate.from_template("List 3 books of author {author}.")
])

model = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0.2)

# Wrap with structured output using Pydantic
structured_model = model.with_structured_output(BooksResponse)

chain = prompt | structured_model

response = chain.invoke({"author": "Agatha Christie"})
print(response)

Output

books=[BookModel(title='The Murder of Roger Ackroyd', year_first_published=1926, 
main_characters=['Hercule Poirot', 'James Sheppard']), 
BookModel(title='Murder on the Orient Express', year_first_published=1934, 
main_characters=['Hercule Poirot']), 
BookModel(title='And Then There Were None', year_first_published=1939, 
main_characters=['Vera Claythorne', 'Philip Lombard', 'Lawrence Wargrave'])]

{'books': [{'title': 'The Murder of Roger Ackroyd', 'year_first_published': 1926, 
'main_characters': ['Hercule Poirot', 'James Sheppard']}, 
{'title': 'Murder on the Orient Express', 'year_first_published': 1934, 
'main_characters': ['Hercule Poirot']}, 
{'title': 'And Then There Were None', 'year_first_published': 1939, 
'main_characters': ['Vera Claythorne', 'Philip Lombard', 'Lawrence Wargrave']}]}

3. JSON Schema

You can also define structured output using JSON Schema, which is a standard, language-agnostic format useful for cross-system interoperability. If you have API consumers written in different programming languages then JSON Schema is a better option. Since it is a universal standard, any system (Python, Java, Node.js, etc.) can validate against it, making it ideal for cross-platform workflows.

Structured output with JSON Schema example

from langchain_google_genai import ChatGoogleGenerativeAI
from langchain_core.prompts import HumanMessagePromptTemplate, ChatPromptTemplate
from dotenv import load_dotenv

load_dotenv()

# Define a JSON Schema for a single book
book_schema = {
    "title": "BookModel",
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year_first_published": {"type": "integer"},
        "main_characters": {
            "type": "array",
            "items": {"type": "string"},
            "maxItems": 3   # restrict to 3 characters
        }
    },
    "required": ["title", "year_first_published"]
}

# Define a JSON Schema for multiple books (container)
books_response_schema = {
    "title": "BooksResponse",
    "type": "object",
    "properties": {
        "books": {
            "type": "array",
            "items": book_schema,
            "minItems": 3,
            "maxItems": 3
        }
    },
    "required": ["books"]
}

prompt = ChatPromptTemplate.from_messages([
    {"role": "system", "content": "You are a helpful assistant that provides information about books."},
    HumanMessagePromptTemplate.from_template("List 3 books of author {author}.")
])

model = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0.2)

# Wrap with structured output using JSON Schema
structured_model = model.with_structured_output(books_response_schema)

chain = prompt | structured_model

response = chain.invoke({"author": "Agatha Christie"})

print(response)

Output

{'books': [{'title': 'The Murder of Roger Ackroyd', 'year_first_published': 1926, 
'main_characters': ['Hercule Poirot', 'James Sheppard']}, 
{'title': 'Murder on the Orient Express', 'year_first_published': 1934, 
'main_characters': ['Hercule Poirot']}, 
{'title': 'And Then There Were None', 'year_first_published': 1939, 
'main_characters': ['Vera Claythorne', 'Philip Lombard', 'Lawrence Wargrave']}]}

Points to note here-

  • book_schema defines the structure for one book.
  • books_response_schema wraps a list of exactly 3 books.
  • main_characters is restricted to a maximum of 3 items using maxItems: 3.
  • With JSON Schema you can enforce rules like maxItems, minItems, pattern, enum, or numeric ranges directly in the schema.
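The schema only guides the model; if you also want to validate data against the same rules in your own code, the third-party jsonschema package (an assumption here, it is not used elsewhere in this post) can enforce them. The schema below repeats the shape of book_schema to keep the snippet self-contained:

```python
from jsonschema import validate, ValidationError

# Same shape as book_schema above, repeated for self-containment
book_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "year_first_published": {"type": "integer"},
        "main_characters": {"type": "array", "items": {"type": "string"}, "maxItems": 3},
    },
    "required": ["title", "year_first_published"],
}

# Conforming data validates silently
validate({"title": "Emma", "year_first_published": 1815}, book_schema)

# Four main_characters violates maxItems: 3
try:
    validate({"title": "Emma", "year_first_published": 1815,
              "main_characters": ["a", "b", "c", "d"]}, book_schema)
except ValidationError:
    print("schema violation caught")
```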

That's all for this topic Structured Output In LangChain. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. Messages in LangChain
  3. Prompt Templates in LangChain With Examples
  4. LangChain PromptTemplate + Streamlit - Code Generator Example
  5. Chatbot With Chat History - LangChain MessagesPlaceHolder

You may also like-

  1. return Statement in Java With Examples
  2. Array in Java
  3. Count Number of Words in a String Java Program
  4. Ternary Operator in Java With Examples
  5. Java Multithreading Interview Questions And Answers
  6. Java Exception Handling Tutorial
  7. ConcurrentHashMap in Java With Examples
  8. TreeMap in Java With Examples

Chatbot With Chat History - LangChain MessagesPlaceHolder

In the post Chain Using LangChain Expression Language With Examples we created a simple chatbot using a Streamlit UI. As pointed out there, Large Language Models (LLMs) are stateless; by design, they do not retain memory of past interactions, treating each API call as a fresh, independent request. Sometimes it is required to keep chat history in order to provide contextual information to the model. In this tutorial we’ll see how to create a chatbot that stores the chat history using MessagesPlaceholder and RunnableWithMessageHistory in LangChain.

MessagesPlaceholder in LangChain

MessagesPlaceholder in LangChain is a prompt component which is used to inject a dynamic list of messages, such as chat history, directly into a ChatPromptTemplate.

  • When a user interacts with a model, the messages sent by the user are classified as human messages whereas the replies from the model are classified as AI messages.
  • You can store the previous human/AI messages and inject them as chat history, when sending a query to the model, using MessagesPlaceholder.

That way you can provide context to the model. You need to pass a key to MessagesPlaceholder by which it can identify which variable to use as messages.

Here is a simple example where some of the messages are manually setup as human and AI messages.

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant."),
        MessagesPlaceholder("history"),
        ("human", "{question}"),
    ]
)
prompt.invoke(
    {
        "history": [("human", "what's 5 + 2"), ("ai", "5 + 2 is 7")],
        "question": "now multiply that by 4",
    }
)

As you can see, MessagesPlaceholder is passed as one of the prompt components. The key passed to it is "history", which is the same as the key "history" used in the dictionary passed to prompt.invoke().

So essentially what is sent to the model is as given below-

ChatPromptValue(messages=[
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="what's 5 + 2"),
    AIMessage(content="5 + 2 is 7"),
    HumanMessage(content="now multiply that by 4"),
])

The latest human message is "now multiply that by 4", but previous messages are also sent to the model to retain the context.

RunnableWithMessageHistory in LangChain

RunnableWithMessageHistory class in LangChain is used to manage chat message history for another Runnable. The highlight of using RunnableWithMessageHistory is the session support. It uses a session_id to look up or create a specific chat message history, allowing it to handle multiple concurrent users or conversations.

Configuring RunnableWithMessageHistory

To use RunnableWithMessageHistory, you must provide the following:

  • get_session_history: A factory function that takes a single positional argument session_id of type string and returns the chat history instance associated with the passed session_id.
  • input_messages_key: Specifies which key in the input dictionary contains the current user message. Required if the wrapped chain accepts a dictionary as input.
  • history_messages_key: Specifies the key where historical messages should be inserted in the prompt template.
  • output_messages_key: Identifies the key containing the model's response. Required if the wrapped chain returns a dictionary.

For example-

with_message_history = RunnableWithMessageHistory(
    chatbot_chain,
    get_session_history,
    input_messages_key="user_input",
    history_messages_key="chat_history",
)

How to use RunnableWithMessageHistory

When invoking a wrapped chain, you must pass the session ID in the configuration:

with_message_history.invoke(
    {"user_input": user_input},
    config={"configurable": {"session_id": "user_123"}}
)

Using chat_message_histories module

The chat_message_histories module in LangChain provides a standardized way to store the history of message interactions in a chat. This module provides many classes to seamlessly integrate with the file system (FileChatMessageHistory), various databases (Cassandra, Postgres, Redis), and in-memory storage (ChatMessageHistory).

Chatbot with chat history using LangChain and Streamlit

Using the above mentioned LangChain classes, it becomes very easy to store conversation history.

In a nutshell the role of the classes is as given below-

MessagesPlaceholder- To inject the chat history so that the model has the contextual information.

RunnableWithMessageHistory- To manage chat history and provide session support. That way you can have many instances of the chatbot maintaining their own sessions.

ChatMessageHistory- Acts as an in-memory store of the chat history. Good for an initial demo, but for production use a DB-backed store.

The chatbot first asks for the user’s ID; the passed user ID is used as the session ID.

Chatbot using MessagesPlaceHolder

Streamlit UI related code is kept in a separate file.

app.py

import streamlit as st
from chatbot import generate_response

# Streamlit app to demonstrate the simple chain
st.set_page_config(page_title="Chatbot", layout="centered")
st.title("🤖 Chatbot With Context")
# Initialize session state
if "user_id" not in st.session_state:
    st.session_state.user_id = None
if "chat_history" not in st.session_state:
    st.session_state.chat_history = []

#Ensure user_id is set before allowing chat interactions
if st.session_state.user_id is None:
    user_id_input = st.text_input("Enter your User ID:")
    if st.button("Confirm User ID"):
        if user_id_input.strip() == "":
            st.warning("User ID is required to continue.")
        else:
            st.session_state.user_id = user_id_input.strip()
            st.success(f"User ID set: {st.session_state.user_id}")

if st.session_state.user_id:
    for message in st.session_state.chat_history:
        with st.chat_message(message["role"]):
            st.markdown(message["content"])

    user_input = st.chat_input("Enter your query:")  

    if user_input:
        st.session_state.chat_history.append( {"role": "user", "content": user_input})
        with st.chat_message("user"):
            st.markdown(user_input)
        response = generate_response(user_input, st.session_state.user_id)
        st.session_state.chat_history.append({"role": "assistant", "content": response})
        with st.chat_message("assistant"):
            st.markdown(f"**Chatbot Response:** {response}")   
    else:
        st.warning("Please enter a query to get a response.")

chatbot.py

from langchain_core.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain_core.prompts import MessagesPlaceholder
from langchain_ollama import ChatOllama

from langchain_core.messages import SystemMessage
from langchain_core.output_parsers import StrOutputParser
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

# Define system and human message templates
system_message = SystemMessage(content="You are a helpful assistant that responds to user queries.")
# Define the output parser
parser = StrOutputParser()

# In-Memory Store to hold chat histories for different sessions 
# (not suitable for production, just for demo purposes)
store = {}

# Function to retrieve chat history for a given session_id, or create a new one if it doesn't exist
def get_session_history(session_id: str):
    print(f"Retrieving chat history for session_id: {session_id}")
    if session_id not in store:
        store[session_id] = ChatMessageHistory()
    return store[session_id]

def generate_response(user_input: str, session_id: str) -> str:
    session_id = session_id or "default_session"
    # Create a ChatPromptTemplate object with MessagePlaceholder for conversation history
    prompt = ChatPromptTemplate.from_messages([
        system_message, 
        MessagesPlaceholder(variable_name="chat_history"),
        HumanMessagePromptTemplate.from_template("{user_input}")
    ]) 

    # Initialize the model
    model = ChatOllama(model="llama3.1", temperature=0.1)            

    # Chain the prompt, model, and parser together using RunnableWithMessageHistory
    chatbot_chain = prompt | model | parser

    with_message_history = RunnableWithMessageHistory(
        chatbot_chain,
        get_session_history,
        input_messages_key="user_input",
        history_messages_key="chat_history",
    )
    response = with_message_history.invoke(
        {"user_input": user_input},
        config={"configurable": {"session_id": session_id}}
    )
    
    return response

(Screenshot: Chatbot With Chat History)

Drawbacks of using MessagesPlaceholder, RunnableWithMessageHistory, ChatMessageHistory

Using MessagesPlaceholder and ChatMessageHistory along with RunnableWithMessageHistory in LangChain provides a powerful capability to store chat history, but they come with notable drawbacks regarding complexity, performance, and maintainability.

  1. Complexity in Debugging: Because the content is inserted dynamically in MessagesPlaceholder, it can be difficult to visualize the final prompt sent to the LLM, making debugging tricky.
  2. Context Management Overhead: MessagesPlaceholder only acts as a placeholder. It does not automatically manage the history, meaning the developer is responsible for passing the correct history list every time, increasing code complexity.
  3. Context Window Limits (Token Exhaustion): Without any trimming, chat history grows indefinitely. This leads to exceeding the LLM's token limit (context window) and increasing API costs.
  4. In-Memory Persistence: By default, in-memory history is lost when the application restarts. Storing conversations in memory can become a performance bottleneck for large-scale applications.

That's all for this topic Chatbot With Chat History - LangChain MessagesPlaceHolder. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. LangChain PromptTemplate + Streamlit - Code Generator Example
  2. Prompt Templates in LangChain With Examples
  3. RunablePassthrough in LangChain With Examples
  4. RunableSequence in LangChain With Examples
  5. RunnableBranch in LangChain With Examples

You may also like-

  1. How ArrayList Works Internally in Java
  2. How HashSet Works Internally in Java
  3. Why wait(), notify() And notifyAll() Must be Called Inside a Synchronized Method or Block
  4. Synchronization in Java - Synchronized Method And Block
  5. Best Practices For Exception Handling in Java
  6. Java Abstract Class and Abstract Method
  7. Just In Time Compiler (JIT) in Java
  8. Circular Dependency in Spring Framework

Monday, April 6, 2026

removeIf() Method in Java Collection With Examples

In this post, we’ll explore how the removeIf() Method in Java can be used to efficiently remove elements from a Collection based on a given condition. Introduced in Java 8 as part of the java.util.Collection interface, the removeIf() method leverages functional programming by accepting a Predicate- a powerful way to define conditions in a clean, declarative style.

Since removeIf() is defined in the Collection interface, you can use it with collections like ArrayList and HashSet that implement the Collection interface. To use it with HashMap you will have to get a collection view of the Map, since Map doesn't implement the Collection interface.


Syntax of removeIf() Method in Java

boolean removeIf(Predicate<? super E> filter)

Parameter: A Predicate functional interface that evaluates each element and returns true if the element should be removed.

Return Value: Returns true if any elements were removed, otherwise false.
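Since removeIf() reports whether the collection was modified, the boolean result can drive follow-up logic. A minimal sketch (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class RemoveIfReturnValue {

    public static void main(String[] args) {
        List<String> cities = new ArrayList<>(List.of("Oslo", "London", "Rome"));

        // true: "Oslo" and "Rome" match the predicate and are removed
        boolean changed = cities.removeIf(c -> c.length() < 5);
        System.out.println(changed + " " + cities); // true [London]

        // false: nothing left matches, the list is unchanged
        changed = cities.removeIf(c -> c.length() < 5);
        System.out.println(changed + " " + cities); // false [London]
    }
}
```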

Removing elements from ArrayList using removeIf() method

In this example we'll have a list of cities and we'll remove elements from this ArrayList using the removeIf() method. If the condition in the passed Predicate holds true for any of the elements, that element is removed from the list.

import java.util.ArrayList;
import java.util.List;

public class RemoveIf {

  public static void main(String[] args) {
    List<String> cityList = new ArrayList<String>();
    cityList.add("Delhi");
    cityList.add("Mumbai");
    cityList.add("Kolkata");
    cityList.add("Hyderabad");
    cityList.add("Bangalore");
    cityList.add("Mumbai");
    System.out.println("*** List Initially ***");
    System.out.println(cityList);
    cityList.removeIf(p -> p.equalsIgnoreCase("Hyderabad") || 
          p.equalsIgnoreCase("Bangalore"));
    System.out.println("After Removal " + cityList);
  }
}

Output

*** List Initially ***
[Delhi, Mumbai, Kolkata, Hyderabad, Bangalore, Mumbai]
After Removal [Delhi, Mumbai, Kolkata, Mumbai]

Removing elements from HashSet using removeIf() method

You can use the removeIf() method with HashSet too, to remove elements from the Set based on the passed condition. In the given example the condition is to remove cities whose name is longer than 6 characters.

import java.util.HashSet;
import java.util.Set;

public class RemoveIf {

  public static void main(String[] args) {
    // creating a HashSet
    Set<String> citySet = new HashSet<String>();
    // Adding elements
    citySet.add("London");        
    citySet.add("Tokyo");
    citySet.add("New Delhi");
    citySet.add("Beijing");
    citySet.add("Nairobi");
    System.out.println("*** Set Initially ***");
    System.out.println(citySet);
    
    // Remove all cities having length more than 6
    citySet.removeIf(e -> e.length() > 6);
    System.out.println("After Removal " + citySet);
  }
}

Output

*** Set Initially ***
[Beijing, New Delhi, Nairobi, Tokyo, London]
After Removal [Tokyo, London]

Removing elements from HashMap using removeIf() method

To use the removeIf() method with a Map you have to get a collection view (like entrySet(), keySet(), or values()) of the Map. After getting the collection view, the removeIf() method can be used.

import java.util.HashMap;
import java.util.Map;


public class RemoveIf {

  public static void main(String[] args) {
    Map<String, String> cityMap = new HashMap<String, String>();
    // Adding elements
    cityMap.put("1","New York City" );
    cityMap.put("2", "New Delhi");
    cityMap.put("3", "Mumbai");
    cityMap.put("4", "Beijing");
    cityMap.put("5", "Berlin");

    System.out.println("*** Map Initially ***");
    System.out.println(cityMap);
      
    // Use entrySet to get Set view of all Map entries. 
    // Remove entry from Map based on the condition for value.
    cityMap.entrySet().removeIf(entry -> entry.getValue().equals("Beijing"));
    System.out.println("*** Map After removal ***");
    System.out.println(cityMap);
  }
}

Output

*** Map Initially ***
{1=New York City, 2=New Delhi, 3=Mumbai, 4=Beijing, 5=Berlin}
*** Map After removal ***
{1=New York City, 2=New Delhi, 3=Mumbai, 5=Berlin}

That's all for this topic removeIf() Method in Java Collection With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. How to Remove Entry From HashMap in Java
  2. How to Remove Elements From an ArrayList in Java
  3. How to Remove Duplicate Elements From an ArrayList in Java
  4. Java Map replace() With Examples
  5. HashSet Vs LinkedHashSet Vs TreeSet in Java

You may also like-

  1. Difference Between Comparable and Comparator in Java
  2. CopyOnWriteArraySet in Java With Examples
  3. Switch Expressions in Java 12
  4. Java Nested Class And Inner Class
  5. Reading File in Java Using Files.lines And Files.newBufferedReader
  6. Angular Cross Component Communication Using Subject Observable
  7. Angular HttpClient - Set Response Type as Text
  8. Spring Web MVC Tutorial

Sunday, April 5, 2026

Java Lambda Expression as Method Parameter

In this guide, we’ll explore how to use a Java lambda expression as a method parameter. Lambda expressions provide a concise way to implement the abstract method of a functional interface, making code cleaner and more expressive. Since the target type of a lambda is always a functional interface, you can pass a lambda wherever such an interface is expected, including as a method argument. In other words, the type of the method parameter that receives the lambda must be a functional interface type.

Why Pass Lambda Expressions as Parameters

Using a Java Lambda Expression as Method Parameter allows developers to write flexible and reusable methods. Instead of hard‑coding behavior, you can pass custom logic directly into a method, reducing boilerplate and improving readability. This is especially useful in APIs like Java’s Collections framework and Stream API, where lambdas are frequently used for filtering, mapping, and sorting.
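As a minimal sketch of the idea (the interface used is java.util.function.IntPredicate; the countMatching method is illustrative, not from any library), a method whose parameter is a functional interface can receive different lambdas:

```java
import java.util.function.IntPredicate;

public class LambdaParamDemo {

    // The IntPredicate parameter receives a lambda from the caller
    static int countMatching(int[] nums, IntPredicate condition) {
        int count = 0;
        for (int n : nums) {
            if (condition.test(n)) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        int[] nums = {3, 8, 12, 5, 20};
        // Pass different behavior without writing new methods
        System.out.println(countMatching(nums, n -> n > 10));      // 2
        System.out.println(countMatching(nums, n -> n % 2 == 0));  // 3
    }
}
```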

Counting Sort Program in Java

In this tutorial, we’ll walk through how to implement a Counting Sort Program in Java. Counting Sort is a linear‑time sorting algorithm with a time complexity of O(N + K), making it faster than comparison‑based algorithms such as Merge sort and Quick sort when the input range is limited. It belongs to the family of non‑comparison sorting techniques, alongside Radix Sort and Bucket Sort.

Advantages and Limitations

Though counting sort is one of the fastest sorting algorithms, it has certain drawbacks too. One of them is that the algorithm requires prior knowledge of the input range. Counting sort also needs auxiliary space to store the frequency of elements, which makes it less memory-efficient for datasets with very large ranges.

How does counting sort work

Counting sort works by counting the frequency of each element to create a frequency array, or count array. These counts are then used to compute the index of each element in the sorted array. The process can be broken down into the following steps:

  1. Initialize Count Array
    Create a count array to store the frequency count of each element. If the range of elements is 0 to k then the count array should be of length k+1. For example, if the max element in the array is 15 then the count array should be of length 16.
  2. Populate Frequency Counts
    Iterate through the elements of the input array and, for each element, increment the count at the corresponding index in the count array.
    For example, if the input array is- [0, 4, 2, 6, 5, 4, 8, 9, 8, 6]

    Then the count array would be-

    [1, 0, 1, 0, 2, 1, 2, 0, 2, 1]
  3. Compute Prefix Sums
    Transform the count array into a prefix sum array, where each index stores the sum of its own count and the counts at all previous indexes. For the example above the modified count array is-

    [1, 1, 2, 2, 4, 5, 7, 7, 9, 10]
  4. Build Sorted Output
    Using this modified count array, construct the sorted output array. For each element picked from the input array, go to that element's index in the modified count array to get a value; that value is the element's place (1-based position) in the sorted array.
    After placing the element, decrement the corresponding value in the count array by 1.

    For example, take the input array [0, 4, 2, 6, 5, 4, 8, 9, 8, 6] and the modified count array [1, 1, 2, 2, 4, 5, 7, 7, 9, 10]-

    The first element in the input array is 0, so consult index 0 in the count array, which holds 1. That means 0 goes at place 1 (i.e. index 0) in the sorted array. Then decrement the value at index 0 in the count array by 1 so that it becomes 0.

    The second element in the input array is 4, so consult index 4 in the count array, which holds 4. That means 4 goes at place 4 (i.e. index 3) in the sorted array. Within the input array an element may be repeated, and repeated elements should be grouped together; that is why we decrement the value in the count array. In this step the value at index 4 in the count array is decremented by 1 so that it becomes 3.

    When the next 4 is encountered in the input array, the value at index 4 in the count array is 3. That means this 4 goes at place 3 (i.e. index 2) in the sorted array.


Counting Sort Java program

import java.util.Arrays;

public class CountingSort {

  public static void main(String[] args) {
    int[] arr = {0, 4, 2, 6, 5, 4, 8, 9, 8, 6};
    // max element + 1
    int range = 10;
    System.out.println("Original Array- " + Arrays.toString(arr));
    arr = countingSort(arr, range);
    System.out.println("Sorted array after counting sort- " + Arrays.toString(arr));
  }
    
  private static int[] countingSort(int[] arr, int range){
    int[] output = new int[arr.length];
    int[] count = new int[range];
    //count number of times each element appears
    for(int i = 0; i < arr.length; i++){
      count[arr[i]]++;
    }
    System.out.println("Count array- " + Arrays.toString(count));
    // each element stores (sum of element at current index 
    //+ element at previous index)
    for(int i = 1; i < range; i++){
      count[i] = count[i] + count[i-1];
    }
    System.out.println("Modified count- " + Arrays.toString(count));
    for(int i = 0; i < arr.length; i++){
      output[count[arr[i]] - 1] = arr[i];
      count[arr[i]]--;
    }
    return output;
  }
}

Output

Original Array- [0, 4, 2, 6, 5, 4, 8, 9, 8, 6]
Count array- [1, 0, 1, 0, 2, 1, 2, 0, 2, 1]
Modified count- [1, 1, 2, 2, 4, 5, 7, 7, 9, 10]
Sorted array after counting sort- [0, 2, 4, 4, 5, 6, 6, 8, 8, 9]

Performance of Counting Sort

If the number of elements to be sorted is n and the range of elements is 0 to k, then the time complexity of Counting sort can be calculated as follows.

The loop that builds the count array takes O(n) time, the loop that converts it into a prefix sum array takes O(k) time, and building the sorted output array takes another O(n) time. Combined, this comes to O(2n + k); since constant factors are dropped, the time complexity of Counting sort is O(n + k).

The auxiliary space requirement is also O(n + k): the count array takes k space and the output array n. Thus the space complexity of Counting sort is O(n + k).

That's all for this topic Counting Sort Program in Java. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Java Programs Page


Related Topics

  1. Shell Sort Program in Java
  2. Heap Sort Program in Java
  3. Insertion Sort Program in Java
  4. How to Read Input From Console in Java
  5. How to Create Password Protected Zip File in Java

You may also like-

  1. Check if Given String or Number is a Palindrome Java Program
  2. Remove Duplicate Elements From an Array in Java
  3. Difference Between Two Dates in Java
  4. Java Program to Check Prime Number
  5. TreeSet in Java With Examples
  6. Java CyclicBarrier With Examples
  7. Java Variable Types With Examples
  8. Circular Dependency in Spring Framework

RunnableSequence in LangChain With Examples

RunnableSequence in LangChain is a sequence of Runnable objects, where the output of one becomes the input of the next. You can build one with the pipe (|) operator, where at least one of the operands must be a Runnable, or you can instantiate RunnableSequence directly.

LangChain RunnableSequence Example

Let's say you have been given the task of creating an automated workflow to analyze a contract snippet, identify risks, and summarize them for the legal or finance department.

You want to create a workflow where-

  1. First you send the contract to the model to analyze it for any risks.
  2. Then send that risk analysis to the model to create a summarized memo.

Required Packages

  • python-dotenv
  • langchain
  • langchain-google-genai
  • langchain-ollama

For the Gemini model you need to set the GEMINI_API_KEY in the .env file, which can then be loaded as an environment variable using the load_dotenv() function.

First let's see the example where the pipe operator (|) is used to create a sequence of actions.

Example 1: Using piped workflow

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv

load_dotenv()

# 1. Define LLM
#llm = ChatOllama(model="llama3.1", temperature=0)

llm = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0)

# Define the prompts
prompt_1 = PromptTemplate.from_template(
    "Analyze the following contract clause for any hidden financial risks: {clause}"
)
prompt_2 = PromptTemplate.from_template(
    "Summarize the following risk analysis in a professional, concise, bulleted memo for the financial head: {analysis}"
)

#Parsing to String ensures clean text outputs
parser = StrOutputParser()

# 4. Create Sequence: Clause -> Risk Analysis -> Financial Team 
# (gets analysis as input and produces memo for financial head as output)
# Sequence is: Prompt1 -> LLM -> Parser -> Prompt2 -> LLM -> Parser
chain = (
    {"analysis": prompt_1 | llm | parser} 
    | prompt_2 
    | llm 
    | parser
)

# Execute with contract 
contract_clauses = """
Scope of Work (SOW): Vendor shall deliver required work within one month of initiation and provide support services as detailed in Exhibit A.
Payment Terms: Client shall pay Invoice within 30 days of receipt. Late payments will incur a 1.5% monthly interest fee.
Termination Clause: Either party may terminate this agreement with 10 calendar days' written notice if the other party breaches material terms.
Limitation of Liability: The vendor will not be liable for any damages unless they exceed 300% of the total contract value, and notice is provided within 1 day.
"""
result = chain.invoke({"clause": contract_clauses})
print(result)

This gives a memorandum as output; here are a few lines of it.

Output

**MEMORANDUM**

**TO:** Head of Finance
**FROM:** [Your Name/Department]
**DATE:** October 26, 2023
**SUBJECT:** Risk Assessment: Proposed Vendor Contract

I have completed a risk analysis of the proposed vendor contract. The current draft is heavily skewed in favor of the vendor and contains several provisions that expose the company to significant, avoidable financial liability.

### **Key Risk Areas**

*   **Scope of Work (SOW):** The lack of defined support parameters in "Exhibit A" creates an open-ended financial liability, leaving us vulnerable to aggressive "out-of-scope" billing.
....
....

Example 2: Using RunnableSequence Explicitly

You can also use RunnableSequence explicitly, which makes the workflow more transparent and modular; this is useful when debugging or extending complex pipelines. The above example can be modified to use RunnableSequence instead.

from langchain_core.prompts import PromptTemplate
from langchain_core.runnables import RunnableSequence
from langchain_core.output_parsers import StrOutputParser
from langchain_ollama import ChatOllama
from langchain_google_genai import ChatGoogleGenerativeAI
from dotenv import load_dotenv

load_dotenv()

# 1. Define LLM
llm = ChatOllama(model="llama3.1", temperature=0)

#llm = ChatGoogleGenerativeAI(model="gemini-3.1-flash-lite-preview", temperature=0)

# Define the prompts
prompt_1 = PromptTemplate.from_template(
    "Analyze the following contract clause for any hidden financial risks: {clause}"
)
prompt_2 = PromptTemplate.from_template(
    "Summarize the following risk analysis in a professional, concise, bulleted memo for the financial head: {analysis}"
)

#Parsing to String ensures clean text outputs
parser = StrOutputParser()

# Step A: Clause -> Risk Analysis
risk_analysis_chain = RunnableSequence(prompt_1, llm, parser)

# Step B: Risk Analysis -> Financial Memo
financial_memo_chain = RunnableSequence(prompt_2, llm, parser)
# Step C: Full pipeline
chain = RunnableSequence(
        {"analysis": risk_analysis_chain},  # feed clause into risk analysis
        financial_memo_chain                # feed analysis into financial memo    
)

# Execute with Corporate Input
contract_clauses = """
Scope of Work (SOW): Vendor shall deliver required work within one month of initiation and provide support services as detailed in Exhibit A.
Payment Terms: Client shall pay Invoice within 30 days of receipt. Late payments will incur a 1.5% monthly interest fee.
Termination Clause: Either party may terminate this agreement with 10 calendar days' written notice if the other party breaches material terms.
Limitation of Liability: The vendor will not be liable for any damages unless they exceed 300% of the total contract value, and notice is provided within 1 day.
"""
result = chain.invoke({"clause": contract_clauses})

print(result)

That's all for this topic RunnableSequence in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. RunnablePassthrough in LangChain With Examples
  4. RunnableLambda in LangChain With Examples
  5. RunnableBranch in LangChain With Examples

You may also like-

  1. String in Java Tutorial
  2. Array in Java
  3. Count Number of Words in a String Java Program
  4. Ternary Operator in Java With Examples
  5. Java Multithreading Interview Questions And Answers
  6. Java Exception Handling Tutorial
  7. ConcurrentHashMap in Java With Examples
  8. TreeMap in Java With Examples