Thursday, April 16, 2026

Constructor in Python – Learn How __init__() Works

In Python, a constructor is a special method used to initialize objects when a class instance is created. The constructor ensures that the object’s data members are assigned appropriate values right at the time of instantiation. The method responsible for this is called __init__() in Python, which is automatically invoked whenever you create a new object.

Syntax of __init__() in Python

The first argument of the __init__() method is always self, which refers to the current instance of the class. A constructor may or may not take other input parameters, i.e. other input parameters are optional.

class MyClass:
    def __init__(self, input_parameters):
        # initialization code
        self.value = input_parameters
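For instance, here is a quick sketch showing that __init__() runs automatically at instantiation (the Person class and its fields are just illustrative):

```python
class Person:
    def __init__(self, name, age=0):
        # called automatically when Person(...) is executed
        self.name = name
        self.age = age

p = Person("Renee", 30)
print(p.name, p.age)  # Renee 30
q = Person("Sam")     # optional parameter falls back to its default
print(q.age)          # 0
```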

CopyOnWriteArraySet in Java With Examples

Java 5 added several concurrent collection classes as thread-safe alternatives to their normal collection counterparts, which are not thread safe. For example, ConcurrentHashMap as a thread-safe alternative to HashMap, and CopyOnWriteArrayList as a thread-safe alternative to ArrayList. In the same way, CopyOnWriteArraySet in Java was added as a thread-safe alternative to HashSet.

CopyOnWriteArraySet class in Java

CopyOnWriteArraySet is a part of the java.util.concurrent package. It extends AbstractSet and implements the Set interface, ensuring that only unique elements can be stored. Internally, it is backed by a CopyOnWriteArrayList, which means all operations are delegated to this underlying list.
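As an illustrative sketch (class and element names are arbitrary), the snippet below shows the Set semantics and the snapshot iterator of CopyOnWriteArraySet; modifying the set while iterating does not throw ConcurrentModificationException, but the change is not visible to the already-created iterator:

```java
import java.util.concurrent.CopyOnWriteArraySet;

public class CopyOnWriteArraySetExample {
  public static void main(String[] args) {
    CopyOnWriteArraySet<String> set = new CopyOnWriteArraySet<>();
    set.add("A");
    set.add("B");
    // duplicate element is rejected, add() returns false
    System.out.println("Duplicate added- " + set.add("A"));
    // iterator works on a snapshot of the backing array
    for (String s : set) {
      set.add("C"); // safe; no ConcurrentModificationException
    }
    System.out.println("Size- " + set.size()); // A, B, C
  }
}
```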

Vector Stores in LangChain With Examples

In the post Embeddings in LangChain With Examples we saw how you can convert your documents into embeddings (high-dimensional vectors) which represent the semantic meaning of the text. The next step is where to store these embeddings so that you can later retrieve the relevant documents by doing a semantic search. That’s where vector stores come into the picture.

What are Vector Stores

Vector stores are a special kind of database that can store embeddings.


Unlike traditional databases that search for exact keyword matches, vector stores enable semantic search, which allows applications to retrieve information based on conceptual similarity (similarity search).


How Vector Stores Work

When you query these vector databases, those queries are also converted into high-dimensional vectors. These vector stores use mathematical distance metrics (like Cosine Similarity or Euclidean distance) to find vectors closest to the query vector.

This retrieval is made more efficient by indexing the data in the vector store. Without indexing, similarity search requires a brute force linear scan across millions of vectors, which is computationally expensive.

Indexing in a vector store means organizing high-dimensional embeddings into smart data structures so that searches are fast and efficient. Instead of scanning every vector one by one (which is slow), indexing narrows down the search space. Listed below are some of the algorithms used for indexing.

  • Tree-based structures: break the space into hierarchical partitions.
  • Clustering (IVF – Inverted File Index): group vectors into clusters, then search only the most relevant ones.
  • Graph-based approaches (HNSW – Hierarchical Navigable Small World graphs): connect vectors in a graph so nearest neighbors can be found quickly by traversing edges.
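To see why indexing matters, here is a sketch of the brute-force linear scan an un-indexed store would have to perform; the three-dimensional vectors and document IDs are toy values for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# toy "store" of embeddings keyed by document id
store = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.2],
    "doc3": [0.8, 0.2, 0.1],
}
query = [1.0, 0.0, 0.0]

# brute-force linear scan: score every vector, O(n) in the number of vectors
ranked = sorted(store, key=lambda doc_id: cosine_similarity(store[doc_id], query),
                reverse=True)
print(ranked[0])  # doc1 is closest to the query
```

An index such as IVF or HNSW avoids scoring every vector by narrowing the search to a promising subset first.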

Commonly Used Vector Stores

Here is a list of some of the most commonly used vector stores.

  1. Chroma- Lightweight, open-source vector DB with local persistence and easy LangChain integration.
  2. Pinecone- Fully managed cloud vector database designed for large-scale, production-ready similarity search.
  3. Weaviate- Open-source vector search engine with hybrid search (text + vectors) and schema support.
  4. Milvus- High-performance, distributed vector database optimized for massive datasets.
  5. FAISS- Facebook’s library for efficient similarity search and clustering of dense vectors, often used locally.
  6. Qdrant- Open-source vector DB with focus on high-performance ANN search and filtering.

For local development or quick prototypes with local persistence, Chroma and FAISS are good choices. For scalable cloud deployments go with Pinecone, Weaviate, Milvus or Qdrant.

Vector Stores in LangChain

LangChain provides a unified interface for integrating with several vector stores. Common methods are-

  • add_documents- Add documents to the store.
  • delete- Remove stored documents by ID.
  • similarity_search- Query for semantically similar documents.

LangChain provides support for InMemoryVectorStore, Chroma, ElasticsearchStore, FAISS, MongoDBAtlasVectorSearch, PGVectorStore (uses PostgreSQL with the pgvector extension), PineconeVectorStore and many more.

LangChain Vector Store Example

Here is a full-fledged example of loading and splitting a document, creating embeddings and storing them in a Chroma vector store.

There is a util module with utility methods to load and split the document.

util.py

from langchain_community.document_loaders import PyPDFLoader, DirectoryLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_ollama import OllamaEmbeddings

def load_documents(dir_path):
    """
    loading the documents in a specified directory
    """
    pdf_loader = DirectoryLoader(dir_path, glob="*.pdf", loader_cls=PyPDFLoader)
    documents = pdf_loader.load()
    return documents

def create_splits(extracted_data):
    """
    splitting the document using text splitter
    """
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    text_chunks = text_splitter.split_documents(extracted_data)
    return text_chunks

def getEmbeddingModel():
    """
    Configure the embedding model used
    """
    embeddings = OllamaEmbeddings(model="nomic-embed-text")
    return embeddings

Then there is a dbutil module that uses Chroma DB to store the embeddings.

dbutil.py

from langchain_chroma import Chroma
from util import load_documents, create_splits, getEmbeddingModel

def get_chroma_store():
    embeddings = getEmbeddingModel()
    vector_store = Chroma(
        collection_name="data_collection",
        embedding_function=embeddings,
        persist_directory="./chroma_langchain_db",  # Where to save data locally
    )
    return vector_store

def load_data():
    # Access the underlying Chroma client
    #client = get_chroma_store()._client

    # Delete the collection
    #client.delete_collection("data_collection")

    documents = load_documents("./langchaindemos/resources")
    text_chunks = create_splits(documents)
    vector_store = get_chroma_store()
    #add documents
    vector_store.add_documents(text_chunks)

load_data()

Run this file once to load the documents, split them and store the chunks in the DB.

Points to note here-

  1. Needs the langchain_chroma package to be installed (pip install langchain-chroma).
  2. Create a Chroma client using the Chroma class. Parameters passed are the collection_name (identifier for where vectors are stored), embedding_function (an Embeddings object) and persist_directory (directory where your vector database is saved locally).

Once the embeddings are stored in the vector store, it can be queried to do a similarity search and return the relevant chunks. I have loaded a health insurance document so the queries are related to that document.

chromaapp.py

from langchain_chroma import Chroma
from dbutil import get_chroma_store

vector_store = get_chroma_store()

#search documents
result = vector_store.similarity_search(
  query='What is the waiting period for the pre-existing diseases',
  k=3 # number of outcomes 
)

#displaying the results
for i, res in enumerate(result):
    print(f"Result {i+1}: {res.page_content[:500]}...")
print("Another Query")
query = "What is the condition for getting cumulative bonus"
result = vector_store.similarity_search(query, k=3)

for i, res in enumerate(result):
    print(f"Result {i+1}: {res.page_content[:500]}...")

print("Another Query")
query = "What are the co-pay rules"
result = vector_store.similarity_search(query, k=3)

for i, res in enumerate(result):
    print(f"Result {i+1}: {res.page_content[:500]}...")

Points to note here-

  1. In the similarity_search() method, the k parameter is used to configure the number of results to return.
  2. similarity_search() returns the list of documents most similar to the query text.
  3. By looping over that list, we can get the content of each result.
    for i, res in enumerate(result):
        print(f"Result {i+1}: {res.page_content[:500]}...")
    

That's all for this topic Vector Stores in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Document Loaders in LangChain With Examples
  2. Text Splitters in LangChain With Examples
  3. RunnableBranch in LangChain With Examples
  4. Chain Using LangChain Expression Language With Examples
  5. RunablePassthrough in LangChain With Examples

You may also like-

  1. Structured Output In LangChain
  2. Messages in LangChain
  3. PreparedStatement Interface in Java-JDBC
  4. Pre-defined Functional Interfaces in Java
  5. Counting Sort Program in Java
  6. Python String join() Method
  7. Signal in Angular With Examples
  8. Circular Dependency in Spring Framework

Wednesday, April 15, 2026

Pre-defined Functional Interfaces in Java

In our earlier post on Functional Interfaces in Java we saw how you can create custom functional interfaces and annotate them with the @FunctionalInterface Annotation. However, you don’t always need to define your own functional interface for every scenario. Java 8 introduced the java.util.function package, which defines many general-purpose pre-defined functional interfaces.

These built-in interfaces are widely used across the JDK, including the Collections framework, Java Stream API and in user defined code as well.

In this guide, we’ll dive into these built-in functional interfaces in Java so you have a good idea which functional interface to use in which context while using with Lambda expressions in Java.


Pre-defined functional interfaces categorization

Functional interfaces defined in java.util.function package can be categorized into five types-

  1. Consumer- Consumes the passed argument and no value is returned.
  2. Supplier- Takes no argument and supplies a result.
  3. Function- Takes argument and returns a result.
  4. Predicate- Evaluates a condition on the passed argument and returns a boolean result (true or false).
  5. Operators- A specialized form of Function where both input and output are of the same type.

Consumer functional interface

Consumer<T> represents a function that accepts a single input argument and returns no result. Consumer functional interface definition is as given below consisting of an abstract method accept() and a default method andThen().

@FunctionalInterface
public interface Consumer<T> {
  void accept(T t);
  default Consumer<T> andThen(Consumer<? super T> after) {
    Objects.requireNonNull(after);
    return (T t) -> { accept(t); after.accept(t); };
  }
}

The following pre-defined functional interfaces are categorized as Consumers as all of them share the same behavior of consuming the passed value(s) and returning no result. You can use any of these based on the number of arguments or the data type.

  • BiConsumer<T,U>- Represents an operation that accepts two input arguments and returns no result.
  • DoubleConsumer- Represents an operation that accepts a single double-valued argument and returns no result.
  • IntConsumer- Represents an operation that accepts a single int-valued argument and returns no result.
  • LongConsumer- Represents an operation that accepts a single long-valued argument and returns no result.
  • ObjDoubleConsumer<T>- Represents an operation that accepts an object-valued and a double-valued argument, and returns no result.
  • ObjIntConsumer<T>- Represents an operation that accepts an object-valued and an int-valued argument, and returns no result.
  • ObjLongConsumer<T>- Represents an operation that accepts an object-valued and a long-valued argument, and returns no result.

Consumer functional interface Java example

In the example elements of List are displayed by using an implementation of Consumer functional interface.

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class ConsumerExample {
  public static void main(String[] args) {
    Consumer<String> consumer = s -> System.out.println(s);
    List<String> alphaList = Arrays.asList("A", "B", "C", "D");
    for(String str : alphaList) {
      // functional interface accept() method called
      consumer.accept(str);
    }
  }
}

Output

A
B
C
D
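The andThen() default method shown in the interface definition lets you chain consumers; a quick sketch (the StringBuilder usage is just for demonstration):

```java
import java.util.function.Consumer;

public class ConsumerChainExample {
  public static void main(String[] args) {
    StringBuilder sb = new StringBuilder();
    Consumer<String> first = s -> sb.append(s.toLowerCase());
    Consumer<String> second = s -> sb.append(s.toUpperCase());
    // accept() of first runs, then accept() of second, on the same argument
    first.andThen(second).accept("Ab");
    System.out.println(sb); // abAB
  }
}
```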

Supplier functional interface

Supplier<T> represents a function that doesn't take any argument and supplies a result. Supplier functional interface definition is as given below, consisting of a single abstract method get()-

@FunctionalInterface
public interface Supplier<T> {
  T get();
}

The following pre-defined functional interfaces are categorized as Suppliers as all of them share the same behavior of supplying a result.

  • BooleanSupplier- Represents a supplier of boolean-valued results.
  • DoubleSupplier- Represents a supplier of double-valued results.
  • IntSupplier- Represents a supplier of int-valued results.
  • LongSupplier- Represents a supplier of long-valued results.

Supplier functional interface Java example

In the example Supplier functional interface is implemented as a lambda expression to supply current date and time.

import java.time.LocalDateTime;
import java.util.function.Supplier;

public class SupplierExample {
  public static void main(String[] args) {
    Supplier<LocalDateTime> currDateTime = () -> LocalDateTime.now();
    System.out.println(currDateTime.get());
  }
}

Function functional interface

Function<T,R> represents a function that accepts one argument and produces a result. Function functional interface definition is as given below consisting of an abstract method apply(), two default methods compose(), andThen() and a static method identity().

@FunctionalInterface
public interface Function<T, R> {

  R apply(T t);

  default <V> Function<V, R> compose(Function<? super V, ? extends T> before) {
    Objects.requireNonNull(before);
    return (V v) -> apply(before.apply(v));
  }

  default <V> Function<T, V> andThen(Function<? super R, ? extends V> after) {
    Objects.requireNonNull(after);
    return (T t) -> after.apply(apply(t));
  }
  static <T> Function<T, T> identity() {
    return t -> t;
  }
}

The following pre-defined functional interfaces are categorized as Functions as all of them share the same behavior of accepting argument(s) and producing a result.

  • BiFunction<T,U,R>- Represents a function that accepts two arguments and produces a result.
  • DoubleFunction<R>- Represents a function that accepts a double-valued argument and produces a result.
  • DoubleToIntFunction- Represents a function that accepts a double-valued argument and produces an int-valued result.
  • DoubleToLongFunction- Represents a function that accepts a double-valued argument and produces a long-valued result.
  • IntFunction<R>- Represents a function that accepts an int-valued argument and produces a result.
  • IntToDoubleFunction- Represents a function that accepts an int-valued argument and produces a double-valued result.
  • IntToLongFunction- Represents a function that accepts an int-valued argument and produces a long-valued result.
  • LongFunction<R>- Represents a function that accepts a long-valued argument and produces a result.
  • LongToDoubleFunction- Represents a function that accepts a long-valued argument and produces a double-valued result.
  • LongToIntFunction- Represents a function that accepts a long-valued argument and produces an int-valued result.
  • ToDoubleBiFunction<T,U>- Represents a function that accepts two arguments and produces a double-valued result.
  • ToDoubleFunction<T>- Represents a function that produces a double-valued result.
  • ToIntBiFunction<T,U>- Represents a function that accepts two arguments and produces an int-valued result.
  • ToIntFunction<T>- Represents a function that produces an int-valued result.
  • ToLongBiFunction<T,U>- Represents a function that accepts two arguments and produces a long-valued result.
  • ToLongFunction<T>- Represents a function that produces a long-valued result.

Function functional interface Java example

In the example a Function interface is implemented to return the length of the passed String.

import java.util.function.Function;

public class FunctionExample {
  public static void main(String[] args) {
    Function<String, Integer> function = (s) -> s.length();
    System.out.println("Length of String- " + function.apply("Interface"));
  }
}

Output

Length of String- 9
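The compose() and andThen() default methods from the interface definition can be illustrated with a small sketch (the doubler/addTen functions are just examples):

```java
import java.util.function.Function;

public class FunctionComposeExample {
  public static void main(String[] args) {
    Function<Integer, Integer> doubler = n -> n * 2;
    Function<Integer, Integer> addTen = n -> n + 10;
    // compose: addTen runs first, then doubler => (5 + 10) * 2
    System.out.println(doubler.compose(addTen).apply(5)); // 30
    // andThen: doubler runs first, then addTen => (5 * 2) + 10
    System.out.println(doubler.andThen(addTen).apply(5)); // 20
  }
}
```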

Predicate functional interface

Predicate<T> represents a function that accepts one argument and produces a boolean result. Abstract method in the Predicate functional interface is boolean test(T t).

The following pre-defined functional interfaces are categorized as Predicates as all of them share the same behavior of accepting argument(s) and producing a boolean result.

  • BiPredicate<T,U>- Represents a predicate (boolean-valued function) of two arguments.
  • DoublePredicate- Represents a predicate (boolean-valued function) of one double-valued argument.
  • IntPredicate- Represents a predicate (boolean-valued function) of one int-valued argument.
  • LongPredicate- Represents a predicate (boolean-valued function) of one long-valued argument.

Predicate functional interface Java Example

In the example a number is passed and true is returned if the number is even, otherwise false is returned.

import java.util.function.Predicate;

public class PredicateExample {
  public static void main(String[] args) {
    Predicate<Integer> predicate = (n) -> n%2 == 0;
    boolean val = predicate.test(6);
    System.out.println("Is Even- " + val);    
    System.out.println("Is Even- " + predicate.test(11));
  }
}

Output

Is Even- true
Is Even- false
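Besides the abstract test() method, the Predicate interface also provides default methods and(), or() and negate() for combining predicates; a quick sketch (the isEven/isPositive predicates are illustrative):

```java
import java.util.function.Predicate;

public class PredicateCombineExample {
  public static void main(String[] args) {
    Predicate<Integer> isEven = n -> n % 2 == 0;
    Predicate<Integer> isPositive = n -> n > 0;
    System.out.println(isEven.and(isPositive).test(4)); // true
    System.out.println(isEven.or(isPositive).test(-3)); // false
    System.out.println(isEven.negate().test(7));        // true
  }
}
```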

Operator functional interfaces

Operator functional interfaces are specialized Function interfaces that always return a value of the same type as the passed argument(s). Operator functional interfaces extend their Function interface counterparts; for example, UnaryOperator extends Function and BinaryOperator extends BiFunction.

The following pre-defined Operator functional interfaces can be used in place of the Function interfaces when the returned value is of the same type as the passed argument(s).

  • BinaryOperator<T>- Represents an operation upon two operands of the same type, producing a result of the same type as the operands.
  • DoubleBinaryOperator- Represents an operation upon two double-valued operands and producing a double-valued result.
  • DoubleUnaryOperator- Represents an operation on a single double-valued operand that produces a double-valued result.
  • IntBinaryOperator- Represents an operation upon two int-valued operands and producing an int-valued result.
  • IntUnaryOperator- Represents an operation on a single int-valued operand that produces an int-valued result.
  • LongBinaryOperator- Represents an operation upon two long-valued operands and producing a long-valued result.
  • LongUnaryOperator- Represents an operation on a single long-valued operand that produces a long-valued result.
  • UnaryOperator<T>- Represents an operation on a single operand that produces a result of the same type as its operand.

UnaryOperator functional interface Java example

In the example UnaryOperator is implemented to return the square of the passed integer.

import java.util.function.UnaryOperator;

public class UnaryOperatorExample {
  public static void main(String[] args) {
    UnaryOperator<Integer> unaryOperator = (n) -> n*n;
    System.out.println("4 squared is- " + unaryOperator.apply(4));
    System.out.println("7 squared is- " + unaryOperator.apply(7));
  }
}

Output

4 squared is- 16
7 squared is- 49
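A BinaryOperator works along the same lines with two operands of the same type; a quick sketch (the add/max operators are illustrative):

```java
import java.util.function.BinaryOperator;

public class BinaryOperatorExample {
  public static void main(String[] args) {
    BinaryOperator<Integer> add = (a, b) -> a + b;
    System.out.println("3 + 4 is- " + add.apply(3, 4)); // 7
    // BinaryOperator.maxBy builds an operator from a Comparator
    BinaryOperator<Integer> max = BinaryOperator.maxBy(Integer::compare);
    System.out.println("max of 3 and 4 is- " + max.apply(3, 4)); // 4
  }
}
```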

That's all for this topic Pre-defined Functional Interfaces in Java. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Exception Handling in Java Lambda Expressions
  2. Method Reference in Java
  3. How to Fix The Target Type of This Expression Must be a Functional Interface Error
  4. Java Stream API Tutorial
  5. Java Lambda Expressions Interview Questions And Answers

You may also like-

  1. Java Stream flatMap() Method
  2. Java Lambda Expression Callable Example
  3. Invoke Method at Runtime Using Java Reflection API
  4. LinkedHashMap in Java With Examples
  5. java.lang.ClassCastException - Resolving ClassCastException in Java
  6. Java String Search Using indexOf(), lastIndexOf() And contains() Methods
  7. BeanFactoryAware Interface in Spring Framework
  8. Angular Two-Way Data Binding With Examples

Embeddings in LangChain With Examples

We have already gone through two of the building blocks of creating a RAG pipeline, document loaders and text splitters in LangChain. In this article, we’ll explore how LangChain embeddings transform raw text into meaningful vectors that truly capture its semantic essence.

Embeddings in LangChain

In LangChain, embeddings are numerical representations of text that capture the inherent semantic meaning. This enables machines to perform semantic search, where comparisons are driven by meaning and concepts rather than mere keyword matches.

For creating such embeddings, embedding models (like OpenAIEmbeddings, GoogleGenerativeAIEmbeddings, OllamaEmbeddings) are used which transform raw text, such as a sentence, paragraph, or tweet, into a fixed-length vector of numbers that captures its semantic meaning.

What is semantic meaning

Now, the first question is what exactly is this "semantic meaning"? Consider the following four sentences.

  • I am running to the market.
  • I am heading to the market in a hurry.
  • I am on my way to the market.
  • I am rushing off to the market.

If you notice, all four sentences convey the same meaning- a sense of motion and urgency. So, in terms of embeddings, each version would produce vectors that sit close together in semantic space, since they all express the same core intent: you’re moving toward the market.

This closeness is exactly what makes semantic search powerful; queries with slightly different wording but similar meaning will retrieve the same or related results.

How does embedding model work

Let’s break down how an embedding model transforms raw text into vectors that capture its meaning. We'll take the simple raw text "I am running to the market" as an example.

  1. Text input
    You start with the raw text: "I am running to the market".

  2. Tokenization
    The text is split into smaller units (tokens). Depending on the embedding model, these could be whole words (I, am, running...) or subwords ("run", "ning").

    For example, the produced tokens may look like this- ["I", "am", "run", "ning", "to", "the", "market"]

  3. Mapping Tokens To IDs
    Each token is mapped to a unique integer ID using the model’s pre-defined vocabulary.

    For example, I - 101, am - 202, run - 305, ning - 402, etc.

    This ID acts as an index into an embedding matrix.

  4. Embedding Lookup
    Each token ID is mapped to a dense vector from the model’s embedding matrix. These vectors are usually high-dimensional (e.g., 768 or 1536 dimensions).

    So, each token gets its own full-dimension vector. For our example tokens we'll have vectors Vi, Vam, Vrun and so on.

    These vectors already exist in the model; the magic is in how they are trained. During training-

    • Words that appear in similar contexts get vectors that are close together.
    • Relationships between words are encoded as vector arithmetic.

    Here is a simple program to show the embedding using GoogleGenerativeAIEmbeddings.

    from langchain_google_genai import GoogleGenerativeAIEmbeddings
    from dotenv import load_dotenv

    load_dotenv()

    embeddings = GoogleGenerativeAIEmbeddings(model="gemini-embedding-2-preview")
    query = "I am running to the market"
    vector = embeddings.embed_query(query)

    # vector dimensions
    print(len(vector))
    # first 10 values
    print(vector[:10])

    Output

    3072
    [0.013388703, -0.0026265276, -0.0013064864, 0.013196219, -0.0071006925, 0.0008229259, -0.009015757, 0.00064084254, 0.005457073, -0.0643481]

    Note that models don't return separate vectors for each word. The model processes the entire sentence and produces one unified vector that represents the meaning of the whole sentence. That is the Pooling / Final Representation step in the embedding model, which combines the token-level embeddings into a single sentence-level embedding.

    Embedding models give you a ready-to-use representation of the entire query because the embedding API is designed for semantic search and comparison at the sentence or document level.

  5. How does semantic meaning emerge
    Words that appear in similar contexts get vectors that are close together. Take the often used example of "king", "queen", "man", "woman".

    \[v_{\text{king}} - v_{\text{man}} + v_{\text{woman}} \approx v_{\text{queen}}\]

    The model has already learnt the concept of royalty, which is encoded into the vector of king. Man is, well, just a common man!

    When we do \(v_{\text{king}} - v_{\text{man}}\), the difference is the concept of royalty.

    When the vector of woman is added to it, the result lands near the vector for "queen". The concept of royalty is already encoded into the vector of queen.

    Similar analogies hold for geography (Paris – France + Italy \(\approx\) Rome) or verb tense (walk – walking + running \(\approx\) run). This shows that embeddings capture a wide range of semantic relationships.

    For a simple geometric visualization of how embeddings capture meaning with king, queen, man, woman, imagine a 2D plane where one axis represents gender and the other represents royalty.

  6. Semantic proximity
    Consider two sentences that share nearly identical structure and meaning, for example

    • I am running to the market
    • I am walking to the market

    As you can see both sentences share nearly identical structure and meaning:

    • Subject: “I”
    • Verb: movement toward a destination
    • Object: “the market”

    Since both verbs describe locomotion, their embeddings are near each other in the model’s learned space. That is the concept of semantic proximity.

    The distance between these two vectors (often measured by cosine similarity) would be very small (provided they are embedded using the same model).

Metrics for comparing embeddings

Several metrics are commonly used to compare embeddings:

  1. Cosine similarity- Measures the angle between two vectors.
  2. Euclidean distance- Measures the straight-line distance between points.
  3. Dot product- Measures how much one vector projects onto another.

We can check this programmatically in LangChain using numpy to calculate cosine similarity.

from langchain_ollama import OllamaEmbeddings
import numpy as np

embeddings = OllamaEmbeddings(model="nomic-embed-text")

v1 = np.array(embeddings.embed_query("I am running to the market"))
v2 = np.array(embeddings.embed_query("I am walking to the market"))

# cosine similarity using numpy
cos_sim = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))

print("Similarity is", cos_sim)

Output

Similarity is 0.82145

LangChain Embedding Interface

LangChain provides a standard interface for text embedding models (like OpenAIEmbeddings, GoogleGenerativeAIEmbeddings, OllamaEmbeddings) through the Embeddings interface.

Two main methods are:

  1. embed_documents(texts: List[str]): Embeds a list of documents. Returns a List[List[float]]
  2. embed_query(text: str): Embeds a single query. Returns a List[float]
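To illustrate just the shape of this interface, here is a toy stand-in; the ToyEmbeddings class is hypothetical and hash-based, so it produces no real semantic vectors, but the two methods mirror the return shapes described above:

```python
import hashlib

class ToyEmbeddings:
    """Toy stand-in mimicking the Embeddings interface shape (no real semantics)."""

    def embed_query(self, text: str) -> list[float]:
        # derive a deterministic pseudo-vector from the text's hash
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:8]]

    def embed_documents(self, texts: list[str]) -> list[list[float]]:
        # one vector per input document
        return [self.embed_query(t) for t in texts]

emb = ToyEmbeddings()
vectors = emb.embed_documents(["first doc", "second doc"])
print(len(vectors), len(vectors[0]))  # 2 documents, 8 "dimensions" each
```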

What is the next step?

You can now store embeddings, which are high-dimensional numerical representations of data, in a vector database (like Pinecone, FAISS, Weaviate, ChromaDB) for semantic search or similarity matching.

Refer to this post- Vector Stores in LangChain With Examples to know more about using vector stores in LangChain.

That's all for this topic Embeddings in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. RunnableBranch in LangChain With Examples
  2. Chatbot With Chat History - LangChain MessagesPlaceHolder
  3. Structured Output In LangChain
  4. Messages in LangChain
  5. Chain Using LangChain Expression Language With Examples

You may also like-

  1. Java Program to Display Prime Numbers
  2. Java Map computeIfAbsent() With Examples
  3. Remove Duplicate Elements From an Array in Java
  4. Difference Between Two Dates in Java
  5. TreeSet in Java With Examples
  6. Java CyclicBarrier With Examples
  7. Java Variable Types With Examples
  8. Circular Dependency in Spring Framework

Monday, April 13, 2026

Armstrong Number or Not Java Program

Checking whether a number is an Armstrong number in Java program is a classic fresher‑level interview question that tests both logical thinking and coding skills. An Armstrong number is a number that is equal to the sum of its digits each raised to the power of the total number of digits.

For Example-

  • 371 is an Armstrong number because it has 3 digits, and
    371 = 3^3 + 7^3 + 1^3 = 27 + 343 + 1 = 371
  • 9474 is also an Armstrong number since it has 4 digits, and
    9474 = 9^4 + 4^4 + 7^4 + 4^4 = 6561 + 256 + 2401 + 256 = 9474
  • By definition, 0 and 1 are considered Armstrong numbers too.

Check given number Armstrong number or not

So let's write a Java program to check whether a given number is an Armstrong number or not. We'll break down how the logic works step by step later.

import java.util.Scanner;

public class ArmstrongNumber {
  public static void main(String[] args) {
    System.out.println("Please enter a number : ");
    Scanner scanIn = new Scanner(System.in);
    int scanInput = scanIn.nextInt();
    boolean isArmstrong = checkForArmstrongNo(scanInput);
    if(isArmstrong){
      System.out.println(scanInput + "  is an Armstrong number");
    }else{
      System.out.println(scanInput + " is not an Armstrong number");
    }
    scanIn.close();
  }
 
  private static boolean checkForArmstrongNo(int number){
    // convert number to String
    String temp = number + "";
    int numLength = temp.length();
    int numCopy = number;
    int sum = 0;
    while(numCopy != 0 ){
      int remainder = numCopy % 10;
      // using Math.pow to get digit raised to the power
      // total number of digits
      sum = sum + (int)Math.pow(remainder, numLength);
      numCopy = numCopy/10;
    }
    System.out.println("sum is " + sum );
    return sum == number;
  }
}

Some outputs-

Please enter a number : 
125
sum is 134
125 is not an Armstrong number

Please enter a number : 
371
sum is 371
371  is an Armstrong number

Please enter a number : 
54748
sum is 54748
54748  is an Armstrong number

Armstrong number Java program explanation

In an Armstrong number Java program, the input number is first taken from the user. To determine the number of digits, the simplest approach is to convert the number into a string and use its length. This gives us the power to which each digit must be raised.

The logic works as follows:

  1. Extract digits one by one
    • Start from the last digit using the modulus operator (num % 10).
    • Raise this digit to the power of the total number of digits.
  2. Accumulate the sum
    • Add the powered value to a running total.
    • Reduce the number by one digit using integer division (num / 10).
  3. Repeat until all digits are processed
    • Continue the loop until the number becomes zero.
  4. Compare with the original number
    • If the accumulated sum equals the original number, it is an Armstrong number.
    • Otherwise, it is not.
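The steps above can also be sketched in a few lines of Python; this is only an illustrative sketch mirroring the logic of the Java program shown earlier, not a replacement for it:

```python
def is_armstrong(number):
    """Check if number equals the sum of its digits, each raised
    to the power of the digit count."""
    digits = len(str(number))   # number of digits via string length
    total, n = 0, number
    while n != 0:
        total += (n % 10) ** digits   # last digit raised to digit count
        n //= 10                      # drop the last digit
    return total == number            # compare sum with the original

print(is_armstrong(371))    # True
print(is_armstrong(125))    # False
print(is_armstrong(9474))   # True
```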

That's all for this topic Armstrong Number or Not Java Program. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Java Programs Page


Related Topics

  1. Check if Given String or Number is a Palindrome Java Program
  2. How to Display Pyramid Patterns in Java
  3. Java Program to Display Prime Numbers
  4. Factorial program in Java
  5. Write to a File in Java

You may also like-

  1. Find Duplicate Elements in an Array Java Program
  2. Difference Between Two Dates in Java
  3. How to Create Password Protected Zip File in Java
  4. Spring Component Scan Example
  5. Java Collections Interview Questions And Answers
  6. Java Abstract Class and Abstract Method
  7. Switch Case Statement in Java With Examples
  8. Java SynchronousQueue With Examples

Text Splitters in LangChain With Examples

When you are creating a Retrieval-Augmented Generation (RAG) pipeline, the first step is to load the data and split it. In the post Document Loaders in LangChain With Examples we saw different types of document loaders provided by LangChain. In this article we’ll see different text splitters provided by LangChain to break the loaded documents into smaller, manageable chunks.

Why do we need Text Splitters

The documents you load using document loaders may be very large in size and it is quite impractical to send the content of the whole document to the LLM to get relevant answers. Text splitters in LangChain help in breaking large documents into smaller, manageable chunks that models can process efficiently without losing context. They help overcome context window limits, improve retrieval accuracy, and enable better indexing and semantic understanding. Here are some of the benefits of splitting the documents.

  • Context window limit- LLMs have a maximum token limit. Feeding an entire book or long document will exceed this limit. By splitting documents into smaller, semantically coherent chunks, you can select only the relevant chunks to send to the LLM instead of the entire document.
  • Token Efficiency- If you send the entire document (without any splitting), the LLM has to process every token, even irrelevant ones. That inflates cost and slows response time. With splitting + retrieval, only the relevant chunks are injected into the prompt. This means fewer tokens are consumed, lowering the overall cost.
  • Efficient Retrieval in RAG Pipelines- One of the steps in creating a RAG pipeline is to store the loaded documents in vector databases. Splitting documents into smaller chunks and storing those chunks (rather than the whole document as is) improves search granularity and ensures the right passage is retrieved from the vector DB.
  • Maintaining Semantic Coherence- There are TextSplitter classes in LangChain that don’t just cut text arbitrarily, they try to preserve contextual meaning. For example, splitting by paragraphs or semantic boundaries avoids breaking sentences mid-thought.

    Splitting at natural boundaries (sentences, paragraphs, sections) keeps ideas intact. That ultimately helps LLM to interpret the context correctly without guessing missing parts. This reduces hallucinations and increases factual accuracy.

Text splitters in LangChain

LangChain offers a variety of text splitters, each designed to serve different functionalities.

1. CharacterTextSplitter

One of the simplest text-splitting utilities in LangChain. It divides text using a specified character sequence (default: "\n\n" meaning paragraph), with chunk length measured by the number of characters.

Text is split using a given character separator (which is paragraph by default). Instead of cutting arbitrarily at the exact character count, the splitter looks for the nearest separator before the limit. This ensures chunks end at natural boundaries (paragraphs, sentences, etc.), preserving meaning. The chunk size is the maximum number of characters allowed in each chunk. For example, if chunk_size=1000, each chunk will contain up to 1000 characters. The splitter tries to fill the chunk up to this limit, but will break at the nearest separator to avoid cutting mid-paragraph or mid-sentence.

CharacterTextSplitter is best for documents with a consistent and predictable structure, such as logs or lists where a single separator (like a newline) clearly defines boundaries.

from langchain_text_splitters import CharacterTextSplitter

text_splitter = CharacterTextSplitter(
    separator="\n\n",
    chunk_size=1000,
    chunk_overlap=200,
    length_function=len,
)

Parameters-

  • separator: Used to identify split points. The default is "\n\n" (double newline), which aims to preserve paragraph integrity.
  • chunk_size: The maximum number of characters allowed in a single chunk.
  • chunk_overlap: The number of characters that consecutive chunks should share. This helps maintain semantic context across splits.
  • length_function: A function used to calculate the length of the chunks, defaulting to the standard Python len().
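To build intuition for how chunk_size and chunk_overlap interact, here is a simplified pure-Python sketch (not LangChain's actual implementation, which also respects separators): each chunk starts chunk_size - chunk_overlap characters after the previous one, so consecutive chunks share chunk_overlap characters.

```python
def naive_split(text, chunk_size, chunk_overlap):
    """Slide a window of chunk_size characters forward by
    chunk_size - chunk_overlap each step, so consecutive chunks
    share chunk_overlap characters."""
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break
    return chunks

chunks = naive_split("abcdefghijklmnopqrst", chunk_size=10, chunk_overlap=3)
print(chunks)
# ['abcdefghij', 'hijklmnopq', 'opqrst'] - note the shared "hij" and "opq"
```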

Methods that you can use-

  • .split_text- when you just have raw strings (plain text), it returns plain string chunks.
  • .split_documents- when you already have your text wrapped inside LangChain Document objects. If you have used a document loader in LangChain to load documents you will have them as Document objects. In that case, you use split_documents to break them into smaller Document chunks while preserving metadata.

LangChain CharacterTextSplitter Example

In the following code, a space (" ") is used as the separator instead of the default.

from langchain_text_splitters import CharacterTextSplitter

# Sample text to split
text = """
Generative AI is a type of artificial intelligence that creates new, original content—such as text, images, video, audio, or code—by learning patterns from existing data. Unlike traditional AI that classifies or analyzes data, GenAI uses deep learning models to generate novel outputs that resemble the training data.
 
Key Aspects of Generative AI:

How it Works: These models (e.g., GANs, Transformers) are trained on massive datasets to understand underlying structures and probabilities. When prompted, they predict and generate new, human-like content.
"""

# Create a CharacterTextSplitter instance
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=20, separator=" ")

# Split the text into chunks
chunks = text_splitter.split_text(text)

print(f"Total chunks created: {len(chunks)}\n")

# Print the resulting chunks    
for i, chunk in enumerate(chunks):
    print(f"Chunk {i+1}:\n{chunk}\n")

Output

Total chunks created: 7

Chunk 1:
Generative AI is a type of artificial intelligence that creates new, original content—such as text,

Chunk 2:
as text, images, video, audio, or code—by learning patterns from existing data. Unlike traditional

Chunk 3:
Unlike traditional AI that classifies or analyzes data, GenAI uses deep learning models to generate

Chunk 4:
models to generate novel outputs that resemble the training data.

Key Aspects of Generative

Chunk 5:
of Generative AI:

How it Works: These models (e.g., GANs, Transformers) are trained on massive

Chunk 6:
trained on massive datasets to understand underlying structures and probabilities. When prompted,

Chunk 7:
When prompted, they predict and generate new, human-like content.

2. RecursiveCharacterTextSplitter

The RecursiveCharacterTextSplitter is the recommended default text splitter for generic text in LangChain. It splits documents by recursively working through a list of separators until the resulting chunks are within a specified size limit. The default list of separators is ["\n\n", "\n", " ", ""]

  • "\n\n": double newline (paragraphs)
  • "\n": single newline (lines)
  • " ": space (words)
  • "": empty string (individual characters)

How RecursiveCharacterTextSplitter Works

Instead of using a single separator, it uses a hierarchical list to preserve semantic context (paragraphs -> lines -> words -> characters):

  • It first attempts to split the text by the first character in its list (default is double newline \n\n for paragraphs).
  • Recursive Fallback: If any resulting chunk still exceeds the chunk_size, it moves to the next separator (e.g., single newline \n) and tries again only on that chunk.
  • Continue in the hierarchy: It repeats this process through the list (e.g., spaces then finally individual characters "") until the size requirement is met.
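The recursive fallback described above can be illustrated with a simplified pure-Python sketch. This is not LangChain's real implementation (which also keeps separators, merges small pieces back up toward chunk_size, and applies overlap), but it shows the core idea of descending the separator hierarchy only for pieces that are still too large:

```python
def recursive_split(text, chunk_size, separators=("\n\n", "\n", " ", "")):
    """Simplified sketch of the recursive fallback idea."""
    if len(text) <= chunk_size:
        return [text]
    sep, rest = separators[0], separators[1:]
    if sep == "":
        # Last resort: hard cut every chunk_size characters
        return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    chunks = []
    for piece in text.split(sep):
        if len(piece) <= chunk_size:
            chunks.append(piece)
        else:
            # Piece still too big: fall back to the next separator
            chunks.extend(recursive_split(piece, chunk_size, rest))
    return chunks

text = "First paragraph.\n\nSecond paragraph that is quite a bit longer than the limit."
print(recursive_split(text, chunk_size=30))
```

The first paragraph fits within the limit and stays intact; the second is too long, so the splitter keeps falling back through the separator list until every piece fits.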

LangChain RecursiveCharacterTextSplitter Example

In this example, first a PDF document is loaded using PyPDFLoader, then RecursiveCharacterTextSplitter is used to split it. The code assumes that the PDF document is inside the resources folder, which resides in the project root.

from langchain_community.document_loaders import PyPDFLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
import os

def get_file_path(file_name):
    # Current script directory
    script_dir = os.path.dirname(os.path.abspath(__file__))

    # Project root is one level above
    project_root = os.path.dirname(script_dir)

    #print(f"Project root directory: {project_root}")
    file_path = os.path.join(project_root, "resources", file_name)
    return file_path

def load_documents(file_name):
    file_path = get_file_path(file_name)
    loader = PyPDFLoader(file_path)
    documents = loader.load()
    print(f"Number of Documents: {len(documents)}")
    return documents

def split_documents(documents):
    text_splitter = RecursiveCharacterTextSplitter(
        chunk_size=1000,
        chunk_overlap=200,
    )

    chunks = text_splitter.split_documents(documents)
    print(f"length of chunks {len(chunks)}")
    for i, chunk in enumerate(chunks[:3]):  # first 3 chunks
        # Chunk Lengths
        print(f"Chunk {i+1} length: {len(chunk.page_content)}")
        # Chunk Content
        #print(f"Chunk {i+1}:\n{chunk.page_content}...\n") 
        # Chunk Metadata
        #print(f"Chunk {i+1} metadata: {chunk.metadata}")

if __name__ == "__main__":
    documents = load_documents("Health Insurance Policy Clause.pdf")
    split_documents(documents)

Output

Number of Documents: 41
length of chunks 139
Chunk 1 length: 914
Chunk 2 length: 913
Chunk 3 length: 983

3. Code Text Splitter

Though LangChain provides language-specific code text splitter classes like PythonCodeTextSplitter for Python, the recommended approach is to use the RecursiveCharacterTextSplitter.from_language() method. Supported languages are stored in the langchain_text_splitters.Language enum. You pass a value from the enum into RecursiveCharacterTextSplitter.from_language() to instantiate a splitter tailored to a specific language. Here’s an example splitting Python code:

from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_text_splitters import Language

PYTHON_CODE = """
def hello_world():
    print("Hello, World!")

# Call the function
hello_world()
"""

python_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.PYTHON, chunk_size=50, chunk_overlap=0
)
python_docs = python_splitter.create_documents([PYTHON_CODE])
print(python_docs)

Output

[Document(metadata={}, page_content='def hello_world():\n    print("Hello, World!")'), Document(metadata={}, page_content='# Call the function\nhello_world()')]

Note that in the above example, the create_documents() method is used. This method does both tasks in one go: it wraps raw text into Document objects and splits them into chunks.

4. TokenTextSplitter

TokenTextSplitter class in LangChain is used to divide text into smaller chunks based on a specific number of tokens rather than characters.

LLMs have strict token-based context window limits; this class ensures chunks don’t exceed the model’s maximum token limit.

How TokenTextSplitter Works

Raw text to tokens

The splitter first converts your text into tokens using a tokenizer that matches the target model (e.g., the encodings used by GPT-3.5, GPT-4, or embedding models).

Chunking by token count

You specify chunk_size and chunk_overlap in terms of tokens. The splitter groups tokens into chunks of the given size, with overlap applied at the token level.

Convert tokens back

Each chunk of tokens is decoded back into a string. The result is a list of text chunks that align with token boundaries. By tokenizing first, the splitter ensures each chunk is within the desired token budget.

from langchain_text_splitters import TokenTextSplitter

text = """
Generative AI is a type of artificial intelligence that creates new, original content—such as text, images, video, audio, or code—by learning patterns from existing data. Unlike traditional AI that classifies or analyzes data, GenAI uses deep learning models to generate novel outputs that resemble the training data.
 
Key Aspects of Generative AI:

How it Works: These models (e.g., GANs, Transformers) are trained on massive datasets to understand underlying structures and probabilities. When prompted, they predict and generate new, human-like content.
"""

#cl100k_base is a tokenizer encoding provided by OpenAI’s tiktoken library.
text_splitter = TokenTextSplitter(
    encoding_name="cl100k_base",
    chunk_size=100,
    chunk_overlap=20
)

chunks = text_splitter.split_text(text)

print(f"total chunks {len(chunks)}")

for i, chunk in enumerate(chunks):
    print(f"Chunk {i+1}:\n{chunk}\n")
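The tokenize-chunk-decode flow described above can be sketched with a naive whitespace "tokenizer". The real TokenTextSplitter uses a proper encoding such as tiktoken's cl100k_base; the whitespace split here is only a stand-in for illustration:

```python
def split_by_tokens(text, chunk_size, chunk_overlap):
    """Naive sketch of token-based chunking: tokenize, group tokens
    into windows of chunk_size with chunk_overlap shared tokens,
    then decode each window back into a string."""
    tokens = text.split()              # stand-in for encoding to tokens
    step = chunk_size - chunk_overlap
    chunks = []
    for start in range(0, len(tokens), step):
        window = tokens[start:start + chunk_size]
        chunks.append(" ".join(window))    # stand-in for decoding
        if start + chunk_size >= len(tokens):
            break
    return chunks

chunks = split_by_tokens("one two three four five six seven eight",
                         chunk_size=4, chunk_overlap=1)
print(chunks)
# ['one two three four', 'four five six seven', 'seven eight']
```

Each chunk shares its last token with the next chunk's first token, which is exactly what chunk_overlap does at the token level.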

Apart from these classes, LangChain has some specialized classes for splitting specific types of documents.

  1. Splitting JSON- RecursiveJsonSplitter splits json data while allowing control over chunk sizes.
  2. Splitting Markdown- MarkdownTextSplitter attempts to split the text along Markdown-formatted headings.
  3. Splitting HTML- LangChain provides three different text splitters that you can use to split HTML content effectively:
    • HTMLHeaderTextSplitter- Splits HTML text based on header tags (e.g., <h1>, <h2>, <h3>, etc.), and adds metadata for each header relevant to any given chunk.
    • HTMLSectionSplitter- Splits HTML into sections based on specified tags.
    • HTMLSemanticPreservingSplitter- Splits HTML content into manageable chunks while preserving the semantic structure of important elements like tables, lists, and other HTML components.

That's all for this topic Text Splitters in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Structured Output In LangChain
  2. Output Parsers in LangChain With Examples
  3. Chatbot With Chat History - LangChain MessagesPlaceHolder
  4. Chain Using LangChain Expression Language With Examples
  5. RunablePassthrough in LangChain With Examples

You may also like-

  1. Prompt Templates in LangChain With Examples
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. Python String isdigit() Method
  4. Python Exception Handling - try,except,finally
  5. How to Sort ArrayList in Java
  6. Difference Between Abstract Class And Interface in Java
  7. Matrix Addition Java Program
  8. Spring MVC Exception Handling - @ExceptionHandler And @ControllerAdvice Example