Tuesday, March 31, 2026

Messages in LangChain

In this tutorial we’ll see what messages are in LangChain and what different message types are available.

What are Messages in LangChain?

When you interact with LLMs programmatically using LangChain, every input and output has an associated message type, structure and metadata. Knowing these message types helps you structure your prompts and set the context for each message, which in turn gets you a better response from the LLM.

Attributes of a Message in LangChain

Each message is an object which has the following attributes:

  • Role- Identifies the type of the message (System, Human etc.)
  • Content- The actual content of the message (prompt by the user, response from the model).
  • Metadata- Optional data like token usage, message ID etc.
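
These three attributes can be pictured with a plain-Python stand-in. This is not LangChain's actual class hierarchy, just an illustrative sketch of the shape each message carries:

```python
from dataclasses import dataclass, field

# Illustrative stand-in for a chat message; LangChain's real classes
# (SystemMessage, HumanMessage, AIMessage, ...) carry the same three ideas.
@dataclass
class Message:
    role: str                                     # "system", "human", "ai", "tool", ...
    content: str                                  # the actual content of the message
    metadata: dict = field(default_factory=dict)  # optional extras (ids, token usage)

msg = Message(role="human", content="Write binary search program in Python")
print(msg.role, "->", msg.content)
```

In LangChain the role is implied by the class you instantiate, so you rarely set it by hand; the stand-in just makes the attribute explicit.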

Types of Messages in LangChain

List of the different message classes in LangChain.

  1. SystemMessage- Used to set how the model should behave and to set the context for the interaction.
  2. HumanMessage- Represents the user input.
  3. AIMessage- Represents the response generated by the model.
  4. ToolMessage- Represents the output of a tool call.
  5. ChatMessage- A message that can be assigned an arbitrary role other than the predefined ones.
  6. FunctionMessage- Message for passing the result of executing a tool back to a model. This is a legacy class succeeded by ToolMessage.

SystemMessage Class

The SystemMessage class is used to provide high-level instructions that guide the model's behavior, tone, or style.

from langchain_core.messages import SystemMessage
SystemMessage(content="You are an experienced Python programmer")
  

HumanMessage Class

A HumanMessage represents user input and interactions. It can contain text, images, audio, files, and other multimodal content.

from langchain_core.messages import HumanMessage
HumanMessage(content="Write binary search program in Python")
  

AIMessage Class

An AIMessage is the response from the model. It can include multimodal data, tool calls, and provider-specific metadata that you can access later.

Here is an example of how a message list may look with system, human and AI messages.

from langchain_core.messages import AIMessage, SystemMessage, HumanMessage

# Add to conversation history
messages = [
    SystemMessage("You are a helpful assistant"),
    HumanMessage("Can you help me?"),
    # Create an AI message manually (for conversation history purposes)
    AIMessage("I'd be happy to help you with that question!"),  # Insert as if it came from the model
    HumanMessage("Great! What's agentic AI?")
]
  

ToolMessage Class

When a model decides to call a tool, the call is included in the tool_calls attribute of the returned AIMessage. The tool is then executed, and its output is passed back to the model as a ToolMessage.

Suppose there is a tool to extract some information from a message, and this tool is bound to the model.

from langchain_core.tools import tool

@tool
def extract_info(message: str) -> str:
	"""Extract relevant information from the given message."""
	…
	…

Then the model knows it has to call this function to extract information. The tool call is conveyed in the AIMessage, and the tool's output comes back as a ToolMessage, as shown below.

AIMessage(content='', additional_kwargs={}, response_metadata={'model': 'llama3.1', 'created_at': '2026-03-30T06:33:19.9566618Z',
'done': True, 'done_reason': 'stop', 'total_duration': 75089402400, 'load_duration': 184685700, 'prompt_eval_count': 755,
'prompt_eval_duration': 68079985500, 'eval_count': 42, 'eval_duration': 6618086400, 'logprobs': None, 'model_name': 'llama3.1',
'model_provider': 'ollama'}, id='lc_run--019d3d71-3dfe-7b11-b5c8-8bedf1f1870e-0',
tool_calls=[{'name': 'extract_info', 'args': {'message': 'Extract relevant information from this message, Name: Test, skills: Java, Python, Spring Boot'},
'id': 'ccd14d41-41ca-47fc-b89e-68dfcdb0fae2', 'type': 'tool_call'}], invalid_tool_calls=[],
usage_metadata={'input_tokens': 755, 'output_tokens': 42, 'total_tokens': 797}),
ToolMessage(content='Extracted information successfully. Let me check if I have all the details.', name='extract_info',
id='93988e12-e267-409e-8efe-7494193d6451', tool_call_id='ccd14d41-41ca-47fc-b89e-68dfcdb0fae2')
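
The shape of that exchange can be mimicked with plain dictionaries. This is a hypothetical, stdlib-only sketch (the field names mirror the dump above; extract_info here is a stand-in function and "call-1" a made-up id), showing how each requested tool call is executed and answered with a tool message carrying the matching tool_call_id:

```python
# Hypothetical stand-in tool; in LangChain this would be an @tool function
def extract_info(message: str) -> str:
    return "Extracted information successfully."

TOOLS = {"extract_info": extract_info}

# An AI message whose tool_calls mirror the structure shown above
ai_message = {
    "content": "",
    "tool_calls": [
        {"name": "extract_info",
         "args": {"message": "Name: Test, skills: Java, Python"},
         "id": "call-1", "type": "tool_call"},
    ],
}

# Run each requested tool and record a tool message that carries
# the matching tool_call_id back to the model.
tool_messages = []
for call in ai_message["tool_calls"]:
    result = TOOLS[call["name"]](**call["args"])
    tool_messages.append(
        {"role": "tool", "content": result,
         "name": call["name"], "tool_call_id": call["id"]}
    )

print(tool_messages[0]["content"])
```

The tool_call_id is what lets the model pair each ToolMessage with the call it made, which matters when several tools are invoked in one turn.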

ChatMessage Class

The ChatMessage class in LangChain is a flexible message type that allows you to specify an arbitrary role for the speaker.

from langchain_core.messages import AIMessage, ChatMessage, HumanMessage, SystemMessage

messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="What is agentic AI?"),
    AIMessage(content="Agentic AI is artificial intelligence that can autonomously perceive, reason, and act toward achieving goals without constant human intervention."),
    ChatMessage(role="developer", content="Ensure your next response is very technical."),
    HumanMessage(content="Explain the role of agentic AI."),
]
  

That's all for this topic Messages in LangChain. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. First LangChain Program: Ask Me Anything
  3. Prompt Templates in LangChain With Examples
  4. LangChain PromptTemplate + Streamlit - Code Generator Example
  5. Chain Using LangChain Expression Language With Examples

You may also like-

  1. RunnablePassthrough in LangChain With Examples
  2. Simple RAG Application in LangChain
  3. Count Number of Words in a String Java Program
  4. Ternary Operator in Java With Examples
  5. Python assert Statement
  6. Operator Overloading in Python
  7. Spring Expression Language (SpEL) With Examples
  8. Angular - Call One Service From Another

Monday, March 30, 2026

raise Statement in Python Exception Handling

In Python Exception Handling - try,except,finally we saw some examples of exception handling in Python, but in all of those examples the exceptions were raised automatically. You can also trigger an exception manually using the raise statement in Python.

Python raise statement usage

There are the following options for throwing an exception using raise-

1. Re‑raising the current exception

You can simply use raise without any expression inside an except block. If no expression is present, raise re-raises the exception that is currently being handled in the current scope. This is useful when you want to handle an error partially but still propagate it upward. If no exception is active in the current scope, a RuntimeError is raised indicating that this is an error.

def divide_num(num1, num2):
  try:
    print('Result-',num1/num2)
  except ZeroDivisionError as error:
    print('Zero is not a valid argument here')
    #raise with no expression
    raise

divide_num(10, 0)

Output

Zero is not a valid argument here
Traceback (most recent call last):
  File "F:/NETJS/NetJS_2017/Python/Programs/Test.py", line 10, in <module>
    divide_num(10, 0)
  File "F:/NETJS/NetJS_2017/Python/Programs/Test.py", line 3, in divide_num
    print('Result-',num1/num2)
ZeroDivisionError: division by zero

As you can see, the raise statement re-raises the last active exception.
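
As noted above, a bare raise with no active exception is itself an error; a minimal sketch:

```python
caught = None
try:
    raise                      # no exception is active in this scope
except RuntimeError as err:
    # Python raises a RuntimeError complaining there is nothing to re-raise
    caught = type(err).__name__
print(caught)  # RuntimeError
```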

2. Raising a specific exception

If raise is used with an expression then raise evaluates that expression as the exception object. It must be either a subclass or an instance of BaseException.

  • If you provide a class (e.g., raise ValueError), Python instantiates it automatically with no arguments.
  • If you provide an instance (e.g., raise ValueError("Invalid input")), that exact exception object is raised.

def divide_num(num1, num2):
  try:
    print('Result-',num1/num2)
  except ZeroDivisionError as error:
    print('Zero is not a valid argument here')
    raise RuntimeError("An error occurred")

divide_num(10, 0)

Output

Zero is not a valid argument here
Traceback (most recent call last):
  File "F:/NETJS/NetJS_2017/Python/Programs/Test.py", line 3, in divide_num
    print('Result-',num1/num2)
ZeroDivisionError: division by zero

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "F:/NETJS/NetJS_2017/Python/Programs/Test.py", line 9, in <module>
    divide_num(10, 0)
  File "F:/NETJS/NetJS_2017/Python/Programs/Test.py", line 6, in divide_num
    raise RuntimeError("An error occurred")
RuntimeError: An error occurred

As you can see, in this case the previous exception is attached as the new exception’s __context__ attribute.
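
This implicit chaining can be checked directly through __context__; a minimal sketch:

```python
context_type = None
try:
    try:
        1 / 0
    except ZeroDivisionError:
        # Raising inside an except block implicitly chains the active exception
        raise RuntimeError("An error occurred")
except RuntimeError as err:
    context_type = type(err.__context__).__name__
print(context_type)  # ZeroDivisionError
```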

3. raise with from clause

raise can also be used with a from clause for explicit exception chaining. With the from clause, another exception class or instance is given, which is then attached to the raised exception as its __cause__ attribute.

In the example there are two functions; from function func() there is a call to divide_num() with arguments that cause a ZeroDivisionError.

def divide_num(num1, num2):
  try:
    print('Result-',num1/num2)
  except ZeroDivisionError as error:
    print('Zero is not a valid argument here')
    raise RuntimeError("An error occurred") from error

def func():
  try:
    divide_num(10, 0)
  except RuntimeError as obj:
    print(obj)
    print(obj.__cause__)

func()

Output

Zero is not a valid argument here
An error occurred
division by zero

As you can see, using the __cause__ attribute you can get the attached exception.
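
The same chaining works with any exception pair; another minimal sketch reading __cause__ programmatically:

```python
cause_type = None
try:
    try:
        int("abc")             # raises ValueError
    except ValueError as err:
        raise RuntimeError("conversion failed") from err
except RuntimeError as err:
    # __cause__ holds the exception given in the from clause
    cause_type = type(err.__cause__).__name__
print(cause_type)  # ValueError
```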

That's all for this topic raise Statement in Python Exception Handling. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Python Tutorial Page


Related Topics

  1. User-defined Exceptions in Python
  2. Passing Object of The Class as Parameter in Python
  3. Encapsulation in Python
  4. Interface in Python
  5. Python break Statement With Examples

You may also like-

  1. Magic Methods in Python With Examples
  2. Check if String Present in Another String in Python
  3. Default Arguments in Python
  4. Python Program to Display Armstrong Numbers
  5. HashMap in Java With Examples
  6. this Keyword in Java With Examples
  7. Introduction to Hadoop Framework
  8. Spring Web Reactive Framework - Spring WebFlux Tutorial

LangChain PromptTemplate + Streamlit - Code Generator Example

In this article we’ll see how to create an AI code generator using LangChain’s ChatPromptTemplate class and Streamlit for the UI. What we are going to build is a UI with a dropdown to select the programming language and an area to state the coding problem. On the click of the "Generate Code" button, the programming language and the problem are inserted as actual values into the prepared prompt template and sent to the model to get an appropriate response.

How the UI looks

What we are going to build in the UI-

  • A dropdown to select the programming language
  • An area to state the coding problem
  • A display area where the generated code is displayed

StreamLit UI for code generator

Prompt templates used in the code are kept in a separate file, prompt.py.

code_system_prompt_template = """
    You are an expert {language} programmer. Write code as per the given instructions. The code should be efficient, well-structured, and properly commented. Use appropriate variable names and follow best practices for coding in the specified programming language.
    """

code_human_prompt_template = """
***Instructions***: Write code to solve the following programming problem: {problem}.
Make sure to include any necessary imports and handle edge cases appropriately.
"""

codegenerator.py

In the code there is a function generate_code() that has two arguments-

  • problem (str): The programming problem to solve.
  • language (str): The programming language to use for the code generation.

This function is invoked from the Streamlit code after selecting the programming language from the given options, stating the coding problem and clicking the "Generate Code" button.

from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_ollama import ChatOllama
from prompt import code_system_prompt_template, code_human_prompt_template
import streamlit as st

# Define system and human message templates
system_message = SystemMessagePromptTemplate.from_template(code_system_prompt_template)
human_message = HumanMessagePromptTemplate.from_template(code_human_prompt_template)    

def generate_code(problem: str, language: str) -> str:

    # Create a ChatPromptTemplate object
    prompt = ChatPromptTemplate.from_messages([system_message, human_message]) 
    # Initialize the model
    model = ChatOllama(model="llama3.1",    
                        temperature=0.7)            
    # Format the prompt with the problem and language
    formatted_prompt = prompt.format(problem=problem, language=language)
    print(f"Formatted prompt: {formatted_prompt}")
    # Generate the code using the model
    response = model.invoke(formatted_prompt)
    # Return the generated code
    return response.content

# Streamlit app to demonstrate code generation
st.set_page_config(page_title="AI Code Generator", layout="centered")
st.title("🤖 Code Generator ")
st.markdown("Select **programming language** and state your coding problem to generate code!")
# Create a dropdown with a default value
option = st.selectbox(
    'Choose your favorite programming language:',
    ('Python', 'JavaScript', 'Java', 'C++'),
    index=0  # sets the default value to the first option
)

#print(f"Selected programming language: {option}")
problem = st.text_area("Enter the programming problem you want to solve:")

# Generate code when the button is clicked
if st.button("Generate Code"):
    if problem.strip() == "":
        st.warning("Please enter a programming problem to generate code.")
    else:
        with st.spinner("Generating code..."):
            generated_code = generate_code(problem, option)
            st.code(generated_code, language=option.lower())

You can run the code using the following command.

streamlit run codegenerator.py

If there is no error then you should see a message telling you how to access your Streamlit web app.

  You can now view your Streamlit app in your browser.
  Local URL: http://localhost:8501

The prompt after the insertion of actual values looks as given below-

Formatted prompt: System:
    You are an expert Java programmer. Write code as per the given instructions. The code should be
    efficient, well-structured, and properly commented. Use appropriate variable names and follow
    best practices for coding in the specified programming language.

Human:
***Instructions***: Write code to solve the following programming problem: For given input array find contiguous array of size k whose sum is equal to given s
Input - [2,2,1,2,3]
s = 5
k = 3
Output- [2,2,1] [2,1,2]
s=4
k=2
Output - [2,2]
If no contiguous array found return "Not Present".
Make sure to include any necessary imports and handle edge cases appropriately.

That's all for this topic LangChain PromptTemplate + Streamlit - Code Generator Example. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. First LangChain Program: Ask Me Anything
  3. Messages in LangChain
  4. RunnablePassthrough in LangChain With Examples
  5. Vector Stores in LangChain With Examples

You may also like-

  1. Embeddings in LangChain With Examples
  2. Chatbot With Chat History - LangChain MessagesPlaceHolder
  3. Constructor in Python – Learn How init() Works
  4. Check if String Present in Another String in Python
  5. java.lang.ClassNotFoundException - Resolving ClassNotFoundException in Java
  6. Quick Sort Program in Java
  7. Spring JdbcTemplate Select Query Example
  8. Creating New Component in Angular

Prompt Templates in LangChain With Examples

This article shows how to use prompt templates in LangChain to create reusable prompts for models. In the post First LangChain Program: Ask Me Anything we saw how to connect to different models using LangChain's standard interfaces and pass the user’s prompt. But the limitation of the prompt there is that it is hardcoded. What if you want to give the user a chance to create dynamic prompts where certain values can be passed at runtime? For example-

"Write a 2-page blog post on the topic {topic}"

Here topic is a placeholder which can be replaced by the actual value later.

Or

"Write a {programming_language} program for {topic}"

Here the user can supply the choice of language and the topic later, to be replaced in the prompt before sending it to the model.
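
The substitution itself is ordinary Python string formatting; LangChain's templates build on the same {placeholder} syntax:

```python
# A template string with two placeholders, filled in at runtime
template = "Write a {programming_language} program for {topic}"
prompt = template.format(programming_language="Python", topic="binary search")
print(prompt)  # Write a Python program for binary search
```

What PromptTemplate adds on top of this is input validation, partial formatting and composability with the rest of LangChain.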

Creating prompt templates in LangChain

LangChain gives two options for creating such dynamic prompt templates.

  1. PromptTemplate- This class is best used for single-message, text-completion-style prompts. A prompt template consists of a string template with placeholders which are formatted with input values.
  2. ChatPromptTemplate- ChatPromptTemplate is best for multi-role conversations. It uses a list of messages, each with a defined role (system, human, ai, etc.).

PromptTemplate in LangChain

PromptTemplate is a utility for creating structured prompts for LLMs. It lets you define a template string with placeholders which can be replaced by actual values later.

LangChain PromptTemplate example

Let’s say you want to create an AI blog post generator which can write a blog post for the given topic. For this example, I have used a locally deployed "llama3.1" model using Ollama.

from langchain_core.prompts import PromptTemplate
from langchain_ollama import ChatOllama 
# Define a prompt template for generating a blog post
template = """
    You are an expert technical content writer. Write a detailed blog post as per the given instructions. The blog post should be engaging, informative, and well-structured. Include an introduction, main body, and conclusion. Use subheadings where appropriate and provide examples to illustrate key points.
    ***Instructions***: Write a {no_of_paras} paragraphs blog post about the following topic: {topic}.
"""
# Create a PromptTemplate object
prompt = PromptTemplate.from_template(template)
# Initialize the Ollama model
model = ChatOllama(model="llama3.1", 
                    temperature=0.7)
# Define the topic and number of paragraphs for the blog post   
topic = "Dictionary in Python"
no_of_paras = 6
# Format the prompt with the topic and number of paragraphs
formatted_prompt = prompt.format(topic=topic, no_of_paras=no_of_paras)   
# Print the formatted prompt to verify its correctness
print("Formatted Prompt:\n", formatted_prompt)  
# Generate the blog post using the model
response = model.invoke(formatted_prompt)
# Print the generated blog post 
print("\nGenerated Blog Post:\n", response.content)

Formatted prompt output is as given below-

You are an expert technical content writer. Write a detailed blog post as per the given instructions. The blog post should be engaging, informative, and well-structured. Include an introduction, main body, and conclusion. Use subheadings where appropriate and provide examples to illustrate key points.
***Instructions***: Write a 6 paragraphs blog post about the following topic: Dictionary in Python.

ChatPromptTemplate in LangChain

ChatPromptTemplate in LangChain is a class for building structured prompts for chat models. You define messages with roles like system, human, and ai.

For example-

template = ChatPromptTemplate(
    [
        ("system", "You are a helpful AI bot. Your name is {name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ]
)

There are also specific prompt template utility classes for creating different kinds of messages.

  • AIMessagePromptTemplate- The AIMessagePromptTemplate class in LangChain is used to create a template for messages that are assumed to be from an AI within a chat conversation.
  • HumanMessagePromptTemplate- The HumanMessagePromptTemplate class in LangChain is used to create a template for messages that are from the user.
  • SystemMessagePromptTemplate- The SystemMessagePromptTemplate in LangChain is a class used to define a system-level instruction (set context) for a language model within a chat application.

Refer to this post- Messages in LangChain to know about the different Message classes available in LangChain.

LangChain ChatPromptTemplate example

We’ll take the previous blog generation example itself, with one change. In the previous example the context for the system is always set as "technical content writer". In this example the expertise is also a placeholder, and its value is supplied later. Let’s see how to do it.

For creating templates a separate file, prompt.py, is used.

prompt.py

system_prompt_template = """
    You are an expert {expertise} content writer. Write a detailed blog post as per the given 
    instructions. The blog post should be engaging, informative, and well-structured. Include an introduction, main body, and conclusion. Use subheadings where appropriate and provide examples to illustrate key points.
"""

human_prompt_template = """    
    ***Instructions***: Write a {no_of_paras} paragraphs blog post about the following topic: {topic}.
"""

As you can see, we have two separate templates for system and human respectively, with a total of three placeholders: {expertise}, {no_of_paras} and {topic}.

chattemplatedemo.py

This is the script where both of the templates are imported from the prompt.py file and used to create messages for the System and Human roles. I have used a locally deployed "llama3.1" model using Ollama.

from langchain_core.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate
from langchain_ollama import ChatOllama 
from prompt import system_prompt_template, human_prompt_template
system_message = SystemMessagePromptTemplate.from_template(system_prompt_template)
human_message = HumanMessagePromptTemplate.from_template(human_prompt_template)

# Create a ChatPromptTemplate object
prompt = ChatPromptTemplate.from_messages([system_message, human_message])

# Initialize the Ollama model
model = ChatOllama(model="llama3.1", 
                    temperature=0.7)

# Define the topic and number of paragraphs for the blog post   
topic = "What is an ideal duration to keep any stock in your portfolio?"
no_of_paras = 3
expertise = "financial"

# Format the prompt with the topic and number of paragraphs
formatted_prompt = prompt.format(topic=topic, no_of_paras=no_of_paras, expertise=expertise)
# Print the formatted prompt to verify its correctness
print("Formatted Prompt:\n", formatted_prompt)  

# Generate the blog post using the model
response = model.invoke(formatted_prompt)
# Print the generated blog post 
print("\nGenerated Blog Post:\n", response.content)

This is how the formatted prompt with inserted values looks.

Formatted Prompt:
 System:
  You are an expert financial content writer. Write a detailed blog post as per the given
  instructions. The blog post should be engaging, informative, and well-structured. Include an introduction, main body, and conclusion. Use subheadings where appropriate and provide examples to illustrate key points.

  Human: ***Instructions***: Write a 3 paragraphs blog post about the following topic: What is an ideal duration to keep any stock in your portfolio?.

That's all for this topic Prompt Templates in LangChain With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. What is LangChain - An Introduction
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. Chain Using LangChain Expression Language With Examples
  4. RunnableParallel in LangChain With Examples
  5. RunnableLambda in LangChain With Examples

You may also like-

  1. Retrievers in LangChain With Examples
  2. Simple RAG Application in LangChain
  3. Java Program - Sieve of Eratosthenes to Find Prime Numbers
  4. Ternary Operator in Java With Examples
  5. ConcurrentHashMap in Java With Examples
  6. Python String isnumeric() Method
  7. Interface in Python
  8. JavaScript filter Method With Examples

Sunday, March 29, 2026

How to Create User-defined Exceptions in Python

In this Python Exception Handling Tutorial series we'll see how to create user-defined exceptions in Python, which allow developers to create custom error types tailored to specific scenarios.

While Python provides a wide range of built‑in exceptions (like ValueError, TypeError, and IndexError), sometimes you may want to create your own exception for a specific scenario to make error messages more relevant to the context. Such exceptions are called user-defined exceptions or custom exceptions. Defining your own exception makes error handling more meaningful and easier to debug.

User-defined exception in Python

To create a custom exception in Python, you simply define a new class that inherits from the built‑in Exception class (directly or indirectly).

Since your exception type is a class, it can in theory do anything any other class can do, but user-defined exception classes are usually kept simple, providing just enough information for handlers to understand the error.

As a convention, custom exception names should end with "Error", following the style of Python’s standard exceptions. This makes your code more readable and consistent with community practices.

User-defined exception Python example

Suppose you have a Python function that takes age as a parameter and tells whether a person is eligible to vote or not. The voting age is 18 or more.

If the person is not eligible to vote you want to raise an exception using the raise statement, and for this scenario you write a custom exception named InvalidAgeError.

# Custom exception
class InvalidAgeError(Exception):
  """Raised when the age provided is invalid."""
  def __init__(self, arg):
    super().__init__(arg)
    self.msg = arg

def vote_eligibility(age):
  if age < 18:
    raise InvalidAgeError("Person not eligible to vote, age is " + str(age))
  else:
    print('Person can vote, age is', age)

# calling function
try:
  vote_eligibility(22)
  vote_eligibility(14)
except InvalidAgeError as error:
  print(error)

Output

Person can vote, age is 22
Person not eligible to vote, age is 14

Hierarchical custom exceptions

When creating a module that can raise several distinct errors, a common practice is to create a base class for exceptions defined by that module, and subclass that to create specific exception classes for different error conditions:

class Error(Exception):
  """Base class for exceptions in this module."""
  pass

class InputError(Error):
  """Exception raised for errors in the input.

  Attributes:
    expression -- input expression in which the error occurred
    message -- explanation of the error
  """

  def __init__(self, expression, message):
    self.expression = expression
    self.message = message
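
With this hierarchy, catching the base Error class also catches every module-specific subclass. A minimal usage sketch (the classes are repeated so it runs on its own; the expression string is a made-up example):

```python
class Error(Exception):
    """Base class for exceptions in this module."""
    pass

class InputError(Error):
    """Exception raised for errors in the input."""
    def __init__(self, expression, message):
        super().__init__(message)
        self.expression = expression
        self.message = message

caught = None
try:
    raise InputError("2 +* 3", "invalid operator sequence")
except Error as err:              # the base class matches the subclass too
    caught = err.message
print(caught)  # invalid operator sequence
```

Callers that care about the specific failure can catch InputError; callers that just want "any error from this module" can catch Error.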

That's all for this topic How to Create User-defined Exceptions in Python. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Python Tutorial Page


Related Topics

  1. Python Exception Handling - try,except,finally
  2. Python assert Statement
  3. Passing Object of The Class as Parameter in Python
  4. Name Mangling in Python
  5. Strings in Python With Method Examples

You may also like-

  1. Magic Methods in Python With Examples
  2. Getting Substring in Python String
  3. Variable Length Arguments (*args), Keyword Varargs (**kwargs) in Python
  4. Python Program to Display Prime Numbers
  5. HashSet in Java With Examples
  6. Java Multithreading Interview Questions And Answers
  7. Switch-Case Statement in Java
  8. Spring Email Scheduling Example Using Quartz Scheduler

Array Rotation Java Program

Writing a Java program to rotate an array to the left or right by n steps is a frequently asked Java interview question because it tests both problem-solving skills and understanding of array manipulation.

For example, if your array is– {1,2,3,4,5,6,7,8}

  • Rotating the array 2 steps to the right results in {7,8,1,2,3,4,5,6}.
  • Rotating the array 2 steps to the left gives {3,4,5,6,7,8,1,2}.

Array rotation program- Solution

This tutorial provides two efficient solutions for the Array Rotation Java Program.

  1. Using a temporary array and System.arraycopy() for faster copying. See example.
  2. Rotating elements one by one using loops, which is simple but less efficient for large arrays. See example.

Array rotation program- Using temp array

Solution using temporary array works as follows-

  1. Create a temporary array having the same length as the original array.
  2. If you have to rotate by 2 steps i.e. n=2 then copy n elements to a temporary array.
  3. Shift rest of the elements to the left or right based on rotation requirement.
  4. Copy elements from the temp array to the original array in the space created by shifting the elements.

In the program we actually copy all the elements to the temp array and then copy them back to the original array.

import java.util.Arrays;

public class ArrayRotation {
  public static void main(String[] args) {
    int[] numArr={1,2,3,4,5,6,7,8};
    //rotateLeft(numArr, 4);
    rotateRight(numArr, 3);
  }
    
  private static void rotateLeft(int[] numArr, int steps){
    // create temp array
    int[] temp = new int[numArr.length];
    // copy elements to the temp array
    for(int i = 0; i < steps; i++){
      temp[(numArr.length-steps)+ i] = numArr[i];
    }
    // copy rest of the elements from the original array
    int i = 0;
    for(int j = steps; j < numArr.length; j++, i++){
      temp[i] = numArr[j];
    }
    //copy from temp to original 
    System.arraycopy(temp, 0, numArr, 0, numArr.length);    
    System.out.println("Array after left rotation- " + Arrays.toString(numArr));
  }
    
  private static void rotateRight(int[] numArr, int steps){
    // create temp array
    int[] temp = new int[numArr.length];
    // copy elements to the temp array
    for(int i = 0; i < steps; i++){
      temp[i] = numArr[(numArr.length-steps)+ i];
    }
    // copy rest of the elements from the original array
    int i = steps;
    for(int j = 0; j < numArr.length - steps; j++, i++){
      temp[i] = numArr[j];
    }
    System.out.println("Array after right rotation- " + Arrays.toString(temp));
  }
}

Output

Array after right rotation- [6, 7, 8, 1, 2, 3, 4, 5]

Time and space complexity

With this approach you need a temporary array of size n (same size as original array) so the space complexity is O(n).

Time Complexity is sum of the following actions

  • Creating the temporary array: O(n), where n is the size of the original array.
  • Copying elements into the temporary array: O(n).
  • Copying back into the original array using System.arraycopy(): O(n).

Thus the overall time complexity is O(n).

Array rotation program- using loops

This Java program for array rotation uses inner and outer for loops for shifting and copying elements.

Solution using loops works as follows-
  1. In an outer loop, copy the first element (in case of left rotation) or last element (in case of right rotation) in a temporary variable.
  2. Shift elements to the left or right as per rotation requirement in an inner loop one step at a time.
  3. Once out of inner loop copy the element stored in temp variable to its final position.
  4. Repeat the process for the total rotation steps.

import java.util.Arrays;

public class ArrayRotation {
  public static void main(String[] args) {
    int[] numArr={1,2,3,4,5,6,7,8};
    rotateLeft(numArr, 2);
    //rotateRight(numArr, 3);
  }
    
  private static void rotateLeft(int[] numArr, int steps){
    for(int i = 0; i < steps; i++){
      // store the first element
      int temp = numArr[0];
      for(int j = 0; j < numArr.length - 1; j++){
        // shift element to the left by 1 position
        numArr[j] = numArr[j + 1];
      }
      // copy stored element to the last
      numArr[numArr.length - 1] = temp;
    }
    System.out.println("Array after left rotation- " + Arrays.toString(numArr));
  }
    
  private static void rotateRight(int[] numArr, int steps){
    for(int i = 0; i < steps; i++){
      int temp = numArr[numArr.length-1];
      for(int j = numArr.length-1; j > 0; j--){
        numArr[j] = numArr[j -1];
      }
      // copy stored element to the beginning
      numArr[0] = temp;
    }
    System.out.println("Array after right rotation- " + Arrays.toString(numArr));
  }
}

Output

Array after left rotation- [3, 4, 5, 6, 7, 8, 1, 2]

Time and space complexity

With this approach, if you need to rotate by n steps and the array length is m, each step requires O(m) operations, so the total number of operations is n * m. Thus the time complexity is O(n*m).

There is no extra space requirement, so the space complexity is O(1).

That's all for this topic Array Rotation Java Program. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Java Programs Page


Related Topics

  1. How to Find Common Elements Between Two Arrays Java Program
  2. Matrix Subtraction Java Program
  3. Java Lambda Expression Callable Example
  4. Producer-Consumer Java Program Using volatile
  5. Fibonacci Series Program in Java

You may also like-

  1. Difference Between Two Dates in Java
  2. How to Compile Java Program at Runtime
  3. Quick Sort Program in Java
  4. Write to a File in Java
  5. Map.Entry Interface in Java
  6. Synchronization in Java - Synchronized Method And Block
  7. Marker Interface in Java
  8. Spring Bean Life Cycle

Saturday, March 28, 2026

Support Vector Regression With Example

In this post we'll see how to use Support Vector Regression (SVR), which extends Support Vector Machine (SVM) to regression tasks. Since SVR is a regression model, it is used for predicting continuous values, whereas SVM is a classification model which predicts a category or class label.

SVR is particularly useful when the relationship between features and target may be non-linear or complex, where traditional linear regression struggles.

How does SVR work

Support vector regression works on the concept of hyperplane and support vectors. It also has a margin of error, called epsilon (ε). Let's try to understand these concepts.

1. Hyperplane- SVR model works by finding a function (hyperplane) that fits most data points within a defined margin of error (ε-tube).

In the case of linear relationship (Linear SVR) this hyperplane can be thought of as a straight line in 2D space.

In case of non-linear relationship this hyperplane can be thought of as existing in a higher dimensional space but in 2D space it manifests as a best fitting curved function.

2. Kernel trick- This ability of SVR to work in higher dimensional spaces is achieved by "kernel trick". A non-linear relationship is very difficult (actually impossible) to represent with a straight line. By mapping the data into a higher-dimensional feature space, the relationship may become linear in that space. That transformation is not done explicitly as it can be very expensive.

The kernel trick makes this computation cheap: it lets SVR work in high-dimensional spaces implicitly, without ever explicitly constructing the transformed features.

Common kernel functions used in SVR are linear, polynomial, radial basis function (RBF), and sigmoid.
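As a minimal sketch of the kernel trick (using made-up 2D points, not part of the SVR model itself), the degree-2 polynomial kernel K(x, z) = (x·z)² gives the same result as explicitly mapping both points into the quadratic feature space and taking a dot product there, without ever building that space for real:

```python
import numpy as np

def poly_kernel(x, z):
    # degree-2 polynomial kernel computed directly in input space
    return np.dot(x, z) ** 2

def phi(x):
    # explicit degree-2 feature map for a 2D input
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
z = np.array([3.0, 4.0])

k_implicit = poly_kernel(x, z)       # (1*3 + 2*4)^2 = 121
k_explicit = np.dot(phi(x), phi(z))  # same value via the explicit mapping
print(k_implicit, k_explicit)
```

For this 2D, degree-2 case the explicit map is cheap; for RBF kernels the corresponding feature space is infinite-dimensional, which is exactly why the implicit computation matters.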

3. Margin of error- SVR has some tolerance for error. It allows a margin of error (ε), and data points that reside within that margin are considered to have no error. You can think of this margin as a tube that runs on both sides of the fitted line (or curve). This tube is known as the ε-insensitive tube because it makes the model insensitive to minor fluctuations.

SVR tries to fit a function such that most points lie within an ε-tube around the regression line.
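The ε-insensitive idea can be sketched in a few lines of NumPy (toy numbers, not the actual SVR fitting): deviations within the tube contribute zero loss, while deviations outside it are penalized linearly.

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    # deviations inside the ε-tube cost nothing; outside, loss grows linearly
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

y_true = np.array([1.0, 1.0, 1.0])
y_pred = np.array([1.25, 1.5, 2.0])
loss = eps_insensitive_loss(y_true, y_pred, eps=0.5)
print(loss)  # [0.  0.  0.5]
```

The first two predictions deviate by 0.25 and 0.5, which is within (or on) the ε = 0.5 tube, so they incur no loss; only the third point, 1.0 away, is penalized.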

4. Support vectors- These are the points that fall outside the ε-tube or lie on its edge. These support vectors determine the position of the hyperplane; they are the only points that influence the regression function, because points falling within the ε-tube are considered to have no error.

5. Slack Variables- Variables representing the distance of points lying outside the ε-tube from the tube's boundary. These slack variables are represented using the symbol \(\xi\).

The following image illustrates the above terminology.

Support Vector Regression

SVR equation

In SVR there is an objective function and the goal is to minimize that function-

$$\frac{1}{2}\| w\| ^2+C\sum _{i=1}^n(\xi _i+\xi _i^*) $$

1. Here w is the weight vector which is computed from the support vectors:

$$w=\sum _{i=1}^n(\alpha _i -\alpha _i^*)x_i$$

where

  • xi are the support vectors (training points that lie on or outside the ε-tube)
  • \(\alpha _i, \alpha _i^*\) are Lagrange multipliers from the optimization problem

Note that Lagrange multipliers are used here in finding the maximum or minimum of a function when certain conditions (constraints) must be satisfied.

The constraints here are, for each data point (xi, yi):

$$y_i-(w\cdot x_i+b)\leq \varepsilon +\xi _i$$ $$(w\cdot x_i+b)-y_i\leq \varepsilon +\xi _i^*$$ $$\xi _i,\xi _i^*\geq 0$$

Note that in linear SVR, w is the weight vector defining the regression hyperplane.
In non-linear SVR, the kernel trick replaces direct inner products (xi, xj) with a kernel function K(xi, xj) and w is not computed explicitly.

Minimizing \(\frac{1}{2}\| w\| ^2\) (half the squared norm of the weight vector) ensures the function is as flat as possible. The benefit of doing that is improved generalization of the model to new data and increased robustness against outliers.

2. \((\xi _i, \xi _i^*)\) are slack variables that measure deviations outside the ε-insensitive tube.

3. C is the regularization parameter controlling the trade-off between flatness and total deviations outside the ε-insensitive tube. A large C fits the data closely, which risks overfitting; a smaller C gives a simpler model, which risks underfitting.
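To make the objective concrete, here is a small sketch that just evaluates \(\frac{1}{2}\| w\| ^2+C\sum(\xi _i+\xi _i^*)\) for made-up values of w and the slacks; note how raising C inflates the penalty on the same slack values:

```python
import numpy as np

def svr_objective(w, xi, xi_star, C):
    # (1/2)||w||^2 + C * sum(xi_i + xi_i*)
    return 0.5 * np.dot(w, w) + C * np.sum(xi + xi_star)

# made-up weight vector and slack values for illustration
w = np.array([0.5, -0.25])
xi = np.array([0.0, 0.1, 0.0])       # deviations above the tube
xi_star = np.array([0.2, 0.0, 0.0])  # deviations below the tube

print(svr_objective(w, xi, xi_star, C=1.0))   # flatness term 0.15625 + slack term 0.3
print(svr_objective(w, xi, xi_star, C=10.0))  # same slacks penalized 10x harder
```

The optimizer searches for w and slacks that make this value as small as possible subject to the constraints above; C only changes how the two terms are weighed against each other.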

SVR tries to minimize the objective function during training. It finds the best \(w, b, \alpha _i, \alpha _i^*\) that minimize this objective function.

Once the optimized values are calculated, the regression function is defined as:

$$f(x)=\sum _{i=1}^n(\alpha _i-\alpha _i^*)K(x_i,x)+b$$

This is the actual function you use to make predictions.

Where:

  • xi = training data points
  • K(xi, x) = kernel function (linear, polynomial, RBF, etc.)
  • \(\alpha _i,\alpha _i^*\) = Lagrange multipliers from optimization
  • b = bias term

If you use a linear kernel, the equation simplifies to:

$$f(x)=w\cdot x+b$$

In case kernel is RBF (Radial Basis Function), the SVR prediction equation takes the form:

$$f(x)=\sum _{i=1}^n(\alpha _i-\alpha _i^*)\, \exp \left( -\gamma \| x-x_i\| ^2\right) +b$$

When SVR with Polynomial Kernel is used then the regression function becomes:

$$f(x)=\sum _{i=1}^n(\alpha _i-\alpha _i^*)\, (\gamma \cdot (x_i\cdot x)+r)^d+b$$
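To tie these equations back to code, the sketch below (toy data and assumed hyperparameters) fits scikit-learn's SVR with an RBF kernel and then recomputes a prediction by hand from the fitted attributes: dual_coef_ holds the \((\alpha _i-\alpha _i^*)\) values, support_vectors_ holds the xi, and intercept_ holds b.

```python
import numpy as np
from sklearn.svm import SVR

# toy 1D regression data
rng = np.random.RandomState(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X).ravel()

gamma = 0.5
reg = SVR(kernel='rbf', gamma=gamma, C=1.0, epsilon=0.1)
reg.fit(X, y)

x_new = np.array([[1.5]])
# f(x) = sum_i (alpha_i - alpha_i*) * exp(-gamma * ||x - x_i||^2) + b
sq_dist = np.sum((reg.support_vectors_ - x_new) ** 2, axis=1)
manual = np.dot(reg.dual_coef_[0], np.exp(-gamma * sq_dist)) + reg.intercept_[0]

print(manual, reg.predict(x_new)[0])  # identical up to floating-point error
```

Only the support vectors appear in the sum, which is the computational payoff of the ε-insensitive formulation.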

Support Vector Regression using scikit-learn Python library

Dataset used here can be downloaded from- https://www.kaggle.com/datasets/mariospirito/position-salariescsv

Goal is to predict the salary based on the position level.

In the implementation code is broken into several smaller units with some explanation in between for data pre-processing steps.

1. Importing libraries and reading CSV file

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv('./Position_Salaries.csv')

Position_Salaries.csv file is in the current directory.

2. Getting info about the data.

print(df.info())

Output


RangeIndex: 10 entries, 0 to 9
Data columns (total 3 columns):
 #   Column    Non-Null Count  Dtype 
---  ------    --------------  ----- 
 0   Position  10 non-null     object
 1   Level     10 non-null     int64 
 2   Salary    10 non-null     int64 

As you can see, there are only 10 records. Since the dataset is this small, train-test splitting is not done. It is also evident at a glance that there are no duplicates or null values.

3. Feature and label selection

X = df.iloc[:, 1:-1]
y = df.iloc[:, -1]

Explanation- X = df.iloc[:, 1:-1]

  • : means "select all rows."
  • 1:-1 means "from column index 1 up to (but not including) the last column"

y = df.iloc[:, -1]

  • : means select all rows.
  • -1 means select the last column, uses negative indexing

"Position" column has been dropped as the "Level" column conveys the same information in numerical form.

4. Printing the feature and label variables

print(X)
print(y)

On printing these two variables you can see that X is a 2D structure whereas y is 1D. This matters because y may need conversion to a 2D array, as some functions require a 2D array as a parameter.
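A tiny stand-in DataFrame (illustrative values only, mirroring the Position/Level/Salary layout) makes the shapes explicit:

```python
import pandas as pd

# a tiny stand-in for the Position_Salaries data (assumed values, for illustration)
df_demo = pd.DataFrame({'Position': ['Analyst', 'Manager', 'CEO'],
                        'Level': [1, 2, 3],
                        'Salary': [45000, 80000, 200000]})

X_demo = df_demo.iloc[:, 1:-1]  # slice of columns -> DataFrame, 2D
y_demo = df_demo.iloc[:, -1]    # single column    -> Series, 1D

print(X_demo.shape, y_demo.shape)  # (3, 1) (3,)
```

Selecting a column range with `1:-1` always yields a DataFrame, even when the range covers a single column, while `-1` alone yields a Series.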

5. Scaling data

SVR relies on kernel functions (like RBF, polynomial) that compute distances between points. If features are on different scales, one feature can dominate the distance metric, skewing the model. So, standardizing the features is required.

With SVR, target scaling is also required because the ε-insensitive tube and penalty parameter C are defined relative to the scale of y. Scaling y ensures that ε and C operate in a normalized space.

from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error
scaler_X = StandardScaler()
scaler_y = StandardScaler()
X_scaled = scaler_X.fit_transform(X)
y_scaled = scaler_y.fit_transform(y.values.reshape(-1, 1)).ravel()

Dependent variable y needs to be changed to a 2D array; reshape() is used for that. Note that with reshape() one of the dimensions can be -1, in which case that value is inferred from the length of the array. Here the row count is passed as -1, so NumPy infers how many rows are needed from the array size.

ravel() is used to flatten the 2D array back to 1D. Otherwise, the fit() method will complain, as it expects the dependent variable to be a 1D array.
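The reshape/ravel round trip can be seen on a small NumPy array:

```python
import numpy as np

y_arr = np.array([10, 20, 30, 40])
y_2d = y_arr.reshape(-1, 1)  # -1 lets NumPy infer the row count -> shape (4, 1)
y_1d = y_2d.ravel()          # flatten back to 1D -> shape (4,)

print(y_2d.shape, y_1d.shape)  # (4, 1) (4,)
```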

6. Fitting the model

reg = SVR(kernel='rbf')
reg.fit(X_scaled, y_scaled)

An object of class SVR is created with kernel='rbf', which is also the default. For C and epsilon the default values are used, which are 1.0 and 0.1 respectively. Note that y_scaled was already flattened to a 1D array using ravel(); if that is not done you'll get the following error.

A column-vector y was passed when a 1d array was expected. Please change the shape of y to (n_samples, ), for example using ravel().

7. Predicting values

First a single value is predicted.

#predicting salary by passing level
y_pred_scaled = reg.predict(scaler_X.transform([[6]]))
# prediction will also be scaled, so need to do inverse transformation
y_pred = scaler_y.inverse_transform(y_pred_scaled.reshape(-1,1))
print("Prediction in original scale:", y_pred.ravel()) #145503.10688572

Note that the X value has to be scaled since the model was trained with scaled values. Also, the predicted value has to be inverse-transformed to bring it back to the original scale.

Next, predicting for the whole dataset. Since splitting is not done and there is no separate train and test data, X_scaled is used to predict values.

y_pred = scaler_y.inverse_transform(reg.predict(X_scaled).reshape(-1,1)).ravel()

8. Comparing actual and predicted values

A dataframe is created and printed to display original and predicted values side-by-side.

df_results = pd.DataFrame({'Target':y, 'Predictions':y_pred})
print(df_results)

Output

    Target    Predictions
0    45000   73416.856829
1    50000   78362.982831
2    60000   88372.122821
3    80000  108481.435811
4   110000  138403.075511
5   150000  178332.360333
6   200000  225797.711581
7   300000  271569.924155
8   500000  471665.638386
9  1000000  495411.293695

As you can see, the model has lots of room for improvement; lack of sufficient data is one of the main reasons here. For 1 million, the prediction is way off the mark.

9. Checking model metrics such as R-squared, mean squared error and root mean squared error.

#Metrics - R-Squared, MSE, RMSE
print("R2 score", r2_score(y, y_pred)) 
mse = mean_squared_error(y, y_pred)
print("Mean Squared Error", mse)
print("Root Mean Squared Error", np.sqrt(mse))

Output

R2 score 0.7516001070620797
Mean Squared Error 20036494264.13176
Root Mean Squared Error 141550.3241399742
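These metrics are easy to verify by hand. The sketch below (toy numbers, not the salary data) computes MSE, RMSE and R² directly from their definitions:

```python
import numpy as np

y_true = np.array([100.0, 200.0, 300.0, 400.0])
y_pred = np.array([110.0, 190.0, 310.0, 380.0])

mse = np.mean((y_true - y_pred) ** 2)           # mean of squared residuals -> 175.0
rmse = np.sqrt(mse)
ss_res = np.sum((y_true - y_pred) ** 2)         # residual sum of squares
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
r2 = 1 - ss_res / ss_tot                        # R^2 = 1 - SS_res / SS_tot -> 0.986

print(mse, rmse, r2)
```

These manual values match what `mean_squared_error` and `r2_score` would return for the same arrays.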

10. Visualize the result

plt.scatter(X, y, color='red')
plt.plot(X, scaler_y.inverse_transform(reg.predict(X_scaled).reshape(-1,1)), color='blue')
plt.xlabel('Level')
plt.ylabel('Salary')
plt.title("SVR")
plt.show()
Support Vector Regression plot

That's all for this topic Support Vector Regression With Example. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Python Tutorial Page


Related Topics

  1. Simple Linear Regression With Example
  2. Multiple Linear Regression With Example
  3. Polynomial Regression With Example
  4. R-squared - Coefficient of Determination
  5. Mean Squared Error (MSE) With Python Examples

You may also like-

  1. What is LangChain - An Introduction
  2. Local, Nonlocal And Global Variables in Python
  3. Python count() method - Counting Substrings
  4. Python Functions : Returning Multiple Values
  5. Marker Interface in Java
  6. Functional Interfaces in Java
  7. Difference Between Checked And Unchecked Exceptions in Java
  8. Race Condition in Java Multi-Threading

First LangChain Program: Ask Me Anything

In this tutorial we’ll write a few programs to connect to different LLMs using LangChain. In the post What is LangChain - An Introduction one of the points discussed was the standard interface provided by LangChain to integrate with any LLM.

Options to connect to the LLM using LangChain

There are two options to connect to the LLM using LangChain-

  • Using the init_chat_model function
  • Using the LLM specific model classes like ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI, ChatOllama and so on.

Langchain init_chat_model function example

For the examples I am going to use OpenAI’s GPT model, Google’s Gemini model and qwen3-32b through the Groq inference provider. For another example, I’ll also use "llama3.1" through Ollama.

In the examples, the user's query is sent to the model, which responds with an answer to that query.

  • Packages needed are-

    • python-dotenv
    • langchain
    • langchain-openai
    • langchain-google-genai
    • langchain-groq
    • langchain-ollama

    You can install them individually using pip install PACKAGE_NAME or, if you are creating a Python project, you can create a requirements.txt file, put all the above mentioned external package dependencies in that file and provide that file to the pip install command.

    pip install -r requirements.txt
    
  • Getting and setting the API key

    For using OpenAI models, Gemini models and Groq you must first obtain API key from the respective API provider. You can create an .env file in your Python project and store the generated API keys there.

    • GEMINI_API_KEY = "YOUR_GOOGLE_GEMINI_KEY"
    • GROQ_API_KEY = "YOUR_GROQ_API_KEY"
    • OPENAI_API_KEY = "YOUR_OPENAI_API_KEY"

    This .env file can then be loaded using load_dotenv() function.
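For reference, a requirements.txt covering the packages listed above might look like this (unpinned; add version pins as needed):

```text
python-dotenv
langchain
langchain-openai
langchain-google-genai
langchain-groq
langchain-ollama
```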

1. Connecting to Gemini

You can initialize the model by specifying the model name and, optionally, the model_provider in the init_chat_model() function. The temperature parameter controls the randomness, creativity, and determinism of the model's output.

  • Low Temperature (e.g., 0.0 to 0.3)

    Makes the model deterministic and focused. You'll get a "to the point" answer.

  • High Temperature (e.g., 0.7 to 1.0+)

    Makes the model creative and more diverse.

from langchain.chat_models import init_chat_model
from dotenv import load_dotenv

load_dotenv()
model = init_chat_model(    
  model="google_genai:gemini-3.1-flash-lite-preview",
  temperature=0.3
)
response = model.invoke("Explain Agentic AI in 5 lines")
print(response)

Output

content=[{'type': 'text', 'text': 'Agentic AI refers to autonomous systems capable of setting their own goals, breaking them 
into tasks, and executing them with minimal human intervention. Unlike traditional AI that simply responds to prompts, these 
agents use reasoning and tools to navigate complex environments. They actively monitor progress, adapt their strategies in 
real-time, and make decisions to achieve a desired outcome. Essentially, they shift the paradigm from "AI as a tool" to 
"AI as a collaborative partner" that gets work done.', 'extras': {'signature': 
'EjQKMgG+Pvb7ue1hvNYKCnERjPRv7v99o5JJsdZbRGGB3ce3fntMxKjz0D2dXa5GBv3l5myR'}}] additional_kwargs={} response_metadata=
{'finish_reason': 'STOP', 'model_name': 'gemini-3.1-flash-lite-preview', 'safety_ratings': [], 'model_provider': 'google_genai'} 
id='lc_run--019d2edb-3426-7033-a08d-d6d29fc97de5-0' tool_calls=[] invalid_tool_calls=[] usage_metadata={'input_tokens': 9, 
'output_tokens': 95, 'total_tokens': 104, 'input_token_details': {'cache_read': 0}}

As you can see, the response contains a lot of other information along with the actual content. You can extract the content part using response.content.

2. Connecting to OpenAI

from langchain.chat_models import init_chat_model
from dotenv import load_dotenv

load_dotenv()
model = init_chat_model(    
    model="gpt-5.2",
    temperature=0.3
)

response = model.invoke("Explain Agentic AI in 5 lines")
print(response.content)

3. Using qwen3-32b model through Groq

Since Groq is the model provider here, you need to explicitly mention it using the model_provider parameter.

from langchain.chat_models import init_chat_model
from dotenv import load_dotenv

load_dotenv()
model = init_chat_model(    
    model="qwen/qwen3-32b",
    model_provider="groq",
    temperature=0.3
)

response = model.invoke("Explain Agentic AI in 5 lines")
print(response.content)

Configuration with init_chat_model

The above examples used fixed model initialization, but you can also configure models at runtime using init_chat_model. That makes it easy to switch providers without changing code.

You need to set the following parameters for that-

  • configurable_fields: Defines which fields can be changed at runtime (e.g., 'any' for all fields, or a list like ("model", "temperature")).
  • config_prefix: If set, allows runtime configuration via config["configurable"]["{prefix}_{param}"]

In the following code, "gpt-5.2" is initially selected as the model, but later, using the config_prefix, "llama3.1" is set as the model.

from langchain.chat_models import init_chat_model
from dotenv import load_dotenv

load_dotenv()
configurable_model = init_chat_model(    
    model="gpt-5.2",
    temperature=0.3,
    configurable_fields="any", # Allows all fields to be configurable
    config_prefix="my_config" # Prefix for environment variables to override defaults
)

response = configurable_model.invoke("What is the role of GPU in the rise of AI?", 
        config={
        	"configurable": { "my_config_temperature": 0.7, # Override temperature for this invocation
                          "my_config_model": "llama3.1", # Override model for this invocation
                          "my_config_model_provider": "ollama" # Override model provider for this invocation
                        }
     	})
print(response.content)

You’ll get the output through Ollama not through GPT, because of the configuration settings.

Using Chat model classes in LangChain

LangChain provides chat model classes too for integrating with various models, enabling developers to build intelligent conversational AI applications with seamless support for OpenAI, Anthropic, Hugging Face, and other large language models.

These classes wrap various model providers, allowing developers to switch between them with minimal code changes.

Core classes for chat models are usually prefixed with Chat and imported from their integration packages, such as langchain_openai and langchain_anthropic. For example,

  • ChatOpenAI: For OpenAI models.
  • ChatAnthropic: For Anthropic models.
LangChain ChatModel Classes

Examples using Chat model classes

  1. Using qwen3-32b model through ChatGroq

    from langchain_groq import ChatGroq
    from dotenv import load_dotenv
    
    load_dotenv()
    model = ChatGroq(    
        model="qwen/qwen3-32b",
        temperature=0.3
    )
    
    response = model.invoke("What is the role of GPU in deep learning, explain in 5 lines?")
    print(response.content)
    
  2. Using ChatGoogleGenerativeAI to connect to Gemini

    from langchain_google_genai import ChatGoogleGenerativeAI
    from dotenv import load_dotenv
    
    load_dotenv()
    model = ChatGoogleGenerativeAI(    
        model="gemini-3.1-flash-lite-preview",
        temperature=0.3
    )
    
    response = model.invoke("What is the role of GPU in deep learning, explain in 5 lines?")
    print(response.content)
    

That's all for this topic First LangChain Program: Ask Me Anything. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. Prompt Templates in LangChain With Examples
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. Messages in LangChain
  4. Chain Using LangChain Expression Language With Examples
  5. RunnableSequence in LangChain With Examples

You may also like-

  1. Chatbot With Chat History - LangChain MessagesPlaceHolder
  2. Text Splitters in LangChain With Examples
  3. Count Number of Words in a String Java Program
  4. Ternary Operator in Java With Examples
  5. Java Multithreading Interview Questions And Answers
  6. Java Exception Handling Tutorial
  7. Python Conditional Statement - if, elif, else Statements
  8. How to Iterate Dictionary in Python

Thursday, March 26, 2026

Java Variable Types With Examples

In this guide, we’ll explore the different types of variables in Java, their scopes, visibility, and practical examples to help you master the concept.

We’ll also see what is meant by declaring a variable and what is meant by initialization of a variable in Java.

Declaring and Initializing a variable in Java

In Java, every variable must be declared before use. Declaration specifies the type of the variable, which can be either a primitive data type (such as int, double, char), or having class or interface as type (reference variable).

Examples of variable declaration

int age, number; // two int variables declared

double amount; // a double variable declared

Person person; // reference type variable declared

As you see here, in the first statement two variables of type int are declared. Note that you can declare two or more variables of same type as a comma separated list.

In the third statement a variable person is declared which is of type Person. Here Person is a class.

Java 10 introduced a new feature called local variable type inference where the type of the variable is inferred from the variable initializer. A new reserved type name “var” is added in Java to define and initialize local variables, read more about var type here- Var type in Java - Local Variable Type Inference

Initialization of a variable in Java

Initialization means providing initial value of the variable. Generally, both declaration and initialization are done in a single statement.

 
int age = 30;

char grade = 'A';

But that is not necessary, you can also declare a variable first and initialize it later.

 
int age;

...... 
......
age = 50;

Variables can also be initialized using expressions:

 
double amount;

amount = 67/9;

Here amount will get the value 7.0, not 7.444…, because 67/9 is integer division (both operands are int); the quotient 7 is then converted to double on assignment. Use 67.0/9 if you want the fractional result.

Types of variables in Java

The Java programming language defines the following kinds of variables:

  1. Instance Variables (Non-Static Fields)–Instance variables are declared inside a class but outside any method, constructor, or block, and they are not marked as static. Each object of the class gets its own copy of these variables, meaning their values are unique to each instance.

    For example, if you have a class Person and two objects of it person1 and person2 then the instance variables of these two objects will have independent values.

    public class Person {
     private String firstName;
     private String lastName;
     private int age;
     private char gender;
     public Person(String firstName, String lastName, int age, char gender){
      this.firstName = firstName;
      this.lastName = lastName;
      this.age = age;
      this.gender = gender;
     }
     
     public String getFirstName() {
      return firstName;
     }
    
     public String getLastName() {
      return lastName;
     }
    
     public int getAge() {
      return age;
     }
     public char getGender() {
      return gender;
     }
    }
    
    public class InstanceDemo {
    
     public static void main(String[] args) {
      Person person1 = new Person("Ram", "Mishra", 23, 'M');
      Person person2 = new Person("Amita", "Chopra", 21, 'F');
      
      System.out.println("Values in object person1 - " + 
        person1.getAge() + " " + person1.getFirstName() + " " + 
        person1.getLastName()+ " " + person1.getGender());
      System.out.println("Values in object person2 - " + 
        person2.getAge() + " " + person2.getFirstName() + " " + 
        person2.getLastName()+ " " + person2.getGender());
    
     }
    
    }
    

    Output

    Values in object person1 - 23 Ram Mishra M
    Values in object person2 - 21 Amita Chopra F
    

    Here you can see how using the constructor of the class, variables are initialized for both the objects and output shows that each instance of the class has its own values for the fields.

  2. Class Variables (Static Fields)- A class variable in Java is any field declared with the static modifier. As the name suggests class variable is at the class level. Unlike instance variables, there is only one copy of a static variable per class, shared across all objects. Doesn’t matter how many instances (objects) of the class you have, the class variable will have the same value. You can access class variables directly using the class name, without creating an object.

    Java class variables example

    One common use of static fields is to create a constant value that's at a class level and applicable to all created objects.

    public class Employee {
     int empId;
     String name;
     String dept;
     // static constant
     static final String COMPANY_NAME = "XYZ";
     Employee(int empId, String name, String dept){
      this.empId = empId;
      this.name = name;
      this.dept = dept;
     }
     
     public void displayData(){
      System.out.println("EmpId = " + empId + " name= " + name + " dept = " + 
      dept + " company = " + COMPANY_NAME);
     }
     public static void main(String args[]){  
      Employee emp1 = new Employee(1, "Ram", "IT");
      Employee emp2 = new Employee(2, "Krishna", "IT");
      emp1.displayData();
      emp2.displayData();
     }
    }
    

    Output

    EmpId = 1 name= Ram dept = IT company = XYZ
    EmpId = 2 name= Krishna dept = IT company = XYZ
    
  3. Local Variables– Local variables are variables declared within a method, constructor, or block. They represent the temporary state of a method and exist only during the execution of that method. Once the method finishes, the local variables are destroyed, and their values are no longer accessible.

    Scope of Local Variables

    The scope of a local variable is limited to the block of code enclosed by curly braces {} where it is declared. This means:
    • A local variable declared inside a method is accessible only within that method.
    • If you declare a variable inside a nested block (such as an if statement or loop), its scope is restricted to that block.

    One more thing to note: you can have a local variable with the same name as a class level variable; within the method, the local variable takes priority.

    Java local variables example

    public class InstanceDemo {
     // class level variable
     int x = 8;
     public static void main(String[] args) {
      
      InstanceDemo id = new InstanceDemo();
      
      id.display();
      System.out.println("value of class level variable x " + id.x);
     }
     
     public void display(){
      int x = 5;  // local variable
      boolean flag = true;
      System.out.println("value of local variable x " + x);
      if (flag){
       int y = 10; // nested scope variable
       System.out.println("value of local variable y inside if " + y);
      }
      // This will cause compile-time error
      //System.out.println("value of local variable y inside if " + y); 
     }
     
    }
    

    Output

    value of local variable x 5
    value of local variable y inside if 10
    value of class level variable x 8
    

    Here you see there is a class level variable x, and again in the method display() there is a variable with the same name. Within the method, the value of the local variable x takes priority and that is printed. Outside the method, the x that is recognized is the class level variable x.

    Another thing to note is the nested scope created by the if condition within the display() method. The scope of variable y is between the opening and closing braces of the if condition. Once you are out of the if condition, y won’t be recognized. Any attempt to print the value of y outside the if condition's scope will result in a compile-time error.

  4. Parameters- Variables passed to a method are known as parameters. Any change made to a primitive type parameter won’t change the original value.

    Java parameters example

    public class InstanceDemo {
     public static void main(String[] args) {
      
      InstanceDemo id = new InstanceDemo();
      int x = 10;
      id.display(x);
      System.out.println("value of x after method call " + x);
     }
     
     public void display(int x){
      
      x++;
      System.out.println("value of local variable x " + x);
     }
     
    }
    

    Output

    value of local variable x 11
    value of x after method call 10
    

    Here you have a variable x that is passed to the display() method as an int parameter. Within the method display() the value of x is changed. But that change is local only and doesn’t change the original value of x. This is because a copy of the variable is passed as a method parameter.

    If an object is passed as a parameter and any of that object’s fields is changed, that change will be visible in other scopes too.

That's all for this topic Java Variable Types With Examples. If you have any doubt or any suggestions to make please drop a comment. Thanks!

>>>Return to Java Basics Tutorial Page


Related Topics

  1. Java is a Strongly Typed Language
  2. Primitive Data Types in Java
  3. Access Modifiers in Java - Public, Private, Protected and Default
  4. What Are JVM, JRE And JDK in Java
  5. Object Creation Using new Operator in Java

You may also like-

  1. String in Java Tutorial
  2. Array in Java
  3. Count Number of Words in a String Java Program
  4. Ternary Operator in Java With Examples
  5. Java Multithreading Interview Questions And Answers
  6. Java Exception Handling Tutorial
  7. ConcurrentHashMap in Java With Examples
  8. TreeMap in Java With Examples

What is LangChain - An Introduction

LangChain is an open-source framework which is designed to simplify the creation of LLM powered applications. Through its tools and APIs, it allows you to refine and customize information, making LLM generated responses more precise and contextually meaningful. For example, using LangChain you can link LLMs to external data sources and computation to enhance LLM model’s capabilities. By leveraging these capabilities, developers can build powerful AI applications like smart chatbots, advanced Q&A systems, summarization services.

LangChain Installation

LangChain framework is available as libraries in Python and JavaScript.

  • To install the LangChain package in Python you can use the following command
  • pip install -U langchain
        

    Prerequisite is Python 3.10+ installation for the LangChain version 1.2.13

  • For Typescript installation, you can use the following command
  • npm install langchain @langchain/core
        

    Prerequisite is Node.js 20+ installation for the LangChain version 1.2.13

Core components of LangChain

  1. Models

    LangChain’s standard model interfaces give you access to many different provider integrations: proprietary providers such as OpenAI, Anthropic and Google, many open-source models (like Meta AI’s LLaMA, DeepSeek's DeepSeek-LLM, Mistral) through Hugging Face, and locally downloaded LLMs through Ollama.

    Note that many LLM providers require an API key and you need to create an account in order to receive an API key.

    The simplest way to connect to any LLM is the init_chat_model function in LangChain, or you can use the provider-specific model classes like ChatAnthropic, ChatOpenAI, ChatGoogleGenerativeAI, ChatOllama and so on.

    The only thing you need is the integration package for the chosen model provider to be installed, which is generally named in the format langchain-LLM_PROVIDER_NAME. So, if you are using OpenAI, you need to install the langchain-openai package.

    Refer this post- First LangChain Program: Ask Me Anything to see how to connect to different models programmatically using LangChain.

  2. Prompt Template

    LangChain provides a PromptTemplate class to create a prompt template- a dynamic prompt with some placeholders that can be given actual values later. There is also a ChatPromptTemplate class which acts as a prompt template for chat models.

    Refer this post- Prompt Templates in LangChain With Examples to know more about creating prompt templates in LangChain using PromptTemplate and ChatPromptTemplate classes.

  3. Tools

    LLMs have some limitations. First, these models are trained up to a cutoff date and only have data up to that point, though many LLMs now have web browsing capability to work around the training data cutoff. Another limitation is the lack of domain-specific or company-specific data.

    LangChain tools can give that functionality to agents by letting them fetch real-time data, execute code, query external databases, and take actions in the world. Under the hood, tools are callable functions with well-defined inputs and outputs that get passed to a chat model. The model decides when to invoke a tool based on the conversation context, and what input arguments to provide.

  4. Agents

    LangChain agents combine LLMs with tools to create systems that can complete complex tasks step by step. Unlike a basic application where you send a prompt and get a response from an LLM, LangChain agents can think, plan, and adapt.

    They integrate language models with tools to reason about tasks and choose the right actions.

    This allows them to iteratively work toward solutions with greater intelligence and flexibility.

  5. Chains

    As the name LangChain itself suggests, chains are the main concept of LangChain. They allow developers to link multiple components together into a single workflow. Each link in the chain can handle a specific task, from prompt formatting to retrieval and reasoning. This modular design makes it easy to build complex AI applications step by step.

    With the LangChain Expression Language (LCEL), you can define these chains declaratively, using the pipe (|) symbol to connect prompts, models, retrievers, and output parsers.

    Refer this post- Chain Using LangChain Expression Language With Examples to understand how to chain sequence of events to create a workflow.

  6. Memory

    For many AI applications you may need to retain information about previous interactions. For AI agents, memory is crucial because it lets them remember previous interactions, learn from feedback, and adapt to user preferences. LangChain provides both short-term memory and long-term memory.

    1. Short-term memory- It lets your application remember previous interactions within a single thread or conversation.
    2. Long-term memory- It lets your agent store and recall information across different conversations and sessions. Long-term memory persists across threads and can be recalled at any time.

How does LangChain work

Now that you have some idea of the components of LangChain, let us try to understand how LangChain can actually chain together different components to create a workflow for LLM-powered applications.

For example, a typical Retrieval-Augmented Generation (RAG) application may have a workflow as given below-

  1. Receive the user’s query.
  2. Reformulate the query by sending it to the LLM which rephrases the user provided query into a concise search query.
  3. Retrieve the data relevant to search query, which could involve connecting to databases, APIs, or other repositories (e.g. Confluence). LangChain provides various document loaders like PyPDFLoader, TextLoader, CSVLoader, ConfluenceLoader for integrating data from numerous sources.
  4. Pass the retrieved documents to the LLM to summarize the information.
  5. Pass the retrieved information, along with the original query, to an LLM to get the final answer.
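The five steps above can be sketched as plain functions; every helper here is a hypothetical stand-in for the real component (LLM call, document loader, vector store), not a LangChain API.

```python
def reformulate(query: str) -> str:
    # Step 2: an LLM would rephrase this into a concise search query
    return query.rstrip("?").lower()

def retrieve(search_query: str) -> list[str]:
    # Step 3: a document loader or vector store would fetch relevant docs
    return [f"doc about {search_query}"]

def summarize(docs: list[str]) -> str:
    # Step 4: an LLM would condense the retrieved documents
    return "; ".join(docs)

def answer(query: str, context: str) -> str:
    # Step 5: an LLM gets the original query plus the summarized context
    return f"Answer to '{query}' based on: {context}"

def rag_answer(query: str) -> str:  # Step 1: receive the user's query
    search_query = reformulate(query)
    docs = retrieve(search_query)
    summary = summarize(docs)
    return answer(query, summary)

print(rag_answer("What is LangChain?"))
```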

That's all for this topic What is LangChain - An Introduction. If you have any doubt or any suggestions to make please drop a comment. Thanks!


Related Topics

  1. First LangChain Program: Ask Me Anything
  2. LangChain PromptTemplate + Streamlit - Code Generator Example
  3. Messages in LangChain
  4. RunnableParallel in LangChain With Examples
  5. RunnableLambda in LangChain With Examples

You may also like-

  1. Structured Output In LangChain
  2. Output Parsers in LangChain With Examples
  3. Check if Given String or Number is a Palindrome Java Program
  4. Polymorphism in Java
  5. Difference Between Abstract Class And Interface in Java
  6. Java Automatic Numeric Type Promotion
  7. Java Pass by Value or Pass by Reference
  8. finally Block in Java Exception Handling